diff --git a/.gitignore b/.gitignore deleted file mode 100644 index 58135916..00000000 --- a/.gitignore +++ /dev/null @@ -1,25 +0,0 @@ -.DS_Store -*.xpr - -# Packages -.venv -*.egg -*.egg-info - -# Testenvironment -.tox/ - -# Build directories -target/ -publish-docs/ -generated/ -build/ -/build-*.log.gz - -# Transifex Client Setting -.tx - -# Editors -*~ -.*.swp -.bak diff --git a/.gitreview b/.gitreview deleted file mode 100644 index ded74aee..00000000 --- a/.gitreview +++ /dev/null @@ -1,4 +0,0 @@ -[gerrit] -host=review.openstack.org -port=29418 -project=openstack/operations-guide.git diff --git a/LICENSE b/LICENSE deleted file mode 100644 index a0b0e120..00000000 --- a/LICENSE +++ /dev/null @@ -1,58 +0,0 @@ -License - -THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED. - -BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS. - -1. Definitions - -"Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License. -"Collection" means a collection of literary or artistic works, such as encyclopedias and anthologies, or performances, phonograms or broadcasts, or other works or subject matter other than works listed in Section 1(g) below, which, by reason of the selection and arrangement of their contents, constitute intellectual creations, in which the Work is included in its entirety in unmodified form along with one or more other contributions, each constituting separate and independent works in themselves, which together are assembled into a collective whole. A work that constitutes a Collection will not be considered an Adaptation (as defined above) for the purposes of this License. -"Distribute" means to make available to the public the original and copies of the Work or Adaptation, as appropriate, through sale or other transfer of ownership. -"License Elements" means the following high-level license attributes as selected by Licensor and indicated in the title of this License: Attribution, Noncommercial, ShareAlike. -"Licensor" means the individual, individuals, entity or entities that offer(s) the Work under the terms of this License. 
-"Original Author" means, in the case of a literary or artistic work, the individual, individuals, entity or entities who created the Work or if no individual or entity can be identified, the publisher; and in addition (i) in the case of a performance the actors, singers, musicians, dancers, and other persons who act, sing, deliver, declaim, play in, interpret or otherwise perform literary or artistic works or expressions of folklore; (ii) in the case of a phonogram the producer being the person or legal entity who first fixes the sounds of a performance or other sounds; and, (iii) in the case of broadcasts, the organization that transmits the broadcast. -"Work" means the literary and/or artistic work offered under the terms of this License including without limitation any production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression including digital form, such as a book, pamphlet and other writing; a lecture, address, sermon or other work of the same nature; a dramatic or dramatico-musical work; a choreographic work or entertainment in dumb show; a musical composition with or without words; a cinematographic work to which are assimilated works expressed by a process analogous to cinematography; a work of drawing, painting, architecture, sculpture, engraving or lithography; a photographic work to which are assimilated works expressed by a process analogous to photography; a work of applied art; an illustration, map, plan, sketch or three-dimensional work relative to geography, topography, architecture or science; a performance; a broadcast; a phonogram; a compilation of data to the extent it is protected as a copyrightable work; or a work performed by a variety or circus performer to the extent it is not otherwise considered a literary or artistic work. -"You" means an individual or entity exercising rights under this License who has not previously violated the terms of this License with respect to the Work, or who has received express permission from the Licensor to exercise rights under this License despite a previous violation. -"Publicly Perform" means to perform public recitations of the Work and to communicate to the public those public recitations, by any means or process, including by wire or wireless means or public digital performances; to make available to the public Works in such a way that members of the public may access these Works from a place and at a place individually chosen by them; to perform the Work to the public by any means or process and the communication to the public of the performances of the Work, including by public digital performance; to broadcast and rebroadcast the Work by any means including signs, sounds or images. -"Reproduce" means to make copies of the Work by any means including without limitation by sound or visual recordings and the right of fixation and reproducing fixations of the Work, including storage of a protected performance or phonogram in digital form or other electronic medium. -2. Fair Dealing Rights. Nothing in this License is intended to reduce, limit, or restrict any uses free from copyright or rights arising from limitations or exceptions that are provided for in connection with the copyright protection under copyright law or other applicable laws. - -3. License Grant. 
Subject to the terms and conditions of this License, Licensor hereby grants You a worldwide, royalty-free, non-exclusive, perpetual (for the duration of the applicable copyright) license to exercise the rights in the Work as stated below: - -to Reproduce the Work, to incorporate the Work into one or more Collections, and to Reproduce the Work as incorporated in the Collections; -to create and Reproduce Adaptations provided that any such Adaptation, including any translation in any medium, takes reasonable steps to clearly label, demarcate or otherwise identify that changes were made to the original Work. For example, a translation could be marked "The original work was translated from English to Spanish," or a modification could indicate "The original work has been modified."; -to Distribute and Publicly Perform the Work including as incorporated in Collections; and, -to Distribute and Publicly Perform Adaptations. -The above rights may be exercised in all media and formats whether now known or hereafter devised. The above rights include the right to make such modifications as are technically necessary to exercise the rights in other media and formats. Subject to Section 8(f), all rights not expressly granted by Licensor are hereby reserved, including but not limited to the rights described in Section 4(e). - -4. Restrictions. The license granted in Section 3 above is expressly made subject to and limited by the following restrictions: - -You may Distribute or Publicly Perform the Work only under the terms of this License. You must include a copy of, or the Uniform Resource Identifier (URI) for, this License with every copy of the Work You Distribute or Publicly Perform. You may not offer or impose any terms on the Work that restrict the terms of this License or the ability of the recipient of the Work to exercise the rights granted to that recipient under the terms of the License. You may not sublicense the Work. You must keep intact all notices that refer to this License and to the disclaimer of warranties with every copy of the Work You Distribute or Publicly Perform. When You Distribute or Publicly Perform the Work, You may not impose any effective technological measures on the Work that restrict the ability of a recipient of the Work from You to exercise the rights granted to that recipient under the terms of the License. This Section 4(a) applies to the Work as incorporated in a Collection, but this does not require the Collection apart from the Work itself to be made subject to the terms of this License. If You create a Collection, upon notice from any Licensor You must, to the extent practicable, remove from the Collection any credit as required by Section 4(d), as requested. If You create an Adaptation, upon notice from any Licensor You must, to the extent practicable, remove from the Adaptation any credit as required by Section 4(d), as requested. -You may Distribute or Publicly Perform an Adaptation only under: (i) the terms of this License; (ii) a later version of this License with the same License Elements as this License; (iii) a Creative Commons jurisdiction license (either this or a later license version) that contains the same License Elements as this License (e.g., Attribution-NonCommercial-ShareAlike 3.0 US) ("Applicable License"). You must include a copy of, or the URI, for Applicable License with every copy of each Adaptation You Distribute or Publicly Perform. 
You may not offer or impose any terms on the Adaptation that restrict the terms of the Applicable License or the ability of the recipient of the Adaptation to exercise the rights granted to that recipient under the terms of the Applicable License. You must keep intact all notices that refer to the Applicable License and to the disclaimer of warranties with every copy of the Work as included in the Adaptation You Distribute or Publicly Perform. When You Distribute or Publicly Perform the Adaptation, You may not impose any effective technological measures on the Adaptation that restrict the ability of a recipient of the Adaptation from You to exercise the rights granted to that recipient under the terms of the Applicable License. This Section 4(b) applies to the Adaptation as incorporated in a Collection, but this does not require the Collection apart from the Adaptation itself to be made subject to the terms of the Applicable License. -You may not exercise any of the rights granted to You in Section 3 above in any manner that is primarily intended for or directed toward commercial advantage or private monetary compensation. The exchange of the Work for other copyrighted works by means of digital file-sharing or otherwise shall not be considered to be intended for or directed toward commercial advantage or private monetary compensation, provided there is no payment of any monetary compensation in con-nection with the exchange of copyrighted works. -If You Distribute, or Publicly Perform the Work or any Adaptations or Collections, You must, unless a request has been made pursuant to Section 4(a), keep intact all copyright notices for the Work and provide, reasonable to the medium or means You are utilizing: (i) the name of the Original Author (or pseudonym, if applicable) if supplied, and/or if the Original Author and/or Licensor designate another party or parties (e.g., a sponsor institute, publishing entity, journal) for attribution ("Attribution Parties") in Licensor's copyright notice, terms of service or by other reasonable means, the name of such party or parties; (ii) the title of the Work if supplied; (iii) to the extent reasonably practicable, the URI, if any, that Licensor specifies to be associated with the Work, unless such URI does not refer to the copyright notice or licensing information for the Work; and, (iv) consistent with Section 3(b), in the case of an Adaptation, a credit identifying the use of the Work in the Adaptation (e.g., "French translation of the Work by Original Author," or "Screenplay based on original Work by Original Author"). The credit required by this Section 4(d) may be implemented in any reasonable manner; provided, however, that in the case of a Adaptation or Collection, at a minimum such credit will appear, if a credit for all contributing authors of the Adaptation or Collection appears, then as part of these credits and in a manner at least as prominent as the credits for the other contributing authors. For the avoidance of doubt, You may only use the credit required by this Section for the purpose of attribution in the manner set out above and, by exercising Your rights under this License, You may not implicitly or explicitly assert or imply any connection with, sponsorship or endorsement by the Original Author, Licensor and/or Attribution Parties, as appropriate, of You or Your use of the Work, without the separate, express prior written permission of the Original Author, Licensor and/or Attribution Parties. 
-For the avoidance of doubt: - -Non-waivable Compulsory License Schemes. In those jurisdictions in which the right to collect royalties through any statutory or compulsory licensing scheme cannot be waived, the Licensor reserves the exclusive right to collect such royalties for any exercise by You of the rights granted under this License; -Waivable Compulsory License Schemes. In those jurisdictions in which the right to collect royalties through any statutory or compulsory licensing scheme can be waived, the Licensor reserves the exclusive right to collect such royalties for any exercise by You of the rights granted under this License if Your exercise of such rights is for a purpose or use which is otherwise than noncommercial as permitted under Section 4(c) and otherwise waives the right to collect royalties through any statutory or compulsory licensing scheme; and, -Voluntary License Schemes. The Licensor reserves the right to collect royalties, whether individually or, in the event that the Licensor is a member of a collecting society that administers voluntary licensing schemes, via that society, from any exercise by You of the rights granted under this License that is for a purpose or use which is otherwise than noncommercial as permitted under Section 4(c). -Except as otherwise agreed in writing by the Licensor or as may be otherwise permitted by applicable law, if You Reproduce, Distribute or Publicly Perform the Work either by itself or as part of any Adaptations or Collections, You must not distort, mutilate, modify or take other derogatory action in relation to the Work which would be prejudicial to the Original Author's honor or reputation. Licensor agrees that in those jurisdictions (e.g. Japan), in which any exercise of the right granted in Section 3(b) of this License (the right to make Adaptations) would be deemed to be a distortion, mutilation, modification or other derogatory action prejudicial to the Original Author's honor and reputation, the Licensor will waive or not assert, as appropriate, this Section, to the fullest extent permitted by the applicable national law, to enable You to reasonably exercise Your right under Section 3(b) of this License (right to make Adaptations) but not otherwise. -5. Representations, Warranties and Disclaimer - -UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING AND TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW, LICENSOR OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE, INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS, WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THIS EXCLUSION MAY NOT APPLY TO YOU. - -6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. - -7. Termination - -This License and the rights granted hereunder will terminate automatically upon any breach by You of the terms of this License. 
Individuals or entities who have received Adaptations or Collections from You under this License, however, will not have their licenses terminated provided such individuals or entities remain in full compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will survive any termination of this License. -Subject to the above terms and conditions, the license granted here is perpetual (for the duration of the applicable copyright in the Work). Notwithstanding the above, Licensor reserves the right to release the Work under different license terms or to stop distributing the Work at any time; provided, however that any such election will not serve to withdraw this License (or any other license that has been, or is required to be, granted under the terms of this License), and this License will continue in full force and effect unless terminated as stated above. -8. Miscellaneous - -Each time You Distribute or Publicly Perform the Work or a Collection, the Licensor offers to the recipient a license to the Work on the same terms and conditions as the license granted to You under this License. -Each time You Distribute or Publicly Perform an Adaptation, Licensor offers to the recipient a license to the original Work on the same terms and conditions as the license granted to You under this License. -If any provision of this License is invalid or unenforceable under applicable law, it shall not affect the validity or enforceability of the remainder of the terms of this License, and without further action by the parties to this agreement, such provision shall be reformed to the minimum extent necessary to make such provision valid and enforceable. -No term or provision of this License shall be deemed waived and no breach consented to unless such waiver or consent shall be in writing and signed by the party to be charged with such waiver or consent. -This License constitutes the entire agreement between the parties with respect to the Work licensed here. There are no understandings, agreements or representations with respect to the Work not specified here. Licensor shall not be bound by any additional provisions that may appear in any communication from You. This License may not be modified without the mutual written agreement of the Licensor and You. -The rights granted under, and the subject matter referenced, in this License were drafted utilizing the terminology of the Berne Convention for the Protection of Literary and Artistic Works (as amended on September 28, 1979), the Rome Convention of 1961, the WIPO Copyright Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996 and the Universal Copyright Convention (as revised on July 24, 1971). These rights and subject matter take effect in the relevant jurisdiction in which the License terms are sought to be enforced according to the corresponding provisions of the implementation of those treaty provisions in the applicable national law. If the standard suite of rights granted under applicable copyright law includes additional rights not granted under this License, such additional rights are deemed to be included in the License; this License is not intended to restrict the license of any rights under applicable law. \ No newline at end of file diff --git a/README.rst b/README.rst index 3d4a3997..cd0f8453 100644 --- a/README.rst +++ b/README.rst @@ -1,67 +1,13 @@ -OpenStack Operations Guide -++++++++++++++++++++++++++ +This project is no longer maintained. 
-This content is read-only now; any changes to the operations guide will be
-made in the openstack-manuals repository, where the guide lives in doc/ops-guide.
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
-This repository contains the source files for the OpenStack Operations Guide.
+The content has been merged into the openstack-manuals repository at
+http://git.openstack.org/cgit/openstack/openstack-manuals/
-You can read this guide at `docs.openstack.org/ops <http://docs.openstack.org/ops/>`_.
-
-It was originally authored during a book sprint in February 2013. Read more
-about Book Sprints at http://www.booksprints.net.
-
-Additionally, a tools directory contains tools for testing this guide.
-
-Prerequisites
-=============
-
-`Apache Maven <http://maven.apache.org/>`_ must be installed to build the
-documentation.
-
-To install Maven 3 for Ubuntu 12.04 and later, and Debian wheezy and later::
-
- apt-get install maven
-
-On Fedora 20 and later::
-
- yum install maven
-
-Contributing
-============
-
-This book is undergoing a custom edit with O'Reilly publishing, and we welcome
-contributions to make it as accurate as possible. Our target is the Havana release.
-
-The style guide to follow is at `chimera.labs.oreilly.com <http://chimera.labs.oreilly.com/>`_.
-
-Our community welcomes all people interested in open source cloud computing,
-and encourages you to join the `OpenStack Foundation <http://www.openstack.org/join>`_.
-The best way to get involved with the community is to talk with others online
-or at a meetup and offer contributions through our processes, the `OpenStack
-wiki <https://wiki.openstack.org/>`_, blogs, or on IRC at ``#openstack``
-on ``irc.freenode.net``.
-
-Testing of changes and building of the manual
-=============================================
-
-Install the Python tox package and run "tox" from the top-level
-directory to use the same tests that are done as part of our Jenkins
-gating jobs.
-
-If you would like to run individual tests, run:
-
- * ``tox -e checkniceness`` - to run the niceness tests
- * ``tox -e checksyntax`` - to run syntax checks
- * ``tox -e checkdeletions`` - to check that no deleted files are referenced
- * ``tox -e checkbuild`` - to actually build the manual
- * ``tox -e buildlang -- $LANG`` - to build the manual for language $LANG
-
-tox will use the openstack-doc-tools package for execution of these
-tests.
-
-Installing OpenStack
-====================
-
-Refer to http://docs.openstack.org to see where these documents are published
-and to learn more about the OpenStack project.
+For any further questions, please email
+openstack-docs@lists.openstack.org or join #openstack-doc on
+Freenode.
diff --git a/doc-test.conf b/doc-test.conf
deleted file mode 100644
index 7960e193..00000000
--- a/doc-test.conf
+++ /dev/null
@@ -1,4 +0,0 @@
-[DEFAULT]
-repo_name = operations-guide
-
-#file_exception = st-training-guides.xml
diff --git a/doc-tools-check-languages.conf b/doc-tools-check-languages.conf
deleted file mode 100644
index 4e05662c..00000000
--- a/doc-tools-check-languages.conf
+++ /dev/null
@@ -1,26 +0,0 @@
-# Example configuration for the languages 'ja' and 'fr'.
-
-# Directories to set up
-declare -A DIRECTORIES=(
- ["ja"]="openstack-ops glossary"
-)
-
-# Books to build
-declare -A BOOKS=(
- ["ja"]="openstack-ops"
-)
-
-# Where does the top-level pom live?
-# Set to empty to not copy it.
-POM_FILE=""
-
-# Location of doc dir
-DOC_DIR="doc/"
-
-# Books with special handling
-# Values need to match content in project-config/jenkins/scripts/common_translation_update.sh
-declare -A SPECIAL_BOOKS=(
- ["ops-guide"]="skip"
- # These are translated in openstack-manuals
- ["common"]="skip"
-)
diff --git a/doc/common/README.txt b/doc/common/README.txt
deleted file mode 100644
index f46538ad..00000000
--- a/doc/common/README.txt
+++ /dev/null
@@ -1,7 +0,0 @@
-Important note about this directory
-===================================
-
-Because this directory is synced from openstack-manuals, make any changes in
-openstack-manuals/doc/common. After changes to the synced files merge to
-openstack-manuals/doc/common, a patch is automatically proposed for this
-directory.
diff --git a/doc/common/app_support.rst b/doc/common/app_support.rst
deleted file mode 100644
index 79ca3ad3..00000000
--- a/doc/common/app_support.rst
+++ /dev/null
@@ -1,256 +0,0 @@
-.. ## WARNING ##########################################################
-.. This file is synced from openstack/openstack-manuals repository to
-.. other related repositories. If you need to make changes to this file,
-.. make the changes in openstack-manuals. After any change merged to,
-.. openstack-manuals, automatically a patch for others will be proposed.
-.. #####################################################################
-
-=================
-Community support
-=================
-
-The following resources are available to help you run and use OpenStack.
-The OpenStack community constantly improves and adds to the main
-features of OpenStack, but if you have any questions, do not hesitate to
-ask. Use the following resources to get OpenStack support and
-troubleshoot your installations.
-
-Documentation
-~~~~~~~~~~~~~
-
-For the available OpenStack documentation, see
-`docs.openstack.org <http://docs.openstack.org>`__.
-
-To provide feedback on documentation, join and use the
-openstack-docs@lists.openstack.org mailing list at `OpenStack
-Documentation Mailing
-List `__,
-or `report a
-bug `__.
-
-The following books explain how to install an OpenStack cloud and its
-associated components:
-
-* `Installation Guide for openSUSE Leap 42.1 and SUSE Linux Enterprise
- Server 12 SP1
- `__
-
-* `Installation Guide for Red Hat Enterprise Linux 7 and CentOS 7
- `__
-
-* `Installation Guide for Ubuntu 14.04 (LTS)
- `__
-
-The following books explain how to configure and run an OpenStack cloud:
-
-* `Architecture Design Guide `__
-
-* `Administrator Guide `__
-
-* `Configuration Reference `__
-
-* `Operations Guide `__
-
-* `Networking Guide `__
-
-* `High Availability Guide `__
-
-* `Security Guide `__
-
-* `Virtual Machine Image Guide `__
-
-The following books explain how to use the OpenStack dashboard and
-command-line clients:
-
-* `API Guide `__
-
-* `End User Guide `__
-
-* `Command-Line Interface Reference
- `__
-
-The following documentation provides reference and guidance information
-for the OpenStack APIs:
-
-* `API Complete Reference
- (HTML) `__
-
-* `API Complete Reference
- (PDF) `__
-
-The following guide describes how to contribute to OpenStack documentation:
-
-* `Documentation Contributor Guide `__
-
-ask.openstack.org
-~~~~~~~~~~~~~~~~~
-
-During setup or testing of OpenStack, you might have questions
-about how a specific task is completed or be in a situation where a
-feature does not work correctly. Use the
-`ask.openstack.org `__ site to ask questions
-and get answers.
When you visit the https://ask.openstack.org site, scan
-the recently asked questions to see whether your question has already
-been answered. If not, ask a new question. Be sure to give a clear,
-concise summary in the title and provide as much detail as possible in
-the description. Paste in your command output or stack traces, links to
-screen shots, and any other information that might be useful.
-
-OpenStack mailing lists
-~~~~~~~~~~~~~~~~~~~~~~~
-
-A great way to get answers and insights is to post your question or
-problematic scenario to the OpenStack mailing list. You can learn from
-and help others who might have similar issues. To subscribe or view the
-archives, go to
-http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack. If you are
-interested in the other mailing lists for specific projects or development,
-refer to `Mailing Lists `__.
-
-The OpenStack wiki
-~~~~~~~~~~~~~~~~~~
-
-The `OpenStack wiki `__ contains a broad
-range of topics but some of the information can be difficult to find or
-is a few pages deep. Fortunately, the wiki search feature enables you to
-search by title or content. If you search for specific information, such
-as information about networking or OpenStack Compute, you can find a
-large amount of relevant material. More is being added all the time, so
-be sure to check back often. You can find the search box in the
-upper-right corner of any OpenStack wiki page.
-
-The Launchpad Bugs area
-~~~~~~~~~~~~~~~~~~~~~~~
-
-The OpenStack community values your setup and testing efforts and wants
-your feedback. To log a bug, you must sign up for a Launchpad account at
-https://launchpad.net/+login. You can view existing bugs and report bugs
-in the Launchpad Bugs area. Use the search feature to determine whether
-the bug has already been reported or already been fixed. If it still
-seems like your bug is unreported, fill out a bug report.
-
-Some tips:
-
-* Give a clear, concise summary.
-
-* Provide as much detail as possible in the description. Paste in your
- command output or stack traces, links to screen shots, and any other
- information that might be useful.
-
-* Be sure to include the software and package versions that you are
- using, especially if you are using a development branch, such as
- ``"Kilo release" vs git commit bc79c3ecc55929bac585d04a03475b72e06a3208``
- (a sketch of collecting this information follows this list).
-
-* Any deployment-specific information is helpful, such as whether you
- are using Ubuntu 14.04 or are performing a multi-node installation.
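As a minimal sketch of collecting the version details called for in the tips above (the package name and the source path are illustrative and vary by deployment):

.. code-block:: console

   # On a packaged installation, record the installed package versions:
   $ dpkg -l | grep nova

   # On a source-based deployment, record the exact commit instead:
   $ cd /opt/stack/nova && git log -1 --format='%H'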
- -The following Launchpad Bugs areas are available: - -* `Bugs: OpenStack Block Storage - (cinder) `__ - -* `Bugs: OpenStack Compute (nova) `__ - -* `Bugs: OpenStack Dashboard - (horizon) `__ - -* `Bugs: OpenStack Identity - (keystone) `__ - -* `Bugs: OpenStack Image service - (glance) `__ - -* `Bugs: OpenStack Networking - (neutron) `__ - -* `Bugs: OpenStack Object Storage - (swift) `__ - -* `Bugs: Application catalog (murano) `__ - -* `Bugs: Bare metal service (ironic) `__ - -* `Bugs: Clustering service (senlin) `__ - -* `Bugs: Containers service (magnum) `__ - -* `Bugs: Data processing service - (sahara) `__ - -* `Bugs: Database service (trove) `__ - -* `Bugs: Deployment service (fuel) `__ - -* `Bugs: DNS service (designate) `__ - -* `Bugs: Key Manager Service (barbican) `__ - -* `Bugs: Monitoring (monasca) `__ - -* `Bugs: Orchestration (heat) `__ - -* `Bugs: Rating (cloudkitty) `__ - -* `Bugs: Shared file systems (manila) `__ - -* `Bugs: Telemetry - (ceilometer) `__ - -* `Bugs: Telemetry v3 - (gnocchi) `__ - -* `Bugs: Workflow service - (mistral) `__ - -* `Bugs: Messaging service - (zaqar) `__ - -* `Bugs: OpenStack API Documentation - (developer.openstack.org) `__ - -* `Bugs: OpenStack Documentation - (docs.openstack.org) `__ - -The OpenStack IRC channel -~~~~~~~~~~~~~~~~~~~~~~~~~ - -The OpenStack community lives in the #openstack IRC channel on the -Freenode network. You can hang out, ask questions, or get immediate -feedback for urgent and pressing issues. To install an IRC client or use -a browser-based client, go to -`https://webchat.freenode.net/ `__. You can -also use Colloquy (Mac OS X, http://colloquy.info/), mIRC (Windows, -http://www.mirc.com/), or XChat (Linux). When you are in the IRC channel -and want to share code or command output, the generally accepted method -is to use a Paste Bin. The OpenStack project has one at -http://paste.openstack.org. Just paste your longer amounts of text or -logs in the web form and you get a URL that you can paste into the -channel. The OpenStack IRC channel is ``#openstack`` on -``irc.freenode.net``. You can find a list of all OpenStack IRC channels -at https://wiki.openstack.org/wiki/IRC. - -Documentation feedback -~~~~~~~~~~~~~~~~~~~~~~ - -To provide feedback on documentation, join and use the -openstack-docs@lists.openstack.org mailing list at `OpenStack -Documentation Mailing -List `__, -or `report a -bug `__. - -OpenStack distribution packages -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The following Linux distributions provide community-supported packages -for OpenStack: - -* **Debian:** https://wiki.debian.org/OpenStack - -* **CentOS, Fedora, and Red Hat Enterprise Linux:** - https://www.rdoproject.org/ - -* **openSUSE and SUSE Linux Enterprise Server:** - https://en.opensuse.org/Portal:OpenStack - -* **Ubuntu:** https://wiki.ubuntu.com/ServerTeam/CloudArchive diff --git a/doc/common/conventions.rst b/doc/common/conventions.rst deleted file mode 100644 index b3cbabb2..00000000 --- a/doc/common/conventions.rst +++ /dev/null @@ -1,47 +0,0 @@ -.. ## WARNING ########################################################## -.. This file is synced from openstack/openstack-manuals repository to -.. other related repositories. If you need to make changes to this file, -.. make the changes in openstack-manuals. After any change merged to, -.. openstack-manuals, automatically a patch for others will be proposed. -.. 
##################################################################### - -=========== -Conventions -=========== - -The OpenStack documentation uses several typesetting conventions. - -Notices -~~~~~~~ - -Notices take these forms: - -.. note:: A comment with additional information that explains a part of the - text. - -.. important:: Something you must be aware of before proceeding. - -.. tip:: An extra but helpful piece of practical advice. - -.. caution:: Helpful information that prevents the user from making mistakes. - -.. warning:: Critical information about the risk of data loss or security - issues. - -Command prompts -~~~~~~~~~~~~~~~ - -.. code-block:: console - - $ command - -Any user, including the ``root`` user, can run commands that are -prefixed with the ``$`` prompt. - -.. code-block:: console - - # command - -The ``root`` user must run commands that are prefixed with the ``#`` -prompt. You can also prefix these commands with the :command:`sudo` -command, if available, to run them. diff --git a/doc/common/glossary.rst b/doc/common/glossary.rst deleted file mode 100644 index c53c4160..00000000 --- a/doc/common/glossary.rst +++ /dev/null @@ -1,3950 +0,0 @@ -======== -Glossary -======== - -.. comments - This file is automatically generated, edit the master doc/glossary/glossary-terms.xml to update it. - -This glossary offers a list of terms and definitions to define a -vocabulary for OpenStack-related concepts. - -To add to OpenStack glossary, clone the `openstack/openstack-manuals repository `__ and update the source file -``doc/glossary/glossary-terms.xml`` through the -OpenStack contribution process. - -.. glossary:: - - 6to4 - - A mechanism that allows IPv6 packets to be transmitted - over an IPv4 network, providing a strategy for migrating to - IPv6. - - absolute limit - - Impassable limits for guest VMs. Settings include total RAM - size, maximum number of vCPUs, and maximum disk size. - - access control list - - A list of permissions attached to an object. An ACL specifies - which users or system processes have access to objects. It also - defines which operations can be performed on specified objects. Each - entry in a typical ACL specifies a subject and an operation. For - instance, the ACL entry ``(Alice, delete)`` for a file gives - Alice permission to delete the file. - - access key - - Alternative term for an Amazon EC2 access key. See EC2 access - key. - - account - - The Object Storage context of an account. Do not confuse with a - user account from an authentication service, such as Active Directory, - /etc/passwd, OpenLDAP, OpenStack Identity, and so on. - - account auditor - - Checks for missing replicas and incorrect or corrupted objects - in a specified Object Storage account by running queries against the - back-end SQLite database. - - account database - - A SQLite database that contains Object Storage accounts and - related metadata and that the accounts server accesses. - - account reaper - - An Object Storage worker that scans for and deletes account - databases and that the account server has marked for deletion. - - account server - - Lists containers in Object Storage and stores container - information in the account database. - - account service - - An Object Storage component that provides account services such - as list, create, modify, and audit. Do not confuse with OpenStack - Identity service, OpenLDAP, or similar user-account services. 
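To make the account-related entries above (account server, account service, account auditor) concrete, here is a brief sketch using the python-swiftclient CLI, assuming the client is installed and authentication variables are exported:

.. code-block:: console

   # Summarize the account: container count, object count, and bytes used
   $ swift stat

   # List the containers tracked for this account
   $ swift list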
- - accounting - - The Compute service provides accounting information through the - event notification and system usage data facilities. - - ACL - - See access control list. - - active/active configuration - - In a high-availability setup with an active/active - configuration, several systems share the load together and if one - fails, the load is distributed to the remaining systems. - - Active Directory - - Authentication and identity service by Microsoft, based on LDAP. - Supported in OpenStack. - - active/passive configuration - - In a high-availability setup with an active/passive - configuration, systems are set up to bring additional resources online - to replace those that have failed. - - address pool - - A group of fixed and/or floating IP addresses that are assigned - to a project and can be used by or assigned to the VM instances in a - project. - - admin API - - A subset of API calls that are accessible to authorized - administrators and are generally not accessible to end users or the - public Internet. They can exist as a separate service (keystone) or - can be a subset of another API (nova). - - administrator - - The person responsible for installing, configuring, - and managing an OpenStack cloud. - - admin server - - In the context of the Identity service, the worker process that - provides access to the admin API. - - Advanced Message Queuing Protocol (AMQP) - - The open standard messaging protocol used by OpenStack - components for intra-service communications, provided by RabbitMQ, - Qpid, or ZeroMQ. - - Advanced RISC Machine (ARM) - - Lower power consumption CPU often found in mobile and embedded - devices. Supported by OpenStack. - - alert - - The Compute service can send alerts through its notification - system, which includes a facility to create custom notification - drivers. Alerts can be sent to and displayed on the horizon - dashboard. - - allocate - - The process of taking a floating IP address from the address - pool so it can be associated with a fixed IP on a guest VM - instance. - - Amazon Kernel Image (AKI) - - Both a VM container format and disk format. Supported by Image - service. - - Amazon Machine Image (AMI) - - Both a VM container format and disk format. Supported by Image - service. - - Amazon Ramdisk Image (ARI) - - Both a VM container format and disk format. Supported by Image - service. - - Anvil - - A project that ports the shell script-based project named - DevStack to Python. - - Apache - - The Apache Software Foundation supports the Apache community of - open-source software projects. These projects provide software - products for the public good. - - Apache License 2.0 - - All OpenStack core projects are provided under the terms of the - Apache License 2.0 license. - - Apache Web Server - - The most common web server software currently used on the - Internet. - - API endpoint - - The daemon, worker, or service that a client communicates with - to access an API. API endpoints can provide any number of services, - such as authentication, sales data, performance meters, Compute VM - commands, census data, and so on. - - API extension - - Custom modules that extend some OpenStack core APIs. - - API extension plug-in - - Alternative term for a Networking plug-in or Networking API - extension. - - API key - - Alternative term for an API token. - - API server - - Any node running a daemon or worker that provides an API - endpoint. 
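As an illustration of the ``API endpoint`` and ``API server`` entries above, a client typically presents a token to an endpoint over HTTP. A hedged sketch with curl follows; the controller host name and the ``$OS_TOKEN`` variable are placeholders:

.. code-block:: console

   # List servers by calling the Compute API endpoint directly;
   # the X-Auth-Token header carries the token issued by the Identity service.
   $ curl -s -H "X-Auth-Token: $OS_TOKEN" \
     http://controller:8774/v2.1/servers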
-
- API token
-
- Passed to API requests and used by OpenStack to verify that the
- client is authorized to run the requested operation.
-
- API version
-
- In OpenStack, the API version for a project is part of the URL.
- For example, ``example.com/nova/v1/foobar``.
-
- applet
-
- A Java program that can be embedded into a web page.
-
- Application Programming Interface (API)
-
- A collection of specifications used to access a service,
- application, or program. Includes service calls, required parameters
- for each call, and the expected return values.
-
- Application Catalog service
-
- OpenStack project that provides an application catalog
- service so that users can compose and deploy composite
- environments on an application abstraction level while
- managing the application lifecycle. The code name of the
- project is murano.
-
- application server
-
- A piece of software that makes available another piece of
- software over a network.
-
- Application Service Provider (ASP)
-
- Companies that rent specialized applications that help
- businesses and organizations provide additional services
- with lower cost.
-
- Address Resolution Protocol (ARP)
-
- The protocol by which layer-3 IP addresses are resolved into
- layer-2 link local addresses.
-
- arptables
-
- Tool used for maintaining Address Resolution Protocol packet
- filter rules in the Linux kernel firewall modules. Used along with
- iptables, ebtables, and ip6tables in Compute to provide firewall
- services for VMs.
-
- associate
-
- The process of associating a Compute floating IP address with a
- fixed IP address.
-
- Asynchronous JavaScript and XML (AJAX)
-
- A group of interrelated web development techniques used on the
- client side to create asynchronous web applications. Used extensively
- in horizon.
-
- ATA over Ethernet (AoE)
-
- A disk storage protocol tunneled within Ethernet.
-
- attach
-
- The process of connecting a VIF or vNIC to an L2 network in
- Networking. In the context of Compute, this process connects a storage
- volume to an instance.
-
- attachment (network)
-
- Association of an interface ID to a logical port. Plugs an
- interface into a port.
-
- auditing
-
- Provided in Compute through the system usage data
- facility.
-
- auditor
-
- A worker process that verifies the integrity of Object Storage
- objects, containers, and accounts. Auditors is the collective term for
- the Object Storage account auditor, container auditor, and object
- auditor.
-
- Austin
-
- The code name for the initial release of
- OpenStack. The first design summit took place in
- Austin, Texas, US.
-
- auth node
-
- Alternative term for an Object Storage authorization
- node.
-
- authentication
-
- The process that confirms that the user, process, or client is
- really who they say they are through private key, secret token,
- password, fingerprint, or similar method.
-
- authentication token
-
- A string of text provided to the client after authentication.
- Must be provided by the user or process in subsequent requests to the
- API endpoint.
-
- AuthN
-
- The Identity service component that provides authentication
- services.
-
- authorization
-
- The act of verifying that a user, process, or client is
- authorized to perform an action.
-
- authorization node
-
- An Object Storage node that provides authorization
- services.
-
- AuthZ
-
- The Identity component that provides high-level
- authorization services.
-
- Auto ACK
-
- Configuration setting within RabbitMQ that enables or disables
- message acknowledgment.
Enabled by default.
-
- auto declare
-
- A Compute RabbitMQ setting that determines whether a message
- exchange is automatically created when the program starts.
-
- availability zone
-
- An Amazon EC2 concept of an isolated area that is used for fault
- tolerance. Do not confuse with an OpenStack Compute zone or
- cell.
-
- AWS
-
- Amazon Web Services.
-
- AWS CloudFormation template
-
- AWS CloudFormation allows AWS users to create and manage a
- collection of related resources. The Orchestration service
- supports a CloudFormation-compatible format (CFN).
-
- back end
-
- Interactions and processes that are obfuscated from the user,
- such as Compute volume mount, data transmission to an iSCSI target by
- a daemon, or Object Storage object integrity checks.
-
- back-end catalog
-
- The storage method used by the Identity service catalog service
- to store and retrieve information about API endpoints that are
- available to the client. Examples include an SQL database, LDAP
- database, or KVS back end.
-
- back-end store
-
- The persistent data store used to save and retrieve information
- for a service, such as lists of Object Storage objects, current state
- of guest VMs, lists of user names, and so on. Also, the method that the
- Image service uses to get and store VM images. Options include Object
- Storage, local file system, S3, and HTTP.
-
- backup restore and disaster recovery as a service
-
- The OpenStack project that provides integrated tooling for
- backing up, restoring, and recovering file systems,
- instances, or database backups. The project name is freezer.
-
- bandwidth
-
- The amount of available data used by communication resources,
- such as the Internet. Represents the amount of data that is used to
- download things or the amount of data available to download.
-
- barbican
-
- Code name of the key management service for OpenStack.
-
- bare
-
- An Image service container format that indicates that no
- container exists for the VM image.
-
- Bare Metal service
-
- OpenStack project that provisions bare metal, as opposed to
- virtual, machines. The code name for the project is ironic.
-
- base image
-
- An OpenStack-provided image.
-
- Bell-LaPadula model
-
- A security model that focuses on data confidentiality
- and controlled access to classified information.
- This model divides the entities into subjects and objects.
- The clearance of a subject is compared to the classification of the
- object to determine if the subject is authorized for the specific access mode.
- The clearance or classification scheme is expressed in terms of a lattice.
-
- Benchmark service
-
- OpenStack project that provides a framework for
- performance analysis and benchmarking of individual
- OpenStack components as well as full production OpenStack
- cloud deployments. The code name of the project is rally.
-
- Bexar
-
- A grouped release of projects related to
- OpenStack that came out in February of 2011. It
- included only Compute (nova) and Object Storage (swift).
- Bexar is the code name for the second release of
- OpenStack. The design summit took place in
- San Antonio, Texas, US, which is the county seat for Bexar County.
-
- binary
-
- Information that consists solely of ones and zeroes, which is
- the language of computers.
-
- bit
-
- A bit is a single-digit number in base 2 (either a
- zero or a one). Bandwidth usage is measured in bits per second.
-
- bits per second (BPS)
-
- The universal measurement of how quickly data is transferred
- from place to place.
- - block device - - A device that moves data in the form of blocks. These device - nodes interface the devices, such as hard disks, CD-ROM drives, flash - drives, and other addressable regions of memory. - - block migration - - A method of VM live migration used by KVM to evacuate instances - from one host to another with very little downtime during a - user-initiated switchover. Does not require shared storage. Supported - by Compute. - - Block Storage service - - The OpenStack core project that enables management of volumes, - volume snapshots, and volume types. The project name of Block Storage - is cinder. - - Block Storage API - - An API on a separate endpoint for attaching, - detaching, and creating block storage for compute - VMs. - - BMC - - Baseboard Management Controller. The intelligence in the IPMI - architecture, which is a specialized micro-controller that is embedded - on the motherboard of a computer and acts as a server. Manages the - interface between system management software and platform - hardware. - - bootable disk image - - A type of VM image that exists as a single, bootable - file. - - Bootstrap Protocol (BOOTP) - - A network protocol used by a network client to obtain an IP - address from a configuration server. Provided in Compute through the - dnsmasq daemon when using either the FlatDHCP manager or VLAN manager - network manager. - - Border Gateway Protocol (BGP) - - The Border Gateway Protocol is a dynamic routing protocol - that connects autonomous systems. Considered the - backbone of the Internet, this protocol connects disparate - networks to form a larger network. - - browser - - Any client software that enables a computer or device to access - the Internet. - - builder file - - Contains configuration information that Object Storage uses to - reconfigure a ring or to re-create it from scratch after a serious - failure. - - bursting - - The practice of utilizing a secondary environment to - elastically build instances on-demand when the primary - environment is resource constrained. - - button class - - A group of related button types within horizon. Buttons to - start, stop, and suspend VMs are in one class. Buttons to associate - and disassociate floating IP addresses are in another class, and so - on. - - byte - - Set of bits that make up a single character; there are usually 8 - bits to a byte. - - CA - - Certificate Authority or Certification Authority. In - cryptography, an entity that issues digital certificates. The digital - certificate certifies the ownership of a public key by the named - subject of the certificate. This enables others (relying parties) to - rely upon signatures or assertions made by the private key that - corresponds to the certified public key. In this model of trust - relationships, a CA is a trusted third party for both the subject - (owner) of the certificate and the party relying upon the certificate. - CAs are characteristic of many public key infrastructure (PKI) - schemes. - - cache pruner - - A program that keeps the Image service VM image cache at or - below its configured maximum size. - - Cactus - - An OpenStack grouped release of projects that came out in the - spring of 2011. It included Compute (nova), Object Storage (swift), - and the Image service (glance). - Cactus is a city in Texas, US and is the code name for - the third release of OpenStack. When OpenStack releases went - from three to six months long, the code name of the release - changed to match a geography nearest the previous - summit. 
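As a brief illustration of the ``Block Storage API`` entry above, creating a volume and attaching it to an instance might look like the following sketch; the volume and server names and the size are placeholders:

.. code-block:: console

   # Create a 1 GB volume through the Block Storage API
   $ openstack volume create --size 1 demo-volume

   # Attach the volume to a running instance
   $ openstack server add volume demo-instance demo-volume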
- - CADF - - Cloud Auditing Data Federation (CADF) is a - specification for audit event data. CADF is - supported by OpenStack Identity. - - CALL - - One of the RPC primitives used by the OpenStack message queue - software. Sends a message and waits for a response. - - capability - - Defines resources for a cell, including CPU, storage, and - networking. Can apply to the specific services within a cell or a - whole cell. - - capacity cache - - A Compute back-end database table that contains the current - workload, amount of free RAM, and number of VMs running on each host. - Used to determine on which host a VM starts. - - capacity updater - - A notification driver that monitors VM instances and updates the - capacity cache as needed. - - CAST - - One of the RPC primitives used by the OpenStack message queue - software. Sends a message and does not wait for a response. - - catalog - - A list of API endpoints that are available to a user after - authentication with the Identity service. - - catalog service - - An Identity service that lists API endpoints that are available - to a user after authentication with the Identity service. - - ceilometer - - The project name for the Telemetry service, which is an - integrated project that provides metering and measuring facilities for - OpenStack. - - cell - - Provides logical partitioning of Compute resources in a child - and parent relationship. Requests are passed from parent cells to - child cells if the parent cannot provide the requested - resource. - - cell forwarding - - A Compute option that enables parent cells to pass resource - requests to child cells if the parent cannot provide the requested - resource. - - cell manager - - The Compute component that contains a list of the current - capabilities of each host within the cell and routes requests as - appropriate. - - CentOS - - A Linux distribution that is compatible with OpenStack. - - Ceph - - Massively scalable distributed storage system that consists of - an object store, block store, and POSIX-compatible distributed file - system. Compatible with OpenStack. - - CephFS - - The POSIX-compliant file system provided by Ceph. - - certificate authority - - A simple certificate authority provided by Compute for cloudpipe - VPNs and VM image decryption. - - Challenge-Handshake Authentication Protocol (CHAP) - - An iSCSI authentication method supported by Compute. - - chance scheduler - - A scheduling method used by Compute that randomly chooses an - available host from the pool. - - changes since - - A Compute API parameter that downloads changes to the requested - item since your last request, instead of downloading a new, fresh set - of data and comparing it against the old data. - - Chef - - An operating system configuration management tool supporting - OpenStack deployments. - - child cell - - If a requested resource such as CPU time, disk storage, or - memory is not available in the parent cell, the request is forwarded - to its associated child cells. If the child cell can fulfill the - request, it does. Otherwise, it attempts to pass the request to any of - its children. - - cinder - - A core OpenStack project that provides block storage services - for VMs. - - CirrOS - - A minimal Linux distribution designed for use as a test - image on clouds such as OpenStack. - - Cisco neutron plug-in - - A Networking plug-in for Cisco devices and technologies, - including UCS and Nexus. - - cloud architect - - A person who plans, designs, and oversees the creation of - clouds. 
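To illustrate the ``catalog`` and ``catalog service`` entries above, an authenticated user can list the API endpoints that the Identity service advertises (a sketch; the output depends on the deployment):

.. code-block:: console

   # Show the service catalog returned after authentication
   $ openstack catalog list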
-
- cloud computing
-
- A model that enables access to a shared pool of configurable
- computing resources, such as networks, servers, storage, applications,
- and services, that can be rapidly provisioned and released with
- minimal management effort or service provider interaction.
-
- cloud controller
-
- Collection of Compute components that represent the global state
- of the cloud; talks to services, such as Identity authentication,
- Object Storage, and node/storage workers through a
- queue.
-
- cloud controller node
-
- A node that runs network, volume, API, scheduler, and image
- services. Each service may be broken out into separate nodes for
- scalability or availability.
-
- Cloud Data Management Interface (CDMI)
-
- SNIA standard that defines a RESTful API for managing objects in
- the cloud, currently unsupported in OpenStack.
-
- Cloud Infrastructure Management Interface (CIMI)
-
- An in-progress specification for cloud management. Currently
- unsupported in OpenStack.
-
- cloud-init
-
- A package commonly installed in VM images that performs
- initialization of an instance after boot using information that it
- retrieves from the metadata service, such as the SSH public key and
- user data.
-
- cloudadmin
-
- One of the default roles in the Compute RBAC system. Grants
- complete system access.
-
- Cloudbase-Init
-
- A Windows project providing guest initialization features,
- similar to cloud-init.
-
- cloudpipe
-
- A compute service that creates VPNs on a per-project
- basis.
-
- cloudpipe image
-
- A pre-made VM image that serves as a cloudpipe server.
- Essentially, OpenVPN running on Linux.
-
- Clustering service
-
- The OpenStack project that implements
- clustering services and libraries for the management of
- groups of homogeneous objects exposed by other OpenStack
- services. The project name of the Clustering service is
- senlin.
-
- CMDB
-
- Configuration Management Database.
-
- congress
-
- OpenStack project that provides the Governance service.
-
- command filter
-
- Lists allowed commands within the Compute rootwrap
- facility.
-
- Common Internet File System (CIFS)
-
- A file sharing protocol. It is a public or open variation of the
- original Server Message Block (SMB) protocol developed and used by
- Microsoft. Like the SMB protocol, CIFS runs at a higher level and uses
- the TCP/IP protocol.
-
- community project
-
- A project that is not officially endorsed by the OpenStack
- Foundation. If the project is successful enough, it might be elevated
- to an incubated project and then to a core project, or it might be
- merged with the main code trunk.
-
- compression
-
- Reducing the size of files by special encoding; the file can be
- decompressed again to its original content. OpenStack supports
- compression at the Linux file system level but does not support
- compression for things such as Object Storage objects or Image service
- VM images.
-
- Compute service
-
- The OpenStack core project that provides compute services. The
- project name of Compute service is nova.
-
- Compute API
-
- The nova-api daemon
- provides access to nova services. Can communicate with other APIs,
- such as the Amazon EC2 API.
-
- compute controller
-
- The Compute component that chooses suitable hosts on which to
- start VM instances.
-
- compute host
-
- Physical host dedicated to running compute nodes.
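As a hedged sketch of the ``cloud-init`` entry above, user data can be supplied at boot time and is fetched from the metadata service during initialization; the file name, image, flavor, and server name are placeholders:

.. code-block:: console

   $ cat > user-data.yaml <<'EOF'
   #cloud-config
   packages:
     - htop
   runcmd:
     - echo 'initialized by cloud-init' > /etc/motd
   EOF

   # Boot an instance that consumes the user data at first boot
   $ openstack server create --image cirros --flavor m1.tiny \
     --user-data user-data.yaml demo-instance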
- - compute node - - A node that runs the nova-compute daemon, which manages the VM instances that provide a wide range of services, such as web applications and analytics. - - Compute service - - Name for the Compute component that manages VMs. - - compute worker - - The Compute component that runs on each compute node and manages the VM instance lifecycle, including run, reboot, terminate, attach/detach volumes, and so on. Provided by the nova-compute daemon. - - concatenated object - - A set of segment objects that Object Storage combines and sends to the client. - - conductor - - In Compute, conductor is the process that proxies database requests from the compute process. Using conductor improves security because compute nodes do not need direct access to the database. - - consistency window - - The amount of time it takes for a new Object Storage object to become accessible to all clients. - - console log - - Contains the output from a Linux VM console in Compute. - - container - - Organizes and stores objects in Object Storage. Similar to the concept of a Linux directory but cannot be nested. Alternative term for an Image service container format. - - container auditor - - Checks for missing replicas or incorrect objects in specified Object Storage containers through queries to the SQLite back-end database. - - container database - - A SQLite database that stores Object Storage containers and container metadata. The container server accesses this database. - - container format - - A wrapper used by the Image service that contains a VM image and its associated metadata, such as machine state, OS disk size, and so on. - - container server - - An Object Storage server that manages containers. - - Containers service - - OpenStack project that provides a set of services for management of application containers in a multi-tenant cloud environment. The code name of the project is magnum. - - container service - - The Object Storage component that provides container services, such as create, delete, list, and so on. - - content delivery network (CDN) - - A content delivery network is a specialized network that is used to distribute content to clients, typically located close to the client for increased performance. - - controller node - - Alternative term for a cloud controller node. - - core API - - Depending on context, the core API is either the OpenStack API or the main API of a specific core project, such as Compute, Networking, Image service, and so on. - - core service - - An official OpenStack service defined as core by the DefCore Committee. It currently consists of the Block Storage service (cinder), Compute service (nova), Identity service (keystone), Image service (glance), Networking service (neutron), and Object Storage service (swift). - - cost - - Under the Compute distributed scheduler, this is calculated by looking at the capabilities of each host relative to the flavor of the VM instance being requested. - - credentials - - Data that is only known to or accessible by a user and used to verify that the user is who they say they are. Credentials are presented to the server during authentication. Examples include a password, secret key, digital certificate, and fingerprint. - - Cross-Origin Resource Sharing (CORS) - - A mechanism that allows many resources (for example, fonts, JavaScript) on a web page to be requested from another domain outside the domain from which the resource originated.
In particular, JavaScript's AJAX calls can use the XMLHttpRequest mechanism. - - Crowbar - - An open source community project by Dell that aims to provide all necessary services to quickly deploy clouds. - - current workload - - An element of the Compute capacity cache that is calculated based on the number of build, snapshot, migrate, and resize operations currently in progress on a given host. - - customer - - Alternative term for tenant. - - customization module - - A user-created Python module that is loaded by horizon to change the look and feel of the dashboard. - - daemon - - A process that runs in the background and waits for requests. May or may not listen on a TCP or UDP port. Do not confuse with a worker. - - DAC - - Discretionary access control. Governs the ability of subjects to access objects, while enabling users to make policy decisions and assign security attributes. The traditional UNIX system of users, groups, and read-write-execute permissions is an example of DAC. - - Dashboard - - The web-based management interface for OpenStack. An alternative name for horizon. - - data encryption - - Both Image service and Compute support encrypted virtual machine (VM) images (but not instances). In-transit data encryption is supported in OpenStack using technologies such as HTTPS, SSL, TLS, and SSH. Object Storage does not support object encryption at the application level but may support storage that uses disk encryption. - - database ID - - A unique ID given to each replica of an Object Storage database. - - database replicator - - An Object Storage component that copies changes in the account, container, and object databases to other nodes. - - Database service - - An integrated project that provides scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines. The project name of the Database service is trove. - - Data Processing service - - OpenStack project that provides a scalable data-processing stack and associated management interfaces. The code name for the project is sahara. - - data store - - A database engine supported by the Database service. - - deallocate - - The process of removing the association between a floating IP address and a fixed IP address. Once this association is removed, the floating IP returns to the address pool. - - Debian - - A Linux distribution that is compatible with OpenStack. - - deduplication - - The process of finding duplicate data at the disk block, file, and/or object level to minimize storage use; currently unsupported within OpenStack. - - default panel - - The default panel that is displayed when a user accesses the horizon dashboard. - - default tenant - - New users are assigned to this tenant if no tenant is specified when a user is created. - - default token - - An Identity service token that is not associated with a specific tenant and is exchanged for a scoped token (see the example below). - - delayed delete - - An option within Image service so that an image is deleted after a predefined number of seconds instead of immediately. - - delivery mode - - Setting for the Compute RabbitMQ message delivery mode; can be set to either transient or persistent. - - denial of service (DoS) - - Denial of service (DoS) is a short form for denial-of-service attack. This is a malicious attempt to prevent legitimate users from using a service.
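The default token and scoped token entries above map directly onto how clients authenticate. A minimal sketch with keystoneauth1 follows; the URL, user, and project values are placeholders.

.. code-block:: python

   from keystoneauth1.identity import v3
   from keystoneauth1 import session

   # Scoping the authentication to a project yields a scoped token.
   auth = v3.Password(
       auth_url='http://controller:5000/v3',
       username='demo',
       password='secret',
       user_domain_name='Default',
       project_name='demo',
       project_domain_name='Default',
   )
   sess = session.Session(auth=auth)
   print(sess.get_token())  # the scoped token ID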
- - deprecated auth - - An option within Compute that enables administrators to create and manage users through the ``nova-manage`` command as opposed to using the Identity service. - - designate - - Code name for the DNS service project for OpenStack. - - Desktop-as-a-Service - - A platform that provides a suite of desktop environments that users access to receive a desktop experience from any location. This may provide general use, development, or even homogeneous testing environments. - - developer - - One of the default roles in the Compute RBAC system and the default role assigned to a new user. - - device ID - - Maps Object Storage partitions to physical storage devices. - - device weight - - Distributes partitions proportionately across Object Storage devices based on the storage capacity of each device. - - DevStack - - Community project that uses shell scripts to quickly build complete OpenStack development environments. - - DHCP - - Dynamic Host Configuration Protocol. A network protocol that configures devices that are connected to a network so that they can communicate on that network by using the Internet Protocol (IP). The protocol is implemented in a client-server model where DHCP clients request configuration data, such as an IP address, a default route, and one or more DNS server addresses from a DHCP server. - - DHCP agent - - OpenStack Networking agent that provides DHCP services for virtual networks. - - Diablo - - The code name for the fourth release of OpenStack, a grouped release of related projects that came out in the fall of 2011. It included Compute (nova 2011.3), Object Storage (swift 1.4.3), and the Image service (glance). The design summit took place in the Bay Area near Santa Clara, California, US; Diablo is a nearby city. - - direct consumer - - An element of the Compute RabbitMQ that comes to life when an RPC call is executed. It connects to a direct exchange through a unique exclusive queue, sends the message, and terminates. - - direct exchange - - A routing table that is created within the Compute RabbitMQ during RPC calls; one is created for each RPC call that is invoked. - - direct publisher - - Element of RabbitMQ that provides a response to an incoming MQ message. - - disassociate - - The process of removing the association between a floating IP address and a fixed IP address and thus returning the floating IP address to the address pool. - - disk encryption - - The ability to encrypt data at the file system, disk partition, or whole-disk level. Supported within Compute VMs. - - disk format - - The underlying format in which a disk image for a VM is stored within the Image service back-end store. For example, AMI, ISO, QCOW2, VMDK, and so on (see the example below). - - dispersion - - In Object Storage, tools to test and ensure dispersion of objects and containers to ensure fault tolerance. - - distributed virtual router (DVR) - - Mechanism for highly available multi-host routing when using OpenStack Networking (neutron). - - Django - - A web framework used extensively in horizon. - - DNS - - Domain Name System. A hierarchical and distributed naming system for computers, services, and resources connected to the Internet or a private network. Associates human-friendly names with IP addresses. - - DNS record - - A record that specifies information about a particular domain and belongs to the domain.
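As an illustration of the disk format entry above, the following hedged sketch registers a QCOW2 image with the Image service using openstacksdk; the cloud name and file are placeholders, and the exact keyword arguments may vary between SDK releases.

.. code-block:: python

   import openstack

   conn = openstack.connect(cloud='mycloud')
   with open('cirros.qcow2', 'rb') as f:
       image = conn.image.create_image(
           name='cirros-demo',
           disk_format='qcow2',       # one of the formats listed above
           container_format='bare',
           data=f,
       )
   print(image.id)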
- - DNS service - - OpenStack project that provides scalable, on-demand, self-service access to authoritative DNS services, in a technology-agnostic manner. The code name for the project is designate. - - dnsmasq - - Daemon that provides DNS, DHCP, BOOTP, and TFTP services for virtual networks. - - domain - - An Identity API v3 entity. Represents a collection of projects, groups and users that defines administrative boundaries for managing OpenStack Identity entities. On the Internet, separates a website from other sites. Often, the domain name has two or more parts that are separated by dots. For example, yahoo.com, usa.gov, harvard.edu, or mail.yahoo.com. Also, a domain is an entity or container of all DNS-related information containing one or more records. - - Domain Name System (DNS) - - A system by which Internet domain name-to-address and address-to-name resolutions are determined. DNS helps navigate the Internet by translating a human-friendly address, such as www.yahoo.com, into a numeric IP address, such as 111.111.111.1, and back. All domains and their components, such as mail servers, utilize DNS to resolve to the appropriate locations. DNS servers are usually set up in a master-slave relationship such that failure of the master invokes the slave. DNS servers might also be clustered or replicated such that changes made to one DNS server are automatically propagated to other active servers. In Compute, the support that enables associating DNS entries with floating IP addresses, nodes, or cells so that hostnames are consistent across reboots. - - download - - The transfer of data, usually in the form of files, from one computer to another. - - DRTM - - Dynamic root of trust measurement. - - durable exchange - - The Compute RabbitMQ message exchange that remains active when the server restarts. - - durable queue - - A Compute RabbitMQ message queue that remains active when the server restarts (see the example below). - - Dynamic Host Configuration Protocol (DHCP) - - A method to automatically configure networking for a host at boot time. Provided by both Networking and Compute. - - Dynamic HyperText Markup Language (DHTML) - - Pages that use HTML, JavaScript, and Cascading Style Sheets to enable users to interact with a web page or show simple animation. - - east-west traffic - - Network traffic between servers in the same cloud or data center. See also north-south traffic. - - EBS boot volume - - An Amazon EBS storage volume that contains a bootable VM image, currently unsupported in OpenStack. - - ebtables - - Filtering tool for a Linux bridging firewall, enabling filtering of network traffic passing through a Linux bridge. Used in Compute along with arptables, iptables, and ip6tables to ensure isolation of network communications. - - EC2 - - The Amazon commercial compute product, similar to Compute. - - EC2 access key - - Used along with an EC2 secret key to access the Compute EC2 API. - - EC2 API - - OpenStack supports accessing the Amazon EC2 API through Compute. - - EC2 Compatibility API - - A Compute component that enables OpenStack to communicate with Amazon EC2. - - EC2 secret key - - Used along with an EC2 access key when communicating with the Compute EC2 API; used to digitally sign each request. - - Elastic Block Storage (EBS) - - The Amazon commercial block storage product.
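The durable exchange and durable queue entries above correspond to a flag on the RabbitMQ declarations. A small sketch with the pika client (not OpenStack code itself) follows; the exchange and queue names are invented.

.. code-block:: python

   import pika

   conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
   channel = conn.channel()
   # durable=True makes the exchange and queue survive a broker restart.
   channel.exchange_declare(exchange='demo-exchange',
                            exchange_type='topic', durable=True)
   channel.queue_declare(queue='demo-queue', durable=True)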
- - encryption - - OpenStack supports encryption technologies such as HTTPS, SSH, SSL, TLS, digital certificates, and data encryption. - - endpoint - - See API endpoint. - - endpoint registry - - Alternative term for an Identity service catalog. - - encapsulation - - The practice of placing one packet type within another for the purposes of abstracting or securing data. Examples include GRE, MPLS, or IPsec. - - endpoint template - - A list of URL and port number endpoints that indicate where a service, such as Object Storage, Compute, Identity, and so on, can be accessed. - - entity - - Any piece of hardware or software that wants to connect to the network services provided by Networking, the network connectivity service. An entity can make use of Networking by implementing a VIF. - - ephemeral image - - A VM image that does not save changes made to its volumes and reverts them to their original state after the instance is terminated. - - ephemeral volume - - Volume that does not save the changes made to it and reverts to its original state when the current user relinquishes control. - - Essex - - The code name for the fifth release of OpenStack, a grouped release of related projects that came out in April 2012. It included Compute (nova 2012.1), Object Storage (swift 1.4.8), Image (glance), Identity (keystone), and Dashboard (horizon). The design summit took place in Boston, Massachusetts, US; Essex is a nearby city. - - ESXi - - An OpenStack-supported hypervisor. - - ETag - - MD5 hash of an object within Object Storage, used to ensure data integrity (see the example below). - - euca2ools - - A collection of command-line tools for administering VMs; most are compatible with OpenStack. - - Eucalyptus Kernel Image (EKI) - - Used along with an ERI to create an EMI. - - Eucalyptus Machine Image (EMI) - - VM image container format supported by Image service. - - Eucalyptus Ramdisk Image (ERI) - - Used along with an EKI to create an EMI. - - evacuate - - The process of migrating one or all virtual machine (VM) instances from one host to another, compatible with both shared storage live migration and block migration. - - exchange - - Alternative term for a RabbitMQ message exchange. - - exchange type - - A routing algorithm in the Compute RabbitMQ. - - exclusive queue - - Connected to by a direct consumer in RabbitMQ (Compute); the message can be consumed only by the current connection. - - extended attributes (xattr) - - File system option that enables storage of additional information beyond owner, group, permissions, modification time, and so on. The underlying Object Storage file system must support extended attributes. - - extension - - Alternative term for an API extension or plug-in. In the context of Identity service, this is a call that is specific to the implementation, such as adding support for OpenID. - - external network - - A network segment typically used for instance Internet access. - - extra specs - - Specifies additional requirements when Compute determines where to start a new instance. Examples include a minimum amount of network bandwidth or a GPU. - - FakeLDAP - - An easy method to create a local LDAP directory for testing Identity and Compute. Requires Redis. - - fan-out exchange - - Within RabbitMQ and Compute, it is the messaging interface that is used by the scheduler service to receive capability messages from the compute, volume, and network nodes.
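The ETag entry above can be checked on the client side. A plain sketch with hashlib and requests follows; the URL and token are placeholders, and Object Storage normally returns the object's MD5 hex digest as the ETag header.

.. code-block:: python

   import hashlib
   import requests

   resp = requests.get(
       'http://swift:8080/v1/AUTH_demo/photos/cat.jpg',
       headers={'X-Auth-Token': 'TOKEN'},
   )
   etag = resp.headers['ETag'].strip('"')
   if hashlib.md5(resp.content).hexdigest() == etag:
       print('object intact')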
- - federated identity - - A method to establish trusts between identity providers and the OpenStack cloud. - - Fedora - - A Linux distribution compatible with OpenStack. - - Fibre Channel - - Storage protocol similar in concept to TCP/IP; encapsulates SCSI commands and data. - - Fibre Channel over Ethernet (FCoE) - - The fibre channel protocol tunneled within Ethernet. - - fill-first scheduler - - The Compute scheduling method that attempts to fill a host with VMs rather than starting new VMs on a variety of hosts. - - filter - - The step in the Compute scheduling process when hosts that cannot run VMs are eliminated and not chosen (see the sketch below). - - firewall - - Used to restrict communications between hosts and/or nodes, implemented in Compute using iptables, arptables, ip6tables, and ebtables. - - FWaaS - - A Networking extension that provides perimeter firewall functionality. - - fixed IP address - - An IP address that is associated with the same instance each time that instance boots, is generally not accessible to end users or the public Internet, and is used for management of the instance. - - Flat Manager - - The Compute component that gives IP addresses to authorized nodes and assumes DHCP, DNS, and routing configuration and services are provided by something else. - - flat mode injection - - A Compute networking method where the OS network configuration information is injected into the VM image before the instance starts. - - flat network - - Virtual network type that uses neither VLANs nor tunnels to segregate tenant traffic. Each flat network typically requires a separate underlying physical interface defined by bridge mappings. However, a flat network can contain multiple subnets. - - FlatDHCP Manager - - The Compute component that provides dnsmasq (DHCP, DNS, BOOTP, TFTP) and radvd (routing) services. - - flavor - - Alternative term for a VM instance type. - - flavor ID - - UUID for each Compute or Image service VM flavor or instance type. - - floating IP address - - An IP address that a project can associate with a VM so that the instance has the same public IP address each time that it boots. You create a pool of floating IP addresses and assign them to instances as they are launched, which maintains a consistent IP address for DNS assignments. - - Folsom - - The code name for the sixth release of OpenStack, a grouped release of related projects that came out in the fall of 2012. It includes Compute (nova), Object Storage (swift), Identity (keystone), Networking (neutron), Image service (glance), and Volumes or Block Storage (cinder). The design summit took place in San Francisco, California, US; Folsom is a nearby city. - - FormPost - - Object Storage middleware that uploads (posts) an image through a form on a web page. - - freezer - - OpenStack project that provides backup, restore, and disaster recovery as a service. - - front end - - The point where a user interacts with a service; can be an API endpoint, the horizon dashboard, or a command-line tool. - - gateway - - An IP address, typically assigned to a router, that passes network traffic between different networks. - - generic receive offload (GRO) - - Feature of certain network interface drivers that combines many smaller received packets into a large packet before delivery to the kernel IP stack.
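The filter entry above, together with the fill-first and chance schedulers, reduces to a simple idea: drop hosts that cannot fit the request, then pick one. The toy sketch below is an illustration only, not nova's actual interface.

.. code-block:: python

   import random

   hosts = [
       {'name': 'node1', 'free_ram_mb': 2048},
       {'name': 'node2', 'free_ram_mb': 8192},
   ]
   flavor = {'ram_mb': 4096}

   # Filter step: eliminate hosts that cannot run the requested VM.
   candidates = [h for h in hosts if h['free_ram_mb'] >= flavor['ram_mb']]
   # Chance scheduler: choose randomly from the remaining pool.
   print(random.choice(candidates)['name'])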
- - generic routing encapsulation (GRE) - - Protocol that encapsulates a wide variety of network layer protocols inside virtual point-to-point links. - - glance - - A core project that provides the OpenStack Image service. - - glance API server - - Processes client requests for VMs, updates Image service metadata on the registry server, and communicates with the store adapter to upload VM images from the back-end store. - - glance registry - - Alternative term for the Image service image registry. - - global endpoint template - - The Identity service endpoint template that contains services available to all tenants. - - GlusterFS - - A file system designed to aggregate NAS hosts, compatible with OpenStack. - - golden image - - A method of operating system installation where a finalized disk image is created and then used by all nodes without modification. - - Governance service - - OpenStack project to provide Governance-as-a-Service across any collection of cloud services in order to monitor, enforce, and audit policy over dynamic infrastructure. The code name for the project is congress. - - Graphic Interchange Format (GIF) - - A type of image file that is commonly used for animated images on web pages. - - Graphics Processing Unit (GPU) - - Choosing a host based on the existence of a GPU is currently unsupported in OpenStack. - - Green Threads - - The cooperative threading model used by Python; reduces race conditions and only context switches when specific library calls are made. Each OpenStack service is its own thread (see the example below). - - Grizzly - - The code name for the seventh release of OpenStack. The design summit took place in San Diego, California, US; Grizzly is an element of the state flag of California. - - Group - - An Identity v3 API entity. Represents a collection of users that is owned by a specific domain. - - guest OS - - An operating system instance running under the control of a hypervisor. - - Hadoop - - Apache Hadoop is an open source software framework that supports data-intensive distributed applications. - - Hadoop Distributed File System (HDFS) - - A distributed, highly fault-tolerant file system designed to run on low-cost commodity hardware. - - handover - - An object state in Object Storage where a new replica of the object is automatically created due to a drive failure. - - hard reboot - - A type of reboot where a physical or virtual power button is pressed as opposed to a graceful, proper shutdown of the operating system. - - Havana - - The code name for the eighth release of OpenStack. The design summit took place in Portland, Oregon, US; Havana is an unincorporated community in Oregon. - - heat - - An integrated project that aims to orchestrate multiple cloud applications for OpenStack. - - Heat Orchestration Template (HOT) - - Heat input in the format native to OpenStack. - - health monitor - - Determines whether back-end members of a VIP pool can process a request. A pool can have several health monitors associated with it. When a pool has several monitors associated with it, all monitors check each member of the pool. All monitors must declare a member to be healthy for it to stay active. - - high availability (HA) - - A high availability system design approach and associated service implementation ensures that a prearranged level of operational performance will be met during a contractual measurement period. High availability systems seek to minimize system downtime and data loss.
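The Green Threads entry above refers to cooperative threading, which OpenStack services have historically implemented with the eventlet library. A minimal sketch, assuming eventlet is installed:

.. code-block:: python

   import eventlet
   eventlet.monkey_patch()  # make blocking stdlib calls cooperative

   def fetch(name, delay):
       eventlet.sleep(delay)  # yields to other green threads while waiting
       return name

   pool = eventlet.GreenPool()
   for result in pool.imap(lambda args: fetch(*args),
                           [('a', 0.10), ('b', 0.05)]):
       print(result)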
- - horizon - - OpenStack project that provides a dashboard, which is a web interface. - - horizon plug-in - - A plug-in for the OpenStack dashboard (horizon). - - host - - A physical computer, not a VM instance (node). - - host aggregate - - A method to further subdivide availability zones into hypervisor pools, a collection of common hosts. - - Host Bus Adapter (HBA) - - Device plugged into a PCI slot, such as a fibre channel or network card. - - hybrid cloud - - A hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect colocation, managed and/or dedicated services with cloud resources. - - Hyper-V - - One of the hypervisors supported by OpenStack. - - hyperlink - - Any kind of text that contains a link to some other site, commonly found in documents where clicking on a word or words opens up a different website. - - Hypertext Transfer Protocol (HTTP) - - An application protocol for distributed, collaborative, hypermedia information systems. It is the foundation of data communication for the World Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext. - - Hypertext Transfer Protocol Secure (HTTPS) - - An encrypted communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the TLS or SSL protocol, thus adding the security capabilities of TLS or SSL to standard HTTP communications. Most OpenStack API endpoints and many inter-component communications support HTTPS communication. - - hypervisor - - Software that arbitrates and controls VM access to the actual underlying hardware. - - hypervisor pool - - A collection of hypervisors grouped together through host aggregates. - - IaaS - - Infrastructure-as-a-Service. IaaS is a provisioning model in which an organization outsources physical components of a data center, such as storage, hardware, servers, and networking components. A service provider owns the equipment and is responsible for housing, operating and maintaining it. The client typically pays on a per-use basis. IaaS is a model for providing cloud services. - - Icehouse - - The code name for the ninth release of OpenStack. The design summit took place in Hong Kong; Ice House is a street in that city. - - ICMP - - Internet Control Message Protocol, used by network devices for control messages. For example, :command:`ping` uses ICMP to test connectivity. - - ID number - - Unique numeric ID associated with each user in Identity, conceptually similar to a Linux or LDAP UID. - - Identity API - - Alternative term for the Identity service API. - - Identity back end - - The source used by Identity service to retrieve user information; an OpenLDAP server, for example. - - identity provider - - A directory service that allows users to log in with a user name and password. It is a typical source of authentication tokens. - - Identity service - - The OpenStack core project that provides a central directory of users mapped to the OpenStack services they can access. It also registers endpoints for OpenStack services.
It acts as a common authentication system. The project name of Identity is keystone. - - Identity service API - - The API used to access the OpenStack Identity service provided through keystone. - - IDS - - Intrusion Detection System. - - image - - A collection of files for a specific operating system (OS) that you use to create or rebuild a server. OpenStack provides pre-built images. You can also create custom images, or snapshots, from servers that you have launched. Custom images can be used for data backups or as "gold" images for additional servers. - - Image API - - The Image service API endpoint for management of VM images. - - image cache - - Used by Image service to obtain images on the local host rather than re-downloading them from the image server each time one is requested. - - image ID - - Combination of a URI and UUID used to access Image service VM images through the image API. - - image membership - - A list of tenants that can access a given VM image within Image service. - - image owner - - The tenant who owns an Image service virtual machine image. - - image registry - - A list of VM images that are available through Image service. - - Image service - - An OpenStack core project that provides discovery, registration, and delivery services for disk and server images. The project name of the Image service is glance. - - Image service API - - Alternative name for the glance image API. - - image status - - The current status of a VM image in Image service, not to be confused with the status of a running instance. - - image store - - The back-end store used by Image service to store VM images; options include Object Storage, local file system, S3, or HTTP. - - image UUID - - UUID used by Image service to uniquely identify each VM image. - - incubated project - - A community project may be elevated to this status and is then promoted to a core project. - - ingress filtering - - The process of filtering incoming network traffic. Supported by Compute. - - INI - - The OpenStack configuration files use an INI format to describe options and their values. It consists of sections and key-value pairs (see the example below). - - injection - - The process of putting a file into a virtual machine image before the instance is started. - - instance - - A running VM, or a VM in a known state such as suspended, that can be used like a hardware server. - - instance ID - - Alternative term for instance UUID. - - instance state - - The current state of a guest VM instance. - - instance tunnels network - - A network segment used for instance traffic tunnels between compute nodes and the network node. - - instance type - - Describes the parameters of the various virtual machine images that are available to users; includes parameters such as CPU, storage, and memory. Alternative term for flavor. - - instance type ID - - Alternative term for a flavor ID. - - instance UUID - - Unique ID assigned to each guest VM instance. - - interface - - A physical or virtual device that provides connectivity to another device or medium. - - interface ID - - Unique ID for a Networking VIF or vNIC in the form of a UUID. - - Internet protocol (IP) - - Principal communications protocol in the internet protocol suite for relaying datagrams across network boundaries. - - Internet Service Provider (ISP) - - Any business that provides Internet access to individuals or businesses.
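The INI entry above describes the format of files such as nova.conf. The following sketch parses an OpenStack-style INI fragment with Python's configparser; the options shown are illustrative, not a real configuration.

.. code-block:: python

   import configparser

   SAMPLE = """
   [DEFAULT]
   debug = false

   [database]
   connection = mysql+pymysql://nova:secret@controller/nova
   """

   cfg = configparser.ConfigParser()
   cfg.read_string(SAMPLE)
   print(cfg['database']['connection'])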
- - Internet Small Computer System Interface (iSCSI) - - Storage protocol that encapsulates SCSI frames for transport over IP networks. - - ironic - - OpenStack project that provisions bare metal, as opposed to virtual, machines. - - IOPS - - IOPS (Input/Output Operations Per Second) are a common performance measurement used to benchmark computer storage devices like hard disk drives, solid state drives, and storage area networks. - - IP address - - Number that is unique to every computer system on the Internet. Two versions of the Internet Protocol (IP) are in use for addresses: IPv4 and IPv6. - - IP Address Management (IPAM) - - The process of automating IP address allocation, deallocation, and management. Currently provided by Compute, melange, and Networking. - - IPL - - Initial Program Loader. - - IPMI - - Intelligent Platform Management Interface. IPMI is a standardized computer system interface used by system administrators for out-of-band management of computer systems and monitoring of their operation. In layman's terms, it is a way to manage a computer using a direct network connection, whether it is turned on or not; connecting to the hardware rather than an operating system or login shell. - - ip6tables - - Tool used to set up, maintain, and inspect the tables of IPv6 packet filter rules in the Linux kernel. In OpenStack Compute, ip6tables is used along with arptables, ebtables, and iptables to create firewalls for both nodes and VMs. - - ipset - - Extension to iptables that allows creation of firewall rules that match entire "sets" of IP addresses simultaneously. These sets reside in indexed data structures to increase efficiency, particularly on systems with a large quantity of rules. - - iptables - - Used along with arptables and ebtables to create firewalls in Compute. iptables refers to the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and to the chains and rules it stores. Different kernel modules and programs are currently used for different protocols: iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames. Requires root privilege to manipulate. - - IQN - - iSCSI Qualified Name (IQN) is the format most commonly used for iSCSI names, which uniquely identify nodes in an iSCSI network. All IQNs follow the pattern iqn.yyyy-mm.domain:identifier, where 'yyyy-mm' is the year and month in which the domain was registered, 'domain' is the reversed domain name of the issuing organization, and 'identifier' is an optional string which makes each IQN under the same domain unique. For example, 'iqn.2015-10.org.openstack.408ae959bce1' (see the example below). - - iSCSI - - The SCSI disk protocol tunneled over IP networks, supported by Compute, Object Storage, and Image service. - - ISO9660 - - One of the VM image disk formats supported by Image service. - - itsec - - A default role in the Compute RBAC system that can quarantine an instance in any project. - - Java - - A programming language that is used to create systems that involve more than one computer by way of a network. - - JavaScript - - A scripting language that is used to build web pages. - - JavaScript Object Notation (JSON) - - One of the supported response formats in OpenStack. - - Jenkins - - Tool used to run jobs automatically for OpenStack development. - - jumbo frame - - Feature in modern Ethernet networks that supports frames up to approximately 9000 bytes.
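The IQN entry above defines a naming pattern that is easy to check mechanically. The regular expression below is a rough sketch, not a full RFC 3720 validator.

.. code-block:: python

   import re

   # iqn.yyyy-mm.reversed-domain, followed by an optional :identifier
   IQN_RE = re.compile(r'^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.+)?$')

   print(bool(IQN_RE.match('iqn.2015-10.org.openstack:408ae959bce1')))  # True
   print(bool(IQN_RE.match('not-an-iqn')))                              # False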
- - Juno - - The code name for the tenth release of OpenStack. The design summit took place in Atlanta, Georgia, US; Juno is an unincorporated community in Georgia. - - Kerberos - - A network authentication protocol which works on the basis of tickets. Kerberos allows nodes communicating over a non-secure network to prove their identity to one another in a secure manner. - - kernel-based VM (KVM) - - An OpenStack-supported hypervisor. KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V), ARM, IBM Power, and IBM zSeries. It consists of a loadable kernel module that provides the core virtualization infrastructure, and a processor-specific module. - - Key Manager service - - OpenStack project that produces a secret storage and generation system capable of providing key management for services wishing to enable encryption features. The code name of the project is barbican. - - keystone - - The project that provides OpenStack Identity services. - - Kickstart - - A tool to automate system configuration and installation on Red Hat, Fedora, and CentOS-based Linux distributions. - - Kilo - - The code name for the eleventh release of OpenStack. The design summit took place in Paris, France. Due to delays in the name selection, the release was known only as K. Because ``k`` is the unit symbol for kilo and the reference artifact is stored near Paris in the Pavillon de Breteuil in Sèvres, the community chose Kilo as the release name. - - large object - - An object within Object Storage that is larger than 5 GB. - - Launchpad - - The collaboration site for OpenStack. - - Layer-2 network - - Term used in the OSI network architecture for the data link layer. The data link layer is responsible for media access control, flow control and detecting and possibly correcting errors that may occur in the physical layer. - - Layer-3 network - - Term used in the OSI network architecture for the network layer. The network layer is responsible for packet forwarding including routing from one node to another. - - Layer-2 (L2) agent - - OpenStack Networking agent that provides layer-2 connectivity for virtual networks. - - Layer-3 (L3) agent - - OpenStack Networking agent that provides layer-3 (routing) services for virtual networks. - - Liberty - - The code name for the twelfth release of OpenStack. The design summit took place in Vancouver, Canada; Liberty is the name of a village in the Canadian province of Saskatchewan. - - libvirt - - Virtualization API library used by OpenStack to interact with many of its supported hypervisors. - - Lightweight Directory Access Protocol (LDAP) - - An application protocol for accessing and maintaining distributed directory information services over an IP network. - - Linux bridge - - Software that enables multiple VMs to share a single physical NIC within Compute. - - Linux Bridge neutron plug-in - - Enables a Linux bridge to understand a Networking port, interface attachment, and other abstractions. - - Linux containers (LXC) - - An OpenStack-supported hypervisor. - - live migration - - The ability within Compute to move running virtual machine instances from one host to another with only a small service interruption during switchover. - - load balancer - - A load balancer is a logical device that belongs to a cloud account.
It is used to distribute workloads between multiple back-end systems or services, based on the criteria defined as part of its configuration. - - load balancing - - The process of spreading client requests between two or more nodes to improve performance and availability. - - LBaaS - - Enables Networking to distribute incoming requests evenly between designated instances. - - Logical Volume Manager (LVM) - - Provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. - - magnum - - Code name for the OpenStack project that provides the Containers Service. - - management API - - Alternative term for an admin API. - - management network - - A network segment used for administration, not accessible to the public Internet. - - manager - - Logical groupings of related code, such as the Block Storage volume manager or network manager. - - manifest - - Used to track segments of a large object within Object Storage. - - manifest object - - A special Object Storage object that contains the manifest for a large object. - - manila - - OpenStack project that provides shared file systems as a service to applications. - - maximum transmission unit (MTU) - - Maximum frame or packet size for a particular network medium. Typically 1500 bytes for Ethernet networks. - - mechanism driver - - A driver for the Modular Layer 2 (ML2) neutron plug-in that provides layer-2 connectivity for virtual instances. A single OpenStack installation can use multiple mechanism drivers. - - melange - - Project name for OpenStack Network Information Service. To be merged with Networking. - - membership - - The association between an Image service VM image and a tenant. Enables images to be shared with specified tenants. - - membership list - - A list of tenants that can access a given VM image within Image service. - - memcached - - A distributed memory object caching system that is used by Object Storage for caching (see the example below). - - memory overcommit - - The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as RAM overcommit. - - message broker - - The software package used to provide AMQP messaging capabilities within Compute. Default package is RabbitMQ. - - message bus - - The main virtual communication line used by all AMQP messages for inter-cloud communications within Compute. - - message queue - - Passes requests from clients to the appropriate workers and returns the output to the client after the job completes. - - Message service - - OpenStack project that aims to produce an OpenStack messaging service that affords a variety of distributed application patterns in an efficient, scalable and highly available manner, and to create and maintain associated Python libraries and documentation. The code name for the project is zaqar. - - Metadata agent - - OpenStack Networking agent that provides metadata services for instances. - - Meta-Data Server (MDS) - - Stores CephFS metadata. - - migration - - The process of moving a VM instance from one host to another. - - mistral - - OpenStack project that provides the Workflow service. - - Mitaka - - The code name for the thirteenth release of OpenStack. The design summit took place in Tokyo, Japan. Mitaka is a city in Tokyo. - - monasca - - OpenStack project that provides a Monitoring service.
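The memcached entry above refers to a generic caching service; here is a short sketch using the python-memcached client against a local server. The key and value are invented.

.. code-block:: python

   import memcache

   mc = memcache.Client(['127.0.0.1:11211'])
   mc.set('token-abc123', {'user': 'demo'}, time=300)  # cache for 5 minutes
   print(mc.get('token-abc123'))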
- - multi-host - - High-availability mode for legacy (nova) networking. Each compute node handles NAT and DHCP and acts as a gateway for all of the VMs on it. A networking failure on one compute node does not affect VMs on other compute nodes. - - multinic - - Facility in Compute that allows each virtual machine instance to have more than one VIF connected to it. - - murano - - OpenStack project that provides an Application catalog. - - Modular Layer 2 (ML2) neutron plug-in - - Can concurrently use multiple layer-2 networking technologies, such as 802.1Q and VXLAN, in Networking. - - Monitor (LBaaS) - - LBaaS feature that provides availability monitoring using the ``ping`` command, TCP, and HTTP/HTTPS GET. - - Monitor (Mon) - - A Ceph component that communicates with external clients, checks data state and consistency, and performs quorum functions. - - Monitoring - - The OpenStack project that provides a multi-tenant, highly scalable, performant, fault-tolerant Monitoring-as-a-Service solution for metrics, complex event processing, and logging. It builds an extensible platform for advanced monitoring services that can be used by both operators and tenants to gain operational insight and visibility, ensuring availability and stability. The project name is monasca. - - multi-factor authentication - - Authentication method that uses two or more credentials, such as a password and a private key. Currently not supported in Identity. - - Nebula - - Released as open source by NASA in 2010 and is the basis for Compute. - - netadmin - - One of the default roles in the Compute RBAC system. Enables the user to allocate publicly accessible IP addresses to instances and change firewall rules. - - NetApp volume driver - - Enables Compute to communicate with NetApp storage devices through the NetApp OnCommand Provisioning Manager. - - network - - A virtual network that provides connectivity between entities. For example, a collection of virtual ports that share network connectivity. In Networking terminology, a network is always a layer-2 network. - - NAT - - Network Address Translation; process of modifying IP address information while in transit. Supported by Compute and Networking. - - network controller - - A Compute daemon that orchestrates the network configuration of nodes, including IP addresses, VLANs, and bridging. Also manages routing for both public and private networks. - - Network File System (NFS) - - A method for making file systems available over the network. Supported by OpenStack. - - network ID - - Unique ID assigned to each network segment within Networking. Same as network UUID. - - network manager - - The Compute component that manages various network components, such as firewall rules, IP address allocation, and so on. - - network namespace - - Linux kernel feature that provides independent virtual networking instances on a single host with separate routing tables and interfaces. Similar to virtual routing and forwarding (VRF) services on physical network equipment. - - network node - - Any compute node that runs the network worker daemon. - - network segment - - Represents a virtual, isolated OSI layer-2 subnet in Networking. - - Newton - - The code name for the fourteenth release of OpenStack. The design summit will take place in Austin, Texas, US.
The release is named after "Newton House", which is located at 1013 E. Ninth St., Austin, TX, and is listed on the National Register of Historic Places. - - NTP - - Network Time Protocol; method of keeping a clock for a host or node correct via communication with a trusted, accurate time source. - - network UUID - - Unique ID for a Networking network segment. - - network worker - - The ``nova-network`` worker daemon; provides services such as giving an IP address to a booting nova instance. - - Networking service - - A core OpenStack project that provides a network connectivity abstraction layer to OpenStack Compute. The project name of Networking is neutron. - - Networking API - - API used to access OpenStack Networking. Provides an extensible architecture to enable custom plug-in creation. - - neutron - - A core OpenStack project that provides a network connectivity abstraction layer to OpenStack Compute. - - neutron API - - An alternative name for Networking API. - - neutron manager - - Enables Compute and Networking integration, which enables Networking to perform network management for guest VMs. - - neutron plug-in - - Interface within Networking that enables organizations to create custom plug-ins for advanced features, such as QoS, ACLs, or IDS. - - Nexenta volume driver - - Provides support for NexentaStor devices in Compute. - - No ACK - - Disables server-side message acknowledgment in the Compute RabbitMQ. Increases performance but decreases reliability. - - node - - A VM instance that runs on a host. - - non-durable exchange - - Message exchange that is cleared when the service restarts. Its data is not written to persistent storage. - - non-durable queue - - Message queue that is cleared when the service restarts. Its data is not written to persistent storage. - - non-persistent volume - - Alternative term for an ephemeral volume. - - north-south traffic - - Network traffic between a user or client (north) and a server (south), or traffic into the cloud (south) and out of the cloud (north). See also east-west traffic. - - nova - - OpenStack project that provides compute services. - - Nova API - - Alternative term for the Compute API. - - nova-network - - A Compute component that manages IP address allocation, firewalls, and other network-related tasks. This is the legacy networking option and an alternative to Networking. - - object - - A BLOB of data held by Object Storage; can be in any format. - - object auditor - - Opens all objects for an object server and verifies the MD5 hash, size, and metadata for each object. - - object expiration - - A configurable option within Object Storage to automatically delete objects after a specified amount of time has passed or a certain date is reached (see the example below). - - object hash - - Unique ID for an Object Storage object. - - object path hash - - Used by Object Storage to determine the location of an object in the ring. Maps objects to partitions. - - object replicator - - An Object Storage component that copies an object to remote partitions for fault tolerance. - - object server - - An Object Storage component that is responsible for managing objects. - - Object Storage service - - The OpenStack core project that provides eventually consistent and redundant storage and retrieval of fixed digital content. The project name of OpenStack Object Storage is swift. - - Object Storage API - - API used to access OpenStack Object Storage.
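The object expiration entry above is driven by a header on the object. A hedged sketch with python-swiftclient follows; the endpoint, token, and names are placeholders.

.. code-block:: python

   import swiftclient

   conn = swiftclient.Connection(
       preauthurl='http://swift:8080/v1/AUTH_demo',
       preauthtoken='TOKEN',
   )
   # X-Delete-After expires the object after the given number of seconds.
   conn.put_object('logs', 'build.log', b'log data',
                   headers={'X-Delete-After': '86400'})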
- - Object Storage Device (OSD) - - The Ceph storage daemon. - - object versioning - - Allows a user to set a flag on an Object Storage container so that all objects within the container are versioned. - - Ocata - - The code name for the fifteenth release of OpenStack. The design summit will take place in Barcelona, Spain. Ocata is a beach north of Barcelona. - - Oldie - - Term for an Object Storage process that runs for a long time. Can indicate a hung process. - - Open Cloud Computing Interface (OCCI) - - A standardized interface for managing compute, data, and network resources, currently unsupported in OpenStack. - - Open Virtualization Format (OVF) - - Standard for packaging VM images. Supported in OpenStack. - - Open vSwitch - - Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (for example, NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). - - Open vSwitch (OVS) agent - - Provides an interface to the underlying Open vSwitch service for the Networking plug-in. - - Open vSwitch neutron plug-in - - Provides support for Open vSwitch in Networking. - - OpenLDAP - - An open source LDAP server. Supported by both Compute and Identity. - - OpenStack - - OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. OpenStack is an open source project licensed under the Apache License 2.0. - - OpenStack code name - - Each OpenStack release has a code name. Code names ascend in alphabetical order: Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, Liberty, and Mitaka. Code names are cities or counties near where the corresponding OpenStack design summit took place. An exception, called the Waldon exception, is granted to elements of the state flag that sound especially cool. Code names are chosen by popular vote. - - openSUSE - - A Linux distribution that is compatible with OpenStack. - - operator - - The person responsible for planning and maintaining an OpenStack installation. - - optional service - - An official OpenStack service defined as optional by the DefCore Committee. It currently consists of Dashboard (horizon), Telemetry service (ceilometer), Orchestration service (heat), Database service (trove), Bare Metal service (ironic), and so on. - - Orchestration service - - An integrated project that orchestrates multiple cloud applications for OpenStack. The project name of Orchestration is heat. - - orphan - - In the context of Object Storage, this is a process that is not terminated after an upgrade, restart, or reload of the service. - - Oslo - - OpenStack project that produces a set of Python libraries containing code shared by OpenStack projects. - - parent cell - - If a requested resource, such as CPU time, disk storage, or memory, is not available in the parent cell, the request is forwarded to associated child cells. - - partition - - A unit of storage within Object Storage used to store objects. It exists on top of devices and is replicated for fault tolerance (see the sketch below).
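The partition entry above, and the partition shift value entry that follows, come down to a few lines of arithmetic: Object Storage hashes the object path and right-shifts the result to select a partition. The sketch below omits the per-cluster hash prefix and suffix that a real deployment mixes in.

.. code-block:: python

   import hashlib
   import struct

   PART_POWER = 10                 # deployment-specific: 2**10 partitions
   PART_SHIFT = 32 - PART_POWER

   digest = hashlib.md5(b'/account/container/object').digest()
   partition = struct.unpack_from('>I', digest)[0] >> PART_SHIFT
   print(partition)                # a value between 0 and 1023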
- - partition index - - Contains the locations of all Object Storage partitions within the ring. - - partition shift value - - Used by Object Storage to determine which partition data should reside on. - - path MTU discovery (PMTUD) - - Mechanism in IP networks to detect end-to-end MTU and adjust packet size accordingly. - - pause - - A VM state where no changes occur (no changes in memory, network communications stop, and so on); the VM is frozen but not shut down. - - PCI passthrough - - Gives guest VMs exclusive access to a PCI device. Currently supported in OpenStack Havana and later releases. - - persistent message - - A message that is stored both in memory and on disk. The message is not lost after a failure or restart. - - persistent volume - - Changes to these types of disk volumes are saved. - - personality file - - A file used to customize a Compute instance. It can be used to inject SSH keys or a specific network configuration. - - Platform-as-a-Service (PaaS) - - Provides the consumer with the ability to deploy applications through a programming language or tools supported by the cloud platform provider. An example of Platform-as-a-Service is an Eclipse/Java programming platform provided with no downloads required. - - plug-in - - Software component providing the actual implementation for Networking APIs, or for Compute APIs, depending on the context. - - policy service - - Component of Identity that provides a rule-management interface and a rule-based authorization engine. - - pool - - A logical set of devices, such as web servers, that you group together to receive and process traffic. The load balancing function chooses which member of the pool handles the new requests or connections received on the VIP address. Each VIP has one pool. - - pool member - - An application that runs on the back-end server in a load-balancing system. - - port - - A virtual network port within Networking; VIFs / vNICs are connected to a port. - - port UUID - - Unique ID for a Networking port. - - preseed - - A tool to automate system configuration and installation on Debian-based Linux distributions. - - private image - - An Image service VM image that is only available to specified tenants. - - private IP address - - An IP address used for management and administration, not available to the public Internet. - - private network - - The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A private network interface can be a flat or VLAN network interface. A flat network interface is controlled by the ``flat_interface`` option with flat managers. A VLAN network interface is controlled by the ``vlan_interface`` option with VLAN managers. - - project - - Projects represent the base unit of “ownership” in OpenStack, in that all resources in OpenStack should be owned by a specific project. In OpenStack Identity, a project must be owned by a specific domain. - - project ID - - User-defined alphanumeric string in Compute; the name of a project. - - project VPN - - Alternative term for a cloudpipe. - - promiscuous mode - - Causes the network interface to pass all traffic it receives to the host rather than passing only the frames addressed to it. - - protected property - - Generally, extra properties on an Image service image to which only cloud administrators have access.
Limits which user - roles can perform CRUD operations on that property. The cloud - administrator can configure any image property as - protected. - - provider - - An administrator who has access to all hosts and - instances. - - proxy node - - A node that provides the Object Storage proxy service. - - proxy server - - Users of Object Storage interact with the service through the - proxy server, which in turn looks up the location of the requested - data within the ring and returns the results to the user. - - public API - - An API endpoint used for both service-to-service communication - and end-user interactions. - - public image - - An Image service VM image that is available to all - tenants. - - public IP address - - An IP address that is accessible to end-users. - - public key authentication - - Authentication method that uses keys rather than - passwords. - - public network - - The Network Controller provides virtual networks to enable - compute servers to interact with each other and with the public - network. All machines must have a public and private network - interface. The public network interface is controlled by the - ``public_interface`` option. - - Puppet - - An operating system configuration-management tool supported by - OpenStack. - - Python - - Programming language used extensively in OpenStack. - - QEMU Copy On Write 2 (QCOW2) - - One of the VM image disk formats supported by Image - service. - - Qpid - - Message queue software supported by OpenStack; an alternative to - RabbitMQ. - - quarantine - - If Object Storage finds objects, containers, or accounts that - are corrupt, they are placed in this state, are not replicated, cannot - be read by clients, and a correct copy is re-replicated. - - Quick EMUlator (QEMU) - - QEMU is a generic and open source machine emulator and - virtualizer. - One of the hypervisors supported by OpenStack, generally used - for development purposes. - - quota - - In Compute and Block Storage, the ability to set resource limits - on a per-project basis. - - RabbitMQ - - The default message queue software used by OpenStack. - - Rackspace Cloud Files - - Released as open source by Rackspace in 2010; the basis for - Object Storage. - - RADOS Block Device (RBD) - - Ceph component that enables a Linux block device to be striped - over multiple distributed data stores. - - radvd - - The router advertisement daemon, used by the Compute VLAN - manager and FlatDHCP manager to provide routing services for VM - instances. - - rally - - OpenStack project that provides the Benchmark service. - - RAM filter - - The Compute setting that enables or disables RAM - overcommitment. - - RAM overcommit - - The ability to start new VM instances based on the actual memory - usage of a host, as opposed to basing the decision on the amount of - RAM each running instance thinks it has available. Also known as - memory overcommit. - - rate limit - - Configurable option within Object Storage to limit database - writes on a per-account and/or per-container basis. - - raw - - One of the VM image disk formats supported by Image service; an - unstructured disk image. - - rebalance - - The process of distributing Object Storage partitions across all - drives in the ring; used during initial ring creation and after ring - reconfiguration. - - reboot - - Either a soft or hard reboot of a server. With a soft reboot, - the operating system is signaled to restart, which enables a graceful - shutdown of all processes. 
A hard reboot is the equivalent of power - cycling the server. The virtualization platform should ensure that the - reboot action has completed successfully, even in cases in which the - underlying domain/VM is paused or halted/stopped. - - rebuild - - Removes all data on the server and replaces it with the - specified image. Server ID and IP addresses remain the same. - - Recon - - An Object Storage component that collects meters. - - record - - Belongs to a particular domain and is used to specify - information about the domain. - There are several types of DNS records. Each record type contains - particular information used to describe the purpose of that record. - Examples include mail exchange (MX) records, which specify the mail - server for a particular domain; and name server (NS) records, which - specify the authoritative name servers for a domain. - - record ID - - A number within a database that is incremented each time a - change is made. Used by Object Storage when replicating. - - Red Hat Enterprise Linux (RHEL) - - A Linux distribution that is compatible with OpenStack. - - reference architecture - - A recommended architecture for an OpenStack cloud. - - region - - A discrete OpenStack environment with dedicated API endpoints - that typically shares only the Identity service (keystone) with other - regions. - - registry - - Alternative term for the Image service registry. - - registry server - - An Image service that provides VM image metadata to - clients. - - Reliable, Autonomic Distributed Object Store - (RADOS) - - A collection of components that provides object storage within - Ceph. Similar to OpenStack Object Storage. - - Remote Procedure Call (RPC) - - The method used by the Compute RabbitMQ for intra-service - communications. - - replica - - Provides data redundancy and fault tolerance by creating copies - of Object Storage objects, accounts, and containers so that they are - not lost when the underlying storage fails. - - replica count - - The number of replicas of the data in an Object Storage - ring. - - replication - - The process of copying data to a separate physical device for - fault tolerance and performance. - - replicator - - The Object Storage back-end process that creates and manages - object replicas. - - request ID - - Unique ID assigned to each request sent to Compute. - - rescue image - - A special type of VM image that is booted when an instance is - placed into rescue mode. Allows an administrator to mount the file - systems for an instance to correct the problem. - - resize - - Converts an existing server to a different flavor, which scales - the server up or down. The original server is saved to enable rollback - if a problem occurs. All resizes must be tested and explicitly - confirmed, at which time the original server is removed. - - RESTful - - A kind of web service API that uses REST, or Representational - State Transfer. REST is the style of architecture for hypermedia - systems that is used for the World Wide Web. - - ring - - An entity that maps Object Storage data to partitions. A - separate ring exists for each service, such as account, object, and - container. - - ring builder - - Builds and manages rings within Object Storage, assigns - partitions to devices, and pushes the configuration to other storage - nodes. - - Role Based Access Control (RBAC) - - Provides a predefined list of actions that the user can perform, - such as start or stop VMs, reset passwords, and so on.
Supported in - both Identity and Compute and can be configured using the - horizon dashboard. - - role - - A personality that a user assumes to perform a specific set of - operations. A role includes a set of rights and privileges. A user - assuming that role inherits those rights and privileges. - - role ID - - Alphanumeric ID assigned to each Identity service role. - - rootwrap - - A feature of Compute that allows the unprivileged "nova" user to - run a specified list of commands as the Linux root user. - - round-robin scheduler - - Type of Compute scheduler that evenly distributes instances - among available hosts (a minimal sketch follows this group of - entries). - - router - - A physical or virtual network device that passes network - traffic between different networks. - - routing key - - The Compute direct exchanges, fanout exchanges, and topic - exchanges use this key to determine how to process a message; - processing varies depending on exchange type. - - RPC driver - - Modular system that allows the underlying message queue software - of Compute to be changed. For example, from RabbitMQ to ZeroMQ or - Qpid. - - rsync - - Used by Object Storage to push object replicas. - - RXTX cap - - Absolute limit on the amount of network traffic a Compute VM - instance can send and receive. - - RXTX quota - - Soft limit on the amount of network traffic a Compute VM - instance can send and receive. - - S3 - - Object storage service by Amazon; similar in function to Object - Storage, it can act as a back-end store for Image service VM images. - - sahara - - OpenStack project that provides a scalable data-processing stack - and associated management interfaces. - - SAML assertion - - Contains information about a user as provided by the identity - provider. It is an indication that a user has been authenticated. - - scheduler manager - - A Compute component that determines where VM instances should - start. Uses modular design to support a variety of scheduler - types. - - scoped token - - An Identity service API access token that is associated with a - specific tenant. - - scrubber - - Checks for and deletes unused VMs; the component of Image - service that implements delayed delete. - - secret key - - String of text known only by the user; used along with an access - key to make requests to the Compute API. - - secure shell (SSH) - - Open source tool used to access remote hosts through an - encrypted communications channel. SSH key injection is supported by - Compute. - - security group - - A set of network traffic filtering rules that are applied to a - Compute instance. - - segmented object - - An Object Storage large object that has been broken up into - pieces. The re-assembled object is called a concatenated - object. - - self-service - - For IaaS, ability for a regular (non-privileged) account to - manage a virtual infrastructure component such as networks without - involving an administrator. - - SELinux - - Linux kernel security module that provides the mechanism for - supporting access control policies. - - senlin - - OpenStack project that provides a Clustering service. - - server - - Computer that provides explicit services to the client software - running on that system, often managing a variety of computer - operations. - A server is a VM instance in the Compute system. Flavor and - image are requisite elements when creating a server. - - server image - - Alternative term for a VM image. - - server UUID - - Unique ID assigned to each guest VM instance.
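For illustration, here is a minimal Python sketch of the round-robin scheduling idea from the entries above. It is a conceptual toy, not the actual Compute scheduler code, and the host names are hypothetical::

    import itertools

    class RoundRobinScheduler:
        """Cycle through hosts so new instances are spread evenly."""

        def __init__(self, hosts):
            self._cycle = itertools.cycle(hosts)

        def select_host(self):
            # Each call returns the next host in a fixed rotation.
            return next(self._cycle)

    scheduler = RoundRobinScheduler(["compute1", "compute2", "compute3"])
    print([scheduler.select_host() for _ in range(5)])
    # ['compute1', 'compute2', 'compute3', 'compute1', 'compute2']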
- - service - - An OpenStack service, such as Compute, Object Storage, or Image - service. Provides one or more endpoints through which users can access - resources and perform operations. - - service catalog - - Alternative term for the Identity service catalog. - - service ID - - Unique ID assigned to each service that is available in the - Identity service catalog. - - service provider - - A system that provides services to other system entities. In - case of federated identity, OpenStack Identity is the service - provider. - - service registration - - An Identity service feature that enables services, such as - Compute, to automatically register with the catalog. - - service tenant - - Special tenant that contains all services that are listed in the - catalog. - - service token - - An administrator-defined token used by Compute to communicate - securely with the Identity service. - - session back end - - The method of storage used by horizon to track client sessions, - such as local memory, cookies, a database, or memcached. - - session persistence - - A feature of the load-balancing service. It attempts to force - subsequent connections to a service to be redirected to the same node - as long as it is online. - - session storage - - A horizon component that stores and tracks client session - information. Implemented through the Django sessions framework. - - share - - A remote, mountable file system in the context of the Shared File - Systems. You can mount a share to, and access a share from, several - hosts by several users at a time. - - share network - - An entity in the context of the Shared File Systems that - encapsulates interaction with the Networking service. If the selected - driver runs in a mode that requires such interaction, you must - specify the share network when you create a share. - - Shared File Systems API - - A Shared File Systems service that provides a stable RESTful API. - The service authenticates and routes requests throughout the Shared - File Systems service. The python-manilaclient library interacts with - the API. - - Shared File Systems service - - An OpenStack service that provides a set of services for - management of shared file systems in a multi-tenant cloud - environment. The service is similar to how OpenStack provides - block-based storage management through the OpenStack Block Storage - service project. With the Shared File Systems service, you can create - a remote file system and mount the file system on your instances. You - can also read and write data from your instances to and from your - file system. The project name of the Shared File Systems service is - manila. - - shared IP address - - An IP address that can be assigned to a VM instance within the - shared IP group. Public IP addresses can be shared across multiple - servers for use in various high-availability scenarios. When an IP - address is shared to another server, the cloud network restrictions - are modified to enable each server to listen to and respond on that IP - address. You can optionally specify that the target server network - configuration be modified. Shared IP addresses can be used with many - standard heartbeat facilities, such as keepalive, that monitor for - failure and manage IP failover. - - shared IP group - - A collection of servers that can share IPs with other members of - the group. Any server in a group can share one or more public IPs with - any other server in the group.
With the exception of the first server - in a shared IP group, servers must be launched into shared IP groups. - A server may be a member of only one shared IP group. - - shared storage - - Block storage that is simultaneously accessible by multiple - clients, for example, NFS. - - Sheepdog - - Distributed block storage system for QEMU, supported by - OpenStack. - - Simple Cloud Identity Management (SCIM) - - Specification for managing identity in the cloud, currently - unsupported by OpenStack. - - Single-root I/O Virtualization (SR-IOV) - - A specification that, when implemented by a physical PCIe - device, enables it to appear as multiple separate PCIe devices. This - enables multiple virtualized guests to share direct access to the - physical device, offering improved performance over an equivalent - virtual device. Currently supported in OpenStack Havana and later - releases. - - Service Level Agreement (SLA) - - Contractual obligations that ensure the availability of a - service. - - SmokeStack - - Runs automated tests against the core OpenStack API; written in - Rails. - - snapshot - - A point-in-time copy of an OpenStack storage volume or image. - Use storage volume snapshots to back up volumes. Use image snapshots - to back up data, or as "gold" images for additional servers. - - soft reboot - - A controlled reboot where a VM instance is properly restarted - through operating system commands. - - Software Development Lifecycle Automation service - - OpenStack project that aims to make cloud services easier to - consume and integrate with application development process - by automating the source-to-image process, and simplifying - app-centric deployment. The project name is solum. - - SolidFire Volume Driver - - The Block Storage driver for the SolidFire iSCSI storage - appliance. - - solum - - OpenStack project that provides a Software Development - Lifecycle Automation service. - - SPICE - - The Simple Protocol for Independent Computing Environments - (SPICE) provides remote desktop access to guest virtual machines. It - is an alternative to VNC. SPICE is supported by OpenStack. - - spread-first scheduler - - The Compute VM scheduling algorithm that attempts to start a new - VM on the host with the least amount of load. - - SQL-Alchemy - - An open source SQL toolkit for Python, used in OpenStack. - - SQLite - - A lightweight SQL database, used as the default persistent - storage method in many OpenStack services. - - stack - - A set of OpenStack resources created and managed by the - Orchestration service according to a given template (either an - AWS CloudFormation template or a Heat Orchestration - Template (HOT)). - - StackTach - - Community project that captures Compute AMQP communications; - useful for debugging. - - static IP address - - Alternative term for a fixed IP address. - - StaticWeb - - WSGI middleware component of Object Storage that serves - container data as a static web page. - - storage back end - - The method that a service uses for persistent storage, such as - iSCSI, NFS, or local disk. - - storage node - - An Object Storage node that provides container services, account - services, and object services; controls the account databases, - container databases, and object storage. - - storage manager - - A XenAPI component that provides a pluggable interface to - support a wide variety of persistent storage back ends. - - storage manager back end - - A persistent storage method supported by XenAPI, such as iSCSI - or NFS. 
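To make the SQL-Alchemy and SQLite entries above concrete, here is a small self-contained sketch. It assumes a reasonably recent SQLAlchemy release; the table name and values are made up for illustration::

    from sqlalchemy import create_engine, text

    # An in-memory SQLite database, the default persistent store
    # for many OpenStack services (here used only as a demo).
    engine = create_engine("sqlite:///:memory:")

    with engine.begin() as conn:  # begin() commits automatically
        conn.execute(text(
            "CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)"))
        conn.execute(text(
            "INSERT INTO instances (name) VALUES (:name)"), {"name": "vm-1"})

    with engine.connect() as conn:
        rows = conn.execute(text("SELECT id, name FROM instances")).fetchall()
        print(rows)  # [(1, 'vm-1')]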
- - storage services - - Collective name for the Object Storage object services, - container services, and account services. - - strategy - - Specifies the authentication source used by Image service or - Identity. In the Database service, it refers to the extensions - implemented for a data store. - - subdomain - - A domain within a parent domain. Subdomains cannot be - registered. Subdomains enable you to delegate domains. Subdomains can - themselves have subdomains, so third-level, fourth-level, fifth-level, - and deeper levels of nesting are possible. - - subnet - - Logical subdivision of an IP network. - - SUSE Linux Enterprise Server (SLES) - - A Linux distribution that is compatible with OpenStack. - - suspend - - Alternative term for a paused VM instance. - - swap - - Disk-based virtual memory used by operating systems to provide - more memory than is actually available on the system. - - swauth - - An authentication and authorization service for Object Storage, - implemented through WSGI middleware; uses Object Storage itself as the - persistent backing store. - - swift - - An OpenStack core project that provides object storage - services. - - swift All in One (SAIO) - - Creates a full Object Storage development environment within a - single VM. - - swift middleware - - Collective term for Object Storage components that provide - additional functionality. - - swift proxy server - - Acts as the gatekeeper to Object Storage and is responsible for - authenticating the user. - - swift storage node - - A node that runs Object Storage account, container, and object - services. - - sync point - - The point in time of the last sync of the container and account - databases among nodes within Object Storage. - - sysadmin - - One of the default roles in the Compute RBAC system. Enables a - user to add other users to a project, interact with VM images that are - associated with the project, and start and stop VM instances. - - system usage - - A Compute component that, along with the notification system, - collects meters and usage information. This information can be used - for billing. - - Telemetry service - - An integrated project that provides metering and measuring - facilities for OpenStack. The project name of Telemetry is - ceilometer. - - TempAuth - - An authentication facility within Object Storage that enables - Object Storage itself to perform authentication and authorization. - Frequently used in testing and development. - - Tempest - - Automated software test suite designed to run against the trunk - of the OpenStack core project. - - TempURL - - An Object Storage middleware component that enables creation of - URLs for temporary object access. - - tenant - - A group of users; used to isolate access to Compute resources. - An alternative term for a project. - - Tenant API - - An API that is accessible to tenants. - - tenant endpoint - - An Identity service API endpoint that is associated with one or - more tenants. - - tenant ID - - Unique ID assigned to each tenant within the Identity service. - The project IDs map to the tenant IDs. - - token - - An alphanumeric string of text used to access OpenStack APIs - and resources. - - token services - - An Identity service component that manages and validates tokens - after a user or tenant has been authenticated. - - tombstone - - Used to mark Object Storage objects that have been - deleted; ensures that the object is not updated on another node after - it has been deleted.
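The tombstone mechanism just defined can be illustrated with a short, hedged Python sketch: a delete is written as a timestamped marker, so an older replica can never resurrect the object. This is a conceptual toy, not swift's actual code::

    import time

    store = {}           # object name -> (timestamp, data or TOMBSTONE)
    TOMBSTONE = object()

    def put(name, data, ts=None):
        ts = ts if ts is not None else time.time()
        current = store.get(name)
        # Only a strictly newer write wins; replication compares timestamps.
        if current is None or ts > current[0]:
            store[name] = (ts, data)

    def delete(name, ts=None):
        # Deletion is just a write of the tombstone marker.
        put(name, TOMBSTONE, ts)

    put("photo.jpg", b"...", ts=100)
    delete("photo.jpg", ts=200)
    put("photo.jpg", b"stale replica", ts=150)  # ignored: older than tombstone
    print(store["photo.jpg"][1] is TOMBSTONE)   # True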
- - topic publisher - - A process that is created when an RPC call is executed; used to - push the message to the topic exchange. - - Torpedo - - Community project used to run automated tests against the - OpenStack API. - - transaction ID - - Unique ID assigned to each Object Storage request; used for - debugging and tracing. - - transient - - Alternative term for non-durable. - - transient exchange - - Alternative term for a non-durable exchange. - - transient message - - A message that is stored in memory and is lost after the server - is restarted. - - transient queue - - Alternative term for a non-durable queue. - - TripleO - - OpenStack-on-OpenStack program. The code name for the - OpenStack Deployment program. - - trove - - OpenStack project that provides database services to - applications. - - Ubuntu - - A Debian-based Linux distribution. - - unscoped token - - Alternative term for an Identity service default token. - - updater - - Collective term for a group of Object Storage components that - processes queued and failed updates for containers and objects. - - user - - In OpenStack Identity, entities represent individual API - consumers and are owned by a specific domain. In OpenStack Compute, - a user can be associated with roles, projects, or both. - - user data - - A blob of data that the user can specify when they launch - an instance. The instance can access this data through the - metadata service or config drive. - Commonly used to pass a shell script that the instance runs on boot. - - User Mode Linux (UML) - - An OpenStack-supported hypervisor. - - VIF UUID - - Unique ID assigned to each Networking VIF. - - VIP - - The primary load balancing configuration object. - Specifies the virtual IP address and port where client traffic - is received. Also defines other details such as the load - balancing method to be used, protocol, and so on. This entity - is sometimes known in load-balancing products as a virtual - server, vserver, or listener. - - Virtual Central Processing Unit (vCPU) - - Subdivides physical CPUs. Instances can then use those - divisions. - - Virtual Disk Image (VDI) - - One of the VM image disk formats supported by Image - service. - - Virtual Extensible LAN (VXLAN) - - A network virtualization technology that attempts to reduce the - scalability problems associated with large cloud computing - deployments. It uses a VLAN-like encapsulation technique to - encapsulate Ethernet frames within UDP packets. - - Virtual Hard Disk (VHD) - - One of the VM image disk formats supported by Image - service. - - virtual IP - - An Internet Protocol (IP) address configured on the load - balancer for use by clients connecting to a service that is load - balanced. Incoming connections are distributed to back-end nodes based - on the configuration of the load balancer. - - virtual machine (VM) - - An operating system instance that runs on top of a hypervisor. - Multiple VMs can run at the same time on the same physical - host. - - virtual network - - An L2 network segment within Networking. - - virtual networking - - A generic term for virtualization of network functions - such as switching, routing, load balancing, and security using - a combination of VMs and overlays on physical network - infrastructure. - - Virtual Network Computing (VNC) - - Open source GUI and CLI tools used for remote console access to - VMs. Supported by Compute. - - Virtual Network InterFace (VIF) - - An interface that is plugged into a port in a Networking - network.
Typically a virtual network interface belonging to a - VM. - - virtual port - - Attachment point where a virtual interface connects to a virtual - network. - - virtual private network (VPN) - - Provided by Compute in the form of cloudpipes, specialized - instances that are used to create VPNs on a per-project basis. - - virtual server - - Alternative term for a VM or guest. - - virtual switch (vSwitch) - - Software that runs on a host or node and provides the features - and functions of a hardware-based network switch. - - virtual VLAN - - Alternative term for a virtual network. - - VirtualBox - - An OpenStack-supported hypervisor. - - VLAN manager - - A Compute component that provides dnsmasq and radvd and sets up - forwarding to and from cloudpipe instances. - - VLAN network - - The Network Controller provides virtual networks to enable - compute servers to interact with each other and with the public - network. All machines must have a public and private network - interface. A VLAN network is a private network interface, which is - controlled by the ``vlan_interface`` option with VLAN - managers. - - VM disk (VMDK) - - One of the VM image disk formats supported by Image - service. - - VM image - - Alternative term for an image. - - VM Remote Control (VMRC) - - Method to access VM instance consoles using a web browser. - Supported by Compute. - - VMware API - - Supports interaction with VMware products in Compute. - - VMware NSX Neutron plug-in - - Provides support for VMware NSX in Neutron. - - VNC proxy - - A Compute component that provides users access to the consoles - of their VM instances through VNC or VMRC. - - volume - - Disk-based data storage generally represented as an iSCSI target - with a file system that supports extended attributes; can be - persistent or ephemeral. - - Volume API - - Alternative name for the Block Storage API. - - volume controller - - A Block Storage component that oversees and coordinates storage - volume actions. - - volume driver - - Alternative term for a volume plug-in. - - volume ID - - Unique ID applied to each storage volume under the Block Storage - control. - - volume manager - - A Block Storage component that creates, attaches, and detaches - persistent storage volumes. - - volume node - - A Block Storage node that runs the cinder-volume daemon. - - volume plug-in - - Provides support for new and specialized types of back-end - storage for the Block Storage volume manager. - - volume worker - - A cinder component that interacts with back-end storage to manage - the creation and deletion of volumes and the creation of compute - volumes, provided by the cinder-volume daemon. - - vSphere - - An OpenStack-supported hypervisor. - - weighting - - A Compute process that determines the suitability of a - particular host for a VM instance; for example, not enough RAM - on the host, too many CPUs on the host, and so on. - - weight - - Used by Object Storage devices to determine which storage - devices are suitable for the job. Devices are weighted by size. - - weighted cost - - The sum of each cost used when deciding where to start a new VM - instance in Compute. - - worker - - A daemon that listens to a queue and carries out tasks in - response to messages. For example, the cinder-volume worker manages volume - creation and deletion on storage arrays.
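Here is a minimal sketch of the worker pattern just described, using Python's standard-library queue in place of a real message bus; the task names and payloads are hypothetical::

    import queue
    import threading

    tasks = queue.Queue()

    def worker():
        # Block on the queue and carry out each task as it arrives,
        # the same pattern a daemon such as cinder-volume follows.
        while True:
            message = tasks.get()
            if message is None:      # shutdown sentinel
                break
            action, payload = message
            print(f"handling {action}: {payload}")
            tasks.task_done()

    t = threading.Thread(target=worker)
    t.start()
    tasks.put(("create_volume", {"size_gb": 10}))
    tasks.put(("delete_volume", {"id": "vol-1"}))
    tasks.put(None)
    t.join()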
- - Workflow service - - OpenStack project that provides a simple YAML-based language - to write workflows, tasks, and transition rules, and a - service that allows you to upload, modify, and run them at - scale and in a highly available manner, and to manage and monitor - workflow execution state and the state of individual tasks. The - code name of the project is mistral. - - Xen - - Xen is a hypervisor using a microkernel design, providing - services that allow multiple computer operating systems to - execute on the same computer hardware concurrently. - - Xen API - - The Xen administrative API, which is supported by - Compute. - - Xen Cloud Platform (XCP) - - An OpenStack-supported hypervisor. - - Xen Storage Manager Volume Driver - - A Block Storage volume plug-in that enables communication with - the Xen Storage Manager API. - - XenServer - - An OpenStack-supported hypervisor. - - XFS - - High-performance 64-bit file system created by Silicon - Graphics. Excels in parallel I/O operations and data - consistency. - - zaqar - - OpenStack project that provides a message service to - applications. - - ZeroMQ - - Message queue software supported by OpenStack. An alternative to - RabbitMQ. Also spelled 0MQ. - - Zuul - - Tool used in OpenStack development to ensure correctly ordered - testing of changes in parallel. diff --git a/doc/figures/Check_mark_23x20_02.png b/doc/figures/Check_mark_23x20_02.png deleted file mode 100644 index e6e5d5a7..00000000 Binary files a/doc/figures/Check_mark_23x20_02.png and /dev/null differ diff --git a/doc/figures/Check_mark_23x20_02.svg b/doc/figures/Check_mark_23x20_02.svg deleted file mode 100644 index 3051a2f9..00000000 --- a/doc/figures/Check_mark_23x20_02.svg +++ /dev/null @@ -1,60 +0,0 @@ [deleted SVG markup: check-mark icon] diff --git a/doc/figures/network_packet_ping.svg b/doc/figures/network_packet_ping.svg deleted file mode 100644 index f5dda8e2..00000000 --- a/doc/figures/network_packet_ping.svg +++ /dev/null @@ -1,3 +0,0 @@ [deleted SVG markup: diagram of a ping packet path (steps 1-5) from an instance on compute node n through br100 and eth0, across an L2 switch and gateway, to the Internet] diff --git a/doc/figures/neutron_packet_ping.svg b/doc/figures/neutron_packet_ping.svg deleted file mode 100644 index 898794ff..00000000 --- a/doc/figures/neutron_packet_ping.svg +++ /dev/null @@ -1,1734 +0,0 @@ [deleted SVG markup: "Neutron Network Paths" diagram of a ping packet over VLAN and GRE networks, showing the instance, tap, br-int, br-tun, and br-eth1 bridges on compute node n; the dhcp-agent, l3-agent, qdhcp/qrouter namespaces, br-int, br-tun, br-eth1, and br-ex on the network node; and the Internet] diff --git a/doc/figures/os-ref-arch.svg b/doc/figures/os-ref-arch.svg deleted file mode 100644 index 7fea7f19..00000000 --- a/doc/figures/os-ref-arch.svg +++ /dev/null @@ -1,3 +0,0 @@ [deleted SVG markup: reference architecture diagram showing a cloud controller node (database, message queue, API services, scheduler, identity, image, block storage, dashboard, console access), compute nodes 1 and 2 (hypervisor, metadata API, noVNC, nova-network), a block storage node (SCSI target (tgt), cinder-volume), and an ephemeral storage node (NFS) on the management (192.168.1.0/24), public (203.0.113.0/24), and flat (10.1.0.0/16) networks] diff --git a/doc/figures/os_physical_network.svg b/doc/figures/os_physical_network.svg deleted file mode 100644 index d4d83fcb..00000000 --- a/doc/figures/os_physical_network.svg +++ /dev/null @@ -1,3 +0,0 @@ [deleted SVG markup: physical network diagram of compute node n with br100 and instances 1 through n, connected via eth0/eth1 to the management (192.168.1.0/24), flat (10.1.0.0/16), and public (203.0.113.0/24) networks] diff --git a/doc/figures/osog_0001.png b/doc/figures/osog_0001.png deleted file mode 100644 index 3d7c556a..00000000 Binary files a/doc/figures/osog_0001.png and /dev/null differ diff --git a/doc/figures/osog_00in01.png b/doc/figures/osog_00in01.png deleted file mode 100644 index 1a7c150c..00000000 Binary files a/doc/figures/osog_00in01.png and /dev/null differ diff --git a/doc/figures/osog_0101.png b/doc/figures/osog_0101.png deleted file mode 100644 index 083d916a..00000000 Binary files a/doc/figures/osog_0101.png and /dev/null differ diff --git a/doc/figures/osog_0102.png b/doc/figures/osog_0102.png deleted file mode 100644 index 33ac264b..00000000 Binary files a/doc/figures/osog_0102.png and /dev/null differ diff --git a/doc/figures/osog_0103.png b/doc/figures/osog_0103.png deleted file mode 100644 index b5a2de15..00000000 Binary files a/doc/figures/osog_0103.png and /dev/null differ diff --git a/doc/figures/osog_0104.png b/doc/figures/osog_0104.png deleted file mode 100644 index 3405b5f5..00000000 Binary files a/doc/figures/osog_0104.png and /dev/null differ diff --git a/doc/figures/osog_0105.png b/doc/figures/osog_0105.png deleted file mode 100644 index a1971f47..00000000 Binary files a/doc/figures/osog_0105.png and /dev/null differ diff --git a/doc/figures/osog_0106.png b/doc/figures/osog_0106.png deleted file mode 100644 index 57fc63af..00000000 Binary files a/doc/figures/osog_0106.png and /dev/null differ diff --git a/doc/figures/osog_01in01.png b/doc/figures/osog_01in01.png deleted file mode 100644 index 8656f98b..00000000 Binary files a/doc/figures/osog_01in01.png and /dev/null differ diff --git a/doc/figures/osog_01in02.png b/doc/figures/osog_01in02.png deleted file mode 100644 index 1565401e..00000000 Binary files a/doc/figures/osog_01in02.png and /dev/null differ diff --git a/doc/figures/osog_0201.png b/doc/figures/osog_0201.png deleted file mode 100644 index 794c327e..00000000 Binary files a/doc/figures/osog_0201.png and /dev/null differ diff --git a/doc/figures/osog_0901.png b/doc/figures/osog_0901.png deleted file mode 100644 index 8a11b8d8..00000000 Binary files a/doc/figures/osog_0901.png and /dev/null differ diff --git a/doc/figures/osog_0902.png b/doc/figures/osog_0902.png deleted file mode 100644 index 90a9db57..00000000 Binary files a/doc/figures/osog_0902.png and /dev/null differ diff --git a/doc/figures/osog_1201.png b/doc/figures/osog_1201.png deleted file mode 100644 index d0e3a3fd..00000000 Binary files a/doc/figures/osog_1201.png and /dev/null differ diff --git a/doc/figures/osog_1202.png b/doc/figures/osog_1202.png deleted file mode 100644 index ce1e475e..00000000 Binary files a/doc/figures/osog_1202.png and /dev/null differ diff --git a/doc/figures/osog_ac01.png b/doc/figures/osog_ac01.png deleted file mode 100644 index 6caddef4..00000000 Binary files a/doc/figures/osog_ac01.png and /dev/null differ diff --git a/doc/figures/releasecyclegrizzlydiagram.png b/doc/figures/releasecyclegrizzlydiagram.png deleted file mode 100644 index 26ae2250..00000000 Binary files a/doc/figures/releasecyclegrizzlydiagram.png and /dev/null differ diff --git a/doc/glossary/README.rst b/doc/glossary/README.rst deleted file mode 100644 index d39c86dc..00000000 --- a/doc/glossary/README.rst +++ /dev/null @@ -1,7 +0,0 @@ -Important note about glossary -============================= - -This directory is synced from openstack-manuals. If you need to make -changes, make the changes in openstack-manuals/doc/glossary. After any -change merged to openstack-manuals/doc/glossary, a patch for this -directory will be proposed automatically. diff --git a/doc/glossary/glossary-terms.xml b/doc/glossary/glossary-terms.xml deleted file mode 100644 index 45166e3c..00000000 --- a/doc/glossary/glossary-terms.xml +++ /dev/null @@ -1,9698 +0,0 @@ [deleted XML declaration and DOCTYPE preamble] - - - Glossary - - - Licensed under the Apache License, Version 2.0 (the - "License"); you may not use this file except in - compliance with the License. You may obtain a copy of - the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in - writing, software distributed under the License is - distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR - CONDITIONS OF ANY KIND, either express or implied. See - the License for the specific language governing - permissions and limitations under the License. - - - This glossary offers a list of terms and definitions to define a - vocabulary for OpenStack-related concepts. - - To add to the OpenStack glossary, clone the openstack/openstack-manuals - repository and update the source file - doc/glossary/glossary-terms.xml through the - OpenStack contribution process. - - - - Numbers - - - 6to4 - - 6to4 - - - A mechanism that allows IPv6 packets to be transmitted - over an IPv4 network, providing a strategy for migrating to - IPv6. - - - - - - - - - - A - - - absolute limit - - absolute limit - - - - Impassable limits for guest VMs. Settings include total RAM - size, maximum number of vCPUs, and maximum disk size. - - - - - access control list - - access control list (ACL) - - - - A list of permissions attached to an object. An ACL specifies - which users or system processes have access to objects. It also - defines which operations can be performed on specified objects. Each - entry in a typical ACL specifies a subject and an operation. For - instance, the ACL entry (Alice, delete) for a file gives - Alice permission to delete the file (a minimal sketch follows this - group of entries). - - - - - access key - - access key - - - - Alternative term for an Amazon EC2 access key. See EC2 access - key. - - - - - account - - accounts - - - - The Object Storage context of an account. Do not confuse with a - user account from an authentication service, such as Active Directory, - /etc/passwd, OpenLDAP, OpenStack Identity, and so on. - - - - - account auditor - - account auditor - - - - Checks for missing replicas and incorrect or corrupted objects - in a specified Object Storage account by running queries against the - back-end SQLite database.
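The (Alice, delete) example in the access control list entry can be sketched in a few lines of Python. This is a conceptual illustration only, with made-up users and objects::

    # Each ACL entry pairs a subject with an operation, as in the
    # (Alice, delete) example above.
    acl = {
        "report.txt": {("alice", "delete"), ("alice", "read"), ("bob", "read")},
    }

    def is_allowed(user, operation, obj):
        return (user, operation) in acl.get(obj, set())

    print(is_allowed("alice", "delete", "report.txt"))  # True
    print(is_allowed("bob", "delete", "report.txt"))    # False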
- - - - - account database - - account database - - - - A SQLite database that contains Object Storage accounts and - related metadata and that the accounts server accesses. - - - - - account reaper - - account reaper - - - - An Object Storage worker that scans for and deletes account - databases that the account server has marked for deletion. - - - - - account server - - account server - - - - Lists containers in Object Storage and stores container - information in the account database. - - - - - account service - - account service - - - - An Object Storage component that provides account services such - as list, create, modify, and audit. Do not confuse with OpenStack - Identity service, OpenLDAP, or similar user-account services. - - - - - accounting - - accounting - - - - The Compute service provides accounting information through the - event notification and system usage data facilities. - - - - - ACL - - ACL - - access control list - - - - See access control list. - - - - - active/active configuration - - active/active configuration - - - - In a high-availability setup with an active/active - configuration, several systems share the load together and if one - fails, the load is distributed to the remaining systems. - - - - - Active Directory - - Active Directory - - - - Authentication and identity service by Microsoft, based on LDAP. - Supported in OpenStack. - - - - - active/passive configuration - - active/passive configuration - - - - In a high-availability setup with an active/passive - configuration, systems are set up to bring additional resources online - to replace those that have failed. - - - - - address pool - - address pool - - - - A group of fixed and/or floating IP addresses that are assigned - to a project and can be used by or assigned to the VM instances in a - project. - - - - - admin API - - admin API - - - - A subset of API calls that are accessible to authorized - administrators and are generally not accessible to end users or the - public Internet. They can exist as a separate service (keystone) or - can be a subset of another API (nova). - - - - - administrator - - administrator - - - - The person responsible for installing, configuring, - and managing an OpenStack cloud. - - - - - admin server - - admin server - - - - In the context of the Identity service, the worker process that - provides access to the admin API. - - - - - Advanced Message Queuing Protocol (AMQP) - - Advanced Message Queuing Protocol (AMQP) - - - - The open standard messaging protocol used by OpenStack - components for intra-service communications, provided by RabbitMQ, - Qpid, or ZeroMQ. - - - - - Advanced RISC Machine (ARM) - - Advanced RISC Machine (ARM) - - - - Low-power CPU often found in mobile and embedded - devices. Supported by OpenStack. - - - - - alert - - alerts - - definition of - - - - The Compute service can send alerts through its notification - system, which includes a facility to create custom notification - drivers. Alerts can be sent to and displayed on the horizon - dashboard. - - - - - allocate - - allocate, definition of - - - - The process of taking a floating IP address from the address - pool so it can be associated with a fixed IP on a guest VM - instance. - - - - - Amazon Kernel Image (AKI) - - Amazon Kernel Image (AKI) - - - - Both a VM container format and disk format. Supported by Image - service. - - - - - Amazon Machine Image (AMI) - - Amazon Machine Image (AMI) - - - - Both a VM container format and disk format.
Supported by Image - service. - - - - - Amazon Ramdisk Image (ARI) - - Amazon Ramdisk Image (ARI) - - - - Both a VM container format and disk format. Supported by Image - service. - - - - - Anvil - - Anvil - - - - A project that ports the shell script-based project named - DevStack to Python. - - - - - Apache - - Apache - - - - The Apache Software Foundation supports the Apache community of - open-source software projects. These projects provide software - products for the public good. - - - - - Apache License 2.0 - - Apache License 2.0 - - - - All OpenStack core projects are provided under the terms of the - Apache License 2.0 license. - - - - - Apache Web Server - - Apache Web Server - - - - The most common web server software currently used on the - Internet. - - - - - API endpoint - - endpoints - - API endpoint - - - API (application programming interface) - - API endpoint - - - - The daemon, worker, or service that a client communicates with - to access an API. API endpoints can provide any number of services, - such as authentication, sales data, performance meters, Compute VM - commands, census data, and so on. - - - - - API extension - - API (application programming interface) - - API extension - - - - Custom modules that extend some OpenStack core APIs. - - - - - API extension plug-in - - API (application programming interface) - - API extension plug-in - - - - Alternative term for a Networking plug-in or Networking API - extension. - - - - - API key - - API (application programming interface) - - API key - - - - Alternative term for an API token. - - - - - API server - - API (application programming interface) - - API server - - - - Any node running a daemon or worker that provides an API - endpoint. - - - - - API token - - API (application programming interface) - - API token - - - - Passed to API requests and used by OpenStack to verify that the - client is authorized to run the requested operation. - - - - - API version - - API (application programming interface) - - API version - - - - In OpenStack, the API version for a project is part of the URL. - For example, example.com/nova/v1/foobar. - - - - - applet - - applet - - - - A Java program that can be embedded into a web page. - - - - - Application Programming Interface (API) - - - A collection of specifications used to access a service, - application, or program. Includes service calls, required parameters - for each call, and the expected return values. - - - - - Application Catalog service - - Application Catalog service - murano - - - - - OpenStack project that provides an application catalog - service so that users can compose and deploy composite - environments on an application abstraction level while - managing the application lifecycle. The code name of the - project is murano. - - - - - application server - - servers - - application servers - - - application server - - - - A piece of software that makes available another piece of - software over a network. - - - - - Application Service Provider (ASP) - - Application Service Provider (ASP) - - - - - Companies that rent specialized applications that help - businesses and organizations provide additional services - with lower cost. - - - - - - Address Resolution Protocol (ARP) - - Address Resolution Protocol (ARP) - - - - - The protocol by which layer-3 IP addresses are resolved into - layer-2 link local addresses. 
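As a hedged illustration of the API token entry above: a client passes the token in the X-Auth-Token header on each request. The endpoint URL and token value here are placeholders, and the port is only a common Compute API default::

    import requests

    # Hypothetical values; X-Auth-Token is the header OpenStack
    # services expect the token in.
    endpoint = "http://controller:8774/v2.1/servers"  # assumed Compute API URL
    token = "gAAAAAB..."  # token previously issued by the Identity service

    response = requests.get(endpoint, headers={"X-Auth-Token": token})
    print(response.status_code)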
- - - - - - arptables - - arptables - - - - Tool used for maintaining Address Resolution Protocol packet - filter rules in the Linux kernel firewall modules. Used along with - iptables, ebtables, and ip6tables in Compute to provide firewall - services for VMs. - - - - - associate - - associate, definition of - - - - The process associating a Compute floating IP address with a - fixed IP address. - - - - - Asynchronous JavaScript and XML (AJAX) - - Asynchronous JavaScript and XML (AJAX) - - - - A group of interrelated web development techniques used on the - client-side to create asynchronous web applications. Used extensively - in horizon. - - - - - ATA over Ethernet (AoE) - - ATA over Ethernet (AoE) - - - - A disk storage protocol tunneled within Ethernet. - - - - - attach - - attach, definition of - - - - The process of connecting a VIF or vNIC to a L2 network in - Networking. In the context of Compute, this process connects a storage - volume to an instance. - - - - - attachment (network) - - attachment (network) - - - - Association of an interface ID to a logical port. Plugs an - interface into a port. - - - - - auditing - - auditing - - - - Provided in Compute through the system usage data - facility. - - - - - auditor - - auditor - - - - A worker process that verifies the integrity of Object Storage - objects, containers, and accounts. Auditors is the collective term for - the Object Storage account auditor, container auditor, and object - auditor. - - - - - Austin - - Austin - - - - The code name for the initial release of - OpenStack. The first design summit took place in - Austin, Texas, US. - - - - - auth node - - auth node - - - - Alternative term for an Object Storage authorization - node. - - - - - authentication - - authentication - - - - The process that confirms that the user, process, or client is - really who they say they are through private key, secret token, - password, fingerprint, or similar method. - - - - - authentication token - - authentication tokens - - - - A string of text provided to the client after authentication. - Must be provided by the user or process in subsequent requests to the - API endpoint. - - - - - AuthN - - AuthN - - - - The Identity service component that provides authentication - services. - - - - - authorization - - authorization - - - - The act of verifying that a user, process, or client is - authorized to perform an action. - - - - - authorization node - - authorization node - - - - An Object Storage node that provides authorization - services. - - - - - AuthZ - - AuthZ - - - - The Identity component that provides high-level - authorization services. - - - - - Auto ACK - - Auto ACK - - - - Configuration setting within RabbitMQ that enables or disables - message acknowledgment. Enabled by default. - - - - - auto declare - - auto declare - - - - A Compute RabbitMQ setting that determines whether a message - exchange is automatically created when the program starts. - - - - - availability zone - - availability zone - - - - An Amazon EC2 concept of an isolated area that is used for fault - tolerance. Do not confuse with an OpenStack Compute zone or - cell. - - - - - AWS - - AWS (Amazon Web Services) - - - - Amazon Web Services. - - - - - AWS CloudFormation template - - AWS CloudFormation template - - - - - AWS CloudFormation allows AWS users to create and manage a - collection of related resources. The Orchestration service - supports a CloudFormation-compatible format (CFN). 
- - - - - - - - - B - - - back end - - back-end interactions - - definition of - - - - Interactions and processes that are obfuscated from the user, - such as Compute volume mount, data transmission to an iSCSI target by - a daemon, or Object Storage object integrity checks. - - - - - back-end catalog - - back-end interactions - - catalog - - - - The storage method used by the Identity service catalog service - to store and retrieve information about API endpoints that are - available to the client. Examples include an SQL database, LDAP - database, or KVS back end. - - - - - back-end store - - back-end interactions - - store - - - - The persistent data store used to save and retrieve information - for a service, such as lists of Object Storage objects, current state - of guest VMs, lists of user names, and so on. Also, the method that the - Image service uses to get and store VM images. Options include Object - Storage, local file system, S3, and HTTP. - - - - - backup restore and disaster recovery as a service - - backup restore and disaster recovery as a service - - - - - The OpenStack project that provides integrated tooling for - backing up, restoring, and recovering file systems, - instances, or database backups. The project name is freezer. - - - - - - bandwidth - - bandwidth - - definition of - - - - The amount of available data used by communication resources, - such as the Internet. Represents the amount of data that is used to - download things or the amount of data available to download. - - - - - barbican - - barbican - - - - Code name of the key management service for OpenStack. - - - - - - bare - - bare, definition of - - - - An Image service container format that indicates that no - container exists for the VM image. - - - - - Bare Metal service - - Bare Metal service - ironic - - - - OpenStack project that provisions bare metal, as opposed to - virtual, machines. The code name for the project is ironic. - - - - - base image - - base image - - - - An OpenStack-provided image. - - - - - Bell-LaPadula model - - Bell-LaPadula model - - - - A security model that focuses on data confidentiality - and controlled access to classified information. - This model divides entities into subjects and objects. - The clearance of a subject is compared to the classification of the - object to determine if the subject is authorized for the specific access mode - (a minimal sketch follows this group of entries). - The clearance or classification scheme is expressed in terms of a lattice. - - - - - Benchmark service - - Benchmark service - rally - - - - - OpenStack project that provides a framework for - performance analysis and benchmarking of individual - OpenStack components as well as full production OpenStack - cloud deployments. The code name of the project is rally. - - - - - Bexar - - Bexar - - - - A grouped release of projects related to - OpenStack that came out in February of 2011. It - included only Compute (nova) and Object Storage (swift). - Bexar is the code name for the second release of - OpenStack. The design summit took place in - San Antonio, Texas, US, which is the county seat for Bexar county. - - - - - binary - - binary - - definition of - - - - Information that consists solely of ones and zeroes, which is - the language of computers. - - - - - bit - - bits, definition of - - - - A bit is a single-digit number in base 2 (either a - zero or a one). Bandwidth usage is measured in bits per second.
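A minimal sketch of the Bell-LaPadula checks described above ("no read up, no write down"), with an assumed four-level lattice; the level names are illustrative::

    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

    def may_read(subject_clearance, object_classification):
        # Simple security property: no read up.
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    def may_write(subject_clearance, object_classification):
        # Star property: no write down.
        return LEVELS[subject_clearance] <= LEVELS[object_classification]

    print(may_read("secret", "confidential"))   # True
    print(may_read("confidential", "secret"))   # False: no read up
    print(may_write("secret", "confidential"))  # False: no write down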
- - - - - bits per second (BPS) - - bits per second (BPS) - - - - The universal measurement of how quickly data is transferred - from place to place. - - - - - block device - - block device - - - - A device that moves data in the form of blocks. These device - nodes interface the devices, such as hard disks, CD-ROM drives, flash - drives, and other addressable regions of memory. - - - - - block migration - - block migration - - - - A method of VM live migration used by KVM to evacuate instances - from one host to another with very little downtime during a - user-initiated switchover. Does not require shared storage. Supported - by Compute. - - - - - Block Storage service - - Block Storage service - - - - The OpenStack core project that enables management of volumes, - volume snapshots, and volume types. The project name of Block Storage - is cinder. - - - - - Block Storage API - - Block Storage API - - - - An API on a separate endpoint for attaching, - detaching, and creating block storage for compute - VMs. - - - - - BMC - - BMC (Baseboard Management Controller) - - - - Baseboard Management Controller. The intelligence in the IPMI - architecture, which is a specialized micro-controller that is embedded - on the motherboard of a computer and acts as a server. Manages the - interface between system management software and platform - hardware. - - - - - bootable disk image - - bootable disk image - - - - A type of VM image that exists as a single, bootable - file. - - - - - Bootstrap Protocol (BOOTP) - - Bootstrap Protocol (BOOTP) - - - - A network protocol used by a network client to obtain an IP - address from a configuration server. Provided in Compute through the - dnsmasq daemon when using either the FlatDHCP manager or VLAN manager - network manager. - - - - - Border Gateway Protocol (BGP) - - Border Gateway Protocol (BGP) - - - - The Border Gateway Protocol is a dynamic routing protocol - that connects autonomous systems. Considered the - backbone of the Internet, this protocol connects disparate - networks to form a larger network. - - - - - browser - - browsers, definition of - - - - Any client software that enables a computer or device to access - the Internet. - - - - - builder file - - builder files - - - - Contains configuration information that Object Storage uses to - reconfigure a ring or to re-create it from scratch after a serious - failure. - - - - - bursting - - bursting - - - - - The practice of utilizing a secondary environment to - elastically build instances on-demand when the primary - environment is resource constrained. - - - - - - button class - - button classes - - - - A group of related button types within horizon. Buttons to - start, stop, and suspend VMs are in one class. Buttons to associate - and disassociate floating IP addresses are in another class, and so - on. - - - - - byte - - bytes, definition of - - - - Set of bits that make up a single character; there are usually 8 - bits to a byte. - - - - - - - - C - - - CA - - CA (Certificate/Certification Authority) - - - - Certificate Authority or Certification Authority. In - cryptography, an entity that issues digital certificates. The digital - certificate certifies the ownership of a public key by the named - subject of the certificate. This enables others (relying parties) to - rely upon signatures or assertions made by the private key that - corresponds to the certified public key. 
In this model of trust - relationships, a CA is a trusted third party for both the subject - (owner) of the certificate and the party relying upon the certificate. - CAs are characteristic of many public key infrastructure (PKI) - schemes. - - - - - cache pruner - - cache pruners - - - - A program that keeps the Image service VM image cache at or - below its configured maximum size. - - - - - Cactus - - Cactus - - - - An OpenStack grouped release of projects that came out in the - spring of 2011. It included Compute (nova), Object Storage (swift), - and the Image service (glance). - Cactus is a city in Texas, US and is the code name for - the third release of OpenStack. When OpenStack releases went - from three to six months long, the code name of the release - changed to match a geography nearest the previous - summit. - - - - CADF - - - Cloud Auditing Data Federation (CADF) is a - specification for audit event data. CADF is - supported by OpenStack Identity. - - - - - - CALL - - CALL - - - - One of the RPC primitives used by the OpenStack message queue - software. Sends a message and waits for a response. - - - - - capability - - capability - - definition of - - - - Defines resources for a cell, including CPU, storage, and - networking. Can apply to the specific services within a cell or a - whole cell. - - - - - capacity cache - - capacity cache - - - - A Compute back-end database table that contains the current - workload, amount of free RAM, and number of VMs running on each host. - Used to determine on which host a VM starts. - - - - - capacity updater - - capacity updater - - - - A notification driver that monitors VM instances and updates the - capacity cache as needed. - - - - - CAST - - CAST (RPC primitive) - - - - One of the RPC primitives used by the OpenStack message queue - software. Sends a message and does not wait for a response. - - - - - catalog - - catalog - - - - A list of API endpoints that are available to a user after - authentication with the Identity service. - - - - - catalog service - - catalog service - - - - An Identity service that lists API endpoints that are available - to a user after authentication with the Identity service. - - - - - ceilometer - - ceilometer - - - - The project name for the Telemetry service, which is an - integrated project that provides metering and measuring facilities for - OpenStack. - - - - - cell - - cells - - definition of - - - - Provides logical partitioning of Compute resources in a child - and parent relationship. Requests are passed from parent cells to - child cells if the parent cannot provide the requested - resource. - - - - - cell forwarding - - cells - - cell forwarding - - - - A Compute option that enables parent cells to pass resource - requests to child cells if the parent cannot provide the requested - resource. - - - - - cell manager - - cells - - cell managers - - - - The Compute component that contains a list of the current - capabilities of each host within the cell and routes requests as - appropriate. - - - - - CentOS - - CentOS - - - - A Linux distribution that is compatible with OpenStack. - - - - - Ceph - - Ceph - - - - Massively scalable distributed storage system that consists of - an object store, block store, and POSIX-compatible distributed file - system. Compatible with OpenStack. - - - - - CephFS - - CephFS - - - - The POSIX-compliant file system provided by Ceph. 
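To illustrate the catalog entries above, here is a simplified picture of a service catalog as a mapping from service type to endpoints. The URLs use typical default ports but should be treated as assumptions::

    # A hedged, simplified catalog: service type -> interface -> endpoint.
    catalog = {
        "compute":  {"public": "http://controller:8774/v2.1"},
        "identity": {"public": "http://controller:5000/v3"},
        "image":    {"public": "http://controller:9292"},
    }

    def endpoint_for(service_type, interface="public"):
        return catalog[service_type][interface]

    print(endpoint_for("image"))  # http://controller:9292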
- - - - - certificate authority - - certificate authority (Compute) - - - - A simple certificate authority provided by Compute for cloudpipe - VPNs and VM image decryption. - - - - - Challenge-Handshake Authentication Protocol (CHAP) - - Challenge-Handshake Authentication Protocol - (CHAP) - - - - An iSCSI authentication method supported by Compute. - - - - - chance scheduler - - chance scheduler - - - - A scheduling method used by Compute that randomly chooses an - available host from the pool. - - - - - changes since - - changes since - - - - A Compute API parameter that downloads changes to the requested - item since your last request, instead of downloading a new, fresh set - of data and comparing it against the old data. - - - - - Chef - - Chef - - - - An operating system configuration management tool supporting - OpenStack deployments. - - - - - child cell - - cells - - child cells - - - child cells - - - - If a requested resource such as CPU time, disk storage, or - memory is not available in the parent cell, the request is forwarded - to its associated child cells. If the child cell can fulfill the - request, it does. Otherwise, it attempts to pass the request to any of - its children. - - - - - cinder - - cinder - - - - A core OpenStack project that provides block storage services - for VMs. - - - - - CirrOS - - CirrOS - - - - A minimal Linux distribution designed for use as a test - image on clouds such as OpenStack. - - - - - Cisco neutron plug-in - - Cisco neutron plug-in - - - - A Networking plug-in for Cisco devices and technologies, - including UCS and Nexus. - - - - - cloud architect - - cloud architect - - - - A person who plans, designs, and oversees the creation of - clouds. - - - - - cloud computing - - cloud computing - - definition of - - - - A model that enables access to a shared pool of configurable - computing resources, such as networks, servers, storage, applications, - and services, that can be rapidly provisioned and released with - minimal management effort or service provider interaction. - - - - - cloud controller - - cloud computing - - cloud controllers - - - - Collection of Compute components that represent the global state - of the cloud; talks to services, such as Identity authentication, - Object Storage, and node/storage workers through a - queue. - - - - - cloud controller node - - cloud computing - - cloud controller nodes - - - - A node that runs network, volume, API, scheduler, and image - services. Each service may be broken out into separate nodes for - scalability or availability. - - - - - Cloud Data Management Interface (CDMI) - - Cloud Data Management Interface (CDMI) - - - - SINA standard that defines a RESTful API for managing objects in - the cloud, currently unsupported in OpenStack. - - - - - Cloud Infrastructure Management Interface (CIMI) - - Cloud Infrastructure Management Interface (CIMI) - - - - An in-progress specification for cloud management. Currently - unsupported in OpenStack. - - - - - cloud-init - - cloud-init - - - - A package commonly installed in VM images that performs - initialization of an instance after boot using information that it - retrieves from the metadata service, such as the SSH public key and - user data. - - - - - cloudadmin - - cloudadmin - - - - One of the default roles in the Compute RBAC system. Grants - complete system access. - - - - - Cloudbase-Init - - Cloudbase-Init - cloud-init - - - - A Windows project providing guest initialization features, - similar to cloud-init. 
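As a hedged illustration of the cloud-init entry above: an instance can fetch its own metadata from the metadata service's well-known link-local address. This only works from inside a running instance, and the exact keys present depend on the deployment::

    import requests

    # The metadata service lives at a fixed link-local address inside
    # the instance; the OpenStack-format document is at this path.
    URL = "http://169.254.169.254/openstack/latest/meta_data.json"

    metadata = requests.get(URL, timeout=5).json()
    print(metadata.get("uuid"))
    print(metadata.get("public_keys", {}))  # SSH public keys injected at boot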
cloudpipe

A compute service that creates VPNs on a per-project basis.

cloudpipe image

A pre-made VM image that serves as a cloudpipe server. Essentially, OpenVPN running on Linux.

Clustering service

The OpenStack project that implements clustering services and libraries for the management of groups of homogeneous objects exposed by other OpenStack services. The project name of the Clustering service is senlin.

CMDB

Configuration Management Database.

congress

OpenStack project that provides the Governance service.

command filter

Lists allowed commands within the Compute rootwrap facility.

Common Internet File System (CIFS)

A file sharing protocol. It is a public or open variation of the original Server Message Block (SMB) protocol developed and used by Microsoft. Like the SMB protocol, CIFS runs at a higher level and uses the TCP/IP protocol.

community project

A project that is not officially endorsed by the OpenStack Foundation. If the project is successful enough, it might be elevated to an incubated project and then to a core project, or it might be merged with the main code trunk.

compression

Reducing the size of files by special encoding; the file can be decompressed again to its original content. OpenStack supports compression at the Linux file system level but does not support compression for things such as Object Storage objects or Image service VM images.

Compute service

The OpenStack core project that provides compute services. The project name of the Compute service is nova.

Compute API

The nova-api daemon provides access to nova services. Can communicate with other APIs, such as the Amazon EC2 API.

compute controller

The Compute component that chooses suitable hosts on which to start VM instances.

compute host

Physical host dedicated to running compute nodes.

compute node

A node that runs the nova-compute daemon, which manages VM instances that provide a wide range of services, such as web applications and analytics.

Compute service

Name for the Compute component that manages VMs.

compute worker

The Compute component that runs on each compute node and manages the VM instance lifecycle, including run, reboot, terminate, attach/detach volumes, and so on. Provided by the nova-compute daemon.

concatenated object

A set of segment objects that Object Storage combines and sends to the client.

conductor

In Compute, conductor is the process that proxies database requests from the compute process. Using conductor improves security because compute nodes do not need direct access to the database.
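Command filters, mentioned above, live in .filters files that rootwrap reads. A hedged sketch of the INI-style syntax (the filter name and command are illustrative, not a recommended policy):

    [Filters]
    # Allow the service user to run kpartx as root
    kpartx: CommandFilter, kpartx, root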
consistency window

The amount of time it takes for a new Object Storage object to become accessible to all clients.

console log

Contains the output from a Linux VM console in Compute.

container

Organizes and stores objects in Object Storage. Similar to the concept of a Linux directory but cannot be nested. Alternative term for an Image service container format.

container auditor

Checks for missing replicas or incorrect objects in specified Object Storage containers through queries to the SQLite back-end database.

container database

A SQLite database that stores Object Storage containers and container metadata. The container server accesses this database.

container format

A wrapper used by the Image service that contains a VM image and its associated metadata, such as machine state, OS disk size, and so on.

container server

An Object Storage server that manages containers.

Containers service

OpenStack project that provides a set of services for management of application containers in a multi-tenant cloud environment. The code name of the project is magnum.

container service

The Object Storage component that provides container services, such as create, delete, list, and so on.

content delivery network (CDN)

A content delivery network is a specialized network that is used to distribute content to clients, typically located close to the client for increased performance.

controller node

Alternative term for a cloud controller node.

core API

Depending on context, the core API is either the OpenStack API or the main API of a specific core project, such as Compute, Networking, Image service, and so on.

core service

An official OpenStack service defined as core by the DefCore Committee. Currently consists of Block Storage service (cinder), Compute service (nova), Identity service (keystone), Image service (glance), Networking service (neutron), and Object Storage service (swift).

cost

Under the Compute distributed scheduler, this is calculated by looking at the capabilities of each host relative to the flavor of the VM instance being requested.

credentials

Data that is only known to or accessible by a user and used to verify that the user is who they say they are. Credentials are presented to the server during authentication. Examples include a password, secret key, digital certificate, and fingerprint.

Cross-Origin Resource Sharing (CORS)

A mechanism that allows many resources (for example, fonts, JavaScript) on a web page to be requested from another domain outside the domain from which the resource originated. In particular, JavaScript's AJAX calls can use the XMLHttpRequest mechanism.
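As a concrete, hedged example of CORS in OpenStack, Object Storage lets you declare allowed origins as container metadata (the container name and origin are placeholders):

    $ swift post webassets \
      -H "X-Container-Meta-Access-Control-Allow-Origin: https://dashboard.example.com"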
Crowbar

An open source community project by Dell that aims to provide all necessary services to quickly deploy clouds.

current workload

An element of the Compute capacity cache that is calculated based on the number of build, snapshot, migrate, and resize operations currently in progress on a given host.

customer

Alternative term for tenant.

customization module

A user-created Python module that is loaded by horizon to change the look and feel of the dashboard.

D

daemon

A process that runs in the background and waits for requests. May or may not listen on a TCP or UDP port. Do not confuse with a worker.

DAC

Discretionary access control. Governs the ability of subjects to access objects, while enabling users to make policy decisions and assign security attributes. The traditional UNIX system of users, groups, and read-write-execute permissions is an example of DAC.

Dashboard

The web-based management interface for OpenStack. An alternative name for horizon.

data encryption

Both Image service and Compute support encrypted virtual machine (VM) images (but not instances). In-transit data encryption is supported in OpenStack using technologies such as HTTPS, SSL, TLS, and SSH. Object Storage does not support object encryption at the application level but may support storage that uses disk encryption.

database ID

A unique ID given to each replica of an Object Storage database.

database replicator

An Object Storage component that copies changes in the account, container, and object databases to other nodes.

Database service

An integrated project that provides scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines. The project name of the Database service is trove.

Data Processing service

OpenStack project that provides a scalable data-processing stack and associated management interfaces. The code name for the project is sahara.

data store

A database engine supported by the Database service.

deallocate

The process of removing the association between a floating IP address and a fixed IP address. Once this association is removed, the floating IP returns to the address pool.

Debian

A Linux distribution that is compatible with OpenStack.

deduplication

The process of finding duplicate data at the disk block, file, and/or object level to minimize storage use; currently unsupported within OpenStack.

default panel

The default panel that is displayed when a user accesses the horizon dashboard.

default tenant

New users are assigned to this tenant if no tenant is specified when a user is created.
default token

An Identity service token that is not associated with a specific tenant and is exchanged for a scoped token.

delayed delete

An option within Image service so that an image is deleted after a predefined number of seconds instead of immediately.

delivery mode

Setting for the Compute RabbitMQ message delivery mode; can be set to either transient or persistent.

denial of service (DoS)

Denial of service (DoS) is a short form for denial-of-service attack. This is a malicious attempt to prevent legitimate users from using a service.

deprecated auth

An option within Compute that enables administrators to create and manage users through the nova-manage command as opposed to using the Identity service.

designate

Code name for the DNS service project for OpenStack.

Desktop-as-a-Service

A platform that provides a suite of desktop environments that users access to receive a desktop experience from any location. This may provide general use, development, or even homogeneous testing environments.

developer

One of the default roles in the Compute RBAC system and the default role assigned to a new user.

device ID

Maps Object Storage partitions to physical storage devices.

device weight

Distributes partitions proportionately across Object Storage devices based on the storage capacity of each device.

DevStack

Community project that uses shell scripts to quickly build complete OpenStack development environments.

DHCP

Dynamic Host Configuration Protocol. A network protocol that configures devices that are connected to a network so that they can communicate on that network by using the Internet Protocol (IP). The protocol is implemented in a client-server model where DHCP clients request configuration data, such as an IP address, a default route, and one or more DNS server addresses, from a DHCP server.

DHCP agent

OpenStack Networking agent that provides DHCP services for virtual networks.

Diablo

A grouped release of projects related to OpenStack that came out in the fall of 2011, the fourth release of OpenStack. It included Compute (nova 2011.3), Object Storage (swift 1.4.3), and the Image service (glance). Diablo is the code name for the fourth release of OpenStack. The design summit took place in the Bay Area near Santa Clara, California, US, and Diablo is a nearby city.

direct consumer

An element of the Compute RabbitMQ that comes to life when an RPC call is executed. It connects to a direct exchange through a unique exclusive queue, sends the message, and terminates.

direct exchange

A routing table that is created within the Compute RabbitMQ during RPC calls; one is created for each RPC call that is invoked.

direct publisher

Element of RabbitMQ that provides a response to an incoming MQ message.
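To make the delayed delete entry above concrete, a hedged sketch of the relevant Image service options as they might appear in glance-api.conf (values are illustrative):

    [DEFAULT]
    # Keep deleted image data on disk and scrub it later
    delayed_delete = True
    # Seconds the scrubber waits before removing a deleted image
    scrub_time = 43200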
disassociate

The process of removing the association between a floating IP address and a fixed IP and thus returning the floating IP address to the address pool.

disk encryption

The ability to encrypt data at the file system, disk partition, or whole-disk level. Supported within Compute VMs.

disk format

The underlying format that a disk image for a VM is stored as within the Image service back-end store. For example, AMI, ISO, QCOW2, VMDK, and so on.

dispersion

In Object Storage, tools to test and ensure dispersion of objects and containers to ensure fault tolerance.

distributed virtual router (DVR)

Mechanism for highly available multi-host routing when using OpenStack Networking (neutron).

Django

A web framework used extensively in horizon.

DNS

Domain Name System. A hierarchical and distributed naming system for computers, services, and resources connected to the Internet or a private network. Associates human-friendly names with IP addresses.

DNS record

A record that specifies information about a particular domain and belongs to the domain.

DNS service

OpenStack project that provides scalable, on-demand, self-service access to authoritative DNS services, in a technology-agnostic manner. The code name for the project is designate.

dnsmasq

Daemon that provides DNS, DHCP, BOOTP, and TFTP services for virtual networks.

domain

An Identity API v3 entity. Represents a collection of projects, groups, and users that defines administrative boundaries for managing OpenStack Identity entities. On the Internet, separates a website from other sites. Often, the domain name has two or more parts that are separated by dots. For example, yahoo.com, usa.gov, harvard.edu, or mail.yahoo.com. Also, a domain is an entity or container of all DNS-related information containing one or more records.

Domain Name System (DNS)

A system by which Internet domain name-to-address and address-to-name resolutions are determined. DNS helps navigate the Internet by translating memorable names into numeric IP addresses; for example, resolving www.yahoo.com to 111.111.111.1. All domains and their components, such as mail servers, utilize DNS to resolve to the appropriate locations. DNS servers are usually set up in a master-slave relationship such that failure of the master invokes the slave. DNS servers might also be clustered or replicated such that changes made to one DNS server are automatically propagated to other active servers. In Compute, the support that enables associating DNS entries with floating IP addresses, nodes, or cells so that hostnames are consistent across reboots.

download

The transfer of data, usually in the form of files, from one computer to another.

DRTM

Dynamic root of trust measurement.
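As a hedged illustration of the Identity v3 domain entity above (names are placeholders):

    $ openstack domain create engineering
    $ openstack project create --domain engineering widgets
    $ openstack user create --domain engineering --password-prompt alice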
durable exchange

The Compute RabbitMQ message exchange that remains active when the server restarts.

durable queue

A Compute RabbitMQ message queue that remains active when the server restarts.

Dynamic Host Configuration Protocol (DHCP)

A method to automatically configure networking for a host at boot time. Provided by both Networking and Compute.

Dynamic HyperText Markup Language (DHTML)

Pages that use HTML, JavaScript, and Cascading Style Sheets to enable users to interact with a web page or show simple animation.

E

east-west traffic

Network traffic between servers in the same cloud or data center. See also north-south traffic.

EBS boot volume

An Amazon EBS storage volume that contains a bootable VM image, currently unsupported in OpenStack.

ebtables

Filtering tool for a Linux bridging firewall, enabling filtering of network traffic passing through a Linux bridge. Used in Compute along with arptables, iptables, and ip6tables to ensure isolation of network communications.

EC2

The Amazon commercial compute product, similar to Compute.

EC2 access key

Used along with an EC2 secret key to access the Compute EC2 API.

EC2 API

OpenStack supports accessing the Amazon EC2 API through Compute.

EC2 Compatibility API

A Compute component that enables OpenStack to communicate with Amazon EC2.

EC2 secret key

Used along with an EC2 access key when communicating with the Compute EC2 API; used to digitally sign each request.

Elastic Block Storage (EBS)

The Amazon commercial block storage product.

encryption

OpenStack supports encryption technologies such as HTTPS, SSH, SSL, TLS, digital certificates, and data encryption.

endpoint

See API endpoint.

endpoint registry

Alternative term for an Identity service catalog.

encapsulation

The practice of placing one packet type within another for the purposes of abstracting or securing data. Examples include GRE, MPLS, or IPsec.

endpoint template

A list of URL and port number endpoints that indicate where a service, such as Object Storage, Compute, Identity, and so on, can be accessed.

entity

Any piece of hardware or software that wants to connect to the network services provided by Networking, the network connectivity service. An entity can make use of Networking by implementing a VIF.

ephemeral image

A VM image that does not save changes made to its volumes and reverts them to their original state after the instance is terminated.

ephemeral volume

Volume that does not save the changes made to it and reverts to its original state when the current user relinquishes control.
Essex

A grouped release of projects related to OpenStack that came out in April 2012, the fifth release of OpenStack. It included Compute (nova 2012.1), Object Storage (swift 1.4.8), Image (glance), Identity (keystone), and Dashboard (horizon). Essex is the code name for the fifth release of OpenStack. The design summit took place in Boston, Massachusetts, US, and Essex is a nearby city.

ESXi

An OpenStack-supported hypervisor.

ETag

MD5 hash of an object within Object Storage, used to ensure data integrity.

euca2ools

A collection of command-line tools for administering VMs; most are compatible with OpenStack.

Eucalyptus Kernel Image (EKI)

Used along with an ERI to create an EMI.

Eucalyptus Machine Image (EMI)

VM image container format supported by Image service.

Eucalyptus Ramdisk Image (ERI)

Used along with an EKI to create an EMI.

evacuate

The process of migrating one or all virtual machine (VM) instances from one host to another, compatible with both shared storage live migration and block migration.

exchange

Alternative term for a RabbitMQ message exchange.

exchange type

A routing algorithm in the Compute RabbitMQ.

exclusive queue

Connected to by a direct consumer in the Compute RabbitMQ; the message can be consumed only by the current connection.

extended attributes (xattr)

File system option that enables storage of additional information beyond owner, group, permissions, modification time, and so on. The underlying Object Storage file system must support extended attributes.

extension

Alternative term for an API extension or plug-in. In the context of Identity service, this is a call that is specific to the implementation, such as adding support for OpenID.

external network

A network segment typically used for instance Internet access.

extra specs

Specifies additional requirements when Compute determines where to start a new instance. Examples include a minimum amount of network bandwidth or a GPU.

F

FakeLDAP

An easy method to create a local LDAP directory for testing Identity and Compute. Requires Redis.

fan-out exchange

Within RabbitMQ and Compute, it is the messaging interface that is used by the scheduler service to receive capability messages from the compute, volume, and network nodes.

federated identity

A method to establish trusts between identity providers and the OpenStack cloud.

Fedora

A Linux distribution compatible with OpenStack.

Fibre Channel

Storage protocol similar in concept to TCP/IP; encapsulates SCSI commands and data.

Fibre Channel over Ethernet (FCoE)

The Fibre Channel protocol tunneled within Ethernet.
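Extra specs, defined above, are set as key-value properties on a flavor. A hedged sketch (the flavor name and property are illustrative):

    $ openstack flavor create --ram 4096 --disk 40 --vcpus 2 m1.pinned
    $ openstack flavor set m1.pinned --property hw:cpu_policy=dedicated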
fill-first scheduler

The Compute scheduling method that attempts to fill a host with VMs rather than starting new VMs on a variety of hosts.

filter

The step in the Compute scheduling process when hosts that cannot run VMs are eliminated and not chosen.

firewall

Used to restrict communications between hosts and/or nodes, implemented in Compute using iptables, arptables, ip6tables, and ebtables.

FWaaS

A Networking extension that provides perimeter firewall functionality.

fixed IP address

An IP address that is associated with the same instance each time that instance boots, is generally not accessible to end users or the public Internet, and is used for management of the instance.

Flat Manager

The Compute component that gives IP addresses to authorized nodes and assumes DHCP, DNS, and routing configuration and services are provided by something else.

flat mode injection

A Compute networking method where the OS network configuration information is injected into the VM image before the instance starts.

flat network

Virtual network type that uses neither VLANs nor tunnels to segregate tenant traffic. Each flat network typically requires a separate underlying physical interface defined by bridge mappings. However, a flat network can contain multiple subnets.

FlatDHCP Manager

The Compute component that provides dnsmasq (DHCP, DNS, BOOTP, TFTP) and radvd (routing) services.

flavor

Alternative term for a VM instance type.

flavor ID

UUID for each Compute or Image service VM flavor or instance type.

floating IP address

An IP address that a project can associate with a VM so that the instance has the same public IP address each time that it boots. You create a pool of floating IP addresses and assign them to instances as they are launched to maintain a consistent IP address for maintaining DNS assignment.

Folsom

A grouped release of projects related to OpenStack that came out in the fall of 2012, the sixth release of OpenStack. It includes Compute (nova), Object Storage (swift), Identity (keystone), Networking (neutron), Image service (glance), and Volumes or Block Storage (cinder). Folsom is the code name for the sixth release of OpenStack. The design summit took place in San Francisco, California, US, and Folsom is a nearby city.

FormPost

Object Storage middleware that uploads (posts) an image through a form on a web page.

freezer

OpenStack project that provides backup, restore, and disaster recovery as a service.

front end

The point where a user interacts with a service; can be an API endpoint, the horizon dashboard, or a command-line tool.

G

gateway

An IP address, typically assigned to a router, that passes network traffic between different networks.
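To illustrate the floating IP workflow above, a hedged sketch with the unified CLI (the network and server names, and the resulting address, are placeholders):

    $ openstack floating ip create public
    $ openstack server add floating ip demo-server 203.0.113.17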
generic receive offload (GRO)

Feature of certain network interface drivers that combines many smaller received packets into a large packet before delivery to the kernel IP stack.

generic routing encapsulation (GRE)

Protocol that encapsulates a wide variety of network layer protocols inside virtual point-to-point links.

glance

A core project that provides the OpenStack Image service.

glance API server

Processes client requests for VMs, updates Image service metadata on the registry server, and communicates with the store adapter to upload VM images from the back-end store.

glance registry

Alternative term for the Image service image registry.

global endpoint template

The Identity service endpoint template that contains services available to all tenants.

GlusterFS

A file system designed to aggregate NAS hosts, compatible with OpenStack.

golden image

A method of operating system installation where a finalized disk image is created and then used by all nodes without modification.

Governance service

OpenStack project to provide Governance-as-a-Service across any collection of cloud services in order to monitor, enforce, and audit policy over dynamic infrastructure. The code name for the project is congress.

Graphic Interchange Format (GIF)

A type of image file that is commonly used for animated images on web pages.

Graphics Processing Unit (GPU)

Choosing a host based on the existence of a GPU is currently unsupported in OpenStack.

Green Threads

The cooperative threading model used by Python; reduces race conditions and only context switches when specific library calls are made. Each OpenStack service is its own thread.

Grizzly

The code name for the seventh release of OpenStack. The design summit took place in San Diego, California, US, and Grizzly is an element of the state flag of California.

Group

An Identity v3 API entity. Represents a collection of users that is owned by a specific domain.

guest OS

An operating system instance running under the control of a hypervisor.

H

Hadoop

Apache Hadoop is an open source software framework that supports data-intensive distributed applications.

Hadoop Distributed File System (HDFS)

A distributed, highly fault-tolerant file system designed to run on low-cost commodity hardware.

handover

An object state in Object Storage where a new replica of the object is automatically created due to a drive failure.

hard reboot

A type of reboot where a physical or virtual power button is pressed as opposed to a graceful, proper shutdown of the operating system.

Havana

The code name for the eighth release of OpenStack.
The design summit took place in Portland, Oregon, US, and Havana is an unincorporated community in Oregon.

heat

An integrated project that aims to orchestrate multiple cloud applications for OpenStack.

Heat Orchestration Template (HOT)

Heat input in the format native to OpenStack.

health monitor

Determines whether back-end members of a VIP pool can process a request. A pool can have several health monitors associated with it. When a pool has several monitors associated with it, all monitors check each member of the pool. All monitors must declare a member to be healthy for it to stay active.

high availability (HA)

A high availability system design approach and associated service implementation ensures that a prearranged level of operational performance will be met during a contractual measurement period. High availability systems seek to minimize system downtime and data loss.

horizon

OpenStack project that provides a dashboard, which is a web interface.

horizon plug-in

A plug-in for the OpenStack dashboard (horizon).

host

A physical computer, not a VM instance (node).

host aggregate

A method to further subdivide availability zones into hypervisor pools, a collection of common hosts.

Host Bus Adapter (HBA)

Device plugged into a PCI slot, such as a fibre channel or network card.

hybrid cloud

A hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect colocation, managed and/or dedicated services with cloud resources.

Hyper-V

One of the hypervisors supported by OpenStack.

hyperlink

Any kind of text that contains a link to some other site, commonly found in documents where clicking on a word or words opens up a different website.

Hypertext Transfer Protocol (HTTP)

An application protocol for distributed, collaborative, hypermedia information systems. It is the foundation of data communication for the World Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext.

Hypertext Transfer Protocol Secure (HTTPS)

An encrypted communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the TLS or SSL protocol, thus adding the security capabilities of TLS or SSL to standard HTTP communications. Most OpenStack API endpoints and many inter-component communications support HTTPS communication.

hypervisor

Software that arbitrates and controls VM access to the actual underlying hardware.

hypervisor pool

A collection of hypervisors grouped together through host aggregates.
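A hedged sketch of creating the host aggregate and hypervisor pool described above (the aggregate name, host, and property are placeholders):

    $ openstack aggregate create --zone nova ssd-hosts
    $ openstack aggregate add host ssd-hosts compute01
    $ openstack aggregate set --property ssd=true ssd-hosts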
I

IaaS

Infrastructure-as-a-Service. IaaS is a provisioning model in which an organization outsources physical components of a data center, such as storage, hardware, servers, and networking components. A service provider owns the equipment and is responsible for housing, operating and maintaining it. The client typically pays on a per-use basis. IaaS is a model for providing cloud services.

Icehouse

The code name for the ninth release of OpenStack. The design summit took place in Hong Kong and Ice House is a street in that city.

ICMP

Internet Control Message Protocol, used by network devices for control messages. For example, ping uses ICMP to test connectivity.

ID number

Unique numeric ID associated with each user in Identity, conceptually similar to a Linux or LDAP UID.

Identity API

Alternative term for the Identity service API.

Identity back end

The source used by Identity service to retrieve user information; an OpenLDAP server, for example.

identity provider

A directory service, which allows users to log in with a user name and password. It is a typical source of authentication tokens.

Identity service

The OpenStack core project that provides a central directory of users mapped to the OpenStack services they can access. It also registers endpoints for OpenStack services. It acts as a common authentication system. The project name of Identity is keystone.

Identity service API

The API used to access the OpenStack Identity service provided through keystone.

IDS

Intrusion Detection System.

image

A collection of files for a specific operating system (OS) that you use to create or rebuild a server. OpenStack provides pre-built images. You can also create custom images, or snapshots, from servers that you have launched. Custom images can be used for data backups or as "gold" images for additional servers.

Image API

The Image service API endpoint for management of VM images.

image cache

Used by Image service to obtain images on the local host rather than re-downloading them from the image server each time one is requested.

image ID

Combination of a URI and UUID used to access Image service VM images through the image API.

image membership

A list of tenants that can access a given VM image within Image service.

image owner

The tenant who owns an Image service virtual machine image.

image registry

A list of VM images that are available through Image service.

Image service

An OpenStack core project that provides discovery, registration, and delivery services for disk and server images. The project name of the Image service is glance.
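For example, a custom image as described above can be registered with a hedged one-liner (the file name and image name are placeholders):

    $ openstack image create --disk-format qcow2 --container-format bare \
      --file ./ubuntu-16.04.qcow2 ubuntu-16.04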
Image service API

Alternative name for the glance image API.

image status

The current status of a VM image in Image service, not to be confused with the status of a running instance.

image store

The back-end store used by Image service to store VM images; options include Object Storage, local file system, S3, or HTTP.

image UUID

UUID used by Image service to uniquely identify each VM image.

incubated project

A community project may be elevated to this status and is then promoted to a core project.

ingress filtering

The process of filtering incoming network traffic. Supported by Compute.

INI

The OpenStack configuration files use an INI format to describe options and their values. It consists of sections and key-value pairs.

injection

The process of putting a file into a virtual machine image before the instance is started.

instance

A running VM, or a VM in a known state such as suspended, that can be used like a hardware server.

instance ID

Alternative term for instance UUID.

instance state

The current state of a guest VM image.

instance tunnels network

A network segment used for instance traffic tunnels between compute nodes and the network node.

instance type

Describes the parameters of the various virtual machine images that are available to users; includes parameters such as CPU, storage, and memory. Alternative term for flavor.

instance type ID

Alternative term for a flavor ID.

instance UUID

Unique ID assigned to each guest VM instance.

interface

A physical or virtual device that provides connectivity to another device or medium.

interface ID

Unique ID for a Networking VIF or vNIC in the form of a UUID.

Internet protocol (IP)

Principal communications protocol in the internet protocol suite for relaying datagrams across network boundaries.

Internet Service Provider (ISP)

Any business that provides Internet access to individuals or businesses.

Internet Small Computer System Interface (iSCSI)

Storage protocol that encapsulates SCSI frames for transport over IP networks.

ironic

OpenStack project that provisions bare metal, as opposed to virtual, machines.

IOPS

IOPS (Input/Output Operations Per Second) are a common performance measurement used to benchmark computer storage devices like hard disk drives, solid state drives, and storage area networks.

IP address

Number that is unique to every computer system on the Internet. Two versions of the Internet Protocol (IP) are in use for addresses: IPv4 and IPv6.
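To illustrate the INI entry above, a hedged excerpt of a nova.conf-style file (the section names and values are illustrative):

    [DEFAULT]
    # Options before any section header belong to [DEFAULT]
    my_ip = 10.0.0.11

    [glance]
    api_servers = http://10.0.0.10:9292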
IP Address Management (IPAM)

The process of automating IP address allocation, deallocation, and management. Currently provided by Compute, melange, and Networking.

IPL

Initial Program Loader.

IPMI

Intelligent Platform Management Interface. IPMI is a standardized computer system interface used by system administrators for out-of-band management of computer systems and monitoring of their operation. In layman's terms, it is a way to manage a computer using a direct network connection, whether it is turned on or not; connecting to the hardware rather than an operating system or login shell.

ip6tables

Tool used to set up, maintain, and inspect the tables of IPv6 packet filter rules in the Linux kernel. In OpenStack Compute, ip6tables is used along with arptables, ebtables, and iptables to create firewalls for both nodes and VMs.

ipset

Extension to iptables that allows creation of firewall rules that match entire "sets" of IP addresses simultaneously. These sets reside in indexed data structures to increase efficiency, particularly on systems with a large quantity of rules.

iptables

Used along with arptables and ebtables, iptables creates firewalls in Compute. iptables are the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores. Different kernel modules and programs are currently used for different protocols: iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames. Requires root privilege to manipulate.

IQN

iSCSI Qualified Name (IQN) is the format most commonly used for iSCSI names, which uniquely identify nodes in an iSCSI network. All IQNs follow the pattern iqn.yyyy-mm.domain:identifier, where 'yyyy-mm' is the year and month in which the domain was registered, 'domain' is the reversed domain name of the issuing organization, and 'identifier' is an optional string which makes each IQN under the same domain unique. For example, 'iqn.2015-10.org.openstack.408ae959bce1'.

iSCSI

The SCSI disk protocol tunneled within Ethernet, supported by Compute, Object Storage, and Image service.

ISO9660

One of the VM image disk formats supported by Image service.

itsec

A default role in the Compute RBAC system that can quarantine an instance in any project.

J

Java

A programming language that is used to create systems that involve more than one computer by way of a network.

JavaScript

A scripting language that is used to build web pages.

JavaScript Object Notation (JSON)

One of the supported response formats in OpenStack.

Jenkins

Tool used to run jobs automatically for OpenStack development.

jumbo frame

Feature in modern Ethernet networks that supports frames up to approximately 9000 bytes.
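A hedged sketch of how ipset and iptables combine, as described above (the set name and addresses are placeholders; run as root):

    # ipset create allowed-hosts hash:ip
    # ipset add allowed-hosts 192.168.1.10
    # iptables -I INPUT -m set --match-set allowed-hosts src -j ACCEPT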
Juno

The code name for the tenth release of OpenStack. The design summit took place in Atlanta, Georgia, US, and Juno is an unincorporated community in Georgia.

K

Kerberos

A network authentication protocol which works on the basis of tickets. Kerberos allows nodes communicating over a non-secure network to prove their identity to one another in a secure manner.

kernel-based VM (KVM)

An OpenStack-supported hypervisor. KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V), ARM, IBM Power, and IBM zSeries. It consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module.

Key Manager service

OpenStack project that produces a secret storage and generation system capable of providing key management for services wishing to enable encryption features. The code name of the project is barbican.

keystone

The project that provides OpenStack Identity services.

Kickstart

A tool to automate system configuration and installation on Red Hat, Fedora, and CentOS-based Linux distributions.

Kilo

The code name for the eleventh release of OpenStack. The design summit took place in Paris, France. Due to delays in the name selection, the release was known only as K. Because k is the unit symbol for kilo and the reference artifact is stored near Paris in the Pavillon de Breteuil in Sèvres, the community chose Kilo as the release name.

L

large object

An object within Object Storage that is larger than 5 GB.

Launchpad

The collaboration site for OpenStack.

Layer-2 network

Term used in the OSI network architecture for the data link layer. The data link layer is responsible for media access control, flow control and detecting and possibly correcting errors that may occur in the physical layer.

Layer-3 network

Term used in the OSI network architecture for the network layer. The network layer is responsible for packet forwarding, including routing from one node to another.

Layer-2 (L2) agent

OpenStack Networking agent that provides layer-2 connectivity for virtual networks.

Layer-3 (L3) agent

OpenStack Networking agent that provides layer-3 (routing) services for virtual networks.

Liberty

The code name for the twelfth release of OpenStack. The design summit took place in Vancouver, Canada and Liberty is the name of a village in the Canadian province of Saskatchewan.

libvirt

Virtualization API library used by OpenStack to interact with many of its supported hypervisors.

Lightweight Directory Access Protocol (LDAP)

An application protocol for accessing and maintaining distributed directory information services over an IP network.

Linux bridge

Software that enables multiple VMs to share a single physical NIC within Compute.
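As a hedged illustration of the Linux bridge entry above, the kind of bridge that Compute manages can be inspected or built by hand with brctl (the bridge and interface names are placeholders; run as root):

    # brctl addbr br100
    # brctl addif br100 eth1
    # brctl show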
Linux Bridge neutron plug-in

Enables a Linux bridge to understand a Networking port, interface attachment, and other abstractions.

Linux containers (LXC)

An OpenStack-supported hypervisor.

live migration

The ability within Compute to move running virtual machine instances from one host to another with only a small service interruption during switchover.

load balancer

A load balancer is a logical device that belongs to a cloud account. It is used to distribute workloads between multiple back-end systems or services, based on the criteria defined as part of its configuration.

load balancing

The process of spreading client requests between two or more nodes to improve performance and availability.

LBaaS

Enables Networking to distribute incoming requests evenly between designated instances.

Logical Volume Manager (LVM)

Provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes.

M

magnum

Code name for the OpenStack project that provides the Containers Service.

management API

Alternative term for an admin API.

management network

A network segment used for administration, not accessible to the public Internet.

manager

Logical groupings of related code, such as the Block Storage volume manager or network manager.

manifest

Used to track segments of a large object within Object Storage.

manifest object

A special Object Storage object that contains the manifest for a large object.

manila

OpenStack project that provides shared file systems as a service to applications.

maximum transmission unit (MTU)

Maximum frame or packet size for a particular network medium. Typically 1500 bytes for Ethernet networks.

mechanism driver

A driver for the Modular Layer 2 (ML2) neutron plug-in that provides layer-2 connectivity for virtual instances. A single OpenStack installation can use multiple mechanism drivers.

melange

Project name for OpenStack Network Information Service. To be merged with Networking.

membership

The association between an Image service VM image and a tenant. Enables images to be shared with specified tenants.

membership list

A list of tenants that can access a given VM image within Image service.

memcached

A distributed memory object caching system that is used by Object Storage for caching.

memory overcommit

The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as RAM overcommit.
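Memory overcommit, defined above, is typically tuned through scheduler allocation ratios; a hedged nova.conf sketch (the values are illustrative):

    [DEFAULT]
    # Schedule up to 1.5x the physical RAM of each host
    ram_allocation_ratio = 1.5
    # CPU overcommit is expressed the same way
    cpu_allocation_ratio = 16.0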
message broker

The software package used to provide AMQP messaging capabilities within Compute. Default package is RabbitMQ.

message bus

The main virtual communication line used by all AMQP messages for inter-cloud communications within Compute.

message queue

Passes requests from clients to the appropriate workers and returns the output to the client after the job completes.

Message service

OpenStack project that aims to produce an OpenStack messaging service that affords a variety of distributed application patterns in an efficient, scalable and highly-available manner, and to create and maintain associated Python libraries and documentation. The code name for the project is zaqar.

Metadata agent

OpenStack Networking agent that provides metadata services for instances.

Meta-Data Server (MDS)

Stores CephFS metadata.

migration

The process of moving a VM instance from one host to another.

mistral

OpenStack project that provides the Workflow service.

Mitaka

The code name for the thirteenth release of OpenStack. The design summit took place in Tokyo, Japan. Mitaka is a city in Tokyo.

monasca

OpenStack project that provides a Monitoring service.

multi-host

High-availability mode for legacy (nova) networking. Each compute node handles NAT and DHCP and acts as a gateway for all of the VMs on it. A networking failure on one compute node doesn't affect VMs on other compute nodes.

multinic

Facility in Compute that allows each virtual machine instance to have more than one VIF connected to it.

murano

OpenStack project that provides an Application catalog.

Modular Layer 2 (ML2) neutron plug-in

Can concurrently use multiple layer-2 networking technologies, such as 802.1Q and VXLAN, in Networking.

Monitor (LBaaS)

LBaaS feature that provides availability monitoring using the ping command, TCP, and HTTP/HTTPS GET.

Monitor (Mon)

A Ceph component that communicates with external clients, checks data state and consistency, and performs quorum functions.

Monitoring

The OpenStack project that provides a multi-tenant, highly scalable, performant, fault-tolerant Monitoring-as-a-Service solution for metrics, complex event processing, and logging. It builds an extensible platform for advanced monitoring services that can be used by both operators and tenants to gain operational insight and visibility, ensuring availability and stability. The project name is monasca.

multi-factor authentication

Authentication method that uses two or more credentials, such as a password and a private key. Currently not supported in Identity.

MultiNic

Facility in Compute that enables a virtual machine instance to have more than one VIF connected to it.

N

Nebula

Released as open source by NASA in 2010 and is the basis for Compute.
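To connect the ML2 plug-in and mechanism driver entries above, a hedged excerpt of an ml2_conf.ini (the driver lists are illustrative):

    [ml2]
    type_drivers = flat,vlan,vxlan
    mechanism_drivers = linuxbridge,l2population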
netadmin

One of the default roles in the Compute RBAC system. Enables the user to allocate publicly accessible IP addresses to instances and change firewall rules.

NetApp volume driver

Enables Compute to communicate with NetApp storage devices through the NetApp OnCommand Provisioning Manager.

network

A virtual network that provides connectivity between entities. For example, a collection of virtual ports that share network connectivity. In Networking terminology, a network is always a layer-2 network.

NAT

Network Address Translation; process of modifying IP address information while in transit. Supported by Compute and Networking.

network controller

A Compute daemon that orchestrates the network configuration of nodes, including IP addresses, VLANs, and bridging. Also manages routing for both public and private networks.

Network File System (NFS)

A method for making file systems available over the network. Supported by OpenStack.

network ID

Unique ID assigned to each network segment within Networking. Same as network UUID.

network manager

The Compute component that manages various network components, such as firewall rules, IP address allocation, and so on.

network namespace

Linux kernel feature that provides independent virtual networking instances on a single host with separate routing tables and interfaces. Similar to virtual routing and forwarding (VRF) services on physical network equipment.

network node

Any compute node that runs the network worker daemon.

network segment

Represents a virtual, isolated OSI layer-2 subnet in Networking.

Newton

The code name for the fourteenth release of OpenStack. The design summit will take place in Austin, Texas, US. The release is named after "Newton House", located at 1013 E. Ninth St., Austin, TX, which is listed on the National Register of Historic Places.

NTP

Network Time Protocol; method of keeping a clock for a host or node correct via communication with a trusted, accurate time source.

network UUID

Unique ID for a Networking network segment.

network worker

The nova-network worker daemon; provides services such as giving an IP address to a booting nova instance.

Networking service

A core OpenStack project that provides a network connectivity abstraction layer to OpenStack Compute. The project name of Networking is neutron.

Networking API

API used to access OpenStack Networking. Provides an extensible architecture to enable custom plug-in creation.

neutron

A core OpenStack project that provides a network connectivity abstraction layer to OpenStack Compute.

neutron API

An alternative name for Networking API.
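Network namespaces, described above, can be listed and entered with the ip utility; a hedged example (the qrouter name embeds a router UUID and is a placeholder):

    # ip netns list
    # ip netns exec qrouter-<router-uuid> ip addr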
- - - - - neutron manager - neutron - neutron manager - - - - Enables Compute and Networking integration, which enables Networking to perform network management for guest VMs.
- - - - - neutron plug-in - neutron - neutron plug-in - - - - Interface within Networking that enables organizations to create custom plug-ins for advanced features, such as QoS, ACLs, or IDS.
- - - - - Nexenta volume driver - Nexenta volume driver - - - - Provides support for NexentaStor devices in Compute.
- - - - - No ACK - No ACK - - - - Disables server-side message acknowledgment in the Compute RabbitMQ. Increases performance but decreases reliability.
- - - - - node - nodes - definition of - - - - A VM instance that runs on a host.
- - - - - non-durable exchange - messages - non-durable exchanges - - non-durable exchanges - - - - Message exchange that is cleared when the service restarts. Its data is not written to persistent storage.
- - - - - non-durable queue - messages - non-durable queues - - non-durable queue - - - - Message queue that is cleared when the service restarts. Its data is not written to persistent storage.
- - - - - non-persistent volume - non-persistent volume - ephemeral volume - - - - Alternative term for an ephemeral volume.
- - - - - north-south traffic - north-south traffic - - - - - Network traffic between a user or client (north) and a server (south), or traffic into the cloud (south) and out of the cloud (north). See also east-west traffic.
- - - - - nova - - OpenStack project that provides compute services.
- - - - - Nova API - nova - Compute API - - - - Alternative term for the Compute API.
- - - - - nova-network - nova - nova-network - - - - A Compute component that manages IP address allocation, firewalls, and other network-related tasks. This is the legacy networking option and an alternative to Networking.
- - - - - - - - O - - - object - objects - definition of - - - - A BLOB of data held by Object Storage; can be in any format.
- - - - - object auditor - objects - object auditors - - - - Opens all objects for an object server and verifies the MD5 hash, size, and metadata for each object.
- - - - - object expiration - objects - object expiration - - - - A configurable option within Object Storage to automatically delete objects after a specified amount of time has passed or a certain date is reached.
- - - - - object hash - objects - object hash - - - - Unique ID for an Object Storage object.
- - - - - object path hash - objects - object path hash - - - - Used by Object Storage to determine the location of an object in the ring. Maps objects to partitions.
- - - - - object replicator - objects - object replicators - - - - An Object Storage component that copies an object to remote partitions for fault tolerance.
- - - - - object server - objects - object servers - - - - An Object Storage component that is responsible for managing objects.
- - - - - Object Storage service - - The OpenStack core project that provides eventually consistent and redundant storage and retrieval of fixed digital content. The project name of OpenStack Object Storage is swift.
- - - - - Object Storage API - swift - Object Storage API - - Object Storage - Object Storage API - - - - API used to access OpenStack Object Storage.
- - - - - Object Storage Device (OSD) - Object Storage - Object Storage Device (OSD) - - - - The Ceph storage daemon. 
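The object expiration entry above corresponds to a single header at upload time. A minimal sketch with the python-swiftclient library, assuming a TempAuth-style endpoint with placeholder credentials:

    from swiftclient import client  # python-swiftclient

    # Placeholder auth values for a TempAuth-style endpoint.
    conn = client.Connection(authurl="http://swift:8080/auth/v1.0",
                             user="test:tester", key="testing")

    # X-Delete-After asks Object Storage to expire (delete) the object
    # automatically after the given number of seconds.
    conn.put_object("logs", "boot.log", contents=b"example data",
                    headers={"X-Delete-After": "86400"})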
- - - - - object versioning - - objects - - object versioning - - - - Allows a user to set a flag on an Object Storage container so - that all objects within the container are versioned. - - - - - Ocata - - Ocata - - - - - The code name for the fifteenth release of OpenStack. The - design summit will take place in Barcelona, Spain. Ocata is - a beach north of Barcelona. - - - - - - Oldie - - Oldie - - - - Term for an Object Storage process that runs for a long time. - Can indicate a hung process. - - - - - Open Cloud Computing Interface (OCCI) - - Open Cloud Computing Interface (OCCI) - - - - A standardized interface for managing compute, data, and network - resources, currently unsupported in OpenStack. - - - - - Open Virtualization Format (OVF) - - Open Virtualization Format (OVF) - - - - Standard for packaging VM images. Supported in OpenStack. - - - - - Open vSwitch - - Open vSwitch - - - - - Open vSwitch is a production quality, multilayer virtual - switch licensed under the open source Apache 2.0 license. It - is designed to enable massive network automation through - programmatic extension, while still supporting standard - management interfaces and protocols (for example NetFlow, - sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). - - - - - - Open vSwitch (OVS) agent - - Open vSwitch (OVS) agent - - - - - Provides an interface to the underlying Open vSwitch service for - the Networking plug-in. - - - - - - Open vSwitch neutron plug-in - - Open vSwitch - - neutron plug-in for - - - - Provides support for Open vSwitch in Networking. - - - - - OpenLDAP - - OpenLDAP - - - - An open source LDAP server. Supported by both Compute and - Identity. - - - - - OpenStack - - OpenStack - - basics of - - - - OpenStack is a cloud operating system that controls large pools - of compute, storage, and networking resources throughout a data - center, all managed through a dashboard that gives administrators - control while empowering their users to provision resources through a - web interface. OpenStack is an open source project licensed under the - Apache License 2.0. - - - - - OpenStack code name - - OpenStack - code name - - - - - Each OpenStack release has a code name. Code names ascend in - alphabetical order: Austin, Bexar, Cactus, Diablo, Essex, - Folsom, Grizzly, Havana, Icehouse, Juno, Kilo, Liberty, - and Mitaka. - Code names are cities or counties near where the - corresponding OpenStack design summit took place. An - exception, called the Waldon exception, is granted to - elements of the state flag that sound especially cool. Code - names are chosen by popular vote. - - - - - - openSUSE - - openSUSE - - - - A Linux distribution that is compatible with OpenStack. - - - - - operator - - operator - - - - The person responsible for planning and maintaining an OpenStack - installation. - - - - - optional service - - optional service - - - - An official OpenStack service defined as optional by - DefCore Committee. Currently, consists of - Dashboard (horizon), Telemetry service (Telemetry), - Orchestration service (heat), Database service (trove), - Bare Metal service (ironic), and so on. - - - - - - Orchestration service - - Orchestration service - - - - An integrated project that orchestrates multiple cloud - applications for OpenStack. The project name of Orchestration is - heat. - - - - - orphan - - orphans - - - - In the context of Object Storage, this is a process that is not - terminated after an upgrade, restart, or reload of the service. 
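Object versioning, as defined above, is likewise a per-container flag. A sketch using the same hypothetical python-swiftclient setup (all names are placeholders):

    from swiftclient import client  # python-swiftclient

    conn = client.Connection(authurl="http://swift:8080/auth/v1.0",
                             user="test:tester", key="testing")  # placeholders

    # Overwritten objects in "docs" are preserved in "docs_archive"
    # instead of being lost.
    conn.put_container("docs_archive")
    conn.put_container("docs",
                       headers={"X-Versions-Location": "docs_archive"})

    # Each subsequent overwrite of docs/readme.txt now archives the
    # previous version.
    conn.put_object("docs", "readme.txt", contents=b"v2")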
- - - - - Oslo - - Oslo - - - - - OpenStack project that produces a set of Python libraries - containing code shared by OpenStack projects. - - - - - - - - - P - - - parent cell - - cells - - parent cells - - - parent cells - - - - If a requested resource, such as CPU time, disk storage, or - memory, is not available in the parent cell, the request is forwarded - to associated child cells. - - - - - partition - - partitions - - definition of - - - - A unit of storage within Object Storage used to store objects. - It exists on top of devices and is replicated for fault - tolerance. - - - - - partition index - - partitions - - partition index - - - - Contains the locations of all Object Storage partitions within - the ring. - - - - - partition shift value - - partitions - - partition index value - - - - Used by Object Storage to determine which partition data should - reside on. - - - - - path MTU discovery (PMTUD) - - path MTU discovery (PMTUD) - - - - Mechanism in IP networks to detect end-to-end MTU and adjust - packet size accordingly. - - - - - pause - - pause - - - - A VM state where no changes occur (no changes in memory, network - communications stop, etc); the VM is frozen but not shut down. - - - - - PCI passthrough - - PCI passthrough - - - - Gives guest VMs exclusive access to a PCI device. Currently - supported in OpenStack Havana and later releases. - - - - - persistent message - - messages - - persistent messages - - - persistent messages - - - - A message that is stored both in memory and on disk. The message - is not lost after a failure or restart. - - - - - persistent volume - - persistent volume - - - - Changes to these types of disk volumes are saved. - - - - - personality file - - personality file - - - - A file used to customize a Compute instance. It can be used to - inject SSH keys or a specific network configuration. - - - - - Platform-as-a-Service (PaaS) - - Platform-as-a-Service (PaaS) - - - - Provides to the consumer the ability to deploy applications - through a programming language or tools supported by the cloud - platform provider. An example of Platform-as-a-Service is an - Eclipse/Java programming platform provided with no downloads - required. - - - - - plug-in - - plug-ins, definition of - - - - Software component providing the actual implementation for - Networking APIs, or for Compute APIs, depending on the context. - - - - - policy service - - policy service - - - - Component of Identity that provides a rule-management - interface and a rule-based authorization engine. - - - - - pool - - pool - - - - A logical set of devices, such as web servers, that you - group together to receive and process traffic. The load - balancing function chooses which member of the pool handles - the new requests or connections received on the VIP - address. Each VIP has one pool. - - - - - pool member - - pool member - - - - An application that runs on the back-end server in a - load-balancing system. - - - - - port - - ports - - definition of - - - - A virtual network port within Networking; VIFs / vNICs are - connected to a port. - - - - - port UUID - - ports - - port UUID - - - - Unique ID for a Networking port. - - - - - preseed - - preseed, definition of - - - - A tool to automate system configuration and installation on - Debian-based Linux distributions. - - - - - private image - - private image - - - - An Image service VM image that is only available to specified - tenants. 
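The partition and partition shift value entries describe how Object Storage maps data onto the ring. A simplified sketch of that calculation; the part power, path, and hash suffix are illustrative, and the real implementation lives in swift.common.ring:

    import hashlib
    import struct

    PART_POWER = 10                  # ring built with 2**10 partitions
    PART_SHIFT = 32 - PART_POWER     # the partition shift value

    def partition_for(path, hash_suffix=b"changeme"):
        """Map an /account/container/object path to a partition number."""
        digest = hashlib.md5(path.encode() + hash_suffix).digest()
        # Take the top 32 bits of the hash and drop PART_SHIFT of them,
        # leaving a partition index in [0, 2**PART_POWER).
        return struct.unpack_from(">I", digest)[0] >> PART_SHIFT

    print(partition_for("/AUTH_test/photos/cat.jpg"))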
- - - - - private IP address - - IP addresses - - private - - - private IP address - - - - An IP address used for management and administration, not - available to the public Internet. - - - - - private network - - networks - - private networks - - - private networks - - - - The Network Controller provides virtual networks to enable - compute servers to interact with each other and with the public - network. All machines must have a public and private network - interface. A private network interface can be a flat or VLAN network - interface. A flat network interface is controlled by the - flat_interface with flat managers. A VLAN network interface is - controlled by the vlan_interface option with VLAN - managers. - - - - - project - - projects - - definition of - - - - Projects represent the base unit of “ownership” in OpenStack, - in that all resources in OpenStack should be owned by a specific project. - In OpenStack Identity, a project must be owned by a specific domain. - - - - - project ID - - projects - - project ID - - - - User-defined alphanumeric string in Compute; the name of a - project. - - - - - project VPN - - projects - - project VPN - - - - Alternative term for a cloudpipe. - - - - - promiscuous mode - - promiscuous mode - - - - Causes the network interface to pass all traffic it - receives to the host rather than passing only the frames - addressed to it. - - - - - protected property - - protected property - - - - Generally, extra properties on an Image service image to - which only cloud administrators have access. Limits which user - roles can perform CRUD operations on that property. The cloud - administrator can configure any image property as - protected. - - - - - provider - - provider - - - - An administrator who has access to all hosts and - instances. - - - - - proxy node - - nodes - - proxy nodes - - - proxy nodes - - - - A node that provides the Object Storage proxy service. - - - - - proxy server - - servers - - proxy servers - - - proxy servers - - - - Users of Object Storage interact with the service through the - proxy server, which in turn looks up the location of the requested - data within the ring and returns the results to the user. - - - - - public API - - API (application programming interface) - - public APIs - - - public API - - - - An API endpoint used for both service-to-service communication - and end-user interactions. - - - - - public image - - Image service - - public images - - - public image - - - - An Image service VM image that is available to all - tenants. - - - - - public IP address - - IP addresses - - public - - - public IP address - - - - An IP address that is accessible to end-users. - - - - - public key authentication - - public key authentication - - - - Authentication method that uses keys rather than - passwords. - - - - - public network - - networks - - public - - - public network - - - - The Network Controller provides virtual networks to enable - compute servers to interact with each other and with the public - network. All machines must have a public and private network - interface. The public network interface is controlled by the - public_interface option. - - - - - Puppet - - Puppet - - - - An operating system configuration-management tool supported by - OpenStack. - - - - - Python - - Python - - - - Programming language used extensively in OpenStack. - - - - - - - - Q - - - QEMU Copy On Write 2 (QCOW2) - - QEMU Copy On Write 2 (QCOW2) - - - - One of the VM image disk formats supported by Image - service. 
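Public key authentication, defined above, is typically set up by registering a keypair with Compute. A sketch using python-novaclient; the credentials, key name, and file path are placeholders:

    from novaclient import client  # python-novaclient

    # Placeholder Identity credentials.
    nova = client.Client("2", "admin", "secret", "demo",
                         "http://controller:5000/v2.0")

    # Register an existing SSH public key; Compute injects it into
    # instances at boot for key-based authentication.
    with open("/home/user/.ssh/id_rsa.pub") as f:
        nova.keypairs.create(name="demo-key", public_key=f.read())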
- - - - - Qpid - - Qpid - - - - Message queue software supported by OpenStack; an alternative to - RabbitMQ. - - - - - quarantine - - quarantine - - - - If Object Storage finds objects, containers, or accounts that - are corrupt, they are placed in this state, are not replicated, cannot - be read by clients, and a correct copy is re-replicated. - - - - - Quick EMUlator (QEMU) - - Quick EMUlator (QEMU) - - - - QEMU is a generic and open source machine emulator and - virtualizer. - - One of the hypervisors supported by OpenStack, generally used - for development purposes. - - - - - quota - - quotas - - - - In Compute and Block Storage, the ability to set resource limits - on a per-project basis. - - - - - - - - R - - - RabbitMQ - - RabbitMQ - - - - The default message queue software used by OpenStack. - - - - - Rackspace Cloud Files - - Rackspace Cloud Files - - - - Released as open source by Rackspace in 2010; the basis for - Object Storage. - - - - - RADOS Block Device (RBD) - - RADOS Block Device (RBD) - - - - Ceph component that enables a Linux block device to be striped - over multiple distributed data stores. - - - - - radvd - - radvd - - - - The router advertisement daemon, used by the Compute VLAN - manager and FlatDHCP manager to provide routing services for VM - instances. - - - - - rally - - rally - - - - - OpenStack project that provides the Benchmark service. - - - - - RAM filter - - RAM filter - - - - The Compute setting that enables or disables RAM - overcommitment. - - - - - RAM overcommit - - RAM overcommit - - - - The ability to start new VM instances based on the actual memory - usage of a host, as opposed to basing the decision on the amount of - RAM each running instance thinks it has available. Also known as - memory overcommit. - - - - - rate limit - - rate limits - - - - Configurable option within Object Storage to limit database - writes on a per-account and/or per-container basis. - - - - - raw - - raw format - - - - One of the VM image disk formats supported by Image service; an - unstructured disk image. - - - - - rebalance - - rebalancing - - - - The process of distributing Object Storage partitions across all - drives in the ring; used during initial ring creation and after ring - reconfiguration. - - - - - reboot - - reboot - - hard vs. soft - - - - Either a soft or hard reboot of a server. With a soft reboot, - the operating system is signaled to restart, which enables a graceful - shutdown of all processes. A hard reboot is the equivalent of power - cycling the server. The virtualization platform should ensure that the - reboot action has completed successfully, even in cases in which the - underlying domain/VM is paused or halted/stopped. - - - - - rebuild - - rebuilding - - - - Removes all data on the server and replaces it with the - specified image. Server ID and IP addresses remain the same. - - - - - Recon - - Recon - - - - An Object Storage component that collects meters. - - - - - record - - records - - basics of - - - - Belongs to a particular domain and is used to specify - information about the domain. - There are several types of DNS records. Each record type contains - particular information used to describe the purpose of that record. - Examples include mail exchange (MX) records, which specify the mail - server for a particular domain; and name server (NS) records, which - specify the authoritative name servers for a domain. 
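RAM overcommit, defined above, is governed by an allocation ratio (nova's ram_allocation_ratio option). A worked sketch of the arithmetic with invented host numbers:

    # Illustrative host with 64 GiB of physical RAM.
    physical_ram_mb = 64 * 1024
    ram_allocation_ratio = 1.5       # overcommit factor (assumed)

    # With overcommit, the scheduler treats the host as having
    # physical RAM * ratio available for instance reservations.
    schedulable_ram_mb = physical_ram_mb * ram_allocation_ratio

    used_mb = 80 * 1024              # RAM already promised to instances
    flavor_mb = 8 * 1024             # requested instance size

    fits = used_mb + flavor_mb <= schedulable_ram_mb
    print(fits)                      # True: 88 GiB <= 96 GiB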
- - - - - record ID - - records - - record IDs - - - - A number within a database that is incremented each time a - change is made. Used by Object Storage when replicating. - - - - - Red Hat Enterprise Linux (RHEL) - - Red Hat Enterprise Linux (RHEL) - - - - A Linux distribution that is compatible with OpenStack. - - - - - reference architecture - - reference architecture - - - - A recommended architecture for an OpenStack cloud. - - - - - region - - region - - - - A discrete OpenStack environment with dedicated API endpoints - that typically shares only the Identity (keystone) with other - regions. - - - - - registry - - registry - - under Image service - - - - Alternative term for the Image service registry. - - - - - registry server - - servers - - registry servers - - - registry servers - - - - An Image service that provides VM image metadata information to - clients. - - - - - Reliable, Autonomic Distributed Object Store - (RADOS) - - Reliable, Autonomic Distributed Object Store - (RADOS) - - - - A collection of components that provides object storage within - Ceph. Similar to OpenStack Object Storage. - - - - - Remote Procedure Call (RPC) - - Remote Procedure Call (RPC) - - - - The method used by the Compute RabbitMQ for intra-service - communications. - - - - - replica - - replication - - definition of - - - - Provides data redundancy and fault tolerance by creating copies - of Object Storage objects, accounts, and containers so that they are - not lost when the underlying storage fails. - - - - - replica count - - replication - - replica count - - - - The number of replicas of the data in an Object Storage - ring. - - - - - replication - - - The process of copying data to a separate physical device for - fault tolerance and performance. - - - - - replicator - - replication - - replicators - - - - The Object Storage back-end process that creates and manages - object replicas. - - - - - request ID - - request IDs - - - - Unique ID assigned to each request sent to Compute. - - - - - rescue image - - rescue images - - - - A special type of VM image that is booted when an instance is - placed into rescue mode. Allows an administrator to mount the file - systems for an instance to correct the problem. - - - - - resize - - resizing - - - - Converts an existing server to a different flavor, which scales - the server up or down. The original server is saved to enable rollback - if a problem occurs. All resizes must be tested and explicitly - confirmed, at which time the original server is removed. - - - - - RESTful - - RESTful web services - - - - A kind of web service API that uses REST, or Representational - State Transfer. REST is the style of architecture for hypermedia - systems that is used for the World Wide Web. - - - - - ring - - rings - - definition of - - - - An entity that maps Object Storage data to partitions. A - separate ring exists for each service, such as account, object, and - container. - - - - - ring builder - - rings - - ring builders - - - - Builds and manages rings within Object Storage, assigns - partitions to devices, and pushes the configuration to other storage - nodes. - - - - - Role Based Access Control (RBAC) - - Role Based Access Control (RBAC) - - - - Provides a predefined list of actions that the user can perform, - such as start or stop VMs, reset passwords, and so on. Supported in - both Identity and Compute and can be configured using the - horizon dashboard. 
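Taken together, the ring, replica, and replica count entries amount to a lookup table from partition to devices. A toy sketch with an invented device list; the real ring builder balances assignments by weight and zone:

    # Toy ring: three replicas of every partition, four devices.
    REPLICA_COUNT = 3
    DEVICES = ["sdb-node1", "sdb-node2", "sdb-node3", "sdb-node4"]

    # One partition->device table per replica (normally produced by the
    # ring builder and pushed to every storage node).
    replica2part2dev = [
        [0, 1, 2, 3],   # replica 0: partitions 0..3
        [1, 2, 3, 0],   # replica 1
        [2, 3, 0, 1],   # replica 2
    ]

    def devices_for(partition):
        """Return the devices holding each replica of a partition."""
        return [DEVICES[table[partition]] for table in replica2part2dev]

    print(devices_for(2))   # ['sdb-node3', 'sdb-node4', 'sdb-node1']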
- - - - - role - roles - definition of - - - - A personality that a user assumes to perform a specific set of operations. A role includes a set of rights and privileges. A user assuming that role inherits those rights and privileges.
- - - - - role ID - roles - role ID - - - - Alphanumeric ID assigned to each Identity service role.
- - - - - rootwrap - rootwrap - - - - A feature of Compute that allows the unprivileged "nova" user to run a specified list of commands as the Linux root user.
- - - - - round-robin scheduler - schedulers - round-robin - - round-robin scheduler - - - - Type of Compute scheduler that evenly distributes instances among available hosts.
- - - - - router - router - - - - A physical or virtual network device that passes network traffic between different networks.
- - - - - routing key - routing keys - - - - The Compute direct exchanges, fanout exchanges, and topic exchanges use this key to determine how to process a message; processing varies depending on exchange type.
- - - - - RPC driver - drivers - RPC drivers - - RPC drivers - - - - Modular system that allows the underlying message queue software of Compute to be changed. For example, from RabbitMQ to ZeroMQ or Qpid.
- - - - - rsync - rsync - - - - Used by Object Storage to push object replicas.
- - - - - RXTX cap - RXTX cap/quota - - - - Absolute limit on the amount of network traffic a Compute VM instance can send and receive.
- - - - - RXTX quota - - Soft limit on the amount of network traffic a Compute VM instance can send and receive.
- - - - - - - - S - - - S3 - S3 storage service - - - - Object storage service by Amazon; similar in function to Object Storage, it can act as a back-end store for Image service VM images.
- - - - - sahara - sahara - - - - OpenStack project that provides a scalable data-processing stack and associated management interfaces.
- - - - - SAML assertion - SAML assertion - - - - Contains information about a user as provided by the identity provider. It is an indication that a user has been authenticated.
- - - - - scheduler manager - scheduler manager - - - - A Compute component that determines where VM instances should start. Uses modular design to support a variety of scheduler types.
- - - - - scoped token - scoped tokens - - - - An Identity service API access token that is associated with a specific tenant.
- - - - - scrubber - scrubbers - - - - Checks for and deletes unused VMs; the component of Image service that implements delayed delete.
- - - - - secret key - secret keys - - - - String of text known only by the user; used along with an access key to make requests to the Compute API.
- - - - - secure shell (SSH) - secure shell (SSH) - - - - Open source tool used to access remote hosts through an encrypted communications channel. SSH key injection is supported by Compute.
- - - - - security group - security groups - - - - A set of network traffic filtering rules that are applied to a Compute instance.
- - - - - segmented object - objects - segmented objects - - segmented objects - - - - An Object Storage large object that has been broken up into pieces. The re-assembled object is called a concatenated object.
- - - - - self-service - self-service - - - - For IaaS, ability for a regular (non-privileged) account to manage a virtual infrastructure component such as networks without involving an administrator. 
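The round-robin scheduler above simply rotates through candidate hosts. A minimal sketch of the idea; the host names and scheduling interface are invented:

    import itertools

    # Candidate compute hosts (illustrative).
    hosts = ["compute1", "compute2", "compute3"]
    round_robin = itertools.cycle(hosts)

    def schedule(instance_request):
        """Place each new instance on the next host in rotation."""
        return next(round_robin)

    for name in ("vm-a", "vm-b", "vm-c", "vm-d"):
        print(name, "->", schedule(name))   # compute1, 2, 3, then 1 again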
- - - - - SELinux - SELinux - - - - Linux kernel security module that provides the mechanism for supporting access control policies.
- - - - - senlin - senlin - - - - - OpenStack project that provides a Clustering service.
- - - - - - server - servers - definition of - - - - Computer that provides explicit services to the client software running on that system, often managing a variety of computer operations. A server is a VM instance in the Compute system. Flavor and image are requisite elements when creating a server.
- - - - - server image - server image - - - - Alternative term for a VM image.
- - - - - server UUID - servers - server UUID - - - - Unique ID assigned to each guest VM instance.
- - - - - service - services - definition of - - - - An OpenStack service, such as Compute, Object Storage, or Image service. Provides one or more endpoints through which users can access resources and perform operations.
- - - - - service catalog - service catalog - - - - Alternative term for the Identity service catalog.
- - - - - service ID - service ID - - - - Unique ID assigned to each service that is available in the Identity service catalog.
- - - - - service provider - service provider - - - - A system that provides services to other system entities. In case of federated identity, OpenStack Identity is the service provider.
- - - - - service registration - service registration - - - - An Identity service feature that enables services, such as Compute, to automatically register with the catalog.
- - - - - service tenant - service tenant - - - - Special tenant that contains all services that are listed in the catalog.
- - - - - service token - service token - - - - An administrator-defined token used by Compute to communicate securely with the Identity service.
- - - - - session back end - sessions - session back end - - - - The method of storage used by horizon to track client sessions, such as local memory, cookies, a database, or memcached.
- - - - - session persistence - sessions - session persistence - - - - A feature of the load-balancing service. It attempts to force subsequent connections to a service to be redirected to the same node as long as it is online.
- - - - - session storage - sessions - session storage - - - - A horizon component that stores and tracks client session information. Implemented through the Django sessions framework.
- - - - - share - share - - - - - A remote, mountable file system in the context of the Shared File Systems service. You can mount a share to, and access a share from, several hosts by several users at a time.
- - - - - share network - share network - - - - - An entity in the context of the Shared File Systems service that encapsulates interaction with the Networking service. If the selected driver runs in a mode that requires such interaction, you must specify the share network when creating a share.
- - - - - Shared File Systems API - Shared File Systems API - - - - - A Shared File Systems service that provides a stable RESTful API. The service authenticates and routes requests throughout the Shared File Systems service. The python-manilaclient library is available to interact with the API.
- - - - - - Shared File Systems service - Shared File Systems service - - - - - An OpenStack service that provides a set of services for management of shared file systems in a multi-tenant cloud environment. 
The service is similar to how OpenStack provides - block-based storage management through the OpenStack Block Storage - service project. With the Shared File Systems service, you can create - a remote file system and mount the file system on your instances. You - can also read and write data from your instances to and from your - file system. The project name of the Shared File Systems service is - manila. - - - - - - shared IP address - - IP addresses - - shared - - - shared IP address - - - - An IP address that can be assigned to a VM instance within the - shared IP group. Public IP addresses can be shared across multiple - servers for use in various high-availability scenarios. When an IP - address is shared to another server, the cloud network restrictions - are modified to enable each server to listen to and respond on that IP - address. You can optionally specify that the target server network - configuration be modified. Shared IP addresses can be used with many - standard heartbeat facilities, such as keepalive, that monitor for - failure and manage IP failover. - - - - - shared IP group - - shared IP groups - - - - A collection of servers that can share IPs with other members of - the group. Any server in a group can share one or more public IPs with - any other server in the group. With the exception of the first server - in a shared IP group, servers must be launched into shared IP groups. - A server may be a member of only one shared IP group. - - - - - shared storage - - shared storage - - - - Block storage that is simultaneously accessible by multiple - clients, for example, NFS. - - - - - Sheepdog - - Sheepdog - - - - Distributed block storage system for QEMU, supported by - OpenStack. - - - - - Simple Cloud Identity Management (SCIM) - - Simple Cloud Identity Management (SCIM) - - - - Specification for managing identity in the cloud, currently - unsupported by OpenStack. - - - - - Single-root I/O Virtualization (SR-IOV) - - Single-root I/O Virtualization (SR-IOV) - - - - A specification that, when implemented by a physical PCIe - device, enables it to appear as multiple separate PCIe devices. This - enables multiple virtualized guests to share direct access to the - physical device, offering improved performance over an equivalent - virtual device. Currently supported in OpenStack Havana and later - releases. - - - - - Service Level Agreement (SLA) - - Service Level Agreement (SLA) - - - - Contractual obligations that ensure the availability of a - service. - - - - - SmokeStack - - SmokeStack - - - - Runs automated tests against the core OpenStack API; written in - Rails. - - - - - snapshot - - snapshot - - - - A point-in-time copy of an OpenStack storage volume or image. - Use storage volume snapshots to back up volumes. Use image snapshots - to back up data, or as "gold" images for additional servers. - - - - - soft reboot - - reboot - - hard vs. soft - - - soft reboot - - - - A controlled reboot where a VM instance is properly restarted - through operating system commands. - - - - - Software Development Lifecycle Automation service - - Software Development Lifecycle Automation service - - - - - OpenStack project that aims to make cloud services easier to - consume and integrate with application development process - by automating the source-to-image process, and simplifying - app-centric deployment. The project name is solum. - - - - - - SolidFire Volume Driver - - SolidFire Volume Driver - - - - The Block Storage driver for the SolidFire iSCSI storage - appliance. 
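Snapshots, defined above, are point-in-time copies of volumes. A sketch using the python-cinderclient library; the credentials and volume ID are placeholders:

    from cinderclient import client  # python-cinderclient

    # Placeholder Identity credentials.
    cinder = client.Client("2", "admin", "secret", "demo",
                           "http://controller:5000/v2.0")

    # Create a point-in-time copy of a volume; the ID is illustrative.
    snap = cinder.volume_snapshots.create(
        "3f2a9a6b-0000-0000-0000-000000000000",
        name="pre-upgrade",
        description="Rollback point before maintenance")
    print(snap.id, snap.status)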
- - - - - solum - - solum - - - - - OpenStack project that provides a Software Development - Lifecycle Automation service. - - - - - - SPICE - - SPICE (Simple Protocol for Independent Computing - Environments) - - - - The Simple Protocol for Independent Computing Environments - (SPICE) provides remote desktop access to guest virtual machines. It - is an alternative to VNC. SPICE is supported by OpenStack. - - - - - spread-first scheduler - - schedulers - - spread-first - - - spread-first scheduler - - - - The Compute VM scheduling algorithm that attempts to start a new - VM on the host with the least amount of load. - - - - - SQL-Alchemy - - SQL-Alchemy - - - - An open source SQL toolkit for Python, used in OpenStack. - - - - - SQLite - - SQLite - - - - A lightweight SQL database, used as the default persistent - storage method in many OpenStack services. - - - - - stack - - stack - Heat Orchestration Template (HOT) - - - - A set of OpenStack resources created and managed by the - Orchestration service according to a given template (either an - AWS CloudFormation template or a Heat Orchestration - Template (HOT)). - - - - - StackTach - - StackTach - - - - Community project that captures Compute AMQP communications; - useful for debugging. - - - - - static IP address - - IP addresses - - static - - - static IP addresses - - - - Alternative term for a fixed IP address. - - - - - StaticWeb - - StaticWeb - - - - WSGI middleware component of Object Storage that serves - container data as a static web page. - - - - - storage back end - - storage back end - - - - The method that a service uses for persistent storage, such as - iSCSI, NFS, or local disk. - - - - - storage node - - nodes - - storage nodes - - - storage node - - - - An Object Storage node that provides container services, account - services, and object services; controls the account databases, - container databases, and object storage. - - - - - storage manager - - storage - - storage manager - - - - A XenAPI component that provides a pluggable interface to - support a wide variety of persistent storage back ends. - - - - - storage manager back end - - storage - - storage manager back end - - - - A persistent storage method supported by XenAPI, such as iSCSI - or NFS. - - - - - storage services - - storage - - storage services - - - - Collective name for the Object Storage object services, - container services, and account services. - - - - - strategy - - strategy - - - - Specifies the authentication source used by Image service or - Identity. In the Database service, it refers to the extensions - implemented for a data store. - - - - - subdomain - - subdomains - - - - A domain within a parent domain. Subdomains cannot be - registered. Subdomains enable you to delegate domains. Subdomains can - themselves have subdomains, so third-level, fourth-level, fifth-level, - and deeper levels of nesting are possible. - - - - - subnet - - subnet - - - - Logical subdivision of an IP network. - - - - - SUSE Linux Enterprise Server (SLES) - - SUSE Linux Enterprise Server (SLES) - - - - A Linux distribution that is compatible with OpenStack. - - - - - suspend - - suspend, definition of - - - - Alternative term for a paused VM instance. - - - - - swap - - swap, definition of - - - - Disk-based virtual memory used by operating systems to provide - more memory than is actually available on the system. 
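A stack, as defined above, is created from a template such as a HOT document. A sketch using python-heatclient, assuming a pre-obtained token and Orchestration endpoint; all values are placeholders and the inline template is deliberately minimal:

    from heatclient import client  # python-heatclient

    # Placeholder Orchestration endpoint and Identity token.
    heat = client.Client("1",
                         endpoint="http://controller:8004/v1/TENANT_ID",
                         token="TOKEN")

    # A tiny Heat Orchestration Template (HOT) describing one server.
    template = {
        "heat_template_version": "2013-05-23",
        "resources": {
            "server": {
                "type": "OS::Nova::Server",
                "properties": {"image": "cirros", "flavor": "m1.tiny"},
            }
        },
    }

    heat.stacks.create(stack_name="demo-stack", template=template)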
- - - - - swauth - - swauth - - - - An authentication and authorization service for Object Storage, - implemented through WSGI middleware; uses Object Storage itself as the - persistent backing store. - - - - - swift - - - An OpenStack core project that provides object storage - services. - - - - - swift All in One (SAIO) - - swift All in One (SAIO) - - - - Creates a full Object Storage development environment within a - single VM. - - - - - swift middleware - - swift - - swift middleware - - - - Collective term for Object Storage components that provide - additional functionality. - - - - - swift proxy server - - swift - - swift proxy server - - - - Acts as the gatekeeper to Object Storage and is responsible for - authenticating the user. - - - - - swift storage node - - storage - - swift storage nodes - - - nodes - - swift storage nodes - - - swift - - swift storage nodes - - - - A node that runs Object Storage account, container, and object - services. - - - - - sync point - - sync point - - - - Point in time since the last container and accounts database - sync among nodes within Object Storage. - - - - - sysadmin - - sysadmin - - - - One of the default roles in the Compute RBAC system. Enables a - user to add other users to a project, interact with VM images that are - associated with the project, and start and stop VM instances. - - - - - system usage - - system usage - - - - A Compute component that, along with the notification system, - collects meters and usage information. This information can be used - for billing. - - - - - - - - T - - - Telemetry service - - Telemetry service - - - - An integrated project that provides metering and measuring - facilities for OpenStack. The project name of Telemetry is - ceilometer. - - - - - TempAuth - - TempAuth - - - - An authentication facility within Object Storage that enables - Object Storage itself to perform authentication and authorization. - Frequently used in testing and development. - - - - - Tempest - - Tempest - - - - Automated software test suite designed to run against the trunk - of the OpenStack core project. - - - - - TempURL - - TempURL - - - - An Object Storage middleware component that enables creation of - URLs for temporary object access. - - - - - tenant - - - A group of users; used to isolate access to Compute resources. - An alternative term for a project. - - - - - Tenant API - - tenant - - Tenant API - - - - An API that is accessible to tenants. - - - - - tenant endpoint - - endpoints - - tenant endpoint - - - tenant - - tenant endpoint - - - - An Identity service API endpoint that is associated with one or - more tenants. - - - - - tenant ID - - tenant - - tenant ID - - - - Unique ID assigned to each tenant within the Identity service. - The project IDs map to the tenant IDs. - - - - - token - - tokens - - - - An alpha-numeric string of text used to access OpenStack APIs - and resources. - - - - - token services - - token services - - - - An Identity service component that manages and validates tokens - after a user or tenant has been authenticated. - - - - - tombstone - - tombstone - - - - Used to mark Object Storage objects that have been - deleted; ensures that the object is not updated on another node after - it has been deleted. - - - - - topic publisher - - topic publisher - - - - A process that is created when a RPC call is executed; used to - push the message to the topic exchange. - - - - - Torpedo - - Torpedo - - - - Community project used to run automated tests against the - OpenStack API. 
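TempURL, listed above, grants temporary object access through a signed link. A sketch of the documented HMAC-SHA1 signing scheme using only the standard library; the key and object path are placeholders:

    import hmac
    import time
    from hashlib import sha1

    # Placeholder values: the account's temp URL key and an object path.
    key = b"mysecretkey"
    path = "/v1/AUTH_demo/container/object.bin"
    expires = int(time.time()) + 600      # link valid for ten minutes

    # Sign METHOD, expiry, and path with HMAC-SHA1, per the TempURL
    # middleware's documented scheme.
    body = "GET\n{}\n{}".format(expires, path).encode()
    signature = hmac.new(key, body, sha1).hexdigest()

    print("{}?temp_url_sig={}&temp_url_expires={}".format(
        path, signature, expires))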
- - - - - transaction ID - - transaction IDs - - - - Unique ID assigned to each Object Storage request; used for - debugging and tracing. - - - - - transient - - transient exchanges - - non-durable exchanges - - - - Alternative term for non-durable. - - - - - transient exchange - - - Alternative term for a non-durable exchange. - - - - - transient message - - messages - - transient messages - - - transient messages - - - - A message that is stored in memory and is lost after the server - is restarted. - - - - - transient queue - - queues - - transient queues - - - transient queues - - - - Alternative term for a non-durable queue. - - - - - TripleO - - TripleO - - - - - OpenStack-on-OpenStack program. The code name for the - OpenStack Deployment program. - - - - - - trove - - trove - - - - OpenStack project that provides database services to - applications. - - - - - - - - U - - - Ubuntu - - Ubuntu - - - - A Debian-based Linux distribution. - - - - - unscoped token - - unscoped token - - - - Alternative term for an Identity service default token. - - - - - updater - - updaters - - - - Collective term for a group of Object Storage components that - processes queued and failed updates for containers and objects. - - - - - user - - users, definition of - - - - In OpenStack Identity, entities represent individual API - consumers and are owned by a specific domain. In OpenStack Compute, - a user can be associated with roles, projects, or both. - - - - - user data - - user data - - - - A blob of data that the user can specify when they launch - an instance. The instance can access this data through the - metadata service or config drive. - - config drive - Commonly used to pass a shell script that the instance runs on boot. - - - - - User Mode Linux (UML) - - User Mode Linux (UML) - - - - An OpenStack-supported hypervisor. - - - - - - - - V - - - VIF UUID - - VIF UUID - - - - Unique ID assigned to each Networking VIF. - - - - - VIP - - VIP - - - - The primary load balancing configuration object. - Specifies the virtual IP address and port where client traffic - is received. Also defines other details such as the load - balancing method to be used, protocol, and so on. This entity - is sometimes known in load-balancing products as a virtual - server, vserver, or listener. - - - - - Virtual Central Processing Unit (vCPU) - - Virtual Central Processing Unit (vCPU) - - - - Subdivides physical CPUs. Instances can then use those - divisions. - - - - - Virtual Disk Image (VDI) - - Virtual Disk Image (VDI) - - - - One of the VM image disk formats supported by Image - service. - - - - - VXLAN - - virtual extensible LAN (VXLAN) - - - - A network virtualization technology that attempts to reduce the - scalability problems associated with large cloud computing - deployments. It uses a VLAN-like encapsulation technique to - encapsulate Ethernet frames within UDP packets. - - - - - Virtual Hard Disk (VHD) - - Virtual Hard Disk (VHD) - - - - One of the VM image disk formats supported by Image - service. - - - - - virtual IP - - virtual IP - - - - An Internet Protocol (IP) address configured on the load - balancer for use by clients connecting to a service that is load - balanced. Incoming connections are distributed to back-end nodes based - on the configuration of the load balancer. - - - - - virtual machine (VM) - - virtual machine (VM) - - - - An operating system instance that runs on top of a hypervisor. - Multiple VMs can run at the same time on the same physical - host. 
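User data, defined above, is supplied at boot and read back through the metadata service or config drive. A sketch with python-novaclient; the credentials, image, and flavor IDs are placeholders:

    from novaclient import client  # python-novaclient

    nova = client.Client("2", "admin", "secret", "demo",
                         "http://controller:5000/v2.0")  # placeholders

    # Shell script the instance retrieves through the metadata service
    # or config drive and runs on first boot.
    user_data = "#!/bin/sh\necho hello > /tmp/booted\n"

    nova.servers.create(name="demo-vm",
                        image="IMAGE_UUID", flavor="FLAVOR_ID",
                        userdata=user_data)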
- - - - - virtual network - - networks - - virtual - - - virtual network - - - - An L2 network segment within Networking. - - - - - virtual networking - - virtual networking - - - - A generic term for virtualization of network functions - such as switching, routing, load balancing, and security using - a combination of VMs and overlays on physical network - infrastructure. - - - - - - Virtual Network Computing (VNC) - - Virtual Network Computing (VNC) - - - - Open source GUI and CLI tools used for remote console access to - VMs. Supported by Compute. - - - - - Virtual Network InterFace (VIF) - - Virtual Network InterFace (VIF) - - - - An interface that is plugged into a port in a Networking - network. Typically a virtual network interface belonging to a - VM. - - - - - virtual port - - ports - - virtual - - - virtual port - - - - Attachment point where a virtual interface connects to a virtual - network. - - - - - virtual private network (VPN) - - virtual private network (VPN) - - - - Provided by Compute in the form of cloudpipes, specialized - instances that are used to create VPNs on a per-project basis. - - - - - virtual server - - servers - - virtual - - - virtual servers - - - - Alternative term for a VM or guest. - - - - - virtual switch (vSwitch) - - virtual switch (vSwitch) - - - - Software that runs on a host or node and provides the features - and functions of a hardware-based network switch. - - - - - virtual VLAN - - virtual VLAN - - - - Alternative term for a virtual network. - - - - - VirtualBox - - VirtualBox - - - - An OpenStack-supported hypervisor. - - - - - VLAN manager - - VLAN manager - - - - A Compute component that provides dnsmasq and radvd and sets up - forwarding to and from cloudpipe instances. - - - - - VLAN network - - networks - - VLAN - - - VLAN network - - - - The Network Controller provides virtual networks to enable - compute servers to interact with each other and with the public - network. All machines must have a public and private network - interface. A VLAN network is a private network interface, which is - controlled by the vlan_interface option with VLAN - managers. - - - - - VM disk (VMDK) - - VM disk (VMDK) - - - - One of the VM image disk formats supported by Image - service. - - - - - VM image - - VM image - - - - Alternative term for an image. - - - - - VM Remote Control (VMRC) - - VM Remote Control (VMRC) - - - - Method to access VM instance consoles using a web browser. - Supported by Compute. - - - - - VMware API - - VMware API - - - - Supports interaction with VMware products in Compute. - - - - - VMware NSX Neutron plug-in - - - Provides support for VMware NSX in Neutron. - - - - - VNC proxy - - VNC proxy - - - - A Compute component that provides users access to the consoles - of their VM instances through VNC or VMRC. - - - - - volume - - - Disk-based data storage generally represented as an iSCSI target - with a file system that supports extended attributes; can be - persistent or ephemeral. - - - - - Volume API - - volume - - Volume API - - - - Alternative name for the Block Storage API. - - - - - volume controller - - volume - - volume controller - - - - A Block Storage component that oversees and coordinates storage - volume actions. - - - - - volume driver - - volume - - volume driver - - - - Alternative term for a volume plug-in. - - - - - volume ID - - volume - - volume ID - - - - Unique ID applied to each storage volume under the Block Storage - control. 
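The volume entries above come together when a Block Storage volume is attached to a server as a block device. A sketch with python-novaclient; the credentials and both UUIDs are placeholders:

    from novaclient import client  # python-novaclient

    nova = client.Client("2", "admin", "secret", "demo",
                         "http://controller:5000/v2.0")  # placeholders

    # Attach an existing Block Storage volume to a running instance;
    # it appears in the guest as the requested device.
    nova.volumes.create_server_volume(
        "SERVER_UUID", "VOLUME_UUID", "/dev/vdb")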
- - - - - volume manager - - volume - - volume manager - - - - A Block Storage component that creates, attaches, and detaches - persistent storage volumes. - - - - - volume node - - volume - - volume node - - - - A Block Storage node that runs the cinder-volume daemon. - - - - - volume plug-in - - volume - - volume plug-in - - - - Provides support for new and specialized types of back-end - storage for the Block Storage volume manager. - - - - - volume worker - - volume workers - - - - A cinder component that interacts with back-end storage to manage - the creation and deletion of volumes and the creation of compute - volumes, provided by the cinder-volume daemon. - - - - - vSphere - - vSphere - - - - An OpenStack-supported hypervisor. - - - - - - - - W - - - weighting - - weighting - - - - A Compute process that determines the suitability of the VM - instances for a job for a particular host. For example, not enough RAM - on the host, too many CPUs on the host, and so on. - - - - - weight - - weight - - - - Used by Object Storage devices to determine which storage - devices are suitable for the job. Devices are weighted by size. - - - - - weighted cost - - weighted cost - - - - The sum of each cost used when deciding where to start a new VM - instance in Compute. - - - - - worker - - workers - - - - A daemon that listens to a queue and carries out tasks in - response to messages. For example, the cinder-volume worker manages volume - creation and deletion on storage arrays. - - - - - Workflow service - - Workflow service - mistral - - - - - OpenStack project that provides a simple YAML-based language - to write workflows, tasks and transition rules, and a - service that allows to upload them, modify, run them at - scale and in a highly available manner, manage and monitor - workflow execution state and state of individual tasks. The - code name of the project is mistral. - - - - - - - - - X - - - Xen - - Xen - - - - - Xen is a hypervisor using a microkernel design, providing - services that allow multiple computer operating systems to - execute on the same computer hardware concurrently. - - - - - - - Xen API - - - The Xen administrative API, which is supported by - Compute. - - - - - Xen Cloud Platform (XCP) - - Xen API - - Xen Cloud Platform (XCP) - - - - An OpenStack-supported hypervisor. - - - - - Xen Storage Manager Volume Driver - - Xen API - - Xen Storage Manager Volume Driver - - - - A Block Storage volume plug-in that enables communication with - the Xen Storage Manager API. - - - - - XenServer - - Xen API - - XenServer hypervisor - - - - An OpenStack-supported hypervisor. - - - - - XFS - - XFS - - - High-performance 64-bit file system created by Silicon - Graphics. Excels in parallel I/O operations and data - consistency. - - - - - - - - - Y - - - - - - - - - - - - - - Z - - - zaqar - - zaqar - - - - OpenStack project that provides a message service to - applications. - - - - - ZeroMQ - - ZeroMQ - - - - Message queue software supported by OpenStack. An alternative to - RabbitMQ. Also spelled 0MQ. - - - - - Zuul - - Zuul - - - - Tool used in OpenStack development to ensure correctly ordered - testing of changes in parallel. 
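The weighting, weight, and weighted cost entries above describe how Compute ranks candidate hosts. A toy sketch that sums weighted costs and picks the cheapest host; the metrics, weights, and cost functions are invented:

    # Illustrative per-host metrics gathered by the scheduler.
    hosts = {
        "compute1": {"free_ram_mb": 2048, "vcpus_free": 2},
        "compute2": {"free_ram_mb": 8192, "vcpus_free": 8},
    }

    # Cost functions: lower is better; weights tune their influence.
    costs = [
        (1.0, lambda h: -h["free_ram_mb"]),   # prefer more free RAM
        (0.5, lambda h: -h["vcpus_free"]),    # prefer more free vCPUs
    ]

    def weighted_cost(host):
        """Sum of each cost multiplied by its weight."""
        return sum(w * fn(host) for w, fn in costs)

    best = min(hosts, key=lambda name: weighted_cost(hosts[name]))
    print(best)   # compute2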
- - - - diff --git a/doc/glossary/locale/glossary.pot b/doc/glossary/locale/glossary.pot deleted file mode 100644 index 61c75dd0..00000000 --- a/doc/glossary/locale/glossary.pot +++ /dev/null @@ -1,6192 +0,0 @@ -msgid "" -msgstr "" -"Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2014-09-04 06:11+0000\n" -"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" -"Last-Translator: FULL NAME \n" -"Language-Team: LANGUAGE \n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" - -#: ./doc/glossary/openstack-glossary.xml:7(title) ./doc/glossary/openstack-glossary.xml:10(title) -msgid "OpenStack glossary" -msgstr "" - -#: ./doc/glossary/openstack-glossary.xml:11(para) -msgid "Use this glossary to get definitions of OpenStack-related words and phrases." -msgstr "" - -#: ./doc/glossary/openstack-glossary.xml:13(para) -msgid "To add to this glossary follow the OpenStack Documentation HowTo." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:13(title) -msgid "Glossary" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:16(para) -msgid "Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:23(link) -msgid "http://www.apache.org/licenses/LICENSE-2.0" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:25(para) -msgid "Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:33(para) -msgid "This glossary offers a list of terms and definitions to define a vocabulary for OpenStack-related concepts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:35(para) -msgid "To add to OpenStack glossary, clone the openstack/openstack-manuals repository and update the source file doc/glossary/glossary-terms.xml through the OpenStack contribution process." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:44(title) -msgid "Numbers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:47(glossterm) ./doc/glossary/glossary-terms.xml:49(primary) -msgid "6to4" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:52(para) -msgid "A mechanism that allows IPv6 packets to be transmitted over an IPv4 network, providing a strategy for migrating to IPv6." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:64(title) -msgid "A" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:67(glossterm) ./doc/glossary/glossary-terms.xml:69(primary) -msgid "absolute limit" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:73(para) -msgid "Impassable limits for guest VMs. Settings include total RAM size, maximum number of vCPUs, and maximum disk size." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:79(glossterm) ./doc/glossary/glossary-terms.xml:198(see) -msgid "access control list" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:81(primary) -msgid "access control list (ACL)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:85(para) -msgid "A list of permissions attached to an object. An ACL specifies which users or system processes have access to objects. It also defines which operations can be performed on specified objects. Each entry in a typical ACL specifies a subject and an operation. 
For instance, the ACL entry (Alice, delete) for a file gives Alice permission to delete the file." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:95(glossterm) ./doc/glossary/glossary-terms.xml:97(primary) -msgid "access key" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:101(para) -msgid "Alternative term for an Amazon EC2 access key. See EC2 access key." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:107(glossterm) -msgid "account" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:109(primary) -msgid "accounts" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:113(para) -msgid "The Object Storage context of an account. Do not confuse with a user account from an authentication service, such as Active Directory, /etc/passwd, OpenLDAP, OpenStack Identity Service, and so on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:120(glossterm) ./doc/glossary/glossary-terms.xml:122(primary) -msgid "account auditor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:126(para) -msgid "Checks for missing replicas and incorrect or corrupted objects in a specified Object Storage account by running queries against the back-end SQLite database." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:133(glossterm) ./doc/glossary/glossary-terms.xml:135(primary) -msgid "account database" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:139(para) -msgid "A SQLite database that contains Object Storage accounts and related metadata and that the accounts server accesses." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:145(glossterm) ./doc/glossary/glossary-terms.xml:147(primary) -msgid "account reaper" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:151(para) -msgid "An Object Storage worker that scans for and deletes account databases and that the account server has marked for deletion." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:157(glossterm) ./doc/glossary/glossary-terms.xml:159(primary) -msgid "account server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:163(para) -msgid "Lists containers in Object Storage and stores container information in the account database." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:169(glossterm) ./doc/glossary/glossary-terms.xml:171(primary) -msgid "account service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:175(para) -msgid "An Object Storage component that provides account services such as list, create, modify, and audit. Do not confuse with OpenStack Identity Service, OpenLDAP, or similar user-account services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:182(glossterm) ./doc/glossary/glossary-terms.xml:184(primary) -msgid "accounting" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:188(para) -msgid "The Compute service provides accounting information through the event notification and system usage data facilities." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:194(glossterm) ./doc/glossary/glossary-terms.xml:196(primary) -msgid "ACL" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:202(para) -msgid "See access control list." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:207(glossterm) ./doc/glossary/glossary-terms.xml:209(primary) -msgid "active/active configuration" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:213(para) -msgid "In a high-availability setup with an active/active configuration, several systems share the load together and if one fails, the load is distributed to the remaining systems." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:220(glossterm) ./doc/glossary/glossary-terms.xml:222(primary) -msgid "Active Directory" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:226(para) -msgid "Authentication and identity service by Microsoft, based on LDAP. Supported in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:232(glossterm) ./doc/glossary/glossary-terms.xml:234(primary) -msgid "active/passive configuration" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:238(para) -msgid "In a high-availability setup with an active/passive configuration, systems are set up to bring additional resources online to replace those that have failed." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:245(glossterm) ./doc/glossary/glossary-terms.xml:247(primary) -msgid "address pool" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:251(para) -msgid "A group of fixed and/or floating IP addresses that are assigned to a project and can be used by or assigned to the VM instances in a project." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:258(glossterm) ./doc/glossary/glossary-terms.xml:260(primary) ./doc/glossary/glossary-terms.xml:4947(see) -msgid "admin API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:264(para) -msgid "A subset of API calls that are accessible to authorized administrators and are generally not accessible to end users or the public Internet. They can exist as a separate service (keystone) or can be a subset of another API (nova)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:272(glossterm) ./doc/glossary/glossary-terms.xml:274(primary) -msgid "admin server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:278(para) -msgid "In the context of the Identity Service, the worker process that provides access to the admin API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:284(glossterm) ./doc/glossary/glossary-terms.xml:287(primary) -msgid "Advanced Message Queuing Protocol (AMQP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:291(para) -msgid "The open standard messaging protocol used by OpenStack components for intra-service communications, provided by RabbitMQ, Qpid, or ZeroMQ." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:298(glossterm) ./doc/glossary/glossary-terms.xml:300(primary) -msgid "Advanced RISC Machine (ARM)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:304(para) -msgid "Lower power consumption CPU often found in mobile and embedded devices. Supported by OpenStack." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:310(glossterm) -msgid "alert" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:312(primary) -msgid "alerts" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:314(secondary) ./doc/glossary/glossary-terms.xml:876(secondary) ./doc/glossary/glossary-terms.xml:924(secondary) ./doc/glossary/glossary-terms.xml:994(secondary) ./doc/glossary/glossary-terms.xml:1284(secondary) ./doc/glossary/glossary-terms.xml:1373(secondary) ./doc/glossary/glossary-terms.xml:1584(secondary) ./doc/glossary/glossary-terms.xml:1695(secondary) ./doc/glossary/glossary-terms.xml:1775(secondary) ./doc/glossary/glossary-terms.xml:1831(secondary) ./doc/glossary/glossary-terms.xml:1930(secondary) ./doc/glossary/glossary-terms.xml:2159(secondary) ./doc/glossary/glossary-terms.xml:2439(secondary) ./doc/glossary/glossary-terms.xml:3169(secondary) ./doc/glossary/glossary-terms.xml:3286(secondary) ./doc/glossary/glossary-terms.xml:3930(secondary) ./doc/glossary/glossary-terms.xml:3982(secondary) ./doc/glossary/glossary-terms.xml:4087(secondary) ./doc/glossary/glossary-terms.xml:4304(secondary) ./doc/glossary/glossary-terms.xml:4469(secondary) ./doc/glossary/glossary-terms.xml:4487(secondary) ./doc/glossary/glossary-terms.xml:4984(secondary) ./doc/glossary/glossary-terms.xml:5318(secondary) ./doc/glossary/glossary-terms.xml:5571(secondary) ./doc/glossary/glossary-terms.xml:5687(secondary) ./doc/glossary/glossary-terms.xml:6030(secondary) ./doc/glossary/glossary-terms.xml:6215(secondary) ./doc/glossary/glossary-terms.xml:6306(secondary) ./doc/glossary/glossary-terms.xml:6878(secondary) ./doc/glossary/glossary-terms.xml:6981(secondary) ./doc/glossary/glossary-terms.xml:7025(secondary) ./doc/glossary/glossary-terms.xml:7292(secondary) ./doc/glossary/glossary-terms.xml:7335(secondary) -msgid "definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:318(para) -msgid "The Compute service can send alerts through its notification system, which includes a facility to create custom notification drivers. Alerts can be sent to and displayed on the horizon dashboard." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:326(glossterm) -msgid "allocate" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:328(primary) -msgid "allocate, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:332(para) -msgid "The process of taking a floating IP address from the address pool so it can be associated with a fixed IP on a guest VM instance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:339(glossterm) ./doc/glossary/glossary-terms.xml:341(primary) -msgid "Amazon Kernel Image (AKI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:345(para) ./doc/glossary/glossary-terms.xml:357(para) ./doc/glossary/glossary-terms.xml:369(para) -msgid "Both a VM container format and disk format. Supported by Image Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:351(glossterm) ./doc/glossary/glossary-terms.xml:353(primary) -msgid "Amazon Machine Image (AMI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:363(glossterm) ./doc/glossary/glossary-terms.xml:365(primary) -msgid "Amazon Ramdisk Image (ARI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:375(glossterm) ./doc/glossary/glossary-terms.xml:377(primary) -msgid "Anvil" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:381(para) -msgid "A project that ports the shell script-based project named DevStack to Python." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:387(glossterm) ./doc/glossary/glossary-terms.xml:389(primary) -msgid "Apache" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:393(para) -msgid "The Apache Software Foundation supports the Apache community of open-source software projects. These projects provide software products for the public good." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:400(glossterm) ./doc/glossary/glossary-terms.xml:402(primary) -msgid "Apache License 2.0" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:406(para) -msgid "All OpenStack core projects are provided under the terms of the Apache License 2.0 license." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:412(glossterm) ./doc/glossary/glossary-terms.xml:414(primary) -msgid "Apache Web Server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:418(para) -msgid "The most common web server software currently used on the Internet." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:424(glossterm) -msgid "API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:427(para) -msgid "Application programming interface." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:432(glossterm) ./doc/glossary/glossary-terms.xml:436(secondary) ./doc/glossary/glossary-terms.xml:441(secondary) -msgid "API endpoint" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:434(primary) ./doc/glossary/glossary-terms.xml:2909(primary) ./doc/glossary/glossary-terms.xml:2937(primary) ./doc/glossary/glossary-terms.xml:3560(primary) ./doc/glossary/glossary-terms.xml:8053(primary) -msgid "endpoints" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:439(primary) ./doc/glossary/glossary-terms.xml:455(primary) ./doc/glossary/glossary-terms.xml:468(primary) ./doc/glossary/glossary-terms.xml:482(primary) ./doc/glossary/glossary-terms.xml:495(primary) ./doc/glossary/glossary-terms.xml:509(primary) ./doc/glossary/glossary-terms.xml:523(primary) ./doc/glossary/glossary-terms.xml:6419(primary) -msgid "API (application programming interface)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:445(para) -msgid "The daemon, worker, or service that a client communicates with to access an API. API endpoints can provide any number of services, such as authentication, sales data, performance metrics, Compute VM commands, census data, and so on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:453(glossterm) ./doc/glossary/glossary-terms.xml:457(secondary) -msgid "API extension" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:461(para) -msgid "Custom modules that extend some OpenStack core APIs." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:466(glossterm) ./doc/glossary/glossary-terms.xml:470(secondary) -msgid "API extension plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:474(para) -msgid "Alternative term for a Networking plug-in or Networking API extension." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:480(glossterm) ./doc/glossary/glossary-terms.xml:484(secondary) -msgid "API key" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:488(para) -msgid "Alternative term for an API token." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:493(glossterm) ./doc/glossary/glossary-terms.xml:497(secondary) -msgid "API server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:501(para) -msgid "Any node running a daemon or worker that provides an API endpoint." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:507(glossterm) ./doc/glossary/glossary-terms.xml:511(secondary) -msgid "API token" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:515(para) -msgid "Passed to API requests and used by OpenStack to verify that the client is authorized to run the requested operation." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:521(glossterm) ./doc/glossary/glossary-terms.xml:525(secondary) -msgid "API version" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:529(para) -msgid "In OpenStack, the API version for a project is part of the URL. For example, example.com/nova/v1/foobar." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:535(glossterm) ./doc/glossary/glossary-terms.xml:537(primary) -msgid "applet" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:541(para) -msgid "A Java program that can be embedded into a web page." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:546(glossterm) -msgid "Application Programming Interface (API)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:549(para) -msgid "A collection of specifications used to access a service, application, or program. Includes service calls, required parameters for each call, and the expected return values." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:556(glossterm) ./doc/glossary/glossary-terms.xml:563(primary) -msgid "application server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:558(primary) ./doc/glossary/glossary-terms.xml:6401(primary) ./doc/glossary/glossary-terms.xml:6833(primary) ./doc/glossary/glossary-terms.xml:7290(primary) ./doc/glossary/glossary-terms.xml:7319(primary) ./doc/glossary/glossary-terms.xml:8503(primary) -msgid "servers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:560(secondary) -msgid "application servers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:567(para) -msgid "A piece of software that makes available another piece of software over a network." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:573(glossterm) ./doc/glossary/glossary-terms.xml:575(primary) -msgid "Application Service Provider (ASP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:579(para) -msgid "Companies that rent specialized applications that help businesses and organizations provide additional services with lower cost." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:588(glossterm) ./doc/glossary/glossary-terms.xml:590(primary) -msgid "Address Resolution Protocol (ARP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:594(para) -msgid "The protocol by which layer-3 IP addresses are resolved into layer-2 link local addresses." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:602(glossterm) ./doc/glossary/glossary-terms.xml:604(primary) -msgid "arptables" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:608(para) -msgid "Tool used for maintaining Address Resolution Protocol packet filter rules in the Linux kernel firewall modules. Used along with iptables, ebtables, and ip6tables in Compute to provide firewall services for VMs." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:616(glossterm) -msgid "associate" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:618(primary) -msgid "associate, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:622(para) -msgid "The process associating a Compute floating IP address with a fixed IP address." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:628(glossterm) ./doc/glossary/glossary-terms.xml:631(primary) -msgid "Asynchronous JavaScript and XML (AJAX)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:635(para) -msgid "A group of interrelated web development techniques used on the client-side to create asynchronous web applications. Used extensively in horizon." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:642(glossterm) ./doc/glossary/glossary-terms.xml:644(primary) -msgid "ATA over Ethernet (AoE)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:648(para) -msgid "A disk storage protocol tunneled within Ethernet." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:653(glossterm) -msgid "attach" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:655(primary) -msgid "attach, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:659(para) -msgid "The process of connecting a VIF or vNIC to a L2 network in Networking. In the context of Compute, this process connects a storage volume to an instance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:666(glossterm) ./doc/glossary/glossary-terms.xml:668(primary) -msgid "attachment (network)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:672(para) -msgid "Association of an interface ID to a logical port. Plugs an interface into a port." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:678(glossterm) ./doc/glossary/glossary-terms.xml:680(primary) -msgid "auditing" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:684(para) -msgid "Provided in Compute through the system usage data facility." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:690(glossterm) ./doc/glossary/glossary-terms.xml:692(primary) -msgid "auditor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:696(para) -msgid "A worker process that verifies the integrity of Object Storage objects, containers, and accounts. Auditors is the collective term for the Object Storage account auditor, container auditor, and object auditor." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:704(glossterm) ./doc/glossary/glossary-terms.xml:706(primary) -msgid "Austin" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:710(para) -msgid "The code name for the initial release of OpenStack. The first design summit took place in Austin, Texas, US." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:717(glossterm) ./doc/glossary/glossary-terms.xml:719(primary) -msgid "auth node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:723(para) -msgid "Alternative term for an Object Storage authorization node." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:729(glossterm) ./doc/glossary/glossary-terms.xml:731(primary) -msgid "authentication" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:735(para) -msgid "The process that confirms that the user, process, or client is really who they say they are through private key, secret token, password, fingerprint, or similar method." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:742(glossterm) -msgid "authentication token" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:744(primary) -msgid "authentication tokens" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:748(para) -msgid "A string of text provided to the client after authentication. Must be provided by the user or process in subsequent requests to the API endpoint." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:755(glossterm) ./doc/glossary/glossary-terms.xml:757(primary) -msgid "AuthN" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:761(para) -msgid "The Identity Service component that provides authentication services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:767(glossterm) ./doc/glossary/glossary-terms.xml:769(primary) -msgid "authorization" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:773(para) -msgid "The act of verifying that a user, process, or client is authorized to perform an action." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:779(glossterm) ./doc/glossary/glossary-terms.xml:781(primary) -msgid "authorization node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:785(para) -msgid "An Object Storage node that provides authorization services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:791(glossterm) ./doc/glossary/glossary-terms.xml:793(primary) -msgid "AuthZ" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:797(para) -msgid "The Identity Service component that provides high-level authorization services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:803(glossterm) ./doc/glossary/glossary-terms.xml:805(primary) -msgid "Auto ACK" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:809(para) -msgid "Configuration setting within RabbitMQ that enables or disables message acknowledgment. Enabled by default." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:815(glossterm) ./doc/glossary/glossary-terms.xml:817(primary) -msgid "auto declare" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:821(para) -msgid "A Compute RabbitMQ setting that determines whether a message exchange is automatically created when the program starts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:827(glossterm) ./doc/glossary/glossary-terms.xml:829(primary) -msgid "availability zone" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:833(para) -msgid "An Amazon EC2 concept of an isolated area that is used for fault tolerance. Do not confuse with an OpenStack Compute zone or cell." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:840(glossterm) -msgid "AWS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:842(primary) -msgid "AWS (Amazon Web Services)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:846(para) -msgid "Amazon Web Services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:851(glossterm) ./doc/glossary/glossary-terms.xml:853(primary) -msgid "AWS CloudFormation template" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:857(para) -msgid "AWS CloudFormation allows AWS users to create and manage a collection of related resources. The Orchestration module supports a CloudFormation-compatible format (CFN)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:869(title) -msgid "B" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:872(glossterm) -msgid "back end" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:874(primary) ./doc/glossary/glossary-terms.xml:889(primary) ./doc/glossary/glossary-terms.xml:905(primary) -msgid "back-end interactions" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:880(para) -msgid "Interactions and processes that are obfuscated from the user, such as Compute volume mount, data transmission to an iSCSI target by a daemon, or Object Storage object integrity checks." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:887(glossterm) -msgid "back-end catalog" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:891(secondary) ./doc/glossary/glossary-terms.xml:1332(glossterm) ./doc/glossary/glossary-terms.xml:1334(primary) -msgid "catalog" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:895(para) -msgid "The storage method used by the Identity Service catalog service to store and retrieve information about API endpoints that are available to the client. Examples include a SQL database, LDAP database, or KVS back end." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:903(glossterm) -msgid "back-end store" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:907(secondary) -msgid "store" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:911(para) -msgid "The persistent data store used to save and retrieve information for a service, such as lists of Object Storage objects, current state of guest VMs, lists of user names, and so on. Also, the method that the Image Service uses to get and store VM images. Options include Object Storage, local file system, S3, and HTTP." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:920(glossterm) ./doc/glossary/glossary-terms.xml:922(primary) -msgid "bandwidth" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:928(para) -msgid "The amount of available data used by communication resources, such as the Internet. Represents the amount of data that is used to download things or the amount of data available to download." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:935(glossterm) -msgid "bare" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:937(primary) -msgid "bare, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:941(para) -msgid "An Image Service container format that indicates that no container exists for the VM image." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:947(glossterm) ./doc/glossary/glossary-terms.xml:949(primary) -msgid "base image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:953(para) -msgid "An OpenStack-provided image." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:958(glossterm) ./doc/glossary/glossary-terms.xml:960(primary) -msgid "Bell-LaPadula model" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:964(para) -msgid "A security model that focuses on data confidentiality and controlled access to classified information. This model divide the entities into subjects and objects. The clearance of a subject is compared to the classification of the object to determine if the subject is authorized for the specific access mode. The clearance or classification scheme is expressed in terms of a lattice." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:974(glossterm) ./doc/glossary/glossary-terms.xml:976(primary) -msgid "Bexar" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:980(para) -msgid "A grouped release of projects related to OpenStack that came out in February of 2011. It included only Compute (nova) and Object Storage (swift)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:983(para) -msgid "Bexar is the code name for the second release of OpenStack. The design summit took place in San Antonio, Texas, US, which is the county seat for Bexar county." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:990(glossterm) ./doc/glossary/glossary-terms.xml:992(primary) -msgid "binary" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:998(para) -msgid "Information that consists solely of ones and zeroes, which is the language of computers." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1004(glossterm) -msgid "bit" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1006(primary) -msgid "bits, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1010(para) -msgid "A bit is a single digit number that is in base of 2 (either a zero or one). Bandwidth usage is measured in bits per second." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1016(glossterm) ./doc/glossary/glossary-terms.xml:1018(primary) -msgid "bits per second (BPS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1022(para) -msgid "The universal measurement of how quickly data is transferred from place to place." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1028(glossterm) ./doc/glossary/glossary-terms.xml:1030(primary) -msgid "block device" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1034(para) -msgid "A device that moves data in the form of blocks. These device nodes interface the devices, such as hard disks, CD-ROM drives, flash drives, and other addressable regions of memory." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1041(glossterm) ./doc/glossary/glossary-terms.xml:1043(primary) -msgid "block migration" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1047(para) -msgid "A method of VM live migration used by KVM to evacuate instances from one host to another with very little downtime during a user-initiated switchover. Does not require shared storage. Supported by Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1055(glossterm) ./doc/glossary/glossary-terms.xml:1057(primary) -msgid "Block Storage" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1061(para) -msgid "The OpenStack core project that enables management of volumes, volume snapshots, and volume types. The project name of Block Storage is cinder." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1068(glossterm) ./doc/glossary/glossary-terms.xml:1070(primary) -msgid "Block Storage API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1074(para) -msgid "An API on a separate endpoint for attaching, detaching, and creating block storage for compute VMs." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1081(glossterm) -msgid "BMC" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1083(primary) -msgid "BMC (Baseboard Management Controller)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1087(para) -msgid "Baseboard Management Controller. The intelligence in the IPMI architecture, which is a specialized micro-controller that is embedded on the motherboard of a computer and acts as a server. Manages the interface between system management software and platform hardware." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1096(glossterm) ./doc/glossary/glossary-terms.xml:1098(primary) -msgid "bootable disk image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1102(para) -msgid "A type of VM image that exists as a single, bootable file." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1108(glossterm) ./doc/glossary/glossary-terms.xml:1110(primary) -msgid "Bootstrap Protocol (BOOTP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1114(para) -msgid "A network protocol used by a network client to obtain an IP address from a configuration server. Provided in Compute through the dnsmasq daemon when using either the FlatDHCP manager or VLAN manager network manager." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1122(glossterm) ./doc/glossary/glossary-terms.xml:1124(primary) -msgid "Border Gateway Protocol (BGP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1127(para) -msgid "The Border Gateway Protocol is a dynamic routing protocol that connects autonomous systems. Considered the backbone of the Internet, this protocol connects disparate networks to form a larger network." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1136(glossterm) -msgid "browser" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1138(primary) -msgid "browsers, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1142(para) -msgid "Any client software that enables a computer or device to access the Internet." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1148(glossterm) -msgid "builder file" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1150(primary) -msgid "builder files" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1154(para) -msgid "Contains configuration information that Object Storage uses to reconfigure a ring or to re-create it from scratch after a serious failure." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1161(glossterm) ./doc/glossary/glossary-terms.xml:1163(primary) -msgid "bursting" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1167(para) -msgid "The practice of utilizing a secondary environment to elastically build instances on-demand when the primary environment is resource constrained." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1176(glossterm) -msgid "button class" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1178(primary) -msgid "button classes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1182(para) -msgid "A group of related button types within horizon. Buttons to start, stop, and suspend VMs are in one class. Buttons to associate and disassociate floating IP addresses are in another class, and so on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1190(glossterm) -msgid "byte" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1192(primary) -msgid "bytes, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1196(para) -msgid "Set of bits that make up a single character; there are usually 8 bits to a byte." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1205(title) -msgid "C" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1208(glossterm) -msgid "CA" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1210(primary) -msgid "CA (Certificate/Certification Authority)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1214(para) -msgid "Certificate Authority or Certification Authority. In cryptography, an entity that issues digital certificates. The digital certificate certifies the ownership of a public key by the named subject of the certificate. This enables others (relying parties) to rely upon signatures or assertions made by the private key that corresponds to the certified public key. In this model of trust relationships, a CA is a trusted third party for both the subject (owner) of the certificate and the party relying upon the certificate. CAs are characteristic of many public key infrastructure (PKI) schemes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1228(glossterm) -msgid "cache pruner" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1230(primary) -msgid "cache pruners" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1234(para) -msgid "A program that keeps the Image Service VM image cache at or below its configured maximum size." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1240(glossterm) ./doc/glossary/glossary-terms.xml:1242(primary) -msgid "Cactus" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1246(para) -msgid "An OpenStack grouped release of projects that came out in the spring of 2011. It included Compute (nova), Object Storage (swift), and the Image Service (glance)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1249(para) -msgid "Cactus is a city in Texas, US and is the code name for the third release of OpenStack. When OpenStack releases went from three to six months long, the code name of the release changed to match a geography nearest the previous summit." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1257(glossterm) -msgid "CADF" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1259(para) -msgid "Cloud Auditing Data Federation (CADF) is a specification for audit event data. CADF is supported by OpenStack Identity." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1268(glossterm) ./doc/glossary/glossary-terms.xml:1270(primary) -msgid "CALL" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1274(para) -msgid "One of the RPC primitives used by the OpenStack message queue software. Sends a message and waits for a response." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1280(glossterm) ./doc/glossary/glossary-terms.xml:1282(primary) -msgid "capability" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1288(para) -msgid "Defines resources for a cell, including CPU, storage, and networking. Can apply to the specific services within a cell or a whole cell." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1295(glossterm) ./doc/glossary/glossary-terms.xml:1297(primary) -msgid "capacity cache" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1301(para) -msgid "A Compute back-end database table that contains the current workload, amount of free RAM, and number of VMs running on each host. Used to determine on which VM a host starts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1308(glossterm) ./doc/glossary/glossary-terms.xml:1310(primary) -msgid "capacity updater" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1314(para) -msgid "A notification driver that monitors VM instances and updates the capacity cache as needed." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1320(glossterm) -msgid "CAST" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1322(primary) -msgid "CAST (RPC primitive)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1326(para) -msgid "One of the RPC primitives used by the OpenStack message queue software. Sends a message and does not wait for a response." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1338(para) -msgid "A list of API endpoints that are available to a user after authentication with the Identity Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1344(glossterm) ./doc/glossary/glossary-terms.xml:1346(primary) -msgid "catalog service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1350(para) -msgid "An Identity Service that lists API endpoints that are available to a user after authentication with the Identity Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1356(glossterm) ./doc/glossary/glossary-terms.xml:1358(primary) -msgid "ceilometer" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1362(para) -msgid "The project name for the Telemetry service, which is an integrated project that provides metering and measuring facilities for OpenStack." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1369(glossterm) -msgid "cell" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1371(primary) ./doc/glossary/glossary-terms.xml:1387(primary) ./doc/glossary/glossary-terms.xml:1402(primary) ./doc/glossary/glossary-terms.xml:1514(primary) ./doc/glossary/glossary-terms.xml:6010(primary) -msgid "cells" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1377(para) -msgid "Provides logical partitioning of Compute resources in a child and parent relationship. Requests are passed from parent cells to child cells if the parent cannot provide the requested resource." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1385(glossterm) ./doc/glossary/glossary-terms.xml:1389(secondary) -msgid "cell forwarding" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1393(para) -msgid "A Compute option that enables parent cells to pass resource requests to child cells if the parent cannot provide the requested resource." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1400(glossterm) -msgid "cell manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1404(secondary) -msgid "cell managers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1408(para) -msgid "The Compute component that contains a list of the current capabilities of each host within the cell and routes requests as appropriate." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1415(glossterm) ./doc/glossary/glossary-terms.xml:1417(primary) -msgid "CentOS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1421(para) ./doc/glossary/glossary-terms.xml:2278(para) ./doc/glossary/glossary-terms.xml:5960(para) ./doc/glossary/glossary-terms.xml:6789(para) ./doc/glossary/glossary-terms.xml:7820(para) -msgid "A Linux distribution that is compatible with OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1426(glossterm) ./doc/glossary/glossary-terms.xml:1428(primary) -msgid "Ceph" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1432(para) -msgid "Massively scalable distributed storage system that consists of an object store, block store, and POSIX-compatible distributed file system. Compatible with OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1439(glossterm) ./doc/glossary/glossary-terms.xml:1441(primary) -msgid "CephFS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1445(para) -msgid "The POSIX-compliant file system provided by Ceph." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1450(glossterm) -msgid "certificate authority" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1452(primary) -msgid "certificate authority (Compute)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1456(para) -msgid "A simple certificate authority provided by Compute for cloudpipe VPNs and VM image decryption." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1462(glossterm) ./doc/glossary/glossary-terms.xml:1465(primary) -msgid "Challenge-Handshake Authentication Protocol (CHAP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1470(para) -msgid "An iSCSI authentication method supported by Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1475(glossterm) ./doc/glossary/glossary-terms.xml:1477(primary) -msgid "chance scheduler" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1481(para) -msgid "A scheduling method used by Compute that randomly chooses an available host from the pool." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1487(glossterm) ./doc/glossary/glossary-terms.xml:1489(primary) -msgid "changes since" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1493(para) -msgid "A Compute API parameter that downloads changes to the requested item since your last request, instead of downloading a new, fresh set of data and comparing it against the old data." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1500(glossterm) ./doc/glossary/glossary-terms.xml:1502(primary) -msgid "Chef" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1506(para) -msgid "An operating system configuration management tool supporting OpenStack deployments." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1512(glossterm) -msgid "child cell" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1516(secondary) ./doc/glossary/glossary-terms.xml:1519(primary) -msgid "child cells" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1523(para) -msgid "If a requested resource such as CPU time, disk storage, or memory is not available in the parent cell, the request is forwarded to its associated child cells. If the child cell can fulfill the request, it does. Otherwise, it attempts to pass the request to any of its children." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1532(glossterm) ./doc/glossary/glossary-terms.xml:1534(primary) -msgid "cinder" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1538(para) -msgid "A core OpenStack project that provides block storage services for VMs." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1544(glossterm) ./doc/glossary/glossary-terms.xml:1546(primary) -msgid "CirrOS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1550(para) -msgid "A minimal Linux distribution designed for use as a test image on clouds such as OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1556(glossterm) ./doc/glossary/glossary-terms.xml:1558(primary) -msgid "Cisco neutron plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1562(para) -msgid "A Networking plug-in for Cisco devices and technologies, including UCS and Nexus." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1568(glossterm) ./doc/glossary/glossary-terms.xml:1570(primary) -msgid "cloud architect" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1574(para) -msgid "A person who plans, designs, and oversees the creation of clouds." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1580(glossterm) ./doc/glossary/glossary-terms.xml:1582(primary) ./doc/glossary/glossary-terms.xml:1598(primary) ./doc/glossary/glossary-terms.xml:1614(primary) -msgid "cloud computing" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1588(para) -msgid "A model that enables access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service provider interaction." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1596(glossterm) -msgid "cloud controller" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1600(secondary) -msgid "cloud controllers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1604(para) -msgid "Collection of Compute components that represent the global state of the cloud; talks to services, such as Identity Service authentication, Object Storage, and node/storage workers through a queue." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1612(glossterm) -msgid "cloud controller node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1616(secondary) -msgid "cloud controller nodes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1620(para) -msgid "A node that runs network, volume, API, scheduler, and image services. Each service may be broken out into separate nodes for scalability or availability." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1627(glossterm) ./doc/glossary/glossary-terms.xml:1630(primary) -msgid "Cloud Data Management Interface (CDMI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1634(para) -msgid "SINA standard that defines a RESTful API for managing objects in the cloud, currently unsupported in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1640(glossterm) ./doc/glossary/glossary-terms.xml:1643(primary) -msgid "Cloud Infrastructure Management Interface (CIMI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1647(para) -msgid "An in-progress specification for cloud management. Currently unsupported in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1653(glossterm) ./doc/glossary/glossary-terms.xml:1655(primary) ./doc/glossary/glossary-terms.xml:1682(see) -msgid "cloud-init" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1659(para) -msgid "A package commonly installed in VM images that performs initialization of an instance after boot using information that it retrieves from the metadata service, such as the SSH public key and user data." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1667(glossterm) ./doc/glossary/glossary-terms.xml:1669(primary) -msgid "cloudadmin" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1673(para) -msgid "One of the default roles in the Compute RBAC system. Grants complete system access." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1679(glossterm) ./doc/glossary/glossary-terms.xml:1681(primary) -msgid "Cloudbase-Init" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1686(para) -msgid "A Windows port of cloud-init." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1691(glossterm) ./doc/glossary/glossary-terms.xml:1693(primary) ./doc/glossary/glossary-terms.xml:1707(primary) -msgid "cloudpipe" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1699(para) -msgid "A compute service that creates VPNs on a per-project basis." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1705(glossterm) ./doc/glossary/glossary-terms.xml:1709(secondary) -msgid "cloudpipe image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1713(para) -msgid "A pre-made VM image that serves as a cloudpipe server. Essentially, OpenVPN running on Linux." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1719(glossterm) -msgid "CMDB" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1721(primary) -msgid "CMDB (Configuration Management Database)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1725(para) -msgid "Configuration Management Database." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1730(glossterm) -msgid "command filter" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1732(primary) -msgid "command filters" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1736(para) -msgid "Lists allowed commands within the Compute rootwrap facility." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1742(glossterm) -msgid "community project" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1744(primary) -msgid "community projects" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1748(para) -msgid "A project that is not officially endorsed by the OpenStack Foundation. If the project is successful enough, it might be elevated to an incubated project and then to a core project, or it might be merged with the main code trunk." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1756(glossterm) ./doc/glossary/glossary-terms.xml:1758(primary) -msgid "compression" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1762(para) -msgid "Reducing the size of files by special encoding, the file can be decompressed again to its original content. OpenStack supports compression at the Linux file system level but does not support compression for things such as Object Storage objects or Image Service VM images." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1771(glossterm) ./doc/glossary/glossary-terms.xml:1773(primary) ./doc/glossary/glossary-terms.xml:1787(primary) ./doc/glossary/glossary-terms.xml:1802(primary) ./doc/glossary/glossary-terms.xml:1816(primary) ./doc/glossary/glossary-terms.xml:1845(primary) ./doc/glossary/glossary-terms.xml:1858(primary) -msgid "Compute" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1779(para) -msgid "The OpenStack core project that provides compute services. The project name of Compute service is nova." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1785(glossterm) ./doc/glossary/glossary-terms.xml:1789(secondary) ./doc/glossary/glossary-terms.xml:5653(secondary) -msgid "Compute API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1793(para) -msgid "The nova-api daemon provides access to nova services. Can communicate with other APIs, such as the Amazon EC2 API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1800(glossterm) ./doc/glossary/glossary-terms.xml:1804(secondary) -msgid "compute controller" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1808(para) -msgid "The Compute component that chooses suitable hosts on which to start VM instances." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1814(glossterm) ./doc/glossary/glossary-terms.xml:1818(secondary) -msgid "compute host" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1822(para) -msgid "Physical host dedicated to running compute nodes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1827(glossterm) -msgid "compute node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1829(primary) -msgid "compute nodes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1835(para) -msgid "A node that runs the nova-compute daemon that manages VM instances that provide a wide range of services, such as web applications and analytics." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1843(glossterm) ./doc/glossary/glossary-terms.xml:1847(secondary) -msgid "Compute service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1851(para) -msgid "Name for the Compute component that manages VMs." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1856(glossterm) ./doc/glossary/glossary-terms.xml:1860(secondary) -msgid "compute worker" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1864(para) -msgid "The Compute component that runs on each compute node and manages the VM instance life cycle, including run, reboot, terminate, attach/detach volumes, and so on. Provided by the nova-compute daemon." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1872(glossterm) -msgid "concatenated object" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1874(primary) ./doc/glossary/glossary-terms.xml:4996(primary) ./doc/glossary/glossary-terms.xml:5685(primary) ./doc/glossary/glossary-terms.xml:5699(primary) ./doc/glossary/glossary-terms.xml:5713(primary) ./doc/glossary/glossary-terms.xml:5728(primary) ./doc/glossary/glossary-terms.xml:5741(primary) ./doc/glossary/glossary-terms.xml:5755(primary) ./doc/glossary/glossary-terms.xml:5769(primary) ./doc/glossary/glossary-terms.xml:5824(primary) ./doc/glossary/glossary-terms.xml:7272(primary) -msgid "objects" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1876(secondary) ./doc/glossary/glossary-terms.xml:1879(primary) -msgid "concatenated objects" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1883(para) -msgid "A set of segment objects that Object Storage combines and sends to the client." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1889(glossterm) -msgid "conductor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1891(primary) -msgid "conductors" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1895(para) -msgid "In Compute, conductor is the process that proxies database requests from the compute process. Using conductor improves security because compute nodes do not need direct access to the database." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1903(glossterm) ./doc/glossary/glossary-terms.xml:1905(primary) -msgid "consistency window" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1909(para) -msgid "The amount of time it takes for a new Object Storage object to become accessible to all clients." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1915(glossterm) -msgid "console log" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1917(primary) -msgid "console logs" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1921(para) -msgid "Contains the output from a Linux VM console in Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1926(glossterm) -msgid "container" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1928(primary) ./doc/glossary/glossary-terms.xml:1943(primary) ./doc/glossary/glossary-terms.xml:1958(primary) ./doc/glossary/glossary-terms.xml:1973(primary) ./doc/glossary/glossary-terms.xml:1988(primary) ./doc/glossary/glossary-terms.xml:2001(primary) -msgid "containers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1934(para) -msgid "Organizes and stores objects in Object Storage. Similar to the concept of a Linux directory but cannot be nested. Alternative term for an Image Service container format." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1941(glossterm) -msgid "container auditor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1945(secondary) -msgid "container auditors" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1949(para) -msgid "Checks for missing replicas or incorrect objects in specified Object Storage containers through queries to the SQLite back-end database." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1956(glossterm) -msgid "container database" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1960(secondary) -msgid "container databases" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1964(para) -msgid "A SQLite database that stores Object Storage containers and container metadata. The container server accesses this database." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1971(glossterm) ./doc/glossary/glossary-terms.xml:1975(secondary) -msgid "container format" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1979(para) -msgid "A wrapper used by the Image Service that contains a VM image and its associated metadata, such as machine state, OS disk size, and so on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1986(glossterm) -msgid "container server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1990(secondary) -msgid "container servers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1994(para) -msgid "An Object Storage server that manages containers." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:1999(glossterm) ./doc/glossary/glossary-terms.xml:2003(secondary) -msgid "container service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2007(para) -msgid "The Object Storage component that provides container services, such as create, delete, list, and so on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2013(glossterm) ./doc/glossary/glossary-terms.xml:2015(primary) -msgid "content delivery network (CDN)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2019(para) -msgid "A content delivery network is a specialized network that is used to distribute content to clients, typically located close to the client for increased performance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2029(glossterm) -msgid "controller node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2031(primary) -msgid "controller nodes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2033(see) -msgid "under cloud computing" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2037(para) -msgid "Alternative term for a cloud controller node." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2042(glossterm) ./doc/glossary/glossary-terms.xml:2044(primary) -msgid "core API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2048(para) -msgid "Depending on context, the core API is either the OpenStack API or the main API of a specific core project, such as Compute, Networking, Image Service, and so on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2055(glossterm) ./doc/glossary/glossary-terms.xml:2057(primary) -msgid "core project" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2061(para) -msgid "An official OpenStack project. Currently consists of Compute (nova), Object Storage (swift), Image Service (glance), Identity (keystone), Dashboard (horizon), Networking (neutron), and Block Storage (cinder). The Telemetry module (ceilometer) and Orchestration module (heat) are integrated projects as of the Havana release. In the Icehouse release, the Database module (trove) gains integrated project status." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2072(glossterm) ./doc/glossary/glossary-terms.xml:2074(primary) -msgid "cost" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2078(para) -msgid "Under the Compute distributed scheduler, this is calculated by looking at the capabilities of each host relative to the flavor of the VM instance being requested." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2085(glossterm) ./doc/glossary/glossary-terms.xml:2087(primary) -msgid "credentials" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2091(para) -msgid "Data that is only known to or accessible by a user and used to verify that the user is who he says he is. Credentials are presented to the server during authentication. 
Examples include a password, secret key, digital certificate, and fingerprint." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2099(glossterm) ./doc/glossary/glossary-terms.xml:2101(primary) -msgid "Crowbar" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2105(para) -msgid "An open source community project by Dell that aims to provide all necessary services to quickly deploy clouds." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2111(glossterm) ./doc/glossary/glossary-terms.xml:2113(primary) -msgid "current workload" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2117(para) -msgid "An element of the Compute capacity cache that is calculated based on the number of build, snapshot, migrate, and resize operations currently in progress on a given host." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2124(glossterm) -msgid "customer" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2126(primary) -msgid "customers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2128(see) -msgid "tenants" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2132(para) -msgid "Alternative term for tenant." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2137(glossterm) ./doc/glossary/glossary-terms.xml:2139(primary) -msgid "customization module" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2143(para) -msgid "A user-created Python module that is loaded by horizon to change the look and feel of the dashboard." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2152(title) -msgid "D" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2155(glossterm) -msgid "daemon" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2157(primary) -msgid "daemons" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2163(para) -msgid "A process that runs in the background and waits for requests. May or may not listen on a TCP or UDP port. Do not confuse with a worker." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2170(glossterm) -msgid "DAC" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2172(primary) -msgid "DAC (discretionary access control)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2176(para) -msgid "Discretionary access control. Governs the ability of subjects to access objects, while enabling users to make policy decisions and assign security attributes. The traditional UNIX system of users, groups, and read-write-execute permissions is an example of DAC." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2185(glossterm) ./doc/glossary/glossary-terms.xml:2187(primary) -msgid "dashboard" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2191(para) -msgid "The web-based management interface for OpenStack. An alternative name for horizon." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2197(glossterm) ./doc/glossary/glossary-terms.xml:2201(secondary) -msgid "data encryption" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2199(primary) -msgid "data" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2205(para) -msgid "Both Image Service and Compute support encrypted virtual machine (VM) images (but not instances). In-transit data encryption is supported in OpenStack using technologies such as HTTPS, SSL, TLS, and SSH. Object Storage does not support object encryption at the application level but may support storage that uses disk encryption." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2215(glossterm) ./doc/glossary/glossary-terms.xml:2219(secondary) -msgid "database ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2217(primary) ./doc/glossary/glossary-terms.xml:2231(primary) -msgid "databases" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2223(para) -msgid "A unique ID given to each replica of an Object Storage database." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2229(glossterm) -msgid "database replicator" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2233(secondary) -msgid "database replicators" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2237(para) -msgid "An Object Storage component that copies changes in the account, container, and object databases to other nodes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2243(glossterm) ./doc/glossary/glossary-terms.xml:2245(primary) -msgid "Database Service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2249(para) -msgid "An integrated project that provide scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines. The project name of Database Service is trove." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2259(glossterm) -msgid "deallocate" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2261(primary) -msgid "deallocate, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2265(para) -msgid "The process of removing the association between a floating IP address and a fixed IP address. Once this association is removed, the floating IP returns to the address pool." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2272(glossterm) ./doc/glossary/glossary-terms.xml:2274(primary) -msgid "Debian" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2283(glossterm) ./doc/glossary/glossary-terms.xml:2285(primary) -msgid "deduplication" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2289(para) -msgid "The process of finding duplicate data at the disk block, file, and/or object level to minimize storage usecurrently unsupported within OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2296(glossterm) -msgid "default panel" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2298(primary) -msgid "default panels" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2302(para) -msgid "The default panel that is displayed when a user accesses the horizon dashboard." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2308(glossterm) -msgid "default tenant" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2310(primary) -msgid "default tenants" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2314(para) -msgid "New users are assigned to this tenant if no tenant is specified when a user is created." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2320(glossterm) -msgid "default token" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2322(primary) -msgid "default tokens" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2326(para) -msgid "An Identity Service token that is not associated with a specific tenant and is exchanged for a scoped token." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2332(glossterm) ./doc/glossary/glossary-terms.xml:2334(primary) -msgid "delayed delete" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2338(para) -msgid "An option within Image Service so that an image is deleted after a predefined number of seconds instead of immediately." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2344(glossterm) ./doc/glossary/glossary-terms.xml:2346(primary) -msgid "delivery mode" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2350(para) -msgid "Setting for the Compute RabbitMQ message delivery mode; can be set to either transient or persistent." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2356(glossterm) ./doc/glossary/glossary-terms.xml:2358(primary) -msgid "denial of service (DoS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2362(para) -msgid "Denial of service (DoS) is a short form for denial-of-service attack. This is a malicious attempt to prevent legitimate users from using a service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2370(glossterm) ./doc/glossary/glossary-terms.xml:2372(primary) -msgid "deprecated auth" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2376(para) -msgid "An option within Compute that enables administrators to create and manage users through the nova-manage command as opposed to using the Identity Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2383(glossterm) ./doc/glossary/glossary-terms.xml:2385(primary) -msgid "Desktop-as-a-Service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2389(para) -msgid "A platform that provides a suite of desktop environments that users may log in to receive a desktop experience from any location. This may provide general use, development, or even homogeneous testing environments." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2399(glossterm) ./doc/glossary/glossary-terms.xml:2401(primary) -msgid "developer" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2405(para) -msgid "One of the default roles in the Compute RBAC system and the default role assigned to a new user." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2411(glossterm) ./doc/glossary/glossary-terms.xml:2413(primary) -msgid "device ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2417(para) -msgid "Maps Object Storage partitions to physical storage devices." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2423(glossterm) ./doc/glossary/glossary-terms.xml:2425(primary) -msgid "device weight" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2429(para) -msgid "Distributes partitions proportionately across Object Storage devices based on the storage capacity of each device." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2435(glossterm) ./doc/glossary/glossary-terms.xml:2437(primary) -msgid "DevStack" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2443(para) -msgid "Community project that uses shell scripts to quickly build complete OpenStack development environments." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2449(glossterm) -msgid "DHCP" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2451(primary) -msgid "DHCP (Dynamic Host Configuration Protocol)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2453(secondary) ./doc/glossary/glossary-terms.xml:3834(secondary) ./doc/glossary/glossary-terms.xml:3964(secondary) ./doc/glossary/glossary-terms.xml:4045(secondary) ./doc/glossary/glossary-terms.xml:5919(secondary) ./doc/glossary/glossary-terms.xml:6754(secondary) -msgid "basics of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2457(para) -msgid "Dynamic Host Configuration Protocol. A network protocol that configures devices that are connected to a network so that they can communicate on that network by using the Internet Protocol (IP). 
The protocol is implemented in a client-server model where DHCP clients request configuration data, such as an IP address, a default route, and one or more DNS server addresses from a DHCP server." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2467(glossterm) ./doc/glossary/glossary-terms.xml:2469(primary) -msgid "DHCP agent" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2473(para) -msgid "OpenStack Networking agent that provides DHCP services for virtual networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2479(glossterm) ./doc/glossary/glossary-terms.xml:2481(primary) -msgid "Diablo" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2485(para) -msgid "A grouped release of projects related to OpenStack that came out in the fall of 2011, the fourth release of OpenStack. It included Compute (nova 2011.3), Object Storage (swift 1.4.3), and the Image Service (glance)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2489(para) -msgid "Diablo is the code name for the fourth release of OpenStack. The design summit took place in the Bay Area near Santa Clara, California, US and Diablo is a nearby city." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2497(glossterm) -msgid "direct consumer" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2499(primary) -msgid "direct consumers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2503(para) -msgid "An element of the Compute RabbitMQ that comes to life when an RPC call is executed. It connects to a direct exchange through a unique exclusive queue, sends the message, and terminates." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2510(glossterm) -msgid "direct exchange" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2512(primary) -msgid "direct exchanges" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2516(para) -msgid "A routing table that is created within the Compute RabbitMQ during RPC calls; one is created for each RPC call that is invoked." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2523(glossterm) -msgid "direct publisher" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2525(primary) -msgid "direct publishers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2529(para) -msgid "Element of RabbitMQ that provides a response to an incoming MQ message." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2535(glossterm) ./doc/glossary/glossary-terms.xml:2537(primary) -msgid "disassociate" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2541(para) -msgid "The process of removing the association between a floating IP address and a fixed IP address and thus returning the floating IP address to the address pool." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2548(glossterm) ./doc/glossary/glossary-terms.xml:2550(primary) -msgid "disk encryption" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2554(para) -msgid "The ability to encrypt data at the file system, disk partition, or whole-disk level. Supported within Compute VMs." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2560(glossterm) ./doc/glossary/glossary-terms.xml:2562(primary) -msgid "disk format" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2566(para) -msgid "The underlying format that a disk image for a VM is stored as within the Image Service back-end store. For example, AMI, ISO, QCOW2, VMDK, and so on."
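The direct consumer, direct exchange, and direct publisher entries above describe one RPC round trip through RabbitMQ. A minimal sketch of the reply side using the pika client (pika 1.x call signatures; assumes a broker on localhost, and the exchange and routing-key names are invented):

    # Sketch of the RPC reply pattern: a direct exchange plus a unique
    # exclusive queue that only the current connection can consume from.
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # One direct exchange; the routing key stands in for an RPC message ID.
    channel.exchange_declare(exchange="rpc_replies", exchange_type="direct")

    # Exclusive queue: deleted when the connection closes, and only this
    # connection may consume from it (the "direct consumer" role).
    result = channel.queue_declare(queue="", exclusive=True)
    reply_queue = result.method.queue
    channel.queue_bind(exchange="rpc_replies", queue=reply_queue,
                       routing_key="msg-id-42")

    # The "direct publisher" role: send a response to that routing key.
    channel.basic_publish(exchange="rpc_replies", routing_key="msg-id-42",
                          body=b"rpc response")

    method, properties, body = channel.basic_get(queue=reply_queue, auto_ack=True)
    print(body)  # b'rpc response'
    connection.close()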
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2573(glossterm) ./doc/glossary/glossary-terms.xml:2575(primary) -msgid "dispersion" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2579(para) -msgid "In Object Storage, tools to test and ensure dispersion of objects and containers to ensure fault tolerance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2585(glossterm) ./doc/glossary/glossary-terms.xml:2587(primary) -msgid "distributed virtual router (DVR)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2591(para) -msgid "Mechanism for highly-available multi-host routing when using OpenStack Networking (neutron)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2597(glossterm) ./doc/glossary/glossary-terms.xml:2599(primary) -msgid "Django" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2603(para) -msgid "A web framework used extensively in horizon." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2609(glossterm) -msgid "DNS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2611(primary) ./doc/glossary/glossary-terms.xml:2627(primary) -msgid "DNS (Domain Name Server, Service or System)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2613(secondary) -msgid "definitions of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2617(para) -msgid "Domain Name Server. A hierarchical and distributed naming system for computers, services, and resources connected to the Internet or a private network. Associates a human-friendly names to IP addresses." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2625(glossterm) -msgid "DNS record" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2629(secondary) -msgid "DNS records" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2633(para) -msgid "A record that specifies information about a particular domain and belongs to the domain." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2639(glossterm) ./doc/glossary/glossary-terms.xml:2641(primary) -msgid "dnsmasq" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2645(para) -msgid "Daemon that provides DNS, DHCP, BOOTP, and TFTP services, used by the Compute VLAN manager and FlatDHCP manager." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2651(glossterm) -msgid "domain" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2653(primary) -msgid "domain, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2657(para) -msgid "Separates a website from other sites. Often, the domain name has two or more parts that are separated by dots. For example, yahoo.com, usa.gov, harvard.edu, or mail.yahoo.com." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2661(para) -msgid "A domain is an entity or container of all DNS-related information containing one or more records." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2667(glossterm) -msgid "Domain Name Service (DNS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2670(para) -msgid "In Compute, the support that enables associating DNS entries with floating IP addresses, nodes, or cells so that hostnames are consistent across reboots." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2677(glossterm) -msgid "Domain Name System (DNS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2680(para) -msgid "A system by which Internet domain name-to-address and address-to-name resolutions are determined." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2683(para) -msgid "DNS helps navigate the Internet by translating the IP address into an address that is easier to remember For example, translating 111.111.111.1 into www.yahoo.com." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2687(para) -msgid "All domains and their components, such as mail servers, utilize DNS to resolve to the appropriate locations. DNS servers are usually set up in a master-slave relationship such that failure of the master invokes the slave. DNS servers might also be clustered or replicated such that changes made to one DNS server are automatically propagated to other active servers." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2697(glossterm) -msgid "download" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2699(primary) -msgid "download, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2703(para) -msgid "The transfer of data, usually in the form of files, from one computer to another." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2709(glossterm) -msgid "DRTM" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2711(primary) -msgid "DRTM (dynamic root of trust measurement)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2715(para) -msgid "Dynamic root of trust measurement." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2720(glossterm) ./doc/glossary/glossary-terms.xml:2722(primary) -msgid "durable exchange" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2726(para) -msgid "The Compute RabbitMQ message exchange that remains active when the server restarts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2732(glossterm) ./doc/glossary/glossary-terms.xml:2734(primary) -msgid "durable queue" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2738(para) -msgid "A Compute RabbitMQ message queue that remains active when the server restarts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2744(glossterm) -msgid "Dynamic Host Configuration Protocol (DHCP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2747(para) -msgid "A method to automatically configure networking for a host at boot time. Provided by both Networking and Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2753(glossterm) -msgid "Dynamic HyperText Markup Language (DHTML)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2756(primary) -msgid "DHTML (Dynamic HyperText Markup Language)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2760(para) -msgid "Pages that use HTML, JavaScript, and Cascading Style Sheets to enable users to interact with a web page or show simple animation." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2770(title) -msgid "E" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2773(glossterm) ./doc/glossary/glossary-terms.xml:2775(primary) -msgid "east-west traffic" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2779(para) -msgid "Network traffic between servers in the same cloud or data center. See also north-south traffic." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2786(glossterm) ./doc/glossary/glossary-terms.xml:2788(primary) -msgid "EBS boot volume" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2792(para) -msgid "An Amazon EBS storage volume that contains a bootable VM image, currently unsupported in OpenStack." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2798(glossterm) ./doc/glossary/glossary-terms.xml:2800(primary) ./doc/glossary/glossary-terms.xml:3028(glossterm) ./doc/glossary/glossary-terms.xml:3030(primary) -msgid "ebtables" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2804(para) -msgid "Used in Compute along with arptables, iptables, and ip6tables to create firewalls and to ensure isolation of network communications." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2811(glossterm) ./doc/glossary/glossary-terms.xml:2822(primary) ./doc/glossary/glossary-terms.xml:2836(primary) ./doc/glossary/glossary-terms.xml:2850(primary) ./doc/glossary/glossary-terms.xml:2864(primary) -msgid "EC2" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2814(para) -msgid "The Amazon commercial compute product, similar to Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2820(glossterm) ./doc/glossary/glossary-terms.xml:2824(secondary) -msgid "EC2 access key" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2828(para) -msgid "Used along with an EC2 secret key to access the Compute EC2 API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2834(glossterm) ./doc/glossary/glossary-terms.xml:2838(secondary) -msgid "EC2 API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2842(para) -msgid "OpenStack supports accessing the Amazon EC2 API through Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2848(glossterm) -msgid "EC2 Compatibility API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2852(secondary) -msgid "EC2 compatibility API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2856(para) -msgid "A Compute component that enables OpenStack to communicate with Amazon EC2." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2862(glossterm) ./doc/glossary/glossary-terms.xml:2866(secondary) -msgid "EC2 secret key" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2870(para) -msgid "Used along with an EC2 access key when communicating with the Compute EC2 API; used to digitally sign each request." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2876(glossterm) ./doc/glossary/glossary-terms.xml:2878(primary) -msgid "Elastic Block Storage (EBS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2882(para) -msgid "The Amazon commercial block storage product." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2887(glossterm) -msgid "encryption" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2889(primary) -msgid "encryption, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2893(para) -msgid "OpenStack supports encryption technologies such as HTTPS, SSH, SSL, TLS, digital certificates, and data encryption." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2899(glossterm) -msgid "endpoint" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2902(para) -msgid "See API endpoint." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2907(glossterm) ./doc/glossary/glossary-terms.xml:2911(secondary) -msgid "endpoint registry" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2915(para) -msgid "Alternative term for an Identity Service catalog." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2920(glossterm) ./doc/glossary/glossary-terms.xml:2922(primary) -msgid "encapsulation" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2926(para) -msgid "The practice of placing one packet type within another for the purposes of abstracting or securing data. Examples include GRE, MPLS, or IPsec." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2935(glossterm) -msgid "endpoint template" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2939(secondary) -msgid "endpoint templates" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2943(para) -msgid "A list of URL and port number endpoints that indicate where a service, such as Object Storage, Compute, Identity, and so on, can be accessed." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2950(glossterm) -msgid "entity" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2952(primary) -msgid "entity, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2956(para) -msgid "Any piece of hardware or software that wants to connect to the network services provided by Networking, the network connectivity service. An entity can make use of Networking by implementing a VIF." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2964(glossterm) -msgid "ephemeral image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2966(primary) -msgid "ephemeral images" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2970(para) -msgid "A VM image that does not save changes made to its volumes and reverts them to their original state after the instance is terminated." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2977(glossterm) ./doc/glossary/glossary-terms.xml:2979(primary) ./doc/glossary/glossary-terms.xml:5618(see) -msgid "ephemeral volume" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2983(para) -msgid "Volume that does not save the changes made to it and reverts to its original state when the current user relinquishes control." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2989(glossterm) ./doc/glossary/glossary-terms.xml:2991(primary) -msgid "Essex" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2995(para) -msgid "A grouped release of projects related to OpenStack that came out in April 2012, the fifth release of OpenStack. It included Compute (nova 2012.1), Object Storage (swift 1.4.8), Image (glance), Identity (keystone), and Dashboard (horizon)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:2999(para) -msgid "Essex is the code name for the fifth release of OpenStack. The design summit took place in Boston, Massachusetts, US and Essex is a nearby city." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3006(glossterm) -msgid "ESX" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3008(primary) -msgid "ESX hypervisor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3012(para) ./doc/glossary/glossary-terms.xml:3023(para) ./doc/glossary/glossary-terms.xml:4870(para) ./doc/glossary/glossary-terms.xml:8313(para) ./doc/glossary/glossary-terms.xml:8546(para) ./doc/glossary/glossary-terms.xml:8776(para) ./doc/glossary/glossary-terms.xml:8877(para) ./doc/glossary/glossary-terms.xml:8904(para) -msgid "An OpenStack-supported hypervisor." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3017(glossterm) -msgid "ESXi" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3019(primary) -msgid "ESXi hypervisor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3034(para) -msgid "Filtering tool for a Linux bridging firewall, enabling filtering of network traffic passing through a Linux bridge. Used to restrict communications between hosts and/or nodes in OpenStack Compute along with iptables, arptables, and ip6tables." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3042(glossterm) ./doc/glossary/glossary-terms.xml:3044(primary) -msgid "ETag" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3048(para) -msgid "MD5 hash of an object within Object Storage, used to ensure data integrity." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3054(glossterm) ./doc/glossary/glossary-terms.xml:3056(primary) -msgid "euca2ools" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3060(para) -msgid "A collection of command-line tools for administering VMs; most are compatible with OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3066(glossterm) ./doc/glossary/glossary-terms.xml:3068(primary) -msgid "Eucalyptus Kernel Image (EKI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3072(para) -msgid "Used along with an ERI to create an EMI." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3077(glossterm) ./doc/glossary/glossary-terms.xml:3079(primary) -msgid "Eucalyptus Machine Image (EMI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3083(para) -msgid "VM image container format supported by Image Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3088(glossterm) ./doc/glossary/glossary-terms.xml:3090(primary) -msgid "Eucalyptus Ramdisk Image (ERI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3094(para) -msgid "Used along with an EKI to create an EMI." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3099(glossterm) -msgid "evacuate" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3101(primary) -msgid "evacuation, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3105(para) -msgid "The process of migrating one or all virtual machine (VM) instances from one host to another, compatible with both shared storage live migration and block migration." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3112(glossterm) ./doc/glossary/glossary-terms.xml:3114(primary) -msgid "exchange" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3118(para) -msgid "Alternative term for a RabbitMQ message exchange." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3123(glossterm) -msgid "exchange type" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3125(primary) -msgid "exchange types" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3129(para) -msgid "A routing algorithm in the Compute RabbitMQ." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3134(glossterm) -msgid "exclusive queue" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3136(primary) ./doc/glossary/glossary-terms.xml:8198(primary) -msgid "queues" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3138(secondary) ./doc/glossary/glossary-terms.xml:3141(primary) -msgid "exclusive queues" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3145(para) -msgid "Connected to by a direct consumer in RabbitMQCompute, the message can be consumed only by the current connection." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3151(glossterm) ./doc/glossary/glossary-terms.xml:3153(primary) -msgid "extended attributes (xattrs)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3157(para) -msgid "File system option that enables storage of additional information beyond owner, group, permissions, modification time, and so on. The underlying Object Storage file system must support extended attributes." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3165(glossterm) -msgid "extension" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3167(primary) -msgid "extensions" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3173(para) -msgid "Alternative term for an API extension or plug-in. In the context of Identity Service, this is a call that is specific to the implementation, such as adding support for OpenID." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3180(glossterm) -msgid "external network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3182(primary) -msgid "external network, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3186(para) -msgid "A network segment typically used for instance Internet access." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3192(glossterm) -msgid "extra specs" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3194(primary) -msgid "extra specs, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3198(para) -msgid "Specifies additional requirements when Compute determines where to start a new instance. Examples include a minimum amount of network bandwidth or a GPU." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3208(title) -msgid "F" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3211(glossterm) ./doc/glossary/glossary-terms.xml:3213(primary) -msgid "FakeLDAP" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3217(para) -msgid "An easy method to create a local LDAP directory for testing Identity Service and Compute. Requires Redis." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3223(glossterm) ./doc/glossary/glossary-terms.xml:3225(primary) -msgid "fan-out exchange" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3229(para) -msgid "Within RabbitMQ and Compute, it is the messaging interface that is used by the scheduler service to receive capability messages from the compute, volume, and network nodes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3236(glossterm) ./doc/glossary/glossary-terms.xml:3238(primary) -msgid "Fedora" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3242(para) -msgid "A Linux distribution compatible with OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3247(glossterm) ./doc/glossary/glossary-terms.xml:3249(primary) -msgid "Fibre Channel" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3253(para) -msgid "Storage protocol similar in concept to TCP/IP; encapsulates SCSI commands and data." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3259(glossterm) ./doc/glossary/glossary-terms.xml:3261(primary) -msgid "Fibre Channel over Ethernet (FCoE)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3265(para) -msgid "The fibre channel protocol tunneled within Ethernet." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3270(glossterm) ./doc/glossary/glossary-terms.xml:3272(primary) -msgid "fill-first scheduler" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3276(para) -msgid "The Compute scheduling method that attempts to fill a host with VMs rather than starting new VMs on a variety of hosts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3282(glossterm) -msgid "filter" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3284(primary) ./doc/glossary/glossary-terms.xml:4259(primary) -msgid "filtering" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3290(para) -msgid "The step in the Compute scheduling process when hosts that cannot run VMs are eliminated and not chosen." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3296(glossterm) -msgid "firewall" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3298(primary) -msgid "firewalls" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3302(para) -msgid "Used to restrict communications between hosts and/or nodes, implemented in Compute using iptables, arptables, ip6tables, and etables." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3309(glossterm) ./doc/glossary/glossary-terms.xml:3311(primary) -msgid "Firewall-as-a-Service (FWaaS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3315(para) -msgid "A Networking extension that provides perimeter firewall functionality." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3321(glossterm) -msgid "fixed IP address" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3323(primary) ./doc/glossary/glossary-terms.xml:3419(primary) ./doc/glossary/glossary-terms.xml:4485(primary) ./doc/glossary/glossary-terms.xml:6264(primary) ./doc/glossary/glossary-terms.xml:6453(primary) ./doc/glossary/glossary-terms.xml:7450(primary) ./doc/glossary/glossary-terms.xml:7678(primary) -msgid "IP addresses" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3325(secondary) -msgid "fixed" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3328(primary) -msgid "fixed IP addresses" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3332(para) -msgid "An IP address that is associated with the same instance each time that instance boots, is generally not accessible to end users or the public Internet, and is used for management of the instance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3340(glossterm) ./doc/glossary/glossary-terms.xml:3342(primary) -msgid "Flat Manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3346(para) -msgid "The Compute component that gives IP addresses to authorized nodes and assumes DHCP, DNS, and routing configuration and services are provided by something else." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3353(glossterm) ./doc/glossary/glossary-terms.xml:3355(primary) -msgid "flat mode injection" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3359(para) -msgid "A Compute networking method where the OS network configuration information is injected into the VM image before the instance starts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3366(glossterm) ./doc/glossary/glossary-terms.xml:3368(primary) -msgid "flat network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3372(para) -msgid "The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A flat network is a private network interface, which is controlled by the flat_interface option with flat managers." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3382(glossterm) ./doc/glossary/glossary-terms.xml:3384(primary) -msgid "FlatDHCP Manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3388(para) -msgid "The Compute component that provides dnsmasq (DHCP, DNS, BOOTP, TFTP) and radvd (routing) services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3394(glossterm) ./doc/glossary/glossary-terms.xml:3396(primary) -msgid "flavor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3400(para) -msgid "Alternative term for a VM instance type." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3405(glossterm) ./doc/glossary/glossary-terms.xml:3407(primary) -msgid "flavor ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3411(para) -msgid "UUID for each Compute or Image Service VM flavor or instance type." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3417(glossterm) ./doc/glossary/glossary-terms.xml:3424(primary) -msgid "floating IP address" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3421(secondary) -msgid "floating" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3428(para) -msgid "An IP address that a project can associate with a VM so that the instance has the same public IP address each time that it boots. You create a pool of floating IP addresses and assign them to instances as they are launched to maintain a consistent IP address for maintaining DNS assignment." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3437(glossterm) ./doc/glossary/glossary-terms.xml:3439(primary) -msgid "Folsom" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3443(para) -msgid "A grouped release of projects related to OpenStack that came out in the fall of 2012, the sixth release of OpenStack. It includes Compute (nova), Object Storage (swift), Identity (keystone), Networking (neutron), Image Service (glance), and Volumes or Block Storage (cinder)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3448(para) -msgid "Folsom is the code name for the sixth release of OpenStack. The design summit took place in San Francisco, California, US and Folsom is a nearby city." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3456(glossterm) ./doc/glossary/glossary-terms.xml:3458(primary) -msgid "FormPost" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3462(para) -msgid "Object Storage middleware that uploads (posts) an image through a form on a web page." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3468(glossterm) -msgid "front end" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3470(primary) -msgid "front end, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3474(para) -msgid "The point where a user interacts with a service; can be an API endpoint, the horizon dashboard, or a command-line tool." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3483(title) -msgid "G" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3486(glossterm) ./doc/glossary/glossary-terms.xml:3488(primary) -msgid "gateway" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3492(para) -msgid "An IP address, typically assigned to a router, that passes network traffic between different networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3498(glossterm) ./doc/glossary/glossary-terms.xml:3500(primary) -msgid "Generic Receive Offload (GRO)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3503(para) -msgid "Feature of certain network interface drivers that combines many smaller received packets into a large packet before delivery to the kernel IP stack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3510(glossterm) ./doc/glossary/glossary-terms.xml:3512(primary) -msgid "generic routing encapsulation (GRE)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3515(para) -msgid "Protocol that encapsulates a wide variety of network layer protocols inside virtual point-to-point links." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3522(glossterm) ./doc/glossary/glossary-terms.xml:3532(primary) ./doc/glossary/glossary-terms.xml:3547(primary) -msgid "glance" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3525(para) -msgid "A core project that provides the OpenStack Image Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3530(glossterm) ./doc/glossary/glossary-terms.xml:3534(secondary) -msgid "glance API server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3538(para) -msgid "Processes client requests for VMs, updates Image Service metadata on the registry server, and communicates with the store adapter to upload VM images from the back-end store." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3545(glossterm) ./doc/glossary/glossary-terms.xml:3549(secondary) -msgid "glance registry" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3553(para) -msgid "Alternative term for the Image Service image registry." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3558(glossterm) ./doc/glossary/glossary-terms.xml:3562(secondary) ./doc/glossary/glossary-terms.xml:3565(primary) -msgid "global endpoint template" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3569(para) -msgid "The Identity Service endpoint template that contains services available to all tenants." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3575(glossterm) ./doc/glossary/glossary-terms.xml:3577(primary) -msgid "GlusterFS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3581(para) -msgid "A file system designed to aggregate NAS hosts, compatible with OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3587(glossterm) ./doc/glossary/glossary-terms.xml:3589(primary) -msgid "golden image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3593(para) -msgid "A method of operating system installation where a finalized disk image is created and then used by all nodes without modification." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3600(glossterm) ./doc/glossary/glossary-terms.xml:3602(primary) -msgid "Graphic Interchange Format (GIF)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3606(para) -msgid "A type of image file that is commonly used for animated images on web pages." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3612(glossterm) ./doc/glossary/glossary-terms.xml:3614(primary) -msgid "Graphics Processing Unit (GPU)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3618(para) -msgid "Choosing a host based on the existence of a GPU is currently unsupported in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3624(glossterm) ./doc/glossary/glossary-terms.xml:3626(primary) -msgid "Green Threads" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3630(para) -msgid "The cooperative threading model used by Python; reduces race conditions and only context switches when specific library calls are made. Each OpenStack service is its own thread." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3637(glossterm) ./doc/glossary/glossary-terms.xml:3639(primary) -msgid "Grizzly" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3643(para) -msgid "The code name for the seventh release of OpenStack. The design summit took place in San Diego, California, US and Grizzly is an element of the state flag of California." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3651(glossterm) ./doc/glossary/glossary-terms.xml:3653(primary) -msgid "guest OS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3657(para) -msgid "An operating system instance running under the control of a hypervisor." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3666(title) -msgid "H" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3669(glossterm) ./doc/glossary/glossary-terms.xml:3671(primary) -msgid "Hadoop" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3675(para) -msgid "Apache Hadoop is an open source software framework that supports data-intensive distributed applications." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3681(glossterm) ./doc/glossary/glossary-terms.xml:3683(primary) -msgid "handover" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3687(para) -msgid "An object state in Object Storage where a new replica of the object is automatically created due to a drive failure." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3693(glossterm) ./doc/glossary/glossary-terms.xml:3695(primary) -msgid "hard reboot" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3699(para) -msgid "A type of reboot where a physical or virtual power button is pressed as opposed to a graceful, proper shutdown of the operating system." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3706(glossterm) ./doc/glossary/glossary-terms.xml:3708(primary) -msgid "Havana" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3712(para) -msgid "The code name for the eighth release of OpenStack. The design summit took place in Portland, Oregon, US and Havana is an unincorporated community in Oregon." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3719(glossterm) ./doc/glossary/glossary-terms.xml:3721(primary) -msgid "heat" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3725(para) -msgid "An integrated project that aims to orchestrate multiple cloud applications for OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3731(glossterm) ./doc/glossary/glossary-terms.xml:3733(primary) ./doc/glossary/glossary-terms.xml:7652(see) -msgid "Heat Orchestration Template (HOT)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3737(para) -msgid "Heat input in the format native to OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3742(glossterm) ./doc/glossary/glossary-terms.xml:3744(primary) -msgid "health monitor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3748(para) -msgid "Determines whether back-end members of a VIP pool can process a request. A pool can have several health monitors associated with it. When a pool has several monitors associated with it, all monitors check each member of the pool. All monitors must declare a member to be healthy for it to stay active." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3758(glossterm) ./doc/glossary/glossary-terms.xml:3760(primary) -msgid "high availability (HA)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3764(para) -msgid "A high availability system design approach and associated service implementation ensures that a prearranged level of operational performance will be met during a contractual measurement period. High availability systems seeks to minimize system downtime and data loss." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3775(glossterm) -msgid "horizon" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3778(para) -msgid "OpenStack project that provides a dashboard, which is a web interface." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3784(glossterm) -msgid "horizon plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3786(primary) -msgid "horizon plug-ins" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3790(para) -msgid "A plug-in for the OpenStack dashboard (horizon)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3795(glossterm) -msgid "host" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3797(primary) -msgid "hosts, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3801(para) -msgid "A physical computer, not a VM instance (node)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3806(glossterm) ./doc/glossary/glossary-terms.xml:3808(primary) -msgid "host aggregate" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3812(para) -msgid "A method to further subdivide availability zones into hypervisor pools, a collection of common hosts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3818(glossterm) ./doc/glossary/glossary-terms.xml:3820(primary) -msgid "Host Bus Adapter (HBA)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3824(para) -msgid "Device plugged into a PCI slot, such as a fibre channel or network card." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3830(glossterm) -msgid "HTTP" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3832(primary) -msgid "HTTP (Hypertext Transfer Protocol)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3838(para) -msgid "Hypertext Transfer Protocol. HTTP is an application protocol for distributed, collaborative, hypermedia information systems. It is the foundation of data communication for the World Wide Web. Hypertext is structured text that uses logical links (hyperlinks) between nodes containing text. HTTP is the protocol to exchange or transfer hypertext." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3848(glossterm) -msgid "HTTPS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3850(primary) -msgid "HTTPS (Hypertext Transfer Protocol Secure)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3854(para) -msgid "Hypertext Transfer Protocol Secure (HTTPS) is a communications protocol for secure communication over a computer network, with especially wide deployment on the Internet. Technically, it is not a protocol in and of itself; rather, it is the result of simply layering the Hypertext Transfer Protocol (HTTP) on top of the SSL/TLS protocol, thus adding the security capabilities of SSL/TLS to standard HTTP communications." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3865(glossterm) ./doc/glossary/glossary-terms.xml:3867(primary) -msgid "hybrid cloud" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3871(para) -msgid "A hybrid cloud is a composition of two or more clouds (private, community or public) that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect colocation, managed and/or dedicated services with cloud resources." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3883(glossterm) ./doc/glossary/glossary-terms.xml:3885(primary) -msgid "Hyper-V" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3889(para) -msgid "One of the hypervisors supported by OpenStack." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3894(glossterm) ./doc/glossary/glossary-terms.xml:3896(primary) -msgid "hyperlink" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3900(para) -msgid "Any kind of text that contains a link to some other site, commonly found in documents where clicking on a word or words opens up a different website." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3907(glossterm) -msgid "Hypertext Transfer Protocol (HTTP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3910(para) -msgid "The protocol that tells browsers where to go to find information." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3916(glossterm) -msgid "Hypertext Transfer Protocol Secure (HTTPS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3919(para) -msgid "Encrypted HTTP communications using SSL or TLS; most OpenStack API endpoints and many inter-component communications support HTTPS communication." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3926(glossterm) -msgid "hypervisor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3928(primary) ./doc/glossary/glossary-terms.xml:3942(primary) -msgid "hypervisors" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3934(para) -msgid "Software that arbitrates and controls VM access to the actual underlying hardware." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3940(glossterm) -msgid "hypervisor pool" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3944(secondary) -msgid "hypervisor pools" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3948(para) -msgid "A collection of hypervisors grouped together through host aggregates." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3957(title) -msgid "I" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3960(glossterm) -msgid "IaaS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3962(primary) -msgid "IaaS (Infrastructure-as-a-Service)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3968(para) -msgid "Infrastructure-as-a-Service. IaaS is a provisioning model in which an organization outsources physical components of a data center, such as storage, hardware, servers, and networking components. A service provider owns the equipment and is responsible for housing, operating and maintaining it. The client typically pays on a per-use basis. IaaS is a model for providing cloud services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3978(glossterm) ./doc/glossary/glossary-terms.xml:3980(primary) -msgid "Icehouse" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3986(para) -msgid "The code name for the ninth release of OpenStack. The design summit took place in Hong Kong and Ice House is a street in that city." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3993(glossterm) -msgid "ICMP" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3995(primary) -msgid "Internet Control Message Protocol (ICMP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:3999(para) -msgid "Internet Control Message Protocol, used by network devices for control messages. For example, uses ICMP to test connectivity." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4007(glossterm) ./doc/glossary/glossary-terms.xml:4009(primary) -msgid "ID number" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4013(para) -msgid "Unique numeric ID associated with each user in Identity Service, conceptually similar to a Linux or LDAP UID." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4019(glossterm) -msgid "Identity API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4022(para) -msgid "Alternative term for the Identity Service API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4027(glossterm) ./doc/glossary/glossary-terms.xml:4031(secondary) -msgid "Identity back end" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4029(primary) ./doc/glossary/glossary-terms.xml:4041(glossterm) ./doc/glossary/glossary-terms.xml:4043(primary) ./doc/glossary/glossary-terms.xml:4060(primary) ./doc/glossary/glossary-terms.xml:4131(primary) -msgid "Identity Service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4035(para) -msgid "The source used by Identity Service to retrieve user information; an OpenLDAP server, for example." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4049(para) -msgid "The OpenStack core project that provides a central directory of users mapped to the OpenStack services they can access. It also registers endpoints for OpenStack services. It acts as a common authentication system. The project name of the Identity Service is keystone." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4058(glossterm) ./doc/glossary/glossary-terms.xml:4062(secondary) -msgid "Identity Service API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4066(para) -msgid "The API used to access the OpenStack Identity Service provided through keystone." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4072(glossterm) -msgid "IDS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4074(primary) -msgid "IDS (Intrusion Detection System)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4078(para) -msgid "Intrusion Detection System." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4083(glossterm) -msgid "image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4085(primary) -msgid "images" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4091(para) -msgid "A collection of files for a specific operating system (OS) that you use to create or rebuild a server. OpenStack provides pre-built images. You can also create custom images, or snapshots, from servers that you have launched. Custom images can be used for data backups or as \"gold\" images for additional servers." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4100(glossterm) -msgid "Image API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4102(primary) ./doc/glossary/glossary-terms.xml:4116(primary) ./doc/glossary/glossary-terms.xml:4145(primary) ./doc/glossary/glossary-terms.xml:4159(primary) ./doc/glossary/glossary-terms.xml:4173(primary) ./doc/glossary/glossary-terms.xml:4185(glossterm) ./doc/glossary/glossary-terms.xml:4205(primary) ./doc/glossary/glossary-terms.xml:4219(primary) ./doc/glossary/glossary-terms.xml:4233(primary) ./doc/glossary/glossary-terms.xml:6436(primary) -msgid "Image Service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4104(secondary) ./doc/glossary/glossary-terms.xml:4195(glossterm) -msgid "Image Service API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4108(para) -msgid "The Image Service API endpoint for management of VM images." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4114(glossterm) ./doc/glossary/glossary-terms.xml:4118(secondary) -msgid "image cache" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4122(para) -msgid "Used by Image Service to obtain images on the local host rather than re-downloading them from the image server each time one is requested." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4129(glossterm) ./doc/glossary/glossary-terms.xml:4133(secondary) -msgid "image ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4137(para) -msgid "Combination of a URI and UUID used to access Image Service VM images through the image API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4143(glossterm) ./doc/glossary/glossary-terms.xml:4147(secondary) -msgid "image membership" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4151(para) ./doc/glossary/glossary-terms.xml:5082(para) -msgid "A list of tenants that can access a given VM image within Image Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4157(glossterm) ./doc/glossary/glossary-terms.xml:4161(secondary) -msgid "image owner" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4165(para) -msgid "The tenant who owns an Image Service virtual machine image." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4171(glossterm) ./doc/glossary/glossary-terms.xml:4175(secondary) -msgid "image registry" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4179(para) -msgid "A list of VM images that are available through Image Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4188(para) -msgid "An OpenStack core project that provides discovery, registration, and delivery services for disk and server images. The project name of the Image Service is glance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4198(para) -msgid "Alternative name for the glance image API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4203(glossterm) ./doc/glossary/glossary-terms.xml:4207(secondary) -msgid "image status" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4211(para) -msgid "The current status of a VM image in Image Service, not to be confused with the status of a running instance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4217(glossterm) ./doc/glossary/glossary-terms.xml:4221(secondary) -msgid "image store" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4225(para) -msgid "The back-end store used by Image Service to store VM images, options include Object Storage, local file system, S3, or HTTP." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4231(glossterm) ./doc/glossary/glossary-terms.xml:4235(secondary) -msgid "image UUID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4239(para) -msgid "UUID used by Image Service to uniquely identify each VM image." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4245(glossterm) -msgid "incubated project" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4247(primary) -msgid "incubated projects" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4251(para) -msgid "A community project may be elevated to this status and is then promoted to a core project." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4257(glossterm) ./doc/glossary/glossary-terms.xml:4261(secondary) ./doc/glossary/glossary-terms.xml:4264(primary) -msgid "ingress filtering" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4268(para) -msgid "The process of filtering incoming network traffic. Supported by Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4274(glossterm) ./doc/glossary/glossary-terms.xml:4276(primary) -msgid "INI" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4279(para) -msgid "The OpenStack configuration files use an INI format to describe options and their values. It consists of sections and key value pairs." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4288(glossterm) ./doc/glossary/glossary-terms.xml:4290(primary) -msgid "injection" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4294(para) -msgid "The process of putting a file into a virtual machine image before the instance is started." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4300(glossterm) -msgid "instance" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4302(primary) ./doc/glossary/glossary-terms.xml:4316(primary) ./doc/glossary/glossary-terms.xml:4329(primary) ./doc/glossary/glossary-terms.xml:4353(primary) ./doc/glossary/glossary-terms.xml:4368(primary) ./doc/glossary/glossary-terms.xml:4381(primary) -msgid "instances" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4308(para) -msgid "A running VM, or a VM in a known state such as suspended, that can be used like a hardware server." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4314(glossterm) ./doc/glossary/glossary-terms.xml:4318(secondary) -msgid "instance ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4322(para) -msgid "Alternative term for instance UUID." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4327(glossterm) ./doc/glossary/glossary-terms.xml:4331(secondary) -msgid "instance state" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4335(para) -msgid "The current state of a guest VM image." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4340(glossterm) ./doc/glossary/glossary-terms.xml:4342(primary) -msgid "instance tunnels network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4345(para) -msgid "A network segment used for instance traffic tunnels between compute nodes and the network node." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4351(glossterm) ./doc/glossary/glossary-terms.xml:4355(secondary) -msgid "instance type" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4359(para) -msgid "Describes the parameters of the various virtual machine images that are available to users; includes parameters such as CPU, storage, and memory. Alternative term for flavor." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4366(glossterm) ./doc/glossary/glossary-terms.xml:4370(secondary) -msgid "instance type ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4374(para) -msgid "Alternative term for a flavor ID." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4379(glossterm) ./doc/glossary/glossary-terms.xml:4383(secondary) -msgid "instance UUID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4387(para) ./doc/glossary/glossary-terms.xml:7325(para) -msgid "Unique ID assigned to each guest VM instance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4393(glossterm) ./doc/glossary/glossary-terms.xml:4395(primary) -msgid "interface" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4399(para) -msgid "A physical or virtual device that provides connectivity to another device or medium." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4405(glossterm) ./doc/glossary/glossary-terms.xml:4407(primary) -msgid "interface ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4411(para) -msgid "Unique ID for a Networking VIF or vNIC in the form of a UUID." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4417(glossterm) ./doc/glossary/glossary-terms.xml:4419(primary) -msgid "internet protocol (IP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4423(para) -msgid "Principal communications protocol in the internet protocol suite for relaying datagrams across network boundaries." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4429(glossterm) ./doc/glossary/glossary-terms.xml:4431(primary) -msgid "Internet Service Provider (ISP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4435(para) -msgid "Any business that provides Internet access to individuals or businesses." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4441(glossterm) ./doc/glossary/glossary-terms.xml:4443(primary) -msgid "Internet Small Computer System Interface (iSCSI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4447(para) -msgid "Storage protocol that encapsulates SCSI frames for transport over IP networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4453(glossterm) ./doc/glossary/glossary-terms.xml:4455(primary) -msgid "ironic" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4459(para) -msgid "OpenStack project that provisions bare metal, as opposed to virtual, machines." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4465(glossterm) ./doc/glossary/glossary-terms.xml:4467(primary) -msgid "IOPS" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4473(para) -msgid "IOPS (Input/Output Operations Per Second) are a common performance measurement used to benchmark computer storage devices like hard disk drives, solid state drives, and storage area networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4483(glossterm) -msgid "IP address" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4491(para) -msgid "Number that is unique to every computer system on the Internet. Two versions of the Internet Protocol (IP) are in use for addresses: IPv4 and IPv6." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4498(glossterm) ./doc/glossary/glossary-terms.xml:4500(primary) -msgid "IP Address Management (IPAM)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4504(para) -msgid "The process of automating IP address allocation, deallocation, and management. Currently provided by Compute, melange, and Networking." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4511(glossterm) -msgid "IPL" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4513(primary) -msgid "IPL (Initial Program Loader)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4517(para) -msgid "Initial Program Loader." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4522(glossterm) -msgid "IPMI" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4524(primary) -msgid "IPMI (Intelligent Platform Management Interface)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4528(para) -msgid "Intelligent Platform Management Interface. IPMI is a standardized computer system interface used by system administrators for out-of-band management of computer systems and monitoring of their operation. In layman's terms, it is a way to manage a computer using a direct network connection, whether it is turned on or not; connecting to the hardware rather than an operating system or login shell." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4539(glossterm) ./doc/glossary/glossary-terms.xml:4541(primary) -msgid "ip6tables" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4545(para) -msgid "Tool used to set up, maintain, and inspect the tables of IPv6 packet filter rules in the Linux kernel. In OpenStack Compute, ip6tables is used along with arptables, ebtables, and iptables to create firewalls for both nodes and VMs." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4553(glossterm) ./doc/glossary/glossary-terms.xml:4555(primary) -msgid "iptables" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4559(para) -msgid "Used along with arptables and ebtables, iptables create firewalls in Compute. iptables are the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores. Different kernel modules and programs are currently used for different protocols: iptables applies to IPv4, ip6tables to IPv6, arptables to ARP, and ebtables to Ethernet frames. Requires root privilege to manipulate." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4570(glossterm) -msgid "iSCSI" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4572(primary) -msgid "iSCSI protocol" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4576(para) -msgid "The SCSI disk protocol tunneled within Ethernet, supported by Compute, Object Storage, and Image Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4582(glossterm) -msgid "ISO9960" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4584(primary) -msgid "ISO9960 format" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4588(para) ./doc/glossary/glossary-terms.xml:6534(para) ./doc/glossary/glossary-terms.xml:8370(para) ./doc/glossary/glossary-terms.xml:8382(para) ./doc/glossary/glossary-terms.xml:8590(para) -msgid "One of the VM image disk formats supported by Image Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4594(glossterm) ./doc/glossary/glossary-terms.xml:4596(primary) -msgid "itsec" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4600(para) -msgid "A default role in the Compute RBAC system that can quarantine an instance in any project." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4609(title) -msgid "J" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4612(glossterm) ./doc/glossary/glossary-terms.xml:4614(primary) -msgid "Java" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4618(para) -msgid "A programming language that is used to create systems that involve more than one computer by way of a network." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4624(glossterm) ./doc/glossary/glossary-terms.xml:4626(primary) -msgid "JavaScript" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4630(para) -msgid "A scripting language that is used to build web pages." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4635(glossterm) ./doc/glossary/glossary-terms.xml:4637(primary) -msgid "JavaScript Object Notation (JSON)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4641(para) -msgid "One of the supported response formats in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4646(glossterm) ./doc/glossary/glossary-terms.xml:4648(primary) -msgid "Jenkins" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4652(para) -msgid "Tool used to run jobs automatically for OpenStack development." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4658(glossterm) ./doc/glossary/glossary-terms.xml:4660(primary) -msgid "jumbo frame" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4664(para) -msgid "Feature in modern Ethernet networks that supports frames up to approximately 9000 bytes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4670(glossterm) ./doc/glossary/glossary-terms.xml:4672(primary) -msgid "Juno" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4676(para) -msgid "The code name for the tenth release of OpenStack. 
The design summit took place in Atlanta, Georgia, US, and Juno is an unincorporated community in Georgia." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4686(title) -msgid "K" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4689(glossterm) -msgid "kernel-based VM (KVM)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4691(primary) -msgid "kernel-based VM (KVM) hypervisor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4695(para) -msgid "An OpenStack-supported hypervisor. KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V), ARM, IBM Power, and IBM zSeries. It consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4707(glossterm) ./doc/glossary/glossary-terms.xml:4709(primary) -msgid "keystone" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4713(para) -msgid "The project that provides OpenStack Identity services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4718(glossterm) ./doc/glossary/glossary-terms.xml:4720(primary) -msgid "Kickstart" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4724(para) -msgid "A tool to automate system configuration and installation on Red Hat, Fedora, and CentOS-based Linux distributions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4730(glossterm) ./doc/glossary/glossary-terms.xml:4732(primary) -msgid "Kilo" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4736(para) -msgid "The code name for the eleventh release of OpenStack. The design summit took place in Paris, France. Due to delays in the name selection, the release was known only as K. Because k is the unit symbol for kilo and the reference artifact is stored near Paris in the Pavillon de Breteuil in Sèvres, the community chose Kilo as the release name." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4749(title) -msgid "L" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4752(glossterm) ./doc/glossary/glossary-terms.xml:4754(primary) -msgid "large object" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4758(para) -msgid "An object within Object Storage that is larger than 5GB." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4763(glossterm) ./doc/glossary/glossary-terms.xml:4765(primary) -msgid "Launchpad" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4769(para) -msgid "The collaboration site for OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4774(glossterm) ./doc/glossary/glossary-terms.xml:4776(primary) -msgid "Layer-2 network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4780(para) -msgid "Term used in the OSI network architecture for the data link layer. The data link layer is responsible for media access control, flow control, and detecting and possibly correcting errors that may occur in the physical layer." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4790(glossterm) ./doc/glossary/glossary-terms.xml:4792(primary) -msgid "Layer-3 network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4796(para) -msgid "Term used in the OSI network architecture for the network layer. The network layer is responsible for packet forwarding, including routing from one node to another."
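The JavaScript Object Notation (JSON) entry above calls JSON one of the supported response formats in OpenStack; a minimal sketch of handling such a body with Python's standard json module (the field names are invented, not taken from any real API):

    import json

    # A hand-written response body in JSON form.
    body = '{"server": {"id": "abc123", "status": "ACTIVE"}}'

    server = json.loads(body)["server"]
    print(server["status"])  # ACTIVE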
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4805(glossterm) ./doc/glossary/glossary-terms.xml:4807(primary) -msgid "Layer-2 (L2) agent" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4811(para) -msgid "OpenStack Networking agent that provides layer-2 connectivity for virtual networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4817(glossterm) ./doc/glossary/glossary-terms.xml:4819(primary) -msgid "Layer-3 (L3) agent" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4823(para) -msgid "OpenStack Networking agent that provides layer-3 (routing) services for virtual networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4829(glossterm) ./doc/glossary/glossary-terms.xml:4831(primary) -msgid "libvirt" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4835(para) -msgid "Virtualization API library used by OpenStack to interact with many of its supported hypervisors." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4841(glossterm) -msgid "Linux bridge" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4844(para) -msgid "Software that enables multiple VMs to share a single physical NIC within Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4850(glossterm) -msgid "Linux Bridge neutron plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4852(primary) -msgid "Linux Bridge" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4854(secondary) ./doc/glossary/glossary-terms.xml:5894(secondary) -msgid "neutron plug-in for" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4858(para) -msgid "Enables a Linux bridge to understand a Networking port, interface attachment, and other abstractions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4864(glossterm) ./doc/glossary/glossary-terms.xml:4866(primary) -msgid "Linux containers (LXC)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4875(glossterm) ./doc/glossary/glossary-terms.xml:4877(primary) -msgid "live migration" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4881(para) -msgid "The ability within Compute to move running virtual machine instances from one host to another with only a small service interruption during switchover." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4888(glossterm) -msgid "load balancer" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4891(para) -msgid "A load balancer is a logical device that belongs to a cloud account. It is used to distribute workloads between multiple back-end systems or services, based on the criteria defined as part of its configuration." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4899(glossterm) ./doc/glossary/glossary-terms.xml:4901(primary) -msgid "load balancing" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4905(para) -msgid "The process of spreading client requests between two or more nodes to improve performance and availability." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4911(glossterm) ./doc/glossary/glossary-terms.xml:4914(primary) -msgid "Load-Balancer-as-a-Service (LBaaS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4918(para) -msgid "Enables Networking to distribute incoming requests evenly between designated instances." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4924(glossterm) ./doc/glossary/glossary-terms.xml:4926(primary) -msgid "Logical Volume Manager (LVM)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4930(para) -msgid "Provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4940(title) -msgid "M" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4943(glossterm) ./doc/glossary/glossary-terms.xml:4945(primary) -msgid "management API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4951(para) -msgid "Alternative term for an admin API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4956(glossterm) ./doc/glossary/glossary-terms.xml:4958(primary) -msgid "management network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4962(para) -msgid "A network segment used for administration, not accessible to the public Internet." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4968(glossterm) ./doc/glossary/glossary-terms.xml:4970(primary) -msgid "manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4974(para) -msgid "Logical groupings of related code, such as the Block Storage volume manager or network manager." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4980(glossterm) -msgid "manifest" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4982(primary) ./doc/glossary/glossary-terms.xml:5001(primary) -msgid "manifests" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4988(para) -msgid "Used to track segments of a large object within Object Storage." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4994(glossterm) -msgid "manifest object" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:4998(secondary) ./doc/glossary/glossary-terms.xml:5003(secondary) -msgid "manifest objects" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5007(para) -msgid "A special Object Storage object that contains the manifest for a large object." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5013(glossterm) ./doc/glossary/glossary-terms.xml:5015(primary) -msgid "marconi" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5019(para) -msgid "OpenStack project that provides a queue service to applications." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5025(glossterm) ./doc/glossary/glossary-terms.xml:5027(primary) -msgid "maximum transmission unit (MTU)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5031(para) -msgid "Maximum frame or packet size for a particular network medium. Typically 1500 bytes for Ethernet networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5037(glossterm) ./doc/glossary/glossary-terms.xml:5039(primary) -msgid "mechanism driver" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5043(para) -msgid "A driver for the Modular Layer 2 (ML2) neutron plug-in that provides layer-2 connectivity for virtual instances. A single OpenStack installation can use multiple mechanism drivers." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5052(glossterm) ./doc/glossary/glossary-terms.xml:5054(primary) -msgid "melange" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5058(para) -msgid "Project name for OpenStack Network Information Service. To be merged with Networking." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5064(glossterm) ./doc/glossary/glossary-terms.xml:5066(primary) -msgid "membership" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5070(para) -msgid "The association between an Image Service VM image and a tenant. Enables images to be shared with specified tenants." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5076(glossterm) -msgid "membership list" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5078(primary) -msgid "membership lists" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5088(glossterm) ./doc/glossary/glossary-terms.xml:5090(primary) -msgid "memcached" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5094(para) -msgid "A distributed memory object caching system that is used by Object Storage for caching." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5100(glossterm) ./doc/glossary/glossary-terms.xml:5102(primary) -msgid "memory overcommit" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5106(para) -msgid "The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as RAM overcommit." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5114(glossterm) -msgid "message broker" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5116(primary) -msgid "message brokers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5120(para) -msgid "The software package used to provide AMQP messaging capabilities within Compute. Default package is RabbitMQ." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5126(glossterm) ./doc/glossary/glossary-terms.xml:5128(primary) -msgid "message bus" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5132(para) -msgid "The main virtual communication line used by all AMQP messages for inter-cloud communications within Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5138(glossterm) ./doc/glossary/glossary-terms.xml:5140(primary) -msgid "message queue" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5144(para) -msgid "Passes requests from clients to the appropriate workers and returns the output to the client after the job completes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5150(glossterm) ./doc/glossary/glossary-terms.xml:5152(primary) -msgid "Metadata agent" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5156(para) -msgid "OpenStack Networking agent that provides metadata services for instances." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5162(glossterm) ./doc/glossary/glossary-terms.xml:5164(primary) -msgid "Meta-Data Server (MDS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5168(para) -msgid "Stores CephFS metadata." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5173(glossterm) ./doc/glossary/glossary-terms.xml:5175(primary) -msgid "migration" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5179(para) -msgid "The process of moving a VM instance from one host to another." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5185(glossterm) ./doc/glossary/glossary-terms.xml:5187(primary) -msgid "multi-host" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5191(para) -msgid "High-availability mode for legacy (nova) networking. Each compute node handles NAT and DHCP and acts as a gateway for all of the VMs on it. A networking failure on one compute node doesn't affect VMs on other compute nodes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5199(glossterm) -msgid "multinic" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5202(para) -msgid "Facility in Compute that allows each virtual machine instance to have more than one VIF connected to it." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5208(glossterm) ./doc/glossary/glossary-terms.xml:5211(primary) -msgid "Modular Layer 2 (ML2) neutron plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5215(para) -msgid "Can concurrently use multiple layer-2 networking technologies, such as 802.1Q and VXLAN, in Networking." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5221(glossterm) ./doc/glossary/glossary-terms.xml:5223(primary) -msgid "Monitor (LBaaS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5227(para) -msgid "LBaaS feature that provides availability monitoring using the ping command, TCP, and HTTP/HTTPS GET." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5233(glossterm) ./doc/glossary/glossary-terms.xml:5235(primary) -msgid "Monitor (Mon)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5239(para) -msgid "A Ceph component that communicates with external clients, checks data state and consistency, and performs quorum functions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5245(glossterm) ./doc/glossary/glossary-terms.xml:5247(primary) -msgid "multi-factor authentication" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5251(para) -msgid "Authentication method that uses two or more credentials, such as a password and a private key. Currently not supported in Identity Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5258(glossterm) ./doc/glossary/glossary-terms.xml:5260(primary) -msgid "MultiNic" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5264(para) -msgid "Facility in Compute that enables a virtual machine instance to have more than one VIF connected to it." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5273(title) -msgid "N" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5276(glossterm) ./doc/glossary/glossary-terms.xml:5278(primary) -msgid "Nebula" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5282(para) -msgid "Released as open source by NASA in 2010 and is the basis for Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5288(glossterm) ./doc/glossary/glossary-terms.xml:5290(primary) -msgid "netadmin" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5294(para) -msgid "One of the default roles in the Compute RBAC system. Enables the user to allocate publicly accessible IP addresses to instances and change firewall rules." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5301(glossterm) ./doc/glossary/glossary-terms.xml:5303(primary) -msgid "NetApp volume driver" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5307(para) -msgid "Enables Compute to communicate with NetApp storage devices through the NetApp OnCommand Provisioning Manager." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5314(glossterm) -msgid "network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5316(primary) ./doc/glossary/glossary-terms.xml:5332(primary) ./doc/glossary/glossary-terms.xml:5346(primary) ./doc/glossary/glossary-terms.xml:5361(primary) ./doc/glossary/glossary-terms.xml:5375(primary) ./doc/glossary/glossary-terms.xml:5389(primary) ./doc/glossary/glossary-terms.xml:5403(primary) ./doc/glossary/glossary-terms.xml:5416(primary) ./doc/glossary/glossary-terms.xml:5430(primary) ./doc/glossary/glossary-terms.xml:5444(primary) ./doc/glossary/glossary-terms.xml:5458(primary) ./doc/glossary/glossary-terms.xml:6281(primary) ./doc/glossary/glossary-terms.xml:6481(primary) ./doc/glossary/glossary-terms.xml:8417(primary) ./doc/glossary/glossary-terms.xml:8565(primary) -msgid "networks" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5322(para) -msgid "A virtual network that provides connectivity between entities. For example, a collection of virtual ports that share network connectivity. In Networking terminology, a network is always a layer-2 network." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5330(glossterm) ./doc/glossary/glossary-terms.xml:5334(secondary) -msgid "Network Address Translation (NAT)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5338(para) -msgid "The process of modifying IP address information while in transit. Supported by Compute and Networking." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5344(glossterm) -msgid "network controller" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5348(secondary) -msgid "network controllers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5352(para) -msgid "A Compute daemon that orchestrates the network configuration of nodes, including IP addresses, VLANs, and bridging. Also manages routing for both public and private networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5359(glossterm) ./doc/glossary/glossary-terms.xml:5363(secondary) -msgid "Network File System (NFS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5367(para) -msgid "A method for making file systems available over the network. Supported by OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5373(glossterm) -msgid "network ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5377(secondary) -msgid "network IDs" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5381(para) -msgid "Unique ID assigned to each network segment within Networking. Same as network UUID." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5387(glossterm) -msgid "network manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5391(secondary) -msgid "network managers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5395(para) -msgid "The Compute component that manages various network components, such as firewall rules, IP address allocation, and so on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5401(glossterm) -msgid "network node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5405(secondary) -msgid "network nodes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5409(para) -msgid "Any compute node that runs the network worker daemon." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5414(glossterm) -msgid "network segment" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5418(secondary) -msgid "network segments" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5422(para) -msgid "Represents a virtual, isolated OSI layer-2 subnet in Networking." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5428(glossterm) ./doc/glossary/glossary-terms.xml:5432(secondary) -msgid "Network Time Protocol (NTP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5436(para) -msgid "A method of keeping a clock for a host or node correct through communications with a trusted, accurate time source." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5442(glossterm) ./doc/glossary/glossary-terms.xml:5446(secondary) -msgid "network UUID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5450(para) -msgid "Unique ID for a Networking network segment." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5456(glossterm) -msgid "network worker" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5460(secondary) -msgid "network workers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5464(para) -msgid "The nova-network worker daemon; provides services such as giving an IP address to a booting nova instance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5471(glossterm) -msgid "Networking" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5474(para) -msgid "A core OpenStack project that provides a network connectivity abstraction layer to OpenStack Compute. The project name of Networking is neutron." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5481(glossterm) ./doc/glossary/glossary-terms.xml:5483(primary) ./doc/glossary/glossary-terms.xml:5506(secondary) -msgid "Networking API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5487(para) -msgid "API used to access OpenStack Networking. Provides an extensible architecture to enable custom plug-in creation." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5493(glossterm) ./doc/glossary/glossary-terms.xml:5504(primary) ./doc/glossary/glossary-terms.xml:5517(primary) ./doc/glossary/glossary-terms.xml:5531(primary) -msgid "neutron" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5496(para) -msgid "A core OpenStack project that provides a network connectivity abstraction layer to OpenStack Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5502(glossterm) -msgid "neutron API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5510(para) -msgid "An alternative name for Networking API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5515(glossterm) ./doc/glossary/glossary-terms.xml:5519(secondary) -msgid "neutron manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5523(para) -msgid "Enables Compute and Networking integration, which enables Networking to perform network management for guest VMs." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5529(glossterm) ./doc/glossary/glossary-terms.xml:5533(secondary) -msgid "neutron plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5537(para) -msgid "Interface within Networking that enables organizations to create custom plug-ins for advanced features, such as QoS, ACLs, or IDS." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5544(glossterm) ./doc/glossary/glossary-terms.xml:5546(primary) -msgid "Nexenta volume driver" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5550(para) -msgid "Provides support for NexentaStor devices in Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5555(glossterm) ./doc/glossary/glossary-terms.xml:5557(primary) -msgid "No ACK" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5561(para) -msgid "Disables server-side message acknowledgment in the Compute RabbitMQ. Increases performance but decreases reliability." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5567(glossterm) -msgid "node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5569(primary) ./doc/glossary/glossary-terms.xml:6385(primary) ./doc/glossary/glossary-terms.xml:7718(primary) ./doc/glossary/glossary-terms.xml:7918(primary) -msgid "nodes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5575(para) -msgid "A VM instance that runs on a host." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5580(glossterm) -msgid "non-durable exchange" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5582(primary) ./doc/glossary/glossary-terms.xml:5599(primary) ./doc/glossary/glossary-terms.xml:6107(primary) ./doc/glossary/glossary-terms.xml:8181(primary) -msgid "messages" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5584(secondary) ./doc/glossary/glossary-terms.xml:5587(primary) ./doc/glossary/glossary-terms.xml:8161(see) -msgid "non-durable exchanges" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5591(para) -msgid "Message exchange that is cleared when the service restarts. Its data is not written to persistent storage." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5597(glossterm) ./doc/glossary/glossary-terms.xml:5604(primary) -msgid "non-durable queue" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5601(secondary) -msgid "non-durable queues" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5608(para) -msgid "Message queue that is cleared when the service restarts. Its data is not written to persistent storage." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5614(glossterm) ./doc/glossary/glossary-terms.xml:5616(primary) -msgid "non-persistent volume" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5622(para) -msgid "Alternative term for an ephemeral volume." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5627(glossterm) ./doc/glossary/glossary-terms.xml:5629(primary) -msgid "north-south traffic" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5633(para) -msgid "Network traffic between a user or client (north) and a server (south), or traffic into the cloud (south) and out of the cloud (north). See also east-west traffic." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5641(glossterm) ./doc/glossary/glossary-terms.xml:5651(primary) ./doc/glossary/glossary-terms.xml:5664(primary) -msgid "nova" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5644(para) -msgid "OpenStack project that provides compute services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5649(glossterm) -msgid "Nova API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5657(para) -msgid "Alternative term for the Compute API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5662(glossterm) ./doc/glossary/glossary-terms.xml:5666(secondary) -msgid "nova-network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5670(para) -msgid "A Compute component that manages IP address allocation, firewalls, and other network-related tasks. This is the legacy networking option and an alternative to Networking." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5680(title) -msgid "O" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5683(glossterm) -msgid "object" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5691(para) -msgid "A BLOB of data held by Object Storage; can be in any format." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5697(glossterm) -msgid "object auditor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5701(secondary) -msgid "object auditors" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5705(para) -msgid "Opens all objects for an object server and verifies the MD5 hash, size, and metadata for each object." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5711(glossterm) ./doc/glossary/glossary-terms.xml:5715(secondary) -msgid "object expiration" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5719(para) -msgid "A configurable option within Object Storage to automatically delete objects after a specified amount of time has passed or a certain date is reached." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5726(glossterm) ./doc/glossary/glossary-terms.xml:5730(secondary) -msgid "object hash" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5734(para) -msgid "Uniquely ID for an Object Storage object." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5739(glossterm) ./doc/glossary/glossary-terms.xml:5743(secondary) -msgid "object path hash" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5747(para) -msgid "Used by Object Storage to determine the location of an object in the ring. Maps objects to partitions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5753(glossterm) -msgid "object replicator" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5757(secondary) -msgid "object replicators" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5761(para) -msgid "An Object Storage component that copies an object to remote partitions for fault tolerance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5767(glossterm) -msgid "object server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5771(secondary) -msgid "object servers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5775(para) -msgid "An Object Storage component that is responsible for managing objects." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5781(glossterm) ./doc/glossary/glossary-terms.xml:5798(primary) ./doc/glossary/glossary-terms.xml:5811(primary) -msgid "Object Storage" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5784(para) -msgid "The OpenStack core project that provides eventually consistent and redundant storage and retrieval of fixed digital content. The project name of OpenStack Object Storage is swift." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5791(glossterm) ./doc/glossary/glossary-terms.xml:5795(secondary) ./doc/glossary/glossary-terms.xml:5800(secondary) -msgid "Object Storage API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5793(primary) ./doc/glossary/glossary-terms.xml:7862(glossterm) ./doc/glossary/glossary-terms.xml:7885(primary) ./doc/glossary/glossary-terms.xml:7899(primary) ./doc/glossary/glossary-terms.xml:7923(primary) -msgid "swift" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5804(para) -msgid "API used to access OpenStack Object Storage." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5809(glossterm) ./doc/glossary/glossary-terms.xml:5813(secondary) -msgid "Object Storage Device (OSD)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5817(para) -msgid "The Ceph storage daemon." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5822(glossterm) ./doc/glossary/glossary-terms.xml:5826(secondary) -msgid "object versioning" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5830(para) -msgid "Allows a user to set a flag on an Object Storage container so that all objects within the container are versioned." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5836(glossterm) ./doc/glossary/glossary-terms.xml:5838(primary) -msgid "Oldie" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5842(para) -msgid "Term for an Object Storage process that runs for a long time. Can indicate a hung process." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5848(glossterm) ./doc/glossary/glossary-terms.xml:5851(primary) -msgid "Open Cloud Computing Interface (OCCI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5855(para) -msgid "A standardized interface for managing compute, data, and network resources, currently unsupported in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5861(glossterm) ./doc/glossary/glossary-terms.xml:5863(primary) -msgid "Open Virtualization Format (OVF)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5867(para) -msgid "Standard for packaging VM images. Supported in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5872(glossterm) ./doc/glossary/glossary-terms.xml:5874(primary) ./doc/glossary/glossary-terms.xml:5892(primary) -msgid "Open vSwitch" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5878(para) -msgid "Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (for example NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag)." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5890(glossterm) -msgid "Open vSwitch neutron plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5898(para) -msgid "Provides support for Open vSwitch in Networking." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5903(glossterm) ./doc/glossary/glossary-terms.xml:5905(primary) -msgid "OpenLDAP" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5909(para) -msgid "An open source LDAP server. Supported by both Compute and Identity Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5915(glossterm) ./doc/glossary/glossary-terms.xml:5917(primary) ./doc/glossary/glossary-terms.xml:5935(primary) -msgid "OpenStack" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5923(para) -msgid "OpenStack is a cloud operating system that controls large pools of compute, storage, and networking resources throughout a data center, all managed through a dashboard that gives administrators control while empowering their users to provision resources through a web interface. OpenStack is an open source project licensed under the Apache License 2.0." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5933(glossterm) -msgid "OpenStack code name" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5936(secondary) -msgid "code name" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5940(para) -msgid "Each OpenStack release has a code name. Code names ascend in alphabetical order: Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, Juno, and Kilo. Code names are cities or counties near where the corresponding OpenStack design summit took place. 
An exception, called the Waldon exception, is granted to elements of the state flag that sound especially cool. Code names are chosen by popular vote." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5954(glossterm) ./doc/glossary/glossary-terms.xml:5956(primary) -msgid "openSUSE" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5965(glossterm) ./doc/glossary/glossary-terms.xml:5967(primary) -msgid "operator" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5971(para) -msgid "The person responsible for planning and maintaining an OpenStack installation." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5977(glossterm) ./doc/glossary/glossary-terms.xml:5979(primary) -msgid "Orchestration" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5983(para) -msgid "An integrated project that orchestrates multiple cloud applications for OpenStack. The project name of Orchestration is heat." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5990(glossterm) -msgid "orphan" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5992(primary) -msgid "orphans" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:5996(para) -msgid "In the context of Object Storage, this is a process that is not terminated after an upgrade, restart, or reload of the service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6005(title) -msgid "P" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6008(glossterm) -msgid "parent cell" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6012(secondary) ./doc/glossary/glossary-terms.xml:6015(primary) -msgid "parent cells" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6019(para) -msgid "If a requested resource, such as CPU time, disk storage, or memory, is not available in the parent cell, the request is forwarded to associated child cells." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6026(glossterm) -msgid "partition" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6028(primary) ./doc/glossary/glossary-terms.xml:6043(primary) ./doc/glossary/glossary-terms.xml:6057(primary) -msgid "partitions" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6034(para) -msgid "A unit of storage within Object Storage used to store objects. It exists on top of devices and is replicated for fault tolerance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6041(glossterm) ./doc/glossary/glossary-terms.xml:6045(secondary) -msgid "partition index" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6049(para) -msgid "Contains the locations of all Object Storage partitions within the ring." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6055(glossterm) -msgid "partition shift value" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6059(secondary) -msgid "partition index value" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6063(para) -msgid "Used by Object Storage to determine which partition data should reside on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6069(glossterm) ./doc/glossary/glossary-terms.xml:6071(primary) -msgid "path MTU discovery (PMTUD)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6075(para) -msgid "Mechanism in IP networks to detect end-to-end MTU and adjust packet size accordingly." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6081(glossterm) ./doc/glossary/glossary-terms.xml:6083(primary) -msgid "pause" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6087(para) -msgid "A VM state where no changes occur (no changes in memory, network communications stop, etc.); the VM is frozen but not shut down."
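The object path hash entry earlier in this file and the partition shift value entry above fit together roughly as follows; this is a simplified sketch, not Object Storage's exact code, and the partition power is an assumed value:

    import hashlib
    import struct

    PART_POWER = 10               # ring built with 2**10 partitions (assumed)
    PART_SHIFT = 32 - PART_POWER  # the partition shift value

    def partition_for(account, container, obj):
        # Hash the object path, then shift the top 32 bits of the digest
        # down so the result indexes one of the 2**PART_POWER partitions.
        path = f"/{account}/{container}/{obj}".encode()
        top32 = struct.unpack(">I", hashlib.md5(path).digest()[:4])[0]
        return top32 >> PART_SHIFT

    print(partition_for("AUTH_demo", "photos", "cat.jpg"))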
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6093(glossterm) ./doc/glossary/glossary-terms.xml:6095(primary) -msgid "PCI passthrough" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6099(para) -msgid "Gives guest VMs exclusive access to a PCI device. Currently supported in OpenStack Havana and later releases." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6105(glossterm) -msgid "persistent message" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6109(secondary) ./doc/glossary/glossary-terms.xml:6112(primary) -msgid "persistent messages" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6116(para) -msgid "A message that is stored both in memory and on disk. The message is not lost after a failure or restart." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6122(glossterm) ./doc/glossary/glossary-terms.xml:6124(primary) -msgid "persistent volume" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6128(para) -msgid "Changes to these types of disk volumes are saved." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6133(glossterm) ./doc/glossary/glossary-terms.xml:6135(primary) -msgid "personality file" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6139(para) -msgid "A file used to customize a Compute instance. It can be used to inject SSH keys or a specific network configuration." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6145(glossterm) ./doc/glossary/glossary-terms.xml:6147(primary) -msgid "Platform-as-a-Service (PaaS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6151(para) -msgid "Provides to the consumer the ability to deploy applications through a programming language or tools supported by the cloud platform provider. An example of Platform-as-a-Service is an Eclipse/Java programming platform provided with no downloads required." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6160(glossterm) -msgid "plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6162(primary) -msgid "plug-ins, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6166(para) -msgid "Software component providing the actual implementation for Networking APIs, or for Compute APIs, depending on the context." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6172(glossterm) ./doc/glossary/glossary-terms.xml:6174(primary) -msgid "policy service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6178(para) -msgid "Component of Identity Service that provides a rule-management interface and a rule-based authorization engine." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6184(glossterm) ./doc/glossary/glossary-terms.xml:6186(primary) -msgid "pool" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6190(para) -msgid "A logical set of devices, such as web servers, that you group together to receive and process traffic. The load balancing function chooses which member of the pool handles the new requests or connections received on the VIP address. Each VIP has one pool." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6199(glossterm) ./doc/glossary/glossary-terms.xml:6201(primary) -msgid "pool member" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6205(para) -msgid "An application that runs on the back-end server in a load-balancing system." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6211(glossterm) -msgid "port" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6213(primary) ./doc/glossary/glossary-terms.xml:6227(primary) ./doc/glossary/glossary-terms.xml:8473(primary) -msgid "ports" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6219(para) -msgid "A virtual network port within Networking; VIFs / vNICs are connected to a port." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6225(glossterm) ./doc/glossary/glossary-terms.xml:6229(secondary) -msgid "port UUID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6233(para) -msgid "Unique ID for a Networking port." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6238(glossterm) -msgid "preseed" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6240(primary) -msgid "preseed, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6244(para) -msgid "A tool to automate system configuration and installation on Debian-based Linux distributions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6250(glossterm) ./doc/glossary/glossary-terms.xml:6252(primary) -msgid "private image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6256(para) -msgid "An Image Service VM image that is only available to specified tenants." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6262(glossterm) ./doc/glossary/glossary-terms.xml:6269(primary) -msgid "private IP address" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6266(secondary) -msgid "private" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6273(para) -msgid "An IP address used for management and administration, not available to the public Internet." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6279(glossterm) -msgid "private network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6283(secondary) ./doc/glossary/glossary-terms.xml:6286(primary) -msgid "private networks" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6290(para) -msgid "The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A private network interface can be a flat or VLAN network interface. A flat network interface is controlled by the flat_interface with flat managers. A VLAN network interface is controlled by the vlan_interface option with VLAN managers." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6302(glossterm) -msgid "project" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6304(primary) ./doc/glossary/glossary-terms.xml:6318(primary) ./doc/glossary/glossary-terms.xml:6332(primary) -msgid "projects" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6310(para) -msgid "A logical grouping of users within Compute; defines quotas and access to VM images." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6316(glossterm) ./doc/glossary/glossary-terms.xml:6320(secondary) -msgid "project ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6324(para) -msgid "User-defined alphanumeric string in Compute; the name of a project." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6330(glossterm) ./doc/glossary/glossary-terms.xml:6334(secondary) -msgid "project VPN" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6338(para) -msgid "Alternative term for a cloudpipe." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6343(glossterm) ./doc/glossary/glossary-terms.xml:6345(primary) -msgid "promiscuous mode" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6349(para) -msgid "Causes the network interface to pass all traffic it receives to the host rather than passing only the frames addressed to it." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6356(glossterm) ./doc/glossary/glossary-terms.xml:6358(primary) -msgid "protected property" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6362(para) -msgid "Generally, extra properties on an Image Service image to which only cloud administrators have access. Limits which user roles can perform CRUD operations on that property. The cloud administrator can configure any image property as protected." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6371(glossterm) ./doc/glossary/glossary-terms.xml:6373(primary) -msgid "provider" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6377(para) -msgid "An administrator who has access to all hosts and instances." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6383(glossterm) -msgid "proxy node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6387(secondary) ./doc/glossary/glossary-terms.xml:6390(primary) -msgid "proxy nodes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6394(para) -msgid "A node that provides the Object Storage proxy service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6399(glossterm) -msgid "proxy server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6403(secondary) ./doc/glossary/glossary-terms.xml:6406(primary) -msgid "proxy servers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6410(para) -msgid "Users of Object Storage interact with the service through the proxy server, which in turn looks up the location of the requested data within the ring and returns the results to the user." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6417(glossterm) ./doc/glossary/glossary-terms.xml:6424(primary) -msgid "public API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6421(secondary) -msgid "public APIs" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6428(para) -msgid "An API endpoint used for both service-to-service communication and end-user interactions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6434(glossterm) ./doc/glossary/glossary-terms.xml:6441(primary) -msgid "public image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6438(secondary) -msgid "public images" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6445(para) -msgid "An Image Service VM image that is available to all tenants." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6451(glossterm) ./doc/glossary/glossary-terms.xml:6458(primary) -msgid "public IP address" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6455(secondary) ./doc/glossary/glossary-terms.xml:6483(secondary) -msgid "public" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6462(para) -msgid "An IP address that is accessible to end-users." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6467(glossterm) ./doc/glossary/glossary-terms.xml:6469(primary) -msgid "public key authentication" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6473(para) -msgid "Authentication method that uses keys rather than passwords." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6479(glossterm) ./doc/glossary/glossary-terms.xml:6486(primary) -msgid "public network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6490(para) -msgid "The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. The public network interface is controlled by the public_interface option." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6499(glossterm) ./doc/glossary/glossary-terms.xml:6501(primary) -msgid "Puppet" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6505(para) -msgid "An operating system configuration-management tool supported by OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6511(glossterm) ./doc/glossary/glossary-terms.xml:6513(primary) -msgid "Python" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6517(para) -msgid "Programming language used extensively in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6525(title) -msgid "Q" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6528(glossterm) ./doc/glossary/glossary-terms.xml:6530(primary) -msgid "QEMU Copy On Write 2 (QCOW2)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6540(glossterm) ./doc/glossary/glossary-terms.xml:6542(primary) -msgid "Qpid" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6546(para) -msgid "Message queue software supported by OpenStack; an alternative to RabbitMQ." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6552(glossterm) ./doc/glossary/glossary-terms.xml:6554(primary) -msgid "quarantine" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6558(para) -msgid "If Object Storage finds objects, containers, or accounts that are corrupt, they are placed in this state, are not replicated, cannot be read by clients, and a correct copy is re-replicated." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6565(glossterm) ./doc/glossary/glossary-terms.xml:6567(primary) -msgid "Quick EMUlator (QEMU)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6571(para) -msgid "QEMU is a generic and open source machine emulator and virtualizer." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6574(para) -msgid "One of the hypervisors supported by OpenStack, generally used for development purposes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6580(glossterm) -msgid "quota" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6582(primary) -msgid "quotas" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6586(para) -msgid "In Compute and Block Storage, the ability to set resource limits on a per-project basis." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6595(title) -msgid "R" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6598(glossterm) ./doc/glossary/glossary-terms.xml:6600(primary) -msgid "RabbitMQ" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6604(para) -msgid "The default message queue software used by OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6609(glossterm) ./doc/glossary/glossary-terms.xml:6611(primary) -msgid "Rackspace Cloud Files" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6615(para) -msgid "Released as open source by Rackspace in 2010; the basis for Object Storage." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6621(glossterm) ./doc/glossary/glossary-terms.xml:6623(primary) -msgid "RADOS Block Device (RBD)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6627(para) -msgid "Ceph component that enables a Linux block device to be striped over multiple distributed data stores." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6633(glossterm) ./doc/glossary/glossary-terms.xml:6635(primary) -msgid "radvd" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6639(para) -msgid "The router advertisement daemon, used by the Compute VLAN manager and FlatDHCP manager to provide routing services for VM instances." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6646(glossterm) ./doc/glossary/glossary-terms.xml:6648(primary) -msgid "RAM filter" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6652(para) -msgid "The Compute setting that enables or disables RAM overcommitment." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6658(glossterm) ./doc/glossary/glossary-terms.xml:6660(primary) -msgid "RAM overcommit" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6664(para) -msgid "The ability to start new VM instances based on the actual memory usage of a host, as opposed to basing the decision on the amount of RAM each running instance thinks it has available. Also known as memory overcommit." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6672(glossterm) -msgid "rate limit" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6674(primary) -msgid "rate limits" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6678(para) -msgid "Configurable option within Object Storage to limit database writes on a per-account and/or per-container basis." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6684(glossterm) -msgid "raw" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6686(primary) -msgid "raw format" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6690(para) -msgid "One of the VM image disk formats supported by Image Service; an unstructured disk image." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6696(glossterm) -msgid "rebalance" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6698(primary) -msgid "rebalancing" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6702(para) -msgid "The process of distributing Object Storage partitions across all drives in the ring; used during initial ring creation and after ring reconfiguration." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6709(glossterm) ./doc/glossary/glossary-terms.xml:6711(primary) ./doc/glossary/glossary-terms.xml:7568(primary) -msgid "reboot" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6713(secondary) ./doc/glossary/glossary-terms.xml:7570(secondary) -msgid "hard vs. soft" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6717(para) -msgid "Either a soft or hard reboot of a server. With a soft reboot, the operating system is signaled to restart, which enables a graceful shutdown of all processes. A hard reboot is the equivalent of power cycling the server. The virtualization platform should ensure that the reboot action has completed successfully, even in cases in which the underlying domain/VM is paused or halted/stopped." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6727(glossterm) -msgid "rebuild" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6729(primary) -msgid "rebuilding" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6733(para) -msgid "Removes all data on the server and replaces it with the specified image. 
Server ID and IP addresses remain the same." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6739(glossterm) ./doc/glossary/glossary-terms.xml:6741(primary) -msgid "Recon" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6745(para) -msgid "An Object Storage component that collects metrics." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6750(glossterm) -msgid "record" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6752(primary) ./doc/glossary/glossary-terms.xml:6771(primary) -msgid "records" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6758(para) -msgid "Belongs to a particular domain and is used to specify information about the domain. There are several types of DNS records. Each record type contains particular information used to describe the purpose of that record. Examples include mail exchange (MX) records, which specify the mail server for a particular domain; and name server (NS) records, which specify the authoritative name servers for a domain." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6769(glossterm) -msgid "record ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6773(secondary) -msgid "record IDs" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6777(para) -msgid "A number within a database that is incremented each time a change is made. Used by Object Storage when replicating." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6783(glossterm) ./doc/glossary/glossary-terms.xml:6785(primary) -msgid "Red Hat Enterprise Linux (RHEL)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6794(glossterm) ./doc/glossary/glossary-terms.xml:6796(primary) -msgid "reference architecture" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6800(para) -msgid "A recommended architecture for an OpenStack cloud." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6805(glossterm) ./doc/glossary/glossary-terms.xml:6807(primary) -msgid "region" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6811(para) -msgid "A discrete OpenStack environment with dedicated API endpoints that typically shares only the Identity Service (keystone) with other regions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6818(glossterm) ./doc/glossary/glossary-terms.xml:6820(primary) -msgid "registry" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6822(see) -msgid "under Image Service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6826(para) -msgid "Alternative term for the Image Service registry." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6831(glossterm) -msgid "registry server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6835(secondary) ./doc/glossary/glossary-terms.xml:6838(primary) -msgid "registry servers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6842(para) -msgid "An Image Service that provides VM image metadata information to clients." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6848(glossterm) ./doc/glossary/glossary-terms.xml:6851(primary) -msgid "Reliable, Autonomic Distributed Object Store (RADOS)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6856(para) -msgid "A collection of components that provides object storage within Ceph. Similar to OpenStack Object Storage." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6862(glossterm) ./doc/glossary/glossary-terms.xml:6864(primary) -msgid "Remote Procedure Call (RPC)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6868(para) -msgid "The method used by the Compute RabbitMQ for intra-service communications." 
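
[Editor's illustration for the "rebuild" entry above (and the "resize" entry defined shortly below): a sketch with the historical python-novaclient API. The server, image, and flavor names are invented placeholders.]

    from novaclient import client

    nova = client.Client("2", "demo", "secret", "demo",
                         "http://controller:5000/v2.0/")

    server = nova.servers.find(name="web01")         # placeholder server
    image = nova.images.find(name="ubuntu-14.04")    # placeholder image

    # Rebuild keeps the server ID and IP addresses but replaces the
    # disk contents with the named image.
    nova.servers.rebuild(server, image)

    # Resize moves the server to a new flavor; the original is kept
    # until the change is explicitly confirmed (or reverted).
    nova.servers.resize(server, nova.flavors.find(name="m1.large"))
    nova.servers.confirm_resize(server)
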
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6874(glossterm) -msgid "replica" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6876(primary) ./doc/glossary/glossary-terms.xml:6891(primary) ./doc/glossary/glossary-terms.xml:6903(glossterm) ./doc/glossary/glossary-terms.xml:6914(primary) -msgid "replication" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6882(para) -msgid "Provides data redundancy and fault tolerance by creating copies of Object Storage objects, accounts, and containers so that they are not lost when the underlying storage fails." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6889(glossterm) ./doc/glossary/glossary-terms.xml:6893(secondary) -msgid "replica count" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6897(para) -msgid "The number of replicas of the data in an Object Storage ring." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6906(para) -msgid "The process of copying data to a separate physical device for fault tolerance and performance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6912(glossterm) -msgid "replicator" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6916(secondary) -msgid "replicators" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6920(para) -msgid "The Object Storage back-end process that creates and manages object replicas." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6926(glossterm) -msgid "request ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6928(primary) -msgid "request IDs" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6932(para) -msgid "Unique ID assigned to each request sent to Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6937(glossterm) -msgid "rescue image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6939(primary) -msgid "rescue images" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6943(para) -msgid "A special type of VM image that is booted when an instance is placed into rescue mode. Allows an administrator to mount the file systems for an instance to correct the problem." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6950(glossterm) -msgid "resize" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6952(primary) -msgid "resizing" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6956(para) -msgid "Converts an existing server to a different flavor, which scales the server up or down. The original server is saved to enable rollback if a problem occurs. All resizes must be tested and explicitly confirmed, at which time the original server is removed." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6964(glossterm) -msgid "RESTful" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6966(primary) -msgid "RESTful web services" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6970(para) -msgid "A kind of web service API that uses REST, or Representational State Transfer. REST is the style of architecture for hypermedia systems that is used for the World Wide Web." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6977(glossterm) -msgid "ring" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6979(primary) ./doc/glossary/glossary-terms.xml:6994(primary) -msgid "rings" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6985(para) -msgid "An entity that maps Object Storage data to partitions. A separate ring exists for each service, such as account, object, and container." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6992(glossterm) -msgid "ring builder" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:6996(secondary) -msgid "ring builders" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7000(para) -msgid "Builds and manages rings within Object Storage, assigns partitions to devices, and pushes the configuration to other storage nodes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7007(glossterm) ./doc/glossary/glossary-terms.xml:7009(primary) -msgid "Role Based Access Control (RBAC)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7013(para) -msgid "Provides a predefined list of actions that the user can perform, such as start or stop VMs, reset passwords, and so on. Supported in both Identity Service and Compute and can be configured using the horizon dashboard." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7021(glossterm) -msgid "role" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7023(primary) ./doc/glossary/glossary-terms.xml:7038(primary) -msgid "roles" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7029(para) -msgid "A personality that a user assumes to perform a specific set of operations. A role includes a set of rights and privileges. A user assuming that role inherits those rights and privileges." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7036(glossterm) ./doc/glossary/glossary-terms.xml:7040(secondary) -msgid "role ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7044(para) -msgid "Alphanumeric ID assigned to each Identity Service role." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7049(glossterm) ./doc/glossary/glossary-terms.xml:7051(primary) -msgid "rootwrap" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7055(para) -msgid "A feature of Compute that allows the unprivileged \"nova\" user to run a specified list of commands as the Linux root user." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7061(glossterm) ./doc/glossary/glossary-terms.xml:7068(primary) -msgid "round-robin scheduler" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7063(primary) ./doc/glossary/glossary-terms.xml:7611(primary) -msgid "schedulers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7065(secondary) -msgid "round-robin" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7072(para) -msgid "Type of Compute scheduler that evenly distributes instances among available hosts." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7078(glossterm) ./doc/glossary/glossary-terms.xml:7080(primary) -msgid "router" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7084(para) -msgid "A physical or virtual network device that passes network traffic between different networks." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7090(glossterm) -msgid "routing key" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7092(primary) -msgid "routing keys" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7096(para) -msgid "The Compute direct exchanges, fanout exchanges, and topic exchanges use this key to determine how to process a message; processing varies depending on exchange type." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7103(glossterm) -msgid "RPC driver" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7105(primary) -msgid "drivers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7107(secondary) ./doc/glossary/glossary-terms.xml:7110(primary) -msgid "RPC drivers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7114(para) -msgid "Modular system that allows the underlying message queue software of Compute to be changed. For example, from RabbitMQ to ZeroMQ or Qpid." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7121(glossterm) ./doc/glossary/glossary-terms.xml:7123(primary) -msgid "rsync" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7127(para) -msgid "Used by Object Storage to push object replicas." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7132(glossterm) -msgid "RXTX cap" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7134(primary) -msgid "RXTX cap/quota" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7138(para) -msgid "Absolute limit on the amount of network traffic a Compute VM instance can send and receive." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7144(glossterm) -msgid "RXTX quota" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7147(para) -msgid "Soft limit on the amount of network traffic a Compute VM instance can send and receive." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7153(glossterm) ./doc/glossary/glossary-terms.xml:7155(primary) -msgid "Ryu neutron plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7159(para) -msgid "Enables the Ryu network operating system to function as a Networking OpenFlow controller." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7168(title) -msgid "S" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7171(glossterm) -msgid "S3" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7173(primary) -msgid "S3 storage service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7177(para) -msgid "Object storage service by Amazon; similar in function to Object Storage, it can act as a back-end store for Image Service VM images." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7184(glossterm) ./doc/glossary/glossary-terms.xml:7186(primary) -msgid "sahara" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7190(para) -msgid "OpenStack project that provides a scalable data-processing stack and associated management interfaces." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7196(glossterm) ./doc/glossary/glossary-terms.xml:7198(primary) -msgid "scheduler manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7202(para) -msgid "A Compute component that determines where VM instances should start. Uses modular design to support a variety of scheduler types." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7209(glossterm) -msgid "scoped token" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7211(primary) -msgid "scoped tokens" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7215(para) -msgid "An Identity Service API access token that is associated with a specific tenant." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7221(glossterm) -msgid "scrubber" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7223(primary) -msgid "scrubbers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7227(para) -msgid "Checks for and deletes unused VMs; the component of Image Service that implements delayed delete." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7233(glossterm) -msgid "secret key" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7235(primary) -msgid "secret keys" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7239(para) -msgid "String of text known only by the user; used along with an access key to make requests to the Compute API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7245(glossterm) ./doc/glossary/glossary-terms.xml:7247(primary) -msgid "secure shell (SSH)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7251(para) -msgid "Open source tool used to access remote hosts through an encrypted communications channel, SSH key injection is supported by Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7258(glossterm) -msgid "security group" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7260(primary) -msgid "security groups" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7264(para) -msgid "A set of network traffic filtering rules that are applied to a Compute instance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7270(glossterm) -msgid "segmented object" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7274(secondary) ./doc/glossary/glossary-terms.xml:7277(primary) -msgid "segmented objects" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7281(para) -msgid "An Object Storage large object that has been broken up into pieces. The re-assembled object is called a concatenated object." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7288(glossterm) -msgid "server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7296(para) -msgid "Computer that provides explicit services to the client software running on that system, often managing a variety of computer operations." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7300(para) -msgid "A server is a VM instance in the Compute system. Flavor and image are requisite elements when creating a server." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7306(glossterm) ./doc/glossary/glossary-terms.xml:7308(primary) -msgid "server image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7312(para) -msgid "Alternative term for a VM image." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7317(glossterm) ./doc/glossary/glossary-terms.xml:7321(secondary) -msgid "server UUID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7331(glossterm) -msgid "service" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7333(primary) -msgid "services" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7339(para) -msgid "An OpenStack service, such as Compute, Object Storage, or Image Service. Provides one or more endpoints through which users can access resources and perform operations." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7346(glossterm) ./doc/glossary/glossary-terms.xml:7348(primary) -msgid "service catalog" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7352(para) -msgid "Alternative term for the Identity Service catalog." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7357(glossterm) ./doc/glossary/glossary-terms.xml:7359(primary) -msgid "service ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7363(para) -msgid "Unique ID assigned to each service that is available in the Identity Service catalog." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7369(glossterm) ./doc/glossary/glossary-terms.xml:7371(primary) -msgid "service registration" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7375(para) -msgid "An Identity Service feature that enables services, such as Compute, to automatically register with the catalog." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7381(glossterm) ./doc/glossary/glossary-terms.xml:7383(primary) -msgid "service tenant" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7387(para) -msgid "Special tenant that contains all services that are listed in the catalog." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7393(glossterm) ./doc/glossary/glossary-terms.xml:7395(primary) -msgid "service token" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7399(para) -msgid "An administrator-defined token used by Compute to communicate securely with the Identity Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7405(glossterm) ./doc/glossary/glossary-terms.xml:7409(secondary) -msgid "session back end" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7407(primary) ./doc/glossary/glossary-terms.xml:7421(primary) ./doc/glossary/glossary-terms.xml:7436(primary) -msgid "sessions" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7413(para) -msgid "The method of storage used by horizon to track client sessions, such as local memory, cookies, a database, or memcached." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7419(glossterm) ./doc/glossary/glossary-terms.xml:7423(secondary) -msgid "session persistence" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7427(para) -msgid "A feature of the load-balancing service. It attempts to force subsequent connections to a service to be redirected to the same node as long as it is online." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7434(glossterm) ./doc/glossary/glossary-terms.xml:7438(secondary) -msgid "session storage" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7442(para) -msgid "A horizon component that stores and tracks client session information. Implemented through the Django sessions framework." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7448(glossterm) ./doc/glossary/glossary-terms.xml:7455(primary) -msgid "shared IP address" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7452(secondary) -msgid "shared" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7459(para) -msgid "An IP address that can be assigned to a VM instance within the shared IP group. Public IP addresses can be shared across multiple servers for use in various high-availability scenarios. When an IP address is shared to another server, the cloud network restrictions are modified to enable each server to listen to and respond on that IP address. You can optionally specify that the target server network configuration be modified. Shared IP addresses can be used with many standard heartbeat facilities, such as keepalive, that monitor for failure and manage IP failover." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7472(glossterm) -msgid "shared IP group" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7474(primary) -msgid "shared IP groups" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7478(para) -msgid "A collection of servers that can share IPs with other members of the group. Any server in a group can share one or more public IPs with any other server in the group. With the exception of the first server in a shared IP group, servers must be launched into shared IP groups. 
A server may be a member of only one shared IP group." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7487(glossterm) ./doc/glossary/glossary-terms.xml:7489(primary) -msgid "shared storage" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7493(para) -msgid "Block storage that is simultaneously accessible by multiple clients, for example, NFS." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7499(glossterm) ./doc/glossary/glossary-terms.xml:7501(primary) -msgid "Sheepdog" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7505(para) -msgid "Distributed block storage system for QEMU, supported by OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7511(glossterm) ./doc/glossary/glossary-terms.xml:7514(primary) -msgid "Simple Cloud Identity Management (SCIM)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7518(para) -msgid "Specification for managing identity in the cloud, currently unsupported by OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7524(glossterm) ./doc/glossary/glossary-terms.xml:7527(primary) -msgid "Single-root I/O Virtualization (SR-IOV)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7531(para) -msgid "A specification that, when implemented by a physical PCIe device, enables it to appear as multiple separate PCIe devices. This enables multiple virtualized guests to share direct access to the physical device, offering improved performance over an equivalent virtual device. Currently supported in OpenStack Havana and later releases." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7541(glossterm) ./doc/glossary/glossary-terms.xml:7543(primary) -msgid "SmokeStack" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7547(para) -msgid "Runs automated tests against the core OpenStack API; written in Rails." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7553(glossterm) ./doc/glossary/glossary-terms.xml:7555(primary) -msgid "snapshot" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7559(para) -msgid "A point-in-time copy of an OpenStack storage volume or image. Use storage volume snapshots to back up volumes. Use image snapshots to back up data, or as \"gold\" images for additional servers." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7566(glossterm) ./doc/glossary/glossary-terms.xml:7573(primary) -msgid "soft reboot" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7577(para) -msgid "A controlled reboot where a VM instance is properly restarted through operating system commands." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7583(glossterm) ./doc/glossary/glossary-terms.xml:7585(primary) -msgid "SolidFire Volume Driver" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7589(para) -msgid "The Block Storage driver for the SolidFire iSCSI storage appliance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7595(glossterm) -msgid "SPICE" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7597(primary) -msgid "SPICE (Simple Protocol for Independent Computing Environments)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7602(para) -msgid "The Simple Protocol for Independent Computing Environments (SPICE) provides remote desktop access to guest virtual machines. It is an alternative to VNC. SPICE is supported by OpenStack." 
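
[Editor's illustration for the "snapshot" entry above: creating a "gold" image from a running server might look like the sketch below, using python-novaclient. The server and image names are invented.]

    from novaclient import client

    nova = client.Client("2", "demo", "secret", "demo",
                         "http://controller:5000/v2.0/")

    server = nova.servers.find(name="web01")
    # Returns the ID of the new image registered with the Image Service.
    image_id = nova.servers.create_image(server, "web01-gold")
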
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7609(glossterm) ./doc/glossary/glossary-terms.xml:7616(primary) -msgid "spread-first scheduler" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7613(secondary) -msgid "spread-first" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7620(para) -msgid "The Compute VM scheduling algorithm that attempts to start a new VM on the host with the least amount of load." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7626(glossterm) ./doc/glossary/glossary-terms.xml:7628(primary) -msgid "SQL-Alchemy" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7632(para) -msgid "An open source SQL toolkit for Python, used in OpenStack." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7637(glossterm) ./doc/glossary/glossary-terms.xml:7639(primary) -msgid "SQLite" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7643(para) -msgid "A lightweight SQL database, used as the default persistent storage method in many OpenStack services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7649(glossterm) ./doc/glossary/glossary-terms.xml:7651(primary) -msgid "stack" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7656(para) -msgid "A set of OpenStack resources created and managed by the Orchestration service according to a given template (either an AWS CloudFormation template or a Heat Orchestration Template (HOT))." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7664(glossterm) ./doc/glossary/glossary-terms.xml:7666(primary) -msgid "StackTach" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7670(para) -msgid "Community project that captures Compute AMQP communications; useful for debugging." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7676(glossterm) -msgid "static IP address" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7680(secondary) -msgid "static" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7683(primary) -msgid "static IP addresses" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7687(para) -msgid "Alternative term for a fixed IP address." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7692(glossterm) ./doc/glossary/glossary-terms.xml:7694(primary) -msgid "StaticWeb" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7698(para) -msgid "WSGI middleware component of Object Storage that serves container data as a static web page." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7704(glossterm) ./doc/glossary/glossary-terms.xml:7706(primary) -msgid "storage back end" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7710(para) -msgid "The method that a service uses for persistent storage, such as iSCSI, NFS, or local disk." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7716(glossterm) ./doc/glossary/glossary-terms.xml:7723(primary) -msgid "storage node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7720(secondary) -msgid "storage nodes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7727(para) -msgid "An Object Storage node that provides container services, account services, and object services; controls the account databases, container databases, and object storage." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7734(glossterm) ./doc/glossary/glossary-terms.xml:7738(secondary) -msgid "storage manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7736(primary) ./doc/glossary/glossary-terms.xml:7750(primary) ./doc/glossary/glossary-terms.xml:7764(primary) ./doc/glossary/glossary-terms.xml:7913(primary) -msgid "storage" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7742(para) -msgid "A XenAPI component that provides a pluggable interface to support a wide variety of persistent storage back ends." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7748(glossterm) ./doc/glossary/glossary-terms.xml:7752(secondary) -msgid "storage manager back end" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7756(para) -msgid "A persistent storage method supported by XenAPI, such as iSCSI or NFS." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7762(glossterm) ./doc/glossary/glossary-terms.xml:7766(secondary) -msgid "storage services" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7770(para) -msgid "Collective name for the Object Storage object services, container services, and account services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7776(glossterm) ./doc/glossary/glossary-terms.xml:7778(primary) -msgid "strategy" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7782(para) -msgid "Specifies the authentication source used by Image Service or Identity Service." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7788(glossterm) -msgid "subdomain" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7790(primary) -msgid "subdomains" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7794(para) -msgid "A domain within a parent domain. Subdomains cannot be registered. Subdomains enable you to delegate domains. Subdomains can themselves have subdomains, so third-level, fourth-level, fifth-level, and deeper levels of nesting are possible." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7802(glossterm) ./doc/glossary/glossary-terms.xml:7804(primary) -msgid "subnet" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7808(para) -msgid "Logical subdivision of an IP network." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7813(glossterm) ./doc/glossary/glossary-terms.xml:7816(primary) -msgid "SUSE Linux Enterprise Server (SLES)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7825(glossterm) -msgid "suspend" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7827(primary) -msgid "suspend, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7831(para) -msgid "Alternative term for a paused VM instance." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7837(glossterm) -msgid "swap" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7839(primary) -msgid "swap, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7843(para) -msgid "Disk-based virtual memory used by operating systems to provide more memory than is actually available on the system." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7849(glossterm) ./doc/glossary/glossary-terms.xml:7851(primary) -msgid "swawth" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7855(para) -msgid "An authentication and authorization service for Object Storage, implemented through WSGI middleware; uses Object Storage itself as the persistent backing store." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7865(para) -msgid "An OpenStack core project that provides object storage services." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7871(glossterm) ./doc/glossary/glossary-terms.xml:7873(primary) -msgid "swift All in One (SAIO)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7877(para) -msgid "Creates a full Object Storage development environment within a single VM." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7883(glossterm) ./doc/glossary/glossary-terms.xml:7887(secondary) -msgid "swift middleware" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7891(para) -msgid "Collective term for Object Storage components that provide additional functionality." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7897(glossterm) ./doc/glossary/glossary-terms.xml:7901(secondary) -msgid "swift proxy server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7905(para) -msgid "Acts as the gatekeeper to Object Storage and is responsible for authenticating the user." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7911(glossterm) -msgid "swift storage node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7915(secondary) ./doc/glossary/glossary-terms.xml:7920(secondary) ./doc/glossary/glossary-terms.xml:7925(secondary) -msgid "swift storage nodes" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7929(para) -msgid "A node that runs Object Storage account, container, and object services." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7935(glossterm) ./doc/glossary/glossary-terms.xml:7937(primary) -msgid "sync point" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7941(para) -msgid "Point in time since the last container and accounts database sync among nodes within Object Storage." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7947(glossterm) ./doc/glossary/glossary-terms.xml:7949(primary) -msgid "sysadmin" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7953(para) -msgid "One of the default roles in the Compute RBAC system. Enables a user to add other users to a project, interact with VM images that are associated with the project, and start and stop VM instances." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7960(glossterm) ./doc/glossary/glossary-terms.xml:7962(primary) -msgid "system usage" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7966(para) -msgid "A Compute component that, along with the notification system, collects metrics and usage information. This information can be used for billing." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7976(title) -msgid "T" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7979(glossterm) ./doc/glossary/glossary-terms.xml:7981(primary) -msgid "Telemetry" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7985(para) -msgid "An integrated project that provides metering and measuring facilities for OpenStack. The project name of Telemetry is ceilometer." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7992(glossterm) ./doc/glossary/glossary-terms.xml:7994(primary) -msgid "TempAuth" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:7998(para) -msgid "An authentication facility within Object Storage that enables Object Storage itself to perform authentication and authorization. Frequently used in testing and development." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8005(glossterm) ./doc/glossary/glossary-terms.xml:8007(primary) -msgid "Tempest" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8011(para) -msgid "Automated software test suite designed to run against the trunk of the OpenStack core project." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8017(glossterm) ./doc/glossary/glossary-terms.xml:8019(primary) -msgid "TempURL" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8023(para) -msgid "An Object Storage middleware component that enables creation of URLs for temporary object access." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8029(glossterm) ./doc/glossary/glossary-terms.xml:8040(primary) ./doc/glossary/glossary-terms.xml:8058(primary) ./doc/glossary/glossary-terms.xml:8072(primary) -msgid "tenant" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8032(para) -msgid "A group of users; used to isolate access to Compute resources. An alternative term for a project." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8038(glossterm) ./doc/glossary/glossary-terms.xml:8042(secondary) -msgid "Tenant API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8046(para) -msgid "An API that is accessible to tenants." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8051(glossterm) ./doc/glossary/glossary-terms.xml:8055(secondary) ./doc/glossary/glossary-terms.xml:8060(secondary) -msgid "tenant endpoint" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8064(para) -msgid "An Identity Service API endpoint that is associated with one or more tenants." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8070(glossterm) ./doc/glossary/glossary-terms.xml:8074(secondary) -msgid "tenant ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8078(para) -msgid "Unique ID assigned to each tenant within the Identity Service. The project IDs map to the tenant IDs." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8084(glossterm) -msgid "token" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8086(primary) -msgid "tokens" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8090(para) -msgid "An alpha-numeric string of text used to access OpenStack APIs and resources." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8096(glossterm) ./doc/glossary/glossary-terms.xml:8098(primary) -msgid "token services" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8102(para) -msgid "An Identity Service component that manages and validates tokens after a user or tenant has been authenticated." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8108(glossterm) ./doc/glossary/glossary-terms.xml:8110(primary) -msgid "tombstone" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8113(para) -msgid "Used to mark Object Storage objects that have been deleted; ensures that the object is not updated on another node after it has been deleted." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8121(glossterm) ./doc/glossary/glossary-terms.xml:8123(primary) -msgid "topic publisher" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8127(para) -msgid "A process that is created when a RPC call is executed; used to push the message to the topic exchange." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8133(glossterm) ./doc/glossary/glossary-terms.xml:8135(primary) -msgid "Torpedo" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8139(para) -msgid "Community project used to run automated tests against the OpenStack API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8145(glossterm) -msgid "transaction ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8147(primary) -msgid "transaction IDs" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8151(para) -msgid "Unique ID assigned to each Object Storage request; used for debugging and tracing." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8157(glossterm) -msgid "transient" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8159(primary) -msgid "transient exchanges" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8165(para) -msgid "Alternative term for non-durable." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8170(glossterm) -msgid "transient exchange" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8173(para) -msgid "Alternative term for a non-durable exchange." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8179(glossterm) -msgid "transient message" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8183(secondary) ./doc/glossary/glossary-terms.xml:8186(primary) -msgid "transient messages" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8190(para) -msgid "A message that is stored in memory and is lost after the server is restarted." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8196(glossterm) -msgid "transient queue" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8200(secondary) ./doc/glossary/glossary-terms.xml:8203(primary) -msgid "transient queues" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8207(para) -msgid "Alternative term for a non-durable queue." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8212(glossterm) ./doc/glossary/glossary-terms.xml:8214(primary) -msgid "TripleO" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8218(para) -msgid "OpenStack-on-OpenStack program. The code name for the OpenStack Deployment program." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8226(glossterm) ./doc/glossary/glossary-terms.xml:8228(primary) -msgid "trove" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8232(para) -msgid "OpenStack project that provides database services to applications." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8241(title) -msgid "U" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8244(glossterm) ./doc/glossary/glossary-terms.xml:8246(primary) -msgid "Ubuntu" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8250(para) -msgid "A Debian-based Linux distribution." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8255(glossterm) ./doc/glossary/glossary-terms.xml:8257(primary) -msgid "unscoped token" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8261(para) -msgid "Alternative term for an Identity Service default token." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8266(glossterm) -msgid "updater" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8268(primary) -msgid "updaters" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8272(para) -msgid "Collective term for a group of Object Storage components that processes queued and failed updates for containers and objects." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8278(glossterm) -msgid "user" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8280(primary) -msgid "users, definition of" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8284(para) -msgid "In Identity Service, each user is associated with one or more tenants, and in Compute can be associated with roles, projects, or both." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8291(glossterm) ./doc/glossary/glossary-terms.xml:8293(primary) -msgid "user data" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8297(para) -msgid "A blob of data that the user can specify when they launch an instance. The instance can access this data through the metadata service or config drive. config drive Commonly used to pass a shell script that the instance runs on boot." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8307(glossterm) ./doc/glossary/glossary-terms.xml:8309(primary) -msgid "User Mode Linux (UML)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8321(title) -msgid "V" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8324(glossterm) ./doc/glossary/glossary-terms.xml:8326(primary) -msgid "VIF UUID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8330(para) -msgid "Unique ID assigned to each Networking VIF." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8335(glossterm) ./doc/glossary/glossary-terms.xml:8337(primary) -msgid "VIP" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8341(para) -msgid "The primary load balancing configuration object. Specifies the virtual IP address and port where client traffic is received. Also defines other details such as the load balancing method to be used, protocol, and so on. This entity is sometimes known in load-balancing products as a virtual server, vserver, or listener." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8351(glossterm) ./doc/glossary/glossary-terms.xml:8354(primary) -msgid "Virtual Central Processing Unit (vCPU)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8358(para) -msgid "Subdivides physical CPUs. Instances can then use those divisions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8364(glossterm) ./doc/glossary/glossary-terms.xml:8366(primary) -msgid "Virtual Disk Image (VDI)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8376(glossterm) ./doc/glossary/glossary-terms.xml:8378(primary) -msgid "Virtual Hard Disk (VHD)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8388(glossterm) ./doc/glossary/glossary-terms.xml:8390(primary) -msgid "virtual IP" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8394(para) -msgid "An Internet Protocol (IP) address configured on the load balancer for use by clients connecting to a service that is load balanced. Incoming connections are distributed to back-end nodes based on the configuration of the load balancer." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8402(glossterm) ./doc/glossary/glossary-terms.xml:8404(primary) -msgid "virtual machine (VM)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8408(para) -msgid "An operating system instance that runs on top of a hypervisor. Multiple VMs can run at the same time on the same physical host." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8415(glossterm) ./doc/glossary/glossary-terms.xml:8422(primary) -msgid "virtual network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8419(secondary) ./doc/glossary/glossary-terms.xml:8475(secondary) ./doc/glossary/glossary-terms.xml:8505(secondary) -msgid "virtual" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8426(para) -msgid "An L2 network segment within Networking." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8431(glossterm) ./doc/glossary/glossary-terms.xml:8433(primary) -msgid "virtual networking" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8437(para) -msgid "A generic term for virtualization of network functions such as switching, routing, load balancing, and security using a combination of VMs and overlays on physical network infrastructure." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8446(glossterm) ./doc/glossary/glossary-terms.xml:8448(primary) -msgid "Virtual Network Computing (VNC)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8452(para) -msgid "Open source GUI and CLI tools used for remote console access to VMs. Supported by Compute." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8458(glossterm) ./doc/glossary/glossary-terms.xml:8460(primary) -msgid "Virtual Network InterFace (VIF)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8464(para) -msgid "An interface that is plugged into a port in a Networking network. Typically a virtual network interface belonging to a VM." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8471(glossterm) ./doc/glossary/glossary-terms.xml:8478(primary) -msgid "virtual port" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8482(para) -msgid "Attachment point where a virtual interface connects to a virtual network." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8488(glossterm) ./doc/glossary/glossary-terms.xml:8490(primary) -msgid "virtual private network (VPN)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8494(para) -msgid "Provided by Compute in the form of cloudpipes, specialized instances that are used to create VPNs on a per-project basis." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8501(glossterm) -msgid "virtual server" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8508(primary) -msgid "virtual servers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8512(para) -msgid "Alternative term for a VM or guest." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8517(glossterm) ./doc/glossary/glossary-terms.xml:8519(primary) -msgid "virtual switch (vSwitch)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8523(para) -msgid "Software that runs on a host or node and provides the features and functions of a hardware-based network switch." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8529(glossterm) ./doc/glossary/glossary-terms.xml:8531(primary) -msgid "virtual VLAN" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8535(para) -msgid "Alternative term for a virtual network." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8540(glossterm) ./doc/glossary/glossary-terms.xml:8542(primary) -msgid "VirtualBox" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8551(glossterm) ./doc/glossary/glossary-terms.xml:8553(primary) -msgid "VLAN manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8557(para) -msgid "A Compute component that provides dnsmasq and radvd and sets up forwarding to and from cloudpipe instances." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8563(glossterm) ./doc/glossary/glossary-terms.xml:8570(primary) -msgid "VLAN network" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8567(secondary) -msgid "VLAN" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8574(para) -msgid "The Network Controller provides virtual networks to enable compute servers to interact with each other and with the public network. All machines must have a public and private network interface. A VLAN network is a private network interface, which is controlled by the vlan_interface option with VLAN managers." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8584(glossterm) ./doc/glossary/glossary-terms.xml:8586(primary) -msgid "VM disk (VMDK)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8596(glossterm) ./doc/glossary/glossary-terms.xml:8598(primary) -msgid "VM image" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8602(para) -msgid "Alternative term for an image." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8607(glossterm) ./doc/glossary/glossary-terms.xml:8609(primary) -msgid "VM Remote Control (VMRC)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8613(para) -msgid "Method to access VM instance consoles using a web browser. 
Supported by Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8619(glossterm) ./doc/glossary/glossary-terms.xml:8621(primary) -msgid "VMware API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8625(para) -msgid "Supports interaction with VMware products in Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8630(glossterm) -msgid "VMware NSX Neutron plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8633(para) -msgid "Provides support for VMware NSX in Neutron." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8638(glossterm) ./doc/glossary/glossary-terms.xml:8640(primary) -msgid "VNC proxy" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8644(para) -msgid "A Compute component that provides users access to the consoles of their VM instances through VNC or VMRC." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8650(glossterm) ./doc/glossary/glossary-terms.xml:8662(primary) ./doc/glossary/glossary-terms.xml:8675(primary) ./doc/glossary/glossary-terms.xml:8689(primary) ./doc/glossary/glossary-terms.xml:8702(primary) ./doc/glossary/glossary-terms.xml:8716(primary) ./doc/glossary/glossary-terms.xml:8730(primary) ./doc/glossary/glossary-terms.xml:8744(primary) -msgid "volume" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8653(para) -msgid "Disk-based data storage generally represented as an iSCSI target with a file system that supports extended attributes; can be persistent or ephemeral." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8660(glossterm) ./doc/glossary/glossary-terms.xml:8664(secondary) -msgid "Volume API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8668(para) -msgid "Alternative name for the Block Storage API." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8673(glossterm) ./doc/glossary/glossary-terms.xml:8677(secondary) -msgid "volume controller" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8681(para) -msgid "A Block Storage component that oversees and coordinates storage volume actions." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8687(glossterm) ./doc/glossary/glossary-terms.xml:8691(secondary) -msgid "volume driver" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8695(para) -msgid "Alternative term for a volume plug-in." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8700(glossterm) ./doc/glossary/glossary-terms.xml:8704(secondary) -msgid "volume ID" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8708(para) -msgid "Unique ID applied to each storage volume under the Block Storage control." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8714(glossterm) ./doc/glossary/glossary-terms.xml:8718(secondary) -msgid "volume manager" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8722(para) -msgid "A Block Storage component that creates, attaches, and detaches persistent storage volumes." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8728(glossterm) ./doc/glossary/glossary-terms.xml:8732(secondary) -msgid "volume node" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8736(para) -msgid "A Block Storage node that runs the cinder-volume daemon." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8742(glossterm) ./doc/glossary/glossary-terms.xml:8746(secondary) -msgid "volume plug-in" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8750(para) -msgid "Provides support for new and specialized types of back-end storage for the Block Storage volume manager." 
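
[Editor's illustration tying together the volume entries above: creating a Block Storage volume and attaching it to a server. A sketch with python-cinderclient and the matching historical novaclient call; sizes, names, and the device path are invented.]

    from cinderclient import client as cinder_client
    from novaclient import client as nova_client

    cinder = cinder_client.Client("2", "demo", "secret", "demo",
                                  "http://controller:5000/v2.0/")
    nova = nova_client.Client("2", "demo", "secret", "demo",
                              "http://controller:5000/v2.0/")

    vol = cinder.volumes.create(size=10, name="data01")
    server = nova.servers.find(name="web01")
    # Attach the volume to the server as /dev/vdb.
    nova.volumes.create_server_volume(server.id, vol.id, "/dev/vdb")
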
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8756(glossterm) -msgid "volume worker" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8758(primary) -msgid "volume workers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8762(para) -msgid "A cinder component that interacts with back-end storage to manage the creation and deletion of volumes and the creation of compute volumes, provided by the cinder-volume daemon." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8770(glossterm) ./doc/glossary/glossary-terms.xml:8772(primary) -msgid "vSphere" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8784(title) -msgid "W" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8787(glossterm) ./doc/glossary/glossary-terms.xml:8789(primary) -msgid "weighting" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8793(para) -msgid "A Compute process that determines the suitability of the VM instances for a job for a particular host. For example, not enough RAM on the host, too many CPUs on the host, and so on." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8800(glossterm) ./doc/glossary/glossary-terms.xml:8802(primary) -msgid "weight" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8806(para) -msgid "Used by Object Storage devices to determine which storage devices are suitable for the job. Devices are weighted by size." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8812(glossterm) ./doc/glossary/glossary-terms.xml:8814(primary) -msgid "weighted cost" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8818(para) -msgid "The sum of each cost used when deciding where to start a new VM instance in Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8824(glossterm) -msgid "worker" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8826(primary) -msgid "workers" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8830(para) -msgid "A daemon that listens to a queue and carries out tasks in response to messages. For example, the cinder-volume worker manages volume creation and deletion on storage arrays." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8841(title) -msgid "X" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8844(glossterm) ./doc/glossary/glossary-terms.xml:8846(primary) -msgid "Xen" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8850(para) -msgid "Xen is a hypervisor using a microkernel design, providing services that allow multiple computer operating systems to execute on the same computer hardware concurrently." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8860(glossterm) ./doc/glossary/glossary-terms.xml:8871(primary) ./doc/glossary/glossary-terms.xml:8884(primary) ./doc/glossary/glossary-terms.xml:8898(primary) -msgid "Xen API" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8863(para) -msgid "The Xen administrative API, which is supported by Compute." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8869(glossterm) ./doc/glossary/glossary-terms.xml:8873(secondary) -msgid "Xen Cloud Platform (XCP)" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8882(glossterm) ./doc/glossary/glossary-terms.xml:8886(secondary) -msgid "Xen Storage Manager Volume Driver" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8890(para) -msgid "A Block Storage volume plug-in that enables communication with the Xen Storage Manager API." 
-msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8896(glossterm) -msgid "XenServer" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8900(secondary) -msgid "XenServer hypervisor" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8912(title) -msgid "Y" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8926(title) -msgid "Z" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8929(glossterm) ./doc/glossary/glossary-terms.xml:8931(primary) -msgid "ZeroMQ" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8935(para) -msgid "Message queue software supported by OpenStack. An alternative to RabbitMQ. Also spelled 0MQ." -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8941(glossterm) ./doc/glossary/glossary-terms.xml:8943(primary) -msgid "Zuul" -msgstr "" - -#: ./doc/glossary/glossary-terms.xml:8947(para) -msgid "Tool used in OpenStack development to ensure correctly ordered testing of changes in parallel." -msgstr "" - -#. Put one translator per line, in the form of NAME , YEAR1, YEAR2 -#: ./doc/glossary/glossary-terms.xml:0(None) -msgid "translator-credits" -msgstr "" - diff --git a/doc/glossary/locale/ja.po b/doc/glossary/locale/ja.po deleted file mode 100644 index 6ab099aa..00000000 --- a/doc/glossary/locale/ja.po +++ /dev/null @@ -1,7607 +0,0 @@ -# Translators: -# yfukuda , 2014 -# nao nishijima , 2015 -# Tomoyuki KATO , 2013-2015 -# yfukuda , 2014 -# -# -# Akihiro Motoki , 2015. #zanata -# KATO Tomoyuki , 2015. #zanata -# OpenStack Infra , 2015. #zanata -# Shu Muto , 2015. #zanata -# Akihiro Motoki , 2016. #zanata -# KATO Tomoyuki , 2016. #zanata -msgid "" -msgstr "" -"Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2016-03-28 05:25+0000\n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" -"PO-Revision-Date: 2016-03-27 01:25+0000\n" -"Last-Translator: Akihiro Motoki \n" -"Language: ja\n" -"Plural-Forms: nplurals=1; plural=0;\n" -"X-Generator: Zanata 3.7.3\n" -"Language-Team: Japanese\n" - -msgid "6to4" -msgstr "6to4" - -msgid "A" -msgstr "A" - -msgid "A BLOB of data held by Object Storage; can be in any format." -msgstr "" -"Object Storage により保持されるデータの BLOB。あらゆる形式の可能性がある。" - -msgid "" -"A Block Storage component that creates, attaches, and detaches persistent " -"storage volumes." -msgstr "" -"永続ストレージボリュームを作成、接続、切断する Block Storage コンポーネント。" - -msgid "" -"A Block Storage component that oversees and coordinates storage volume " -"actions." -msgstr "" -"ストレージボリュームの操作を監督、調整する、Block Storage のコンポーネント。" - -msgid "" -"A Block Storage node that runs the cinder-" -"volume daemon." -msgstr "" -"cinder-volume デーモンを実行する " -"Block Storage ノード。" - -msgid "" -"A Block Storage volume plug-in that enables communication with the Xen " -"Storage Manager API." -msgstr "" -"Xen Storage Manager API と通信できる Block Storage ボリュームプラグイン。" - -msgid "" -"A Ceph component that communicates with external clients, checks data state " -"and consistency, and performs quorum functions." -msgstr "" -"外部クライアントと通信し、データの状態と整合性を確認し、クォーラム機能を実行" -"する、Ceph コンポーネント。" - -msgid "" -"A Compute API parameter that downloads changes to the requested item since " -"your last request, instead of downloading a new, fresh set of data and " -"comparing it against the old data." -msgstr "" -"Compute API のパラメーター。古いデータと比較するために、新しいデータ群をダウ" -"ンロードする代わりに、最後に要求した後に実行された、要求した項目への変更をダ" -"ウンロードする。" - -msgid "" -"A Compute RabbitMQ message queue that remains active when the server " -"restarts." 
-msgstr "" -"サーバーの再起動時に有効なままとなる、Compute RabbitMQ メッセージキュー。" - -msgid "" -"A Compute RabbitMQ setting that determines whether a message exchange is " -"automatically created when the program starts." -msgstr "" -"メッセージ交換がプログラム起動時に自動的に作成されるかどうかを決める、" -"Compute の RabbitMQ の設定。" - -msgid "" -"A Compute back-end database table that contains the current workload, amount " -"of free RAM, and number of VMs running on each host. Used to determine on " -"which host a VM starts." -msgstr "" -"Computeバックエンドデータベースのテーブルには現在のワークロード、RAMの空き" -"量、各ホストで起動しているVMの数が含まれている。VMがどのホストで開始するのか" -"を決めるのに利用される。" - -msgid "" -"A Compute component that determines where VM instances should start. Uses " -"modular design to support a variety of scheduler types." -msgstr "" -"仮想マシンインスタンスが起動する場所を決める、Compute のコンポーネント。さま" -"ざまな種類のスケジューラーをサポートするために、モジュール型設計を使用する。" - -msgid "" -"A Compute component that enables OpenStack to communicate with Amazon EC2." -msgstr "" -"OpenStack が Amazon EC2 を利用できるようにするための Compute のコンポーネン" -"ト。" - -msgid "" -"A Compute component that manages IP address allocation, firewalls, and other " -"network-related tasks. This is the legacy networking option and an " -"alternative to Networking." -msgstr "" -"IP アドレス割り当て、ファイアウォール、その他ネットワーク関連タスクを管理す" -"る Compute のコンポーネント。レガシーネットワークのオプション。Networking の" -"代替。" - -msgid "" -"A Compute component that provides dnsmasq and radvd and sets up forwarding " -"to and from cloudpipe instances." -msgstr "" -"dnsmasq と radvd を提供し、cloudpipe インスタンスとの転送処理をセットアップす" -"る、Compute のコンポーネント。" - -msgid "" -"A Compute component that provides users access to the consoles of their VM " -"instances through VNC or VMRC." -msgstr "" -"ユーザーが VNC や VMRC 経由で仮想マシンインスタンスのコンソールにアクセスでき" -"るようにする Compute のコンポーネント。" - -msgid "" -"A Compute component that, along with the notification system, collects " -"meters and usage information. This information can be used for billing." -msgstr "" -"通知システムと一緒に動作し、計測項目と使用状況を収集する、Compute のコンポー" -"ネント。この情報は課金のために使用できる。" - -msgid "" -"A Compute daemon that orchestrates the network configuration of nodes, " -"including IP addresses, VLANs, and bridging. Also manages routing for both " -"public and private networks." -msgstr "" -"IP アドレス、VLAN、ブリッジなど、ノードのネットワーク設定をオーケストレーショ" -"ンする Compute のデーモン。また、パブリックネットワークとプライベートネット" -"ワークのルーティングを管理する。" - -msgid "" -"A Compute networking method where the OS network configuration information " -"is injected into the VM image before the instance starts." -msgstr "" -"インスタンスの起動前に、OS のネットワーク設定情報を仮想マシンイメージ内に注入" -"する、Compute のネットワーク方式。" - -msgid "" -"A Compute option that enables parent cells to pass resource requests to " -"child cells if the parent cannot provide the requested resource." -msgstr "" -"親が要求されたリソースを提供できない場合、親セルがリソース要求を子セルに渡す" -"事を可能にする Compute のオプション。" - -msgid "" -"A Compute process that determines the suitability of the VM instances for a " -"job for a particular host. For example, not enough RAM on the host, too many " -"CPUs on the host, and so on." -msgstr "" -"特定のホストがあるジョブ向けの仮想マシンインスタンスに対して適切かどうかを判" -"断する、Compute の処理。例えば、ホストのメモリー不足、ホストの CPU 過剰など。" - -msgid "A Debian-based Linux distribution." -msgstr "Debian ベースの Linux ディストリビューション。" - -msgid "A Java program that can be embedded into a web page." -msgstr "Web ページの中に組み込める Java プログラム。" - -msgid "A Linux distribution compatible with OpenStack." -msgstr "OpenStack と互換性のある Linux ディストリビューション。" - -msgid "A Linux distribution that is compatible with OpenStack." 
-msgstr "OpenStack と互換性のある Linux ディストリビューション。" - -msgid "A Networking extension that provides perimeter firewall functionality." -msgstr "境界ファイアウォール機能を提供する Networking 拡張。" - -msgid "" -"A Networking plug-in for Cisco devices and technologies, including UCS and " -"Nexus." -msgstr "UCS や Nexus などの Cisco デバイスや技術の Networking プラグイン。" - -msgid "" -"A SQLite database that contains Object Storage accounts and related metadata " -"and that the accounts server accesses." -msgstr "" -"Object Storage のアカウントと関連メタデータを保持し、アカウントサーバーがアク" -"セスする、SQLite データベース。" - -msgid "" -"A SQLite database that stores Object Storage containers and container " -"metadata. The container server accesses this database." -msgstr "" -"Object Storage コンテナーとコンテナーメタデータを保存する SQLite データベー" -"ス。コンテナーサーバーは、このデータベースにアクセスする。" - -msgid "" -"A Shared File Systems service that provides a stable RESTful API. The " -"service authenticates and routes requests throughout the Shared File Systems " -"service. There is python-manilaclient to interact with the API." -msgstr "" -"安定版の RESTful API を提供する Shared File Systems サービス。 Shared File " -"Systems サービスへのすべてのリクエストの認証と転送を行う。この API と通信する" -"ための python-manilaclient が提供されています。" - -msgid "" -"A VM image that does not save changes made to its volumes and reverts them " -"to their original state after the instance is terminated." -msgstr "" -"ボリュームへの変更が保存されない仮想マシンイメージ。インスタンスの終了後、元" -"の状態に戻される。" - -msgid "A VM instance that runs on a host." -msgstr "ホストで動作する仮想マシンインスタンス。" - -msgid "" -"A VM state where no changes occur (no changes in memory, network " -"communications stop, etc); the VM is frozen but not shut down." -msgstr "" -"変更が発生しない (メモリーの変更なし、ネットワーク通信の停止など)、仮想マシン" -"の状態。仮想マシンは停止するが、シャットダウンしない。" - -msgid "" -"A Windows project providing guest initialization features, similar to cloud-" -"init." -msgstr "cloud-init 同様のゲスト初期化機能を提供する Windows プロジェクト。" - -msgid "" -"A XenAPI component that provides a pluggable interface to support a wide " -"variety of persistent storage back ends." -msgstr "" -"さまざまな種類の永続ストレージバックエンドをサポートするために、プラグイン可" -"能なインターフェースを提供する XenAPI コンポーネント。" - -msgid "" -"A bit is a single digit number that is in base of 2 (either a zero or one). " -"Bandwidth usage is measured in bits per second." -msgstr "" -"ビットは、2 を基数とする単一のデジタル数値 (0 または 1)。帯域使用量は、ビット" -"毎秒 (bps) で計測される。" - -msgid "" -"A blob of data that the user can specify when they launch an instance. The " -"instance can access this data through the metadata service or config drive. " -"config drive " -"Commonly used to pass a shell script that the instance runs on boot." -msgstr "" -"インスタンス起動時にユーザが指定できる BLOB データ。インスタンスはこのデータ" -"にメタデータサービスやコンフィグドライブ経由でアクセスできる。コンフィグドライブ 通常、イ" -"ンスタンスがブート時に実行するシェルスクリプトを渡すために使用される。" - -msgid "" -"A cinder component that interacts with back-end storage to manage the " -"creation and deletion of volumes and the creation of compute volumes, " -"provided by the cinder-volume " -"daemon." -msgstr "" -"ボリュームの作成や削除、コンピュートボリュームの作成を管理するために、バック" -"エンドのストレージと相互作用する cinder のコンポーネント。cinder-volume デーモンにより提供される。" - -msgid "" -"A collection of command-line tools for administering VMs; most are " -"compatible with OpenStack." -msgstr "" -"仮想マシンを管理するためのコマンドラインツール群。ほとんどは OpenStack と互換" -"性がある。" - -msgid "" -"A collection of components that provides object storage within Ceph. Similar " -"to OpenStack Object Storage." -msgstr "" -"Ceph 内にオブジェクトストレージを提供するコンポーネント群。OpenStack Object " -"Storage に似ている。" - -msgid "" -"A collection of files for a specific operating system (OS) that you use to " -"create or rebuild a server. 
OpenStack provides pre-built images. You can " -"also create custom images, or snapshots, from servers that you have " -"launched. Custom images can be used for data backups or as \"gold\" images " -"for additional servers." -msgstr "" -"サーバーの作成、再構築に使用する特定のオペレーティングシステム(OS)用のファ" -"イルの集合。OpenStack は構築済みイメージを提供する。起動したサーバーからカス" -"タムイメージ(又はスナップショット)を作成できる。カスタムイメージは、データ" -"のバックアップや、追加サーバー用の「ゴールド」イメージとして使用できる。" - -msgid "A collection of hypervisors grouped together through host aggregates." -msgstr "ホストアグリゲートにより一緒にグループ化されたハイパーバイザーの集合。" - -msgid "" -"A collection of servers that can share IPs with other members of the group. " -"Any server in a group can share one or more public IPs with any other server " -"in the group. With the exception of the first server in a shared IP group, " -"servers must be launched into shared IP groups. A server may be a member of " -"only one shared IP group." -msgstr "" -"グループの他のメンバーと IP を共有できるサーバー群。グループ内のサーバーは、" -"そのグループ内の他のサーバーと 1 つ以上のパブリック IP を共有できる。共有 IP " -"グループにおける 1 番目のサーバーを除き、サーバーは共有 IP グループの中で起動" -"する必要があります。サーバーは、共有 IP グループ 1 つだけのメンバーになれま" -"す。" - -msgid "" -"A collection of specifications used to access a service, application, or " -"program. Includes service calls, required parameters for each call, and the " -"expected return values." -msgstr "" -"サービス、アプリケーション、プログラムへのアクセスに使用される仕様の集合。" -"サービス呼出、各呼出に必要なパラメーター、想定される戻り値を含む。" - -msgid "" -"A community project may be elevated to this status and is then promoted to a " -"core project." -msgstr "" -"コミュニティプロジェクトがこの状態に昇格する事があり、その後コアプロジェクト" -"に昇格する。" - -msgid "A compute service that creates VPNs on a per-project basis." -msgstr "プロジェクトごとの VPN を作成するコンピュートのサービス。" - -msgid "" -"A configurable option within Object Storage to automatically delete objects " -"after a specified amount of time has passed or a certain date is reached." -msgstr "" -"指定された時間経過後、又は指定日になった際に自動的にオブジェクトを削除するた" -"めの Object Storage の設定オプション。" - -msgid "" -"A content delivery network is a specialized network that is used to " -"distribute content to clients, typically located close to the client for " -"increased performance." -msgstr "" -"コンテンツ配信ネットワークは、クライアントにコンテンツを配信するために使用さ" -"れる特別なネットワーク。一般的に、パフォーマンス改善のために、クライアントの" -"近くに置かれる。" - -msgid "" -"A controlled reboot where a VM instance is properly restarted through " -"operating system commands." -msgstr "" -"オペレーティングシステムのコマンド経由で、仮想マシンインスタンスが正常に再起" -"動する、制御された再起動。" - -msgid "" -"A core OpenStack project that provides a network connectivity abstraction " -"layer to OpenStack Compute." -msgstr "" -"OpenStack のコアプロジェクトで、OpenStack Compute に対してネットワーク接続の" -"抽象化レイヤーを提供する。" - -msgid "" -"A core OpenStack project that provides a network connectivity abstraction " -"layer to OpenStack Compute. The project name of Networking is neutron." -msgstr "" -"ネットワーク接続性の抽象化レイヤーを OpenStack Compute に提供する、OpenStack " -"コアプロジェクト。Networking のプロジェクト名は neutron。" - -msgid "A core OpenStack project that provides block storage services for VMs." -msgstr "" -"ブロックストレージサービスを仮想マシンに提供する、OpenStack のコアプロジェク" -"ト。" - -msgid "A core project that provides the OpenStack Image service." -msgstr "OpenStack Image サービスを提供するコアプロジェクト。" - -msgid "" -"A daemon that listens to a queue and carries out tasks in response to " -"messages. For example, the cinder-volume worker manages volume creation and deletion on storage arrays." -msgstr "" -"キューをリッスンし、メッセージに応じたタスクを実行するデーモン。例えば、" -"cinder-volume ワーカーは、ストレー" -"ジにおけるボリュームの作成と削除を管理します。" - -msgid "A database engine supported by the Database service." 
-msgstr "Database サービスがサポートしているデータベースエンジン。" - -msgid "" -"A default role in the Compute RBAC system that can quarantine an instance in " -"any project." -msgstr "" -"あらゆるプロジェクトにあるインスタンスを検疫できる、Compute RBAC システムにお" -"けるデフォルトのロール。" - -msgid "" -"A device that moves data in the form of blocks. These device nodes interface " -"the devices, such as hard disks, CD-ROM drives, flash drives, and other " -"addressable regions of memory." -msgstr "" -"ブロック状態のデータを移動するデバイス。これらのデバイスノードにはハードディ" -"スク、CD-ROM ドライブ、フラッシュドライブ、その他のアドレス可能なメモリの範囲" -"等がある。" - -msgid "" -"A directory service, which allows users to login with a user name and " -"password. It is a typical source of authentication tokens." -msgstr "" -"ユーザーがユーザー名とパスワードを用いてログインできるようにする、ディレクト" -"リーサービス。認証トークンの一般的な情報源。" - -msgid "" -"A discrete OpenStack environment with dedicated API endpoints that typically " -"shares only the Identity (keystone) with other regions." -msgstr "" -"専用の API エンドポイントを持つ、分離した OpenStack 環境。一般的に Identity " -"(keystone) のみを他のリージョンと共有する。" - -msgid "A disk storage protocol tunneled within Ethernet." -msgstr "Ethernet 内をトンネルされるディスクストレージプロトコル。" - -msgid "" -"A distributed memory object caching system that is used by Object Storage " -"for caching." -msgstr "" -"Object Storage がキャッシュのために使用する、メモリーオブジェクトの分散キャッ" -"シュシステム。" - -msgid "" -"A distributed, highly fault-tolerant file system designed to run on low-cost " -"commodity hardware." -msgstr "" -"低価格のコモディティーサーバー上で動作することを念頭に設計された、耐故障性に" -"優れた分散ファイルシステム。" - -msgid "" -"A domain within a parent domain. Subdomains cannot be registered. Subdomains " -"enable you to delegate domains. Subdomains can themselves have subdomains, " -"so third-level, fourth-level, fifth-level, and deeper levels of nesting are " -"possible." -msgstr "" -"親ドメイン内のドメイン。サブドメインは登録できない。サブドメインによりドメイ" -"ンを委譲できる。サブドメインは、サブドメインを持てるので、第 3 階層、第 4 階" -"層、第 5 階層と深い階層構造にできる。" - -msgid "" -"A driver for the Modular Layer 2 (ML2) neutron plug-in that provides layer-2 " -"connectivity for virtual instances. A single OpenStack installation can use " -"multiple mechanism drivers." -msgstr "" -"仮想インスタンス向けに L2 接続性を提供する、ML2 neutron プラグイン向けのドラ" -"イバー。単一の OpenStack インストール環境が、複数のメカニズムドライバーを使用" -"できます。" - -msgid "" -"A feature of Compute that allows the unprivileged \"nova\" user to run a " -"specified list of commands as the Linux root user." -msgstr "" -"非特権の「nova」ユーザーが Linux の root ユーザーとして指定したコマンド一覧を" -"実行できるようにする、Compute の機能。" - -msgid "" -"A feature of the load-balancing service. It attempts to force subsequent " -"connections to a service to be redirected to the same node as long as it is " -"online." -msgstr "" -"負荷分散サービスの機能の 1 つ。ノードがオンラインである限り、強制的に一連の接" -"続を同じノードにリダイレクトしようとする。" - -msgid "" -"A file sharing protocol. It is a public or open variation of the original " -"Server Message Block (SMB) protocol developed and used by Microsoft. Like " -"the SMB protocol, CIFS runs at a higher level and uses the TCP/IP protocol." -msgstr "" -"ファイル共有プロトコル。 Microsoft が開発し使用している Server Message Block " -"(SMB) プロトコルが公開されオープンになったものです。 SMB プロトコルと同様" -"に、 CIFS は上位レイヤーで動作し、TCP/IP プロトコルを使用します。" - -msgid "" -"A file system designed to aggregate NAS hosts, compatible with OpenStack." -msgstr "" -"NAS ホストを集約するために設計されたファイルシステム。OpenStack と互換性があ" -"る。" - -msgid "" -"A file used to customize a Compute instance. It can be used to inject SSH " -"keys or a specific network configuration." 
-msgstr "" -"Compute インスタンスをカスタマイズするために使用されるファイル。SSH 鍵や特定" -"のネットワーク設定を注入するために使用できます。" - -msgid "" -"A generic term for virtualization of network functions such as switching, " -"routing, load balancing, and security using a combination of VMs and " -"overlays on physical network infrastructure." -msgstr "" -"複数の仮想マシンを使用して、物理ネットワーク上にオーバーレイされる、スイッチ" -"ング、ルーティング、負荷分散、セキュリティーなどのネットワーク機能の仮想化に" -"関する一般的な用語。" - -msgid "" -"A group of fixed and/or floating IP addresses that are assigned to a project " -"and can be used by or assigned to the VM instances in a project." -msgstr "" -"プロジェクトに割り当てられ、プロジェクトの仮想マシンインスタンスに使用でき" -"る、 Fixed IP アドレスと Floating IP アドレスのグループ。" - -msgid "" -"A group of interrelated web development techniques used on the client-side " -"to create asynchronous web applications. Used extensively in horizon." -msgstr "" -"非同期 Web アプリケーションを作成する為にクライアント側で使用される相互関係の" -"ある Web 開発技術の集合。Horizon で広く使用されている。" - -msgid "" -"A group of related button types within horizon. Buttons to start, stop, and " -"suspend VMs are in one class. Buttons to associate and disassociate floating " -"IP addresses are in another class, and so on." -msgstr "" -"Horizon 内で関連するボタン種別のグループ。仮想マシンを起動、停止、休止するボ" -"タンは、1 つのクラスにある。Floating IP アドレスを関連付ける、関連付けを解除" -"するボタンは、別のクラスにある。" - -msgid "" -"A group of users; used to isolate access to Compute resources. An " -"alternative term for a project." -msgstr "" -"ユーザーのグループ。Compute リソースへのアクセスを分離するために使用される。" -"プロジェクトの別名。" - -msgid "" -"A grouped release of projects related to OpenStack that came out in April " -"2012, the fifth release of OpenStack. It included Compute (nova 2012.1), " -"Object Storage (swift 1.4.8), Image (glance), Identity (keystone), and " -"Dashboard (horizon)." -msgstr "" -"2012年4月に登場した OpenStack 関連プロジェクトのリリース。Compute (nova " -"2012.1), Object Storage (swift 1.4.8), Image (glance), Identity (keystone), " -"Dashboard (horizon) が含まれる。" - -msgid "" -"A grouped release of projects related to OpenStack that came out in February " -"of 2011. It included only Compute (nova) and Object Storage (swift)." -msgstr "" -"OpenStack に関連するプロジェクトをグループ化したリリース。2011 年 2 月に公開" -"された。Compute (nova) と Object Storage (swift) のみが含まれる。" - -msgid "" -"A grouped release of projects related to OpenStack that came out in the fall " -"of 2011, the fourth release of OpenStack. It included Compute (nova 2011.3), " -"Object Storage (swift 1.4.3), and the Image service (glance)." -msgstr "" -"2011年秋に登場した OpenStack 関連プロジェクトのリリース。Compute (nova " -"2011.3), Object Storage (swift 1.4.3), Image service (glance) が含まれる。" - -msgid "" -"A grouped release of projects related to OpenStack that came out in the fall " -"of 2012, the sixth release of OpenStack. It includes Compute (nova), Object " -"Storage (swift), Identity (keystone), Networking (neutron), Image service " -"(glance), and Volumes or Block Storage (cinder)." -msgstr "" -"2012年秋に登場した OpenStack 関連プロジェクトのリリース。Compute (nova), " -"Object Storage (swift), Identity (keystone), Networking (neutron), Image " -"service (glance)、Volumes 又は Block Storage (cinder) が含まれる。" - -msgid "" -"A high availability system design approach and associated service " -"implementation ensures that a prearranged level of operational performance " -"will be met during a contractual measurement period. High availability " -"systems seeks to minimize system downtime and data loss." 
-msgstr "" -"高可用性システムの設計手法および関連サービスの実装により、契約された計測期間" -"中、合意された運用レベルを満たします。高可用性システムは、システムの停止時間" -"とデータ損失を最小化しようとします。" - -msgid "" -"A horizon component that stores and tracks client session information. " -"Implemented through the Django sessions framework." -msgstr "" -"クライアントセッションの保持と追跡を行う Horizon のコンポーネント。 Django の" -"セッションフレームワークを用いて実装されている。" - -msgid "" -"A hybrid cloud is a composition of two or more clouds (private, community or " -"public) that remain distinct entities but are bound together, offering the " -"benefits of multiple deployment models. Hybrid cloud can also mean the " -"ability to connect colocation, managed and/or dedicated services with cloud " -"resources." -msgstr "" -"ハイブリッドクラウドは、複数のクラウド (プライベート、コミュニティー、パブ" -"リック) の組み合わせ。別々のエンティティーのままですが、一緒にまとめられる。" -"複数の配備モデルの利点を提供する。ハイブリッドクラウドは、コロケーション、マ" -"ネージドサービス、専用サービスをクラウドのリソースに接続する機能を意味するこ" -"ともある。" - -msgid "" -"A kind of web service API that uses REST, or Representational State " -"Transfer. REST is the style of architecture for hypermedia systems that is " -"used for the World Wide Web." -msgstr "" -"REST を使用する Web サービス API の 1 種。REST は、WWW 向けに使用される、ハイ" -"パーメディアシステム向けのアーキテクチャーの形式である。" - -msgid "" -"A lightweight SQL database, used as the default persistent storage method in " -"many OpenStack services." -msgstr "" -"軽量 SQL データベース。多くの OpenStack サービスでデフォルトの永続ストレージ" -"として使用されている。" - -msgid "" -"A list of API endpoints that are available to a user after authentication " -"with the Identity service." -msgstr "Identity による認証後、ユーザーが利用可能な API エンドポイントの一覧。" - -msgid "" -"A list of URL and port number endpoints that indicate where a service, such " -"as Object Storage, Compute, Identity, and so on, can be accessed." -msgstr "" -"URL やポート番号のエンドポイントの一覧。Object Storage、Compute、Identity な" -"どのサービスがアクセスできる場所を意味する。" - -msgid "A list of VM images that are available through Image service." -msgstr "Image service 経由で利用可能な仮想マシンイメージの一覧。" - -msgid "" -"A list of permissions attached to an object. An ACL specifies which users or " -"system processes have access to objects. It also defines which operations " -"can be performed on specified objects. Each entry in a typical ACL specifies " -"a subject and an operation. For instance, the ACL entry (Alice, " -"delete) for a file gives Alice permission to delete the file." -msgstr "" -"オブジェクトに対する権限の一覧。オブジェクトに対して、アクセスできるユーザー" -"やシステムプロセスを特定する。また、特定のオブジェクトに対してどのような操作" -"が行えるかを定義する。アクセス制御リスト(ACL)の一般的な項目では対象項目と操" -"作を指定する。例えば、1つのファイルに対して(Alice, delete)という" -"ACL項目が定義されると、Aliceにファイルを削除する権限が与えられる。" - -msgid "" -"A list of tenants that can access a given VM image within Image service." -msgstr "" -"Image service 内で指定した仮想マシンイメージにアクセスできるテナントの一覧。" - -msgid "" -"A load balancer is a logical device that belongs to a cloud account. It is " -"used to distribute workloads between multiple back-end systems or services, " -"based on the criteria defined as part of its configuration." -msgstr "" -"負荷分散装置は、クラウドアカウントに属する論理デバイスである。その設定に定義" -"されている基準に基づき、複数のバックエンドのシステムやサービス間でワークロー" -"ドを分散するために使用される。" - -msgid "" -"A logical set of devices, such as web servers, that you group together to " -"receive and process traffic. The load balancing function chooses which " -"member of the pool handles the new requests or connections received on the " -"VIP address. Each VIP has one pool." 
-msgstr "" -"Web サーバーなどのデバイスの論理的な集合。一緒にトラフィックを受け、処理する" -"ために、グループ化する。負荷分散機能は、プール内のどのメンバーが仮想 IP アド" -"レスで受信した新規リクエストや接続を処理するかを選択します。各仮想 IP は 1 つ" -"のプールを持ちます。" - -msgid "" -"A mechanism that allows IPv6 packets to be transmitted over an IPv4 network, " -"providing a strategy for migrating to IPv6." -msgstr "" -"IPv6 パケットを IPv4 ネットワーク経由で送信するための機構。IPv6 に移行する手" -"段を提供する。" - -msgid "" -"A mechanism that allows many resources (for example, fonts, JavaScript) on a " -"web page to be requested from another domain outside the domain from which " -"the resource originated. In particular, JavaScript's AJAX calls can use the " -"XMLHttpRequest mechanism." -msgstr "" -"Web ページのさまざまなリソース (例: フォント、JavaScript) を、リソースのある" -"ドメインの外部から要求できるようになる機能。とくに、JavaScript の AJAX コール" -"が XMLHttpRequest 機能を使用できる。" - -msgid "" -"A message that is stored both in memory and on disk. The message is not lost " -"after a failure or restart." -msgstr "" -"メモリーとディスクの両方に保存されているメッセージ。メッセージは、故障や再起" -"動した後も失われません。" - -msgid "" -"A message that is stored in memory and is lost after the server is restarted." -msgstr "メモリーに保存され、サービスの再起動後に失われるメッセージ。" - -msgid "" -"A method for making file systems available over the network. Supported by " -"OpenStack." -msgstr "" -"ネットワーク経由でファイルシステムを利用可能にある方式。OpenStack によりサ" -"ポートされる。" - -msgid "" -"A method of VM live migration used by KVM to evacuate instances from one " -"host to another with very little downtime during a user-initiated " -"switchover. Does not require shared storage. Supported by Compute." -msgstr "" -"ユーザー操作によりあるホストから別のホストに切り替え中、わずかな停止時間でイ" -"ンスタンスを退避するために、KVM により使用される仮想マシンのライブマイグレー" -"ションの方法。共有ストレージ不要。Compute によりサポートされる。" - -msgid "" -"A method of operating system installation where a finalized disk image is " -"created and then used by all nodes without modification." -msgstr "" -"最終的なディスクイメージが作成され、すべてのノードで変更することなく使用され" -"る、オペレーティングシステムのインストール方法。" - -msgid "" -"A method to automatically configure networking for a host at boot time. " -"Provided by both Networking and Compute." -msgstr "" -"ホストの起動時にネットワークを自動的に設定する方式。Networking と Compute に" -"より提供される。" - -msgid "" -"A method to establish trusts between identity providers and the OpenStack " -"cloud." -msgstr "認証プロバイダーと OpenStack クラウド間で信頼を確立する方法。" - -msgid "" -"A method to further subdivide availability zones into hypervisor pools, a " -"collection of common hosts." -msgstr "" -"アベイラビリティーゾーンをさらに小さいハイパーバイザープールに分割するための" -"方法。一般的なホスト群。" - -msgid "" -"A minimal Linux distribution designed for use as a test image on clouds such " -"as OpenStack." -msgstr "" -"OpenStack などのクラウドでテストイメージとして使用するために設計された最小の " -"Linux ディストリビューション。" - -msgid "" -"A model that enables access to a shared pool of configurable computing " -"resources, such as networks, servers, storage, applications, and services, " -"that can be rapidly provisioned and released with minimal management effort " -"or service provider interaction." -msgstr "" -"ネットワーク、サーバー、ストレージ、アプリケーション、サービスなどの設定可能" -"なコンピューティングリソースの共有プールにアクセスできるモデル。最小限の管理" -"作業やサービスプロバイダーとのやりとりで、迅速に配備できてリリースできる。" - -msgid "" -"A network authentication protocol which works on the basis of tickets. " -"Kerberos allows nodes communication over a non-secure network, and allows " -"nodes to prove their identity to one another in a secure manner." -msgstr "" -"チケットベースで機能するネットワーク認証プロトコル。 Kerberos により、安全で" -"ないネットワークを通したノード通信ができ、ノードは安全な方法で互いに本人確認" -"ができるようになります。" - -msgid "" -"A network protocol used by a network client to obtain an IP address from a " -"configuration server. 
Provided in Compute through the dnsmasq daemon when " -"using either the FlatDHCP manager or VLAN manager network manager." -msgstr "" -"管理サーバーから IP アドレスを取得するために、ネットワーククライアントにより" -"使用されるネットワークプロトコル。FlatDHCP マネージャーや VLAN マネージャー使" -"用時、dnsmasq デーモン経由で Compute で提供される。" - -msgid "A network segment typically used for instance Internet access." -msgstr "" -"一般的にインスタンスのインターネットアクセスに使用されるネットワークセグメン" -"ト。" - -msgid "" -"A network segment used for administration, not accessible to the public " -"Internet." -msgstr "" -"管理のために使用されるネットワークセグメント。パブリックなインターネットから" -"アクセスできない。" - -msgid "" -"A network segment used for instance traffic tunnels between compute nodes " -"and the network node." -msgstr "" -"コンピュートノードとネットワークノード間で、インスタンスのトラフィックをトン" -"ネルするために使用されるネットワークセグメント。" - -msgid "" -"A network virtualization technology that attempts to reduce the scalability " -"problems associated with large cloud computing deployments. It uses a VLAN-" -"like encapsulation technique to encapsulate Ethernet frames within UDP " -"packets." -msgstr "" -"大規模なクラウドコンピューティング環境に関連するスケーラビリティー問題を削減" -"するためのネットワーク仮想化技術。VLAN のようなカプセル化技術を使用して、" -"Ethernet フレームを UDP パケット内にカプセル化する。" - -msgid "A node that provides the Object Storage proxy service." -msgstr "Object Storage プロキシサービスを提供するノード。" - -msgid "" -"A node that runs Object Storage account, container, and object services." -msgstr "" -"Object Storage のアカウントサービス、コンテナーサービス、オブジェクトサービス" -"を実行するノード。" - -msgid "" -"A node that runs network, volume, API, scheduler, and image services. Each " -"service may be broken out into separate nodes for scalability or " -"availability." -msgstr "" -"ネットワーク、ボリューム、API、スケジューラー、イメージサービスなどを実行する" -"ノード。各サービスは、スケーラビリティーや可用性のために、別々のノードに分割" -"することもできます。" - -msgid "" -"A node that runs the nova-compute " -"daemon that manages VM instances that provide a wide range of services, such as web applications and " -"analytics." -msgstr "" -"nova-compute デーモンを実行する" -"ノード。Web アプリケーションや分析などの幅広いサービスを提供する。" - -msgid "" -"A notification driver that monitors VM instances and updates the capacity " -"cache as needed." -msgstr "" -"VM インスタンスを監視し、必要に応じて容量キャッシュを更新する通知ドライバ。" - -msgid "" -"A number within a database that is incremented each time a change is made. " -"Used by Object Storage when replicating." -msgstr "" -"変更が行われる度に増加するデータベース内の数値。Object Storage が複製を行う際" -"に使用する。" - -msgid "" -"A package commonly installed in VM images that performs initialization of an " -"instance after boot using information that it retrieves from the metadata " -"service, such as the SSH public key and user data." -msgstr "" -"メタデータサービスから取得した、SSH 公開鍵やユーザーデータなどの情報を使用し" -"て、インスタンスの起動後に初期化を実行する、一般的に仮想マシンイメージにイン" -"ストールされるパッケージ。" - -msgid "A persistent storage method supported by XenAPI, such as iSCSI or NFS." -msgstr "iSCSI や NFS など、XenAPI によりサポートされる永続ストレージ方式。" - -msgid "A person who plans, designs, and oversees the creation of clouds." -msgstr "クラウドの作成を計画、設計および監督する人。" - -msgid "" -"A personality that a user assumes to perform a specific set of operations. A " -"role includes a set of rights and privileges. A user assuming that role " -"inherits those rights and privileges." -msgstr "" -"ユーザーが特定の操作の組を実行すると仮定する人格。ロールは一組の権利と権限を" -"含みます。そのロールを仮定しているユーザーは、それらの権利と権限を継承しま" -"す。" - -msgid "A physical computer, not a VM instance (node)." -msgstr "物理コンピューター。仮想マシンインスタンス (ノード) ではない。" - -msgid "" -"A physical or virtual device that provides connectivity to another device or " -"medium." 
-msgstr "他のデバイスやメディアに接続する物理デバイスまたは仮想デバイス。" - -msgid "" -"A physical or virtual network device that passes network traffic between " -"different networks." -msgstr "" -"異なるネットワーク間でネットワーク通信を転送する、物理または仮想のネットワー" -"クデバイス。" - -msgid "" -"A piece of software that makes available another piece of software over a " -"network." -msgstr "" -"他のソフトウェア部品をネットワーク経由で利用可能にするソフトウェア部品。" - -msgid "" -"A platform that provides a suite of desktop environments that users access " -"to receive a desktop experience from any location. This may provide general " -"use, development, or even homogeneous testing environments." -msgstr "" -"デスクトップ環境群を提供するプラットフォーム。ユーザーがどこからでもデスク" -"トップを利用するためにアクセスする可能性がある。一般的な使用、開発、同種のテ" -"スト環境さえも提供できる。" - -msgid "A plug-in for the OpenStack dashboard (horizon)." -msgstr "OpenStack dashboard (horizon) のプラグイン。" - -msgid "" -"A point-in-time copy of an OpenStack storage volume or image. Use storage " -"volume snapshots to back up volumes. Use image snapshots to back up data, or " -"as \"gold\" images for additional servers." -msgstr "" -"OpenStack ストレージボリュームやイメージの、ある時点でのコピー。ストレージの" -"ボリュームスナップショットは、ボリュームをバックアップするために使用する。イ" -"メージスナップショットは、データのバックアップを行ったり、新しいサーバー用の" -"「ゴールド」イメージ(設定済みイメージ)としてバックアップしたりするのに使用" -"する。" - -msgid "" -"A pre-made VM image that serves as a cloudpipe server. Essentially, OpenVPN " -"running on Linux." -msgstr "" -"cloudpipe サーバとしてサービスを行う為の、予め用意された VM イメージ。本質的" -"には Linux 上で実行される OpenVPN。" - -msgid "" -"A process that is created when a RPC call is executed; used to push the " -"message to the topic exchange." -msgstr "" -"RPC コールが実行されるときに作成されるプロセス。メッセージをトピック交換者に" -"プッシュするために使用される。" - -msgid "" -"A process that runs in the background and waits for requests. May or may not " -"listen on a TCP or UDP port. Do not confuse with a worker." -msgstr "" -"バックグラウンドで動作し、リクエストを待機するプロセス。TCP ポートや UDP ポー" -"トをリッスンする可能性がある。ワーカーとは異なる。" - -msgid "" -"A program that keeps the Image service VM image cache at or below its " -"configured maximum size." -msgstr "" -"Image service の仮想マシンイメージキャッシュを設定した最大値以下に保つプログ" -"ラム。" - -msgid "" -"A programming language that is used to create systems that involve more than " -"one computer by way of a network." -msgstr "" -"ネットワーク経由で複数のコンピューターが関連するシステムを作成するために使用" -"されるプログラミング言語。" - -msgid "" -"A project that is not officially endorsed by the OpenStack Foundation. If " -"the project is successful enough, it might be elevated to an incubated " -"project and then to a core project, or it might be merged with the main code " -"trunk." -msgstr "" -"OpenStack Foundation で公認されていないプロジェクト。プロジェクトが充分成功し" -"た場合、育成プロジェクトに昇格し、その後コアプロジェクトに昇格する事がある。" -"あるいはメインの code trunk にマージされる事もある。" - -msgid "" -"A project that ports the shell script-based project named DevStack to Python." -msgstr "" -"DevStack という名前のシェルスクリプトベースのプロジェクトを Python に移植する" -"プロジェクト。" - -msgid "A recommended architecture for an OpenStack cloud." -msgstr "OpenStack クラウドの推奨アーキテクチャー。" - -msgid "" -"A record that specifies information about a particular domain and belongs to " -"the domain." -msgstr "特定のドメインに関する情報を指定し、ドメインに所属するレコード。" - -msgid "" -"A remote, mountable file system in the context of the Shared File Systems. " -"You can mount a share to, and access a share from, several hosts by several " -"users at a time." -msgstr "" -"Shared File System サービスにおいて、リモートのマウント可能なファイルシステム" -"のこと。同時に、複数のユーザーが複数のホストから、共有をマウントしたり、アク" -"セスしたりできる。" - -msgid "A routing algorithm in the Compute RabbitMQ." 
-msgstr "Compute RabbitMQ におけるルーティングアルゴリズム。" - -msgid "" -"A routing table that is created within the Compute RabbitMQ during RPC " -"calls; one is created for each RPC call that is invoked." -msgstr "" -"RPC コール中に Compute RabbitMQ 内で作成されるルーティングテーブル。関連する" -"各 RPC コールに対して作成されるもの。" - -msgid "" -"A running VM, or a VM in a known state such as suspended, that can be used " -"like a hardware server." -msgstr "" -"実行中の仮想マシン。または、一時停止などの既知の状態にある仮想マシン。ハード" -"ウェアサーバーのように使用できる。" - -msgid "" -"A scheduling method used by Compute that randomly chooses an available host " -"from the pool." -msgstr "" -"利用可能なホストをプールからランダムに選択する、Compute により使用されるスケ" -"ジューリング方式。" - -msgid "A scripting language that is used to build web pages." -msgstr "Web ページを構築するために使用されるスクリプト言語。" - -msgid "" -"A security model that focuses on data confidentiality and controlled access " -"to classified information. This model divide the entities into subjects and " -"objects. The clearance of a subject is compared to the classification of the " -"object to determine if the subject is authorized for the specific access " -"mode. The clearance or classification scheme is expressed in terms of a " -"lattice." -msgstr "" -"データの機密性、および区分けした情報へのアクセスの制御に注力したセキュリ" -"ティーモデル。このモデルは、エンティティーをサブジェクト (主体) とオブジェク" -"ト (対象) に分ける。サブジェクトが特定のアクセスモードを許可されるかどうかを" -"判断するために、サブジェクトの権限がオブジェクトの区分と比較される。権限や区" -"分のスキーマは、格子モデルで表現される。" - -msgid "" -"A server is a VM instance in the Compute system. Flavor and image are " -"requisite elements when creating a server." -msgstr "" -"サーバーは、Compute システムにおける仮想マシンインスタンスである。フレーバー" -"とイメージが、サーバーの作成時に必須の要素である。" - -msgid "" -"A set of OpenStack resources created and managed by the Orchestration " -"service according to a given template (either an AWS CloudFormation template " -"or a Heat Orchestration Template (HOT))." -msgstr "" -"指定されたテンプレート (AWS CloudFormation テンプレートまたは Heat " -"Orchestration Template (HOT)) に基づいて、Orchestration により作成、管理され" -"る OpenStack リソース群。" - -msgid "" -"A set of network traffic filtering rules that are applied to a Compute " -"instance." -msgstr "" -"Compute のインスタンスに適用される、ネットワーク通信のフィルタリングルールの" -"集合。" - -msgid "" -"A set of segment objects that Object Storage combines and sends to the " -"client." -msgstr "" -"Object Storage が結合し、クライアントに送信する、オブジェクトの断片の塊。" - -msgid "" -"A simple certificate authority provided by Compute for cloudpipe VPNs and VM " -"image decryption." -msgstr "" -"cloudpipe VPN と仮想マシンイメージの復号のために、Compute により提供される簡" -"単な認証局。" - -msgid "" -"A special Object Storage object that contains the manifest for a large " -"object." -msgstr "" -"大きなオブジェクト向けのマニフェストを含む、特別な Object Storage のオブジェ" -"クト。" - -msgid "" -"A special type of VM image that is booted when an instance is placed into " -"rescue mode. Allows an administrator to mount the file systems for an " -"instance to correct the problem." -msgstr "" -"インスタンスがレスキューモード時に起動する、特別な種類の仮想マシンイメージ。" -"管理者が問題を修正するために、インスタンスのファイルシステムをマウントでき" -"る。" - -msgid "" -"A specification that, when implemented by a physical PCIe device, enables it " -"to appear as multiple separate PCIe devices. This enables multiple " -"virtualized guests to share direct access to the physical device, offering " -"improved performance over an equivalent virtual device. Currently supported " -"in OpenStack Havana and later releases." 
-msgstr "" -"物理 PCIe デバイスにより実装されるとき、複数の別々の PCIe デバイスとして見え" -"るようにできる仕様。これにより、複数の仮想化ゲストが物理デバイスへの直接アク" -"セスを共有できるようになる。同等の仮想デバイス経由より性能を改善できる。" -"OpenStack Havana 以降のリリースでサ" -"ポートされている。" - -msgid "" -"A standardized interface for managing compute, data, and network resources, " -"currently unsupported in OpenStack." -msgstr "" -"コンピュート、データ、ネットワークのリソースを管理するための標準的なインター" -"フェース。現在 OpenStack でサポートされない。" - -msgid "" -"A string of text provided to the client after authentication. Must be " -"provided by the user or process in subsequent requests to the API endpoint." -msgstr "" -"認証後にクライアントに提供されるテキスト文字列。API エンドポイントに続くリク" -"エストにおいて、ユーザーまたはプロセスにより提供される必要がある。" - -msgid "" -"A subset of API calls that are accessible to authorized administrators and " -"are generally not accessible to end users or the public Internet. They can " -"exist as a separate service (keystone) or can be a subset of another API " -"(nova)." -msgstr "" -"認可された管理者がアクセスでき、一般的にエンドユーザーとパブリックなインター" -"ネットがアクセスできない、API コールのサブセット。専用のサービス (keystone) " -"が存在し、他の API (nova) のサブセットになる可能性がある。" - -msgid "" -"A system by which Internet domain name-to-address and address-to-name " -"resolutions are determined." -msgstr "" -"インターネットのドメイン名からアドレス、アドレスからドメイン名に名前解決する" -"システム。" - -msgid "" -"A system that provides services to other system entities. In case of " -"federated identity, OpenStack Identity is the service provider." -msgstr "" -"サービスを他のシステムエンティティーに提供するシステム。連合認証の場合、" -"OpenStack Identity がサービスプロバイダーとなる。" - -msgid "" -"A tool to automate system configuration and installation on Debian-based " -"Linux distributions." -msgstr "" -"Debian 系の Linux ディストリビューションでシステム設定やインストールを自動化" -"するツール。" - -msgid "" -"A tool to automate system configuration and installation on Red Hat, Fedora, " -"and CentOS-based Linux distributions." -msgstr "" -"Red Hat、Fedora、CentOS 系の Linux ディストリビューションにおいて、システム設" -"定とインストールを自動化するためのツール。" - -msgid "A type of VM image that exists as a single, bootable file." -msgstr "単独の、ブート可能なファイルとして存在する仮想マシンイメージの形式。" - -msgid "" -"A type of image file that is commonly used for animated images on web pages." -msgstr "Web ページのアニメーション画像によく使用される画像ファイルの形式。" - -msgid "" -"A type of reboot where a physical or virtual power button is pressed as " -"opposed to a graceful, proper shutdown of the operating system." -msgstr "" -"きちんとした正常なOSのシャットダウンを行わず、物理又は仮想電源ボタンを押すタ" -"イプの再起動。" - -msgid "A unique ID given to each replica of an Object Storage database." -msgstr "Object Storage データベースの各レプリカに与えられる一意な ID。" - -msgid "" -"A unit of storage within Object Storage used to store objects. It exists on " -"top of devices and is replicated for fault tolerance." -msgstr "" -"オブジェクトを保存するために使用される、Object Storage 内の保存単位。デバイス" -"の上位に存在し、耐障害のために複製される。" - -msgid "" -"A user-created Python module that is loaded by horizon to change the look " -"and feel of the dashboard." -msgstr "" -"ダッシュボードのルックアンドフィールを変更する為に Horizon がロードする、ユー" -"ザが作成した Python モジュール。" - -msgid "" -"A virtual network port within Networking; VIFs / vNICs are connected to a " -"port." -msgstr "" -"Networking 内の仮想ネットワークポート。仮想インターフェースや仮想 NIC は、" -"ポートに接続されます。" - -msgid "" -"A virtual network that provides connectivity between entities. For example, " -"a collection of virtual ports that share network connectivity. In Networking " -"terminology, a network is always a layer-2 network." -msgstr "" -"エンティティ間の接続性を提供する仮想ネットワーク。例えば、ネットワーク接続性" -"を共有する仮想ポート群。Networking の用語では、ネットワークは必ず L2 ネット" -"ワークを意味する。" - -msgid "" -"A web framework used extensively in horizon." 
-msgstr "" -"horizon で広範囲に使用している Web フ" -"レームワーク。" - -msgid "" -"A worker process that verifies the integrity of Object Storage objects, " -"containers, and accounts. Auditors is the collective term for the Object " -"Storage account auditor, container auditor, and object auditor." -msgstr "" -"Object Storage のオブジェクト、コンテナー、アカウントの完全性を検証するワー" -"カープロセス。auditor は、Object Storage アカウント auditor、コンテナー " -"auditor、オブジェクト auditor の総称。" - -msgid "" -"A wrapper used by the Image service that contains a VM image and its " -"associated metadata, such as machine state, OS disk size, and so on." -msgstr "" -"仮想マシンイメージ、および、マシンの状態や OS ディスク容量などの関連メタデー" -"タを含む、Image サービスにより使用されるラッパー。" - -msgid "ACL" -msgstr "ACL" - -msgid "API (application programming interface)" -msgstr "API (application programming interface)" - -msgid "API endpoint" -msgstr "API エンドポイント" - -msgid "API extension" -msgstr "API 拡張" - -msgid "API extension plug-in" -msgstr "API 拡張プラグイン" - -msgid "API key" -msgstr "API キー" - -msgid "API server" -msgstr "API サーバー" - -msgid "API token" -msgstr "API トークン" - -msgid "" -"API used to access OpenStack Networking. Provides an extensible architecture " -"to enable custom plug-in creation." -msgstr "" -"OpenStack Networking にアクセスするために利用する API。独自プラグインを作成で" -"きる拡張性を持ったアーキテクチャーになっている。" - -msgid "API used to access OpenStack Object Storage." -msgstr "OpenStack Object Storage にアクセスするために使用する API。" - -msgid "API version" -msgstr "API バージョン" - -msgid "ATA over Ethernet (AoE)" -msgstr "ATA over Ethernet (AoE)" - -msgid "AWS" -msgstr "AWS" - -msgid "AWS (Amazon Web Services)" -msgstr "AWS (Amazon Web Services)" - -msgid "" -"AWS CloudFormation allows AWS users to create and manage a collection of " -"related resources. The Orchestration service supports a CloudFormation-" -"compatible format (CFN)." -msgstr "" -"AWS CloudFormation により、AWS ユーザーは関連するリソース群を作成し、管理でき" -"るようになる。オーケストレーションサービスは CloudFormation 互換形式 (CFN) を" -"サポートする。" - -msgid "AWS CloudFormation template" -msgstr "AWS CloudFormation テンプレート" - -msgid "" -"Absolute limit on the amount of network traffic a Compute VM instance can " -"send and receive." -msgstr "" -"Compute の仮想マシンインスタンスが送受信できるネットワーク通信量の絶対制限。" - -msgid "Active Directory" -msgstr "Active Directory" - -msgid "" -"Acts as the gatekeeper to Object Storage and is responsible for " -"authenticating the user." -msgstr "Object Storage へのゲートとして動作する。ユーザーの認証に責任を持つ。" - -msgid "Address Resolution Protocol (ARP)" -msgstr "Address Resolution Protocol (ARP)" - -msgid "Advanced Message Queuing Protocol (AMQP)" -msgstr "Advanced Message Queuing Protocol (AMQP)" - -msgid "Advanced RISC Machine (ARM)" -msgstr "Advanced RISC Machine (ARM)" - -msgid "" -"All OpenStack core projects are provided under the terms of the Apache " -"License 2.0 license." -msgstr "" -"すべての OpenStack コアプロジェクトは Apache License 2.0 ライセンスの条件で提" -"供されている。" - -msgid "" -"All domains and their components, such as mail servers, utilize DNS to " -"resolve to the appropriate locations. DNS servers are usually set up in a " -"master-slave relationship such that failure of the master invokes the slave. " -"DNS servers might also be clustered or replicated such that changes made to " -"one DNS server are automatically propagated to other active servers." 
-msgstr "" -"すべてのドメイン、メールサーバーなどのコンポーネントは、DNS を利用して、適切" -"な場所を解決する。DNS サーバーは、マスターの障害がスレーブにより助けられるよ" -"う、一般的にマスターとスレーブの関係で構築する。DNS サーバーは、ある DNS サー" -"バーへの変更が他の動作中のサーバーに自動的に反映されるよう、クラスター化やレ" -"プリケーションされることもある。" - -msgid "" -"Allows a user to set a flag on an Object Storage container so that all " -"objects within the container are versioned." -msgstr "" -"コンテナー内のすべてのオブジェクトがバージョンを付けられるように、ユーザーが " -"Object Storage のコンテナーにフラグを設定できる。" - -msgid "Alphanumeric ID assigned to each Identity service role." -msgstr "各 Identity service ロールに割り当てられる英数 ID。" - -msgid "" -"Also, a domain is an entity or container of all DNS-related information " -"containing one or more records." -msgstr "" -"ドメインは、1 つ以上のレコードを含む、すべて DNS 関連の情報のエンティティーや" -"コンテナーである。" - -msgid "Alternative name for the Block Storage API." -msgstr "Block Storage API の別名。" - -msgid "Alternative name for the glance image API." -msgstr "Glance イメージ API の別名。" - -msgid "Alternative term for a Networking plug-in or Networking API extension." -msgstr "Networking プラグインや Networking API 拡張の別名。" - -msgid "Alternative term for a RabbitMQ message exchange." -msgstr "RabbitMQ メッセージ交換の別名。" - -msgid "Alternative term for a VM image." -msgstr "VM イメージの別名。" - -msgid "Alternative term for a VM instance type." -msgstr "VM インスタンスタイプの別名。" - -msgid "Alternative term for a VM or guest." -msgstr "仮想マシンやゲストの別名。" - -msgid "Alternative term for a cloud controller node." -msgstr "クラウドコントローラーノードの別名。" - -msgid "Alternative term for a cloudpipe." -msgstr "cloudpipe の別名。" - -msgid "Alternative term for a fixed IP address." -msgstr "Fixed IP アドレスの別名。" - -msgid "Alternative term for a flavor ID." -msgstr "フレーバー ID の別名。" - -msgid "" -"Alternative term for a non-durable exchange." -msgstr "非永続交換の別名。" - -msgid "Alternative term for a non-durable queue." -msgstr "非永続キューの別名。" - -msgid "" -"Alternative term for a paused VM instance." -msgstr "" -"一時停止された仮想マシンインスタンス" -"の別名。" - -msgid "Alternative term for a virtual network." -msgstr "仮想ネットワークの別名。" - -msgid "Alternative term for a volume plug-in." -msgstr "ボリュームプラグインの別名。" - -msgid "" -"Alternative term for an API extension or plug-in. In the context of Identity " -"service, this is a call that is specific to the implementation, such as " -"adding support for OpenID." -msgstr "" -"API 拡張やプラグインの別名。Identity service では、OpenID のサポートの追加な" -"ど、特定の実装を意味する。" - -msgid "Alternative term for an API token." -msgstr "API トークンの別名。" - -msgid "Alternative term for an Amazon EC2 access key. See EC2 access key." -msgstr "Amazon EC2 アクセスキーの別名。EC2 アクセスキー参照。" - -msgid "Alternative term for an Identity service catalog." -msgstr "Identity サービスカタログの別名。" - -msgid "Alternative term for an Identity service default token." -msgstr "Identity service デフォルトトークンの別名。" - -msgid "Alternative term for an Object Storage authorization node." -msgstr "Object Storage 認可ノードの別名。" - -msgid "Alternative term for an admin API." -msgstr "管理 API(admin API)の別名。" - -msgid "Alternative term for an ephemeral volume." -msgstr "エフェメラルボリュームの別名。" - -msgid "Alternative term for an image." -msgstr "イメージの別名。" - -msgid "Alternative term for instance UUID." -msgstr "インスタンス UUID の別名。" - -msgid "Alternative term for non-durable." -msgstr "非永続の別名。" - -msgid "Alternative term for tenant." -msgstr "テナントの別名。" - -msgid "Alternative term for the Compute API." -msgstr "Compute API の別名。" - -msgid "Alternative term for the Identity service API." -msgstr "Identity service API の別名。" - -msgid "Alternative term for the Identity service catalog." 
-msgstr "Identity サービスカタログの別名。" - -msgid "Alternative term for the Image service image registry." -msgstr "Image service イメージレジストリの別名。" - -msgid "Alternative term for the Image service registry." -msgstr "Image service レジストリの別名。" - -msgid "Amazon Kernel Image (AKI)" -msgstr "Amazon Kernel Image (AKI)" - -msgid "Amazon Machine Image (AMI)" -msgstr "Amazon Machine Image (AMI)" - -msgid "Amazon Ramdisk Image (ARI)" -msgstr "Amazon Ramdisk Image (ARI)" - -msgid "Amazon Web Services." -msgstr "Amazon Web Services。" - -msgid "" -"An API endpoint used for both service-to-service communication and end-user " -"interactions." -msgstr "" -"サービス間通信やエンドユーザーの操作などに使用される API エンドポイント。" - -msgid "" -"An API on a separate endpoint for attaching, detaching, and creating block " -"storage for compute VMs." -msgstr "" -"コンピュート VM 用のブロックストレージの作成、接続、接続解除を行うための API " -"で、独立したエンドポイントとして提供される。" - -msgid "An API that is accessible to tenants." -msgstr "テナントにアクセス可能な API。" - -msgid "" -"An Amazon EBS storage volume that contains a bootable VM image, currently " -"unsupported in OpenStack." -msgstr "" -"ブート可能な仮想マシンイメージを含む Amazon EBS ストレージボリューム。現在 " -"OpenStack では未サポート。" - -msgid "" -"An Amazon EC2 concept of an isolated area that is used for fault tolerance. " -"Do not confuse with an OpenStack Compute zone or cell." -msgstr "" -"耐障害性のために使用されるエリアを分離する Amazon EC2 の概念。OpenStack " -"Compute のゾーンやセルと混同しないこと。" - -msgid "" -"An IP address that a project can associate with a VM so that the instance " -"has the same public IP address each time that it boots. You create a pool of " -"floating IP addresses and assign them to instances as they are launched to " -"maintain a consistent IP address for maintaining DNS assignment." -msgstr "" -"インスタンスを起動するたびに同じパブリック IP アドレスを持てるように、プロ" -"ジェクトが仮想マシンに関連付けられる IP アドレス。DNS 割り当てを維持するため" -"に、Floating IP アドレスのプールを作成し、インスタンスが起動するたびにそれら" -"をインスタンスに割り当て、一貫した IP アドレスを維持します。" - -msgid "" -"An IP address that can be assigned to a VM instance within the shared IP " -"group. Public IP addresses can be shared across multiple servers for use in " -"various high-availability scenarios. When an IP address is shared to another " -"server, the cloud network restrictions are modified to enable each server to " -"listen to and respond on that IP address. You can optionally specify that " -"the target server network configuration be modified. Shared IP addresses can " -"be used with many standard heartbeat facilities, such as keepalive, that " -"monitor for failure and manage IP failover." -msgstr "" -"共有 IP グループ内の仮想マシンインスタンスに割り当てられる IP アドレス。パブ" -"リック IP アドレスは、さまざまな高可用性のシナリオで使用するために複数サー" -"バーにまたがり共有できる。IP アドレスが別のサーバーと共有されるとき、クラウド" -"のネットワーク制限が変更され、各サーバーがリッスンでき、その IP アドレスに応" -"答できるようになる。オプションとして、対象サーバーの変更するネットワーク設定" -"を指定できる。共有 IP アドレスは、keepalive などの多くの標準的なハートビート" -"機能と一緒に使用でき、エラーをモニターし、IP のフェイルオーバーを管理しる。" - -msgid "An IP address that is accessible to end-users." -msgstr "エンドユーザがアクセス可能な IP アドレス。" - -msgid "" -"An IP address that is associated with the same instance each time that " -"instance boots, is generally not accessible to end users or the public " -"Internet, and is used for management of the instance." -msgstr "" -"インスタンス起動時に毎回同じインスタンスに割当られるIPアドレス(一般に、エン" -"ドユーザやパブリックインターネットからはアクセス出来ない)。インスタンスの管" -"理に使用される。" - -msgid "" -"An IP address used for management and administration, not available to the " -"public Internet." -msgstr "" -"管理のために使用される IP アドレス。パブリックなインターネットから利用できま" -"せん。" - -msgid "" -"An IP address, typically assigned to a router, that passes network traffic " -"between different networks." 
-msgstr "" -"異なるネットワーク間でネットワーク通信を中継する、IP アドレス。一般的にはルー" -"ターに割り当てられる。" - -msgid "" -"An Identity API v3 entity. Represents a collection of projects, groups and " -"users that defines administrative boundaries for managing OpenStack Identity " -"entities." -msgstr "" -"Identity v3 API のエンティティーで、プロジェクト、グループ、ユーザーの集合" -"で、OpenStack Identity のエンティティーを管理する管理権限の範囲を規定するもの" -"である。" - -msgid "" -"An Identity service API access token that is associated with a specific " -"tenant." -msgstr "特定のテナントに関連付けられた Identity service API アクセストークン。" - -msgid "" -"An Identity service API endpoint that is associated with one or more tenants." -msgstr "" -"1 つ以上のテナントと関連付けられた Identity service API エンドポイント。" - -msgid "" -"An Identity service component that manages and validates tokens after a user " -"or tenant has been authenticated." -msgstr "" -"ユーザーやテナントが認証された後、トークンを管理し、検証する Identity のコン" -"ポーネント。" - -msgid "" -"An Identity service feature that enables services, such as Compute, to " -"automatically register with the catalog." -msgstr "" -"自動的にカタログに登録するために、Compute などのサービスを有効化する、" -"Identity の機能。" - -msgid "" -"An Identity service that lists API endpoints that are available to a user " -"after authentication with the Identity service." -msgstr "" -"ユーザーが Identity で認証後、利用可能な API エンドポイントを一覧表示する、" -"Identity のサービス。" - -msgid "" -"An Identity service token that is not associated with a specific tenant and " -"is exchanged for a scoped token." -msgstr "" -"特定のテナントに関連づけられていない、スコープ付きトークンのために交換され" -"る、Identity のトークン。" - -msgid "" -"An Identity v3 API entity. Represents a collection of users that is owned by " -"a specific domain." -msgstr "" -"Identity v3 API のエンティティーで、特定のドメイン内のユーザーの集合を表す。" - -msgid "An Image service VM image that is available to all tenants." -msgstr "すべてのテナントが利用できる Image service の仮想マシンイメージ。" - -msgid "An Image service VM image that is only available to specified tenants." -msgstr "指定したテナントのみで利用可能な Image service の仮想マシンイメージ。" - -msgid "" -"An Image service container format that indicates that no container exists " -"for the VM image." -msgstr "" -"仮想マシンイメージ用のコンテナーが存在しないことを意味する、Image service の" -"コンテナー形式。" - -msgid "" -"An Image service that provides VM image metadata information to clients." -msgstr "" -"クライアントに仮想マシンイメージメタデータ情報を提供する Image service。" - -msgid "" -"An Internet Protocol (IP) address configured on the load balancer for use by " -"clients connecting to a service that is load balanced. Incoming connections " -"are distributed to back-end nodes based on the configuration of the load " -"balancer." -msgstr "" -"負荷分散するサービスへのクライアント接続に使用される負荷分散装置において設定" -"される IP アドレス。受信の接続が、負荷分散の設定に基づいて、バックエンドの" -"ノードに分散される。" - -msgid "An L2 network segment within Networking." -msgstr "Networking 内の L2 ネットワークセグメント。" - -msgid "An Object Storage component that collects meters." -msgstr "計測項目を収集する Object Storage のコンポーネント。" - -msgid "" -"An Object Storage component that copies an object to remote partitions for " -"fault tolerance." -msgstr "" -"耐障害性のためにオブジェクトをリモートパーティションをコピーする Object " -"Storage コンポーネント。" - -msgid "" -"An Object Storage component that copies changes in the account, container, " -"and object databases to other nodes." -msgstr "" -"アカウント、コンテナー、オブジェクトデータベースを他のノードに変更点をコピー" -"する Object Storage コンポーネント。" - -msgid "An Object Storage component that is responsible for managing objects." -msgstr "オブジェクトの管理に責任を持つ Object Storage のコンポーネント。" - -msgid "" -"An Object Storage component that provides account services such as list, " -"create, modify, and audit. 
Do not confuse with OpenStack Identity service, " -"OpenLDAP, or similar user-account services." -msgstr "" -"一覧表示、作成、変更、監査などのアカウントサービスを提供する、Object Storage " -"のコンポーネント。OpenStack Identity、OpenLDAP、類似のユーザーアカウントサー" -"ビスなどと混同しないこと。" - -msgid "" -"An Object Storage large object that has been broken up into pieces. The re-" -"assembled object is called a concatenated object." -msgstr "" -"部品に分割された Object Storage の大きなオブジェクト。再構築されたオブジェク" -"トは、連結オブジェクトと呼ばれる。" - -msgid "" -"An Object Storage middleware component that enables creation of URLs for " -"temporary object access." -msgstr "" -"一時的なオブジェクトアクセスのために URL を作成できる Object Storage ミドル" -"ウェアコンポーネント。" - -msgid "An Object Storage node that provides authorization services." -msgstr "認可サービスを提供する Object Storage ノード。" - -msgid "" -"An Object Storage node that provides container services, account services, " -"and object services; controls the account databases, container databases, " -"and object storage." -msgstr "" -"コンテナーサービス、アカウントサービス、オブジェクトサービスを提供する " -"Object Storage のノード。アカウントデータベース、コンテナーデータベース、オブ" -"ジェクトデータベースを制御する。" - -msgid "An Object Storage server that manages containers." -msgstr "コンテナーを管理する Object Storage サーバー。" - -msgid "" -"An Object Storage worker that scans for and deletes account databases and " -"that the account server has marked for deletion." -msgstr "" -"アカウントサーバーが削除する印を付けた、アカウントデータベースをスキャンし、" -"削除する、Object Storage のワーカー。" - -msgid "" -"An OpenStack core project that provides discovery, registration, and " -"delivery services for disk and server images. The project name of the Image " -"service is glance." -msgstr "" -"ディスクやサーバーイメージ向けのサービスの検索、登録、配信を提供する " -"OpenStack コアプロジェクト。Image service のプロジェクト名は glance。" - -msgid "An OpenStack core project that provides object storage services." -msgstr "オブジェクトストレージサービスを提供する OpenStack コアプロジェクト。" - -msgid "" -"An OpenStack grouped release of projects that came out in the spring of " -"2011. It included Compute (nova), Object Storage (swift), and the Image " -"service (glance)." -msgstr "" -"2011年春に登場した OpenStack 関連のプロジェクトリリース。Compute (Nova), " -"Object Storage (Swift), Image Service (Glance) が含まれていた。" - -msgid "" -"An OpenStack service that provides a set of services for management of " -"shared file systems in a multi-tenant cloud environment. The service is " -"similar to how OpenStack provides block-based storage management through the " -"OpenStack Block Storage service project. With the Shared File Systems " -"service, you can create a remote file system and mount the file system on " -"your instances. You can also read and write data from your instances to and " -"from your file system. The project name of the Shared File Systems service " -"is manila." -msgstr "" -"マルチテナントのクラウド環境で共有ファイルシステムを管理するためのサービス群" -"を提供する OpenStack サービス。 OpenStack がブロックベースのストレージ管理" -"を、 OpenStack Block Storage サービスプロジェクトとして提供しているのと類似し" -"ている。 Shared File Systems サービスを使うと、リモートファイルシステムを作成" -"し、自分のインスタンスからそのファイルシステムをマウントし、インスタンスから" -"そのファイルシステムの読み書きを行える。このプロジェクトのコード名は manila。" - -msgid "" -"An OpenStack service, such as Compute, Object Storage, or Image service. " -"Provides one or more endpoints through which users can access resources and " -"perform operations." -msgstr "" -"Compute、Object Storage、Image service などの OpenStack のサービス。ユーザー" -"がリソースにアクセスしたり、操作を実行したりできる 1 つ以上のエンドポイントを" -"提供する。" - -msgid "An OpenStack-provided image." -msgstr "OpenStack が提供するイメージ。" - -msgid "An OpenStack-supported hypervisor." -msgstr "OpenStack がサポートするハイパーバイザーの1つ。" - -msgid "" -"An OpenStack-supported hypervisor. 
KVM is a full virtualization solution for " -"Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-" -"V), ARM, IBM Power, and IBM zSeries. It consists of a loadable kernel " -"module that provides the core virtualization infrastructure, and a " -"processor-specific module." -msgstr "" -"OpenStack がサポートするハイパーバイザー。KVM は、仮想化拡張 (Intel VT や " -"AMD-V) を持つ x86 ハードウェア、ARM、IBM Power、IBM zSeries 上の Linux 向けの" -"完全仮想化ソリューション。コア仮想化基盤を提供するローダブルカーネルモジュー" -"ルと、プロセッサー固有のモジュールから構成される。" - -msgid "An administrator who has access to all hosts and instances." -msgstr "すべてのホストやインスタンスへアクセス権を持つ管理者。" - -msgid "" -"An administrator-defined token used by Compute to communicate securely with " -"the Identity service." -msgstr "" -"Identity と安全に通信するために Compute により使用される、管理者により定義さ" -"れたトークン。" - -msgid "" -"An alpha-numeric string of text used to access OpenStack APIs and resources." -msgstr "OpenStack API やリソースへのアクセスに使用される英数字文字列。" - -msgid "An alternative name for Networking API." -msgstr "Networking API の別名。" - -msgid "" -"An application protocol for accessing and maintaining distributed directory " -"information services over an IP network." -msgstr "" -"IP ネットワーク上の分散ディレクトリー情報サービスへのアクセスと管理を行うため" -"のアプリケーションプロトコル。" - -msgid "" -"An application protocol for distributed, collaborative, hypermedia " -"information systems. It is the foundation of data communication for the " -"World Wide Web. Hypertext is structured text that uses logical links " -"(hyperlinks) between nodes containing text. HTTP is the protocol to exchange " -"or transfer hypertext." -msgstr "" -"分散、協調、ハイパーメディア情報システム用のアプリケーションプロトコル。WWW " -"のデータ通信の基盤。ハイパーテキストは、ノード間でのテキストを含む論理リンク " -"(ハイパーリンク) を使った構造化テキストのことである。HTTP は、ハイパーテキス" -"トを交換したり転送したりするためのプロトコル。" - -msgid "" -"An application that runs on the back-end server in a load-balancing system." -msgstr "負荷分散システムでバックエンドサーバーで動作するアプリケーション。" - -msgid "" -"An authentication and authorization service for Object Storage, implemented " -"through WSGI middleware; uses Object Storage itself as the persistent " -"backing store." -msgstr "" -"Object Storage の認証と認可のサービス。WSGI ミドルウェア経由で実装される。" -"バックエンドの永続的なデータストアとして、Object Storage 自身を使用する。" - -msgid "" -"An authentication facility within Object Storage that enables Object Storage " -"itself to perform authentication and authorization. Frequently used in " -"testing and development." -msgstr "" -"Object Storage 自身が認証と認可を実行できるようになる、Object Storage 内の認" -"証機能。テストや開発によく使用される。" - -msgid "" -"An easy method to create a local LDAP directory for testing Identity and " -"Compute. Requires Redis." -msgstr "" -"Identity と Compute のテスト目的でローカルな LDAP ディレクトリーを作成するた" -"めの簡易な方法。Redis が必要。" - -msgid "" -"An element of the Compute RabbitMQ that comes to life when an RPC call is " -"executed. It connects to a direct exchange through a unique exclusive queue, " -"sends the message, and terminates." -msgstr "" -"RPC コールが実行されるとき、開始される Compute RabbitMQ の要素。一意な排他" -"キュー経由で直接交換者に接続し、メッセージを送信し、終了します。" - -msgid "" -"An element of the Compute capacity cache that is calculated based on the " -"number of build, snapshot, migrate, and resize operations currently in " -"progress on a given host." -msgstr "" -"指定されたホスト上で現在進行中の build, snapshot, migrate, resize の操作数を" -"元に計算される、Compute のキャパシティキャッシュの1要素。" - -msgid "" -"An encrypted communications protocol for secure communication over a " -"computer network, with especially wide deployment on the Internet. 
" -"Technically, it is not a protocol in and of itself; rather, it is the result " -"of simply layering the Hypertext Transfer Protocol (HTTP) on top of the TLS " -"or SSL protocol, thus adding the security capabilities of TLS or SSL to " -"standard HTTP communications. most OpenStack API endpoints and many inter-" -"component communications support HTTPS communication." -msgstr "" -"コンピューターネットワークで、とくにインターネットで広く使われている、安全に" -"通信を行うための暗号化通信プロトコル。技術的には、プロトコルではなく、むしろ" -"シンプルに SSL/TLS プロトコルの上に Hypertext Transfer Protocol (HTTP) を重ね" -"ているものである。そのため、SSL や TLS プロトコルのセキュリティー機能を標準的" -"な HTTP 通信に追加したものである。ほとんどの OpenStack API エンドポイントや多" -"くのコンポーネント間通信で、 HTTPS 通信がサポートされている。" - -msgid "" -"An entity in the context of the Shared File Systems that encapsulates " -"interaction with the Networking service. If the driver you selected runs in " -"the mode requiring such kind of interaction, you need to specify the share " -"network to create a share." -msgstr "" -"Shared File System サービスにおいて、Networking サービスとのやり取りを抽象化" -"するエンティティー。選択したドライバーが Networking サービスとのやり取りを必" -"要とするモードで動作している場合、共有を作成する際に共有用ネットワーク " -"(share network) を指定する必要がある。" - -msgid "" -"An entity that maps Object Storage data to partitions. A separate ring " -"exists for each service, such as account, object, and container." -msgstr "" -"Object Storage データのパーティションへのマッピングを行う。アカウント、オブ" -"ジェクト、コンテナーというサービス単位に別々のリングが存在する。" - -msgid "An iSCSI authentication method supported by Compute." -msgstr "Compute によりサポートされる iSCSI の認証方式。" - -msgid "" -"An in-progress specification for cloud management. Currently unsupported in " -"OpenStack." -msgstr "策定中のクラウド管理の仕様。現在、OpenStack では未サポート。" - -msgid "" -"An integrated project that aims to orchestrate multiple cloud applications " -"for OpenStack." -msgstr "" -"OpenStack に複数のクラウドアプリケーションをオーケストレーションする為に開発" -"されたプロジェクト。" - -msgid "" -"An integrated project that orchestrates multiple cloud applications for " -"OpenStack. The project name of Orchestration is heat." -msgstr "" -"OpenStack 向けに複数のクラウドアプリケーションをオーケストレーションする統合" -"プロジェクト。Orchestration のプロジェクト名は heat。" - -msgid "" -"An integrated project that provide scalable and reliable Cloud Database-as-a-" -"Service functionality for both relational and non-relational database " -"engines. The project name of Database service is trove." -msgstr "" -"リレーショナルデータベースと非リレーショナルデータベースの両エンジンに対し" -"て、スケール可能かつ信頼できるクラウド Database-as-a-Service を提供する統合プ" -"ロジェクト。この Database service の名前は trove。" - -msgid "" -"An integrated project that provides metering and measuring facilities for " -"OpenStack. The project name of Telemetry is ceilometer." -msgstr "" -"OpenStack にメータリングと計測の機能を提供する、統合プロジェクト。Telemetry " -"のプロジェクト名は ceilometer。" - -msgid "" -"An interface that is plugged into a port in a Networking network. Typically " -"a virtual network interface belonging to a VM." -msgstr "" -"Networking のネットワークにおけるポートに差し込まれるインターフェース。一般的" -"に、仮想マシンに設定された仮想ネットワークインターフェース。" - -msgid "" -"An object state in Object Storage where a new replica of the object is " -"automatically created due to a drive failure." -msgstr "" -"ドライブ故障により、オブジェクトの新しい複製が自動的に作成された、Object " -"Storage のオブジェクトの状態。" - -msgid "An object within Object Storage that is larger than 5GB." -msgstr "5GB より大きい Object Storage 内のオブジェクト。" - -msgid "An open source LDAP server. Supported by both Compute and Identity." -msgstr "" -"オープンソース LDAP サーバー。Compute と Identity によりサポートされる。" - -msgid "An open source SQL toolkit for Python, used in OpenStack." 
-msgstr "" -"OpenStack で使われている、オープンソースの Python 用 SQL ツールキット。" - -msgid "" -"An open source community project by Dell that aims to provide all necessary " -"services to quickly deploy clouds." -msgstr "" -"クラウドの迅速なデプロイ用に全ての必要なサービスを提供する用途の、Dell による" -"オープンソースコミュニティプロジェクト。" - -msgid "" -"An operating system configuration management tool supporting OpenStack " -"deployments." -msgstr "" -"OpenStack の導入をサポートするオペレーティングシステムの設定管理ツール。" - -msgid "" -"An operating system configuration-management tool supported by OpenStack." -msgstr "OpenStackがサポートするオペレーティングシステム構成管理ツール。" - -msgid "An operating system instance running under the control of a hypervisor." -msgstr "" -"ハイパーバイザーの管理下で実行しているオペレーティングシステムのインスタン" -"ス。" - -msgid "" -"An operating system instance that runs on top of a hypervisor. Multiple VMs " -"can run at the same time on the same physical host." -msgstr "" -"ハイパーバイザー上で動作するオペレーティングシステムインスタンス。一台の物理" -"ホストで同時に複数の VM を実行できる。" - -msgid "" -"An option within Compute that enables administrators to create and manage " -"users through the nova-manage command as opposed to using " -"the Identity service." -msgstr "" -"管理者が、Identity を使用する代わりに、nova-manage コマン" -"ド経由でユーザーを作成および管理できる、Compute 内のオプション。" - -msgid "" -"An option within Image service so that an image is deleted after a " -"predefined number of seconds instead of immediately." -msgstr "" -"イメージをすぐに削除する代わりに、事前定義した秒数経過後に削除するための、" -"Image service 内のオプション。" - -msgid "Anvil" -msgstr "Anvil" - -msgid "" -"Any business that provides Internet access to individuals or businesses." -msgstr "個人や組織にインターネットアクセスを提供する何らかのビジネス。" - -msgid "" -"Any client software that enables a computer or device to access the Internet." -msgstr "" -"コンピューターやデバイスがインターネットにアクセスできる、何らかのクライアン" -"トソフトウェア。" - -msgid "Any compute node that runs the network worker daemon." -msgstr "ネットワークワーカーデーモンを実行するコンピュートノードすべて。" - -msgid "" -"Any kind of text that contains a link to some other site, commonly found in " -"documents where clicking on a word or words opens up a different website." -msgstr "" -"どこか別のサイトへのリンクを含む、ある種のテキスト。一般的に、別の Web サイト" -"を開く言葉をクリックするドキュメントに見られる。" - -msgid "Any node running a daemon or worker that provides an API endpoint." -msgstr "" -"API エンドポイントを提供するデーモンまたはワーカーを実行するあらゆるノード。" - -msgid "" -"Any piece of hardware or software that wants to connect to the network " -"services provided by Networking, the network connectivity service. An entity " -"can make use of Networking by implementing a VIF." -msgstr "" -"Networking により提供されるネットワークサービス、ネットワーク接続性サービスに" -"接続したい、ハードウェアやソフトウェアの部品。エンティティーは、仮想インター" -"フェースを実装することにより Networking を使用できる。" - -msgid "Apache" -msgstr "Apache" - -msgid "" -"Apache Hadoop is an open source software framework that supports data-" -"intensive distributed applications." -msgstr "" -"Apache Hadoop は、データインテンシブな分散アプリケーションをサポートする、" -"オープンソースソフトウェアフレームワークである。" - -msgid "Apache License 2.0" -msgstr "Apache License 2.0" - -msgid "Apache Web Server" -msgstr "Apache Web Server" - -msgid "Application Programming Interface (API)" -msgstr "Application Programming Interface (API)" - -msgid "Application Service Provider (ASP)" -msgstr "Application Service Provider (ASP)" - -msgid "" -"Association of an interface ID to a logical port. Plugs an interface into a " -"port." -msgstr "" -"論理ポートへのインターフェースIDの紐付け。インターフェースをポートに差し込" -"む。" - -msgid "Asynchronous JavaScript and XML (AJAX)" -msgstr "Asynchronous JavaScript and XML (AJAX)" - -msgid "" -"Attachment point where a virtual interface connects to a virtual network." 
-msgstr "仮想ネットワークへの仮想インタフェースの接続ポイント。" - -msgid "Austin" -msgstr "Austin" - -msgid "AuthN" -msgstr "AuthN" - -msgid "AuthZ" -msgstr "AuthZ" - -msgid "" -"Authentication and identity service by Microsoft, based on LDAP. Supported " -"in OpenStack." -msgstr "" -"Microsoft が提供する認証サービス。LDAP に基づいている。OpenStack でサポートさ" -"れる。" - -msgid "Authentication method that uses keys rather than passwords." -msgstr "パスワードの代わりに鍵を使用する認証方式。" - -msgid "" -"Authentication method that uses two or more credentials, such as a password " -"and a private key. Currently not supported in Identity." -msgstr "" -"パスワードと秘密鍵など、2 つ以上のクレデンシャルを使用する認証方式。Identity " -"では現在サポートされていない。" - -msgid "Auto ACK" -msgstr "自動 ACK" - -msgid "" -"Automated software test suite designed to run against the trunk of the " -"OpenStack core project." -msgstr "" -"OpenStack コアプロジェクトの trunk ブランチに対してテストを実行するために設計" -"された自動ソフトウェアテストスイート。" - -msgid "B" -msgstr "B" - -msgid "BMC" -msgstr "BMC" - -msgid "BMC (Baseboard Management Controller)" -msgstr "BMC (Baseboard Management Controller)" - -msgid "" -"Baseboard Management Controller. The intelligence in the IPMI architecture, " -"which is a specialized micro-controller that is embedded on the motherboard " -"of a computer and acts as a server. Manages the interface between system " -"management software and platform hardware." -msgstr "" -"ベースボード・マネジメント・コントローラー。IPMI アーキテクチャーにおける管理" -"機能。コンピューターのマザーボードに埋め込まれ、サーバーとして動作する、特別" -"なマイクロコントローラーである。システム管理ソフトウェアとプラットフォーム" -"ハードウェアの間の通信を管理する。" - -msgid "Bell-LaPadula model" -msgstr "Bell-LaPadula モデル" - -msgid "" -"Belongs to a particular domain and is used to specify information about the " -"domain. There are several types of " -"DNS records. Each record type contains particular information used to " -"describe the purpose of that record. Examples include mail exchange (MX) " -"records, which specify the mail server for a particular domain; and name " -"server (NS) records, which specify the authoritative name servers for a " -"domain." -msgstr "" -"特定のドメインに属し、ドメインに関す" -"る情報を指定するために使用される。いくつかの種類の DNS レコードがある。各レ" -"コード種別は、そのレコードの目的を説明するために使用される特定の情報を含む。" -"例えば、mail exchange (MX) レコードは、特定のドメインのメールサーバーを指定す" -"る。name server (NS) レコードは、ドメインの権威ネームサーバーを指定する。" - -msgid "Benchmark service" -msgstr "Benchmark サービス" - -msgid "Bexar" -msgstr "Bexar" - -msgid "" -"Bexar is the code name for the second release of OpenStack. The design " -"summit took place in San Antonio, Texas, US, which is the county seat for " -"Bexar county." -msgstr "" -"Bexar は OpenStack の 2 番目のコード名。デザインサミットは、アメリカ合衆国テ" -"キサス州サンアントニオで開催された。ベア郡の郡庁所在地。" - -msgid "Block Storage API" -msgstr "Block Storage API" - -msgid "" -"Block storage that is simultaneously accessible by multiple clients, for " -"example, NFS." -msgstr "" -"複数のクライアントにより同時にアクセス可能なブロックストレージ。例えば NFS。" - -msgid "Bootstrap Protocol (BOOTP)" -msgstr "Bootstrap Protocol (BOOTP)" - -msgid "Border Gateway Protocol (BGP)" -msgstr "Border Gateway Protocol (BGP)" - -msgid "" -"Both Image service and Compute support encrypted virtual machine (VM) images " -"(but not instances). In-transit data encryption is supported in OpenStack " -"using technologies such as HTTPS, SSL, TLS, and SSH. Object Storage does not " -"support object encryption at the application level but may support storage " -"that uses disk encryption." 
-msgstr "" -"Image service と Compute は、どちらも仮想マシンイメージ (インスタンスではな" -"い) の暗号化をサポートする。転送中のデータ暗号は、HTTPS、SSL、TLS、SSH などの" -"技術を使用して、OpenStack においてサポートされる。Object Storage は、アプリ" -"ケーションレベルでオブジェクト暗号化をサポートしませんが、ディスク暗号化を使用するストレージをサポートする可能" -"性がある。" - -msgid "Both a VM container format and disk format. Supported by Image service." -msgstr "" -"仮想マシンのコンテナー形式とディスク形式の両方。Image service によりサポート" -"される。" - -msgid "" -"Builds and manages rings within Object Storage, assigns partitions to " -"devices, and pushes the configuration to other storage nodes." -msgstr "" -"Object Storage のリングの作成、管理を行い、パーティションのデバイスへの割り当" -"てを行い、他のストレージノードに設定を転送する。" - -msgid "C" -msgstr "C" - -msgid "CA" -msgstr "CA" - -msgid "CA (Certificate/Certification Authority)" -msgstr "CA (認証局)" - -msgid "CADF" -msgstr "CADF" - -msgid "CALL" -msgstr "CALL" - -msgid "CAST" -msgstr "CAST" - -msgid "CAST (RPC primitive)" -msgstr "CAST (RPC プリミティブ)" - -msgid "CMDB" -msgstr "CMDB" - -msgid "CMDB (Configuration Management Database)" -msgstr "CMDB (構成管理データベース)" - -msgid "Cactus" -msgstr "Cactus" - -msgid "" -"Cactus is a city in Texas, US and is the code name for the third release of " -"OpenStack. When OpenStack releases went from three to six months long, the " -"code name of the release changed to match a geography nearest the previous " -"summit." -msgstr "" -"Cactus は、アメリカ合衆国テキサス州の都市であり、OpenStack の 3 番目のリリー" -"スのコード名である。OpenStack のリリース間隔が 3 か月から 6 か月になったと" -"き、リリースのコード名が前のサミットと地理的に近いところになるように変更され" -"た。" - -msgid "" -"Can concurrently use multiple layer-2 networking technologies, such as " -"802.1Q and VXLAN, in Networking." -msgstr "" -"Networking において、802.1Q や VXLAN などの複数の L2 ネットワーク技術を同時に" -"使用できる。" - -msgid "" -"Causes the network interface to pass all traffic it receives to the host " -"rather than passing only the frames addressed to it." -msgstr "" -"ネットワークインターフェースが、そこを指定されたフレームだけではなく、ホスト" -"に届いたすべての通信を渡すようにする。" - -msgid "CentOS" -msgstr "CentOS" - -msgid "Ceph" -msgstr "Ceph" - -msgid "" -"Ceph component that enables a Linux block device to be striped over multiple " -"distributed data stores." -msgstr "" -"Linux ブロックデバイスが複数の分散データストアにわたり分割できるようにする、" -"Ceph のコンポーネント。" - -msgid "CephFS" -msgstr "CephFS" - -msgid "" -"Certificate Authority or Certification Authority. In cryptography, an entity " -"that issues digital certificates. The digital certificate certifies the " -"ownership of a public key by the named subject of the certificate. This " -"enables others (relying parties) to rely upon signatures or assertions made " -"by the private key that corresponds to the certified public key. In this " -"model of trust relationships, a CA is a trusted third party for both the " -"subject (owner) of the certificate and the party relying upon the " -"certificate. CAs are characteristic of many public key infrastructure (PKI) " -"schemes." -msgstr "" -"認証局。暗号において、電子証明書を発行するエンティティー。電子証明書は、証明" -"書の発行先の名前により公開鍵の所有者を証明する。これにより、他の信頼される機" -"関が証明書を信頼できるようになる。また、証明された公開鍵に対応する秘密鍵によ" -"る表明を信頼できるようになる。この信頼関係のモデルにおいて、CA は証明書の発行" -"先と証明書を信頼している機関の両方に対する信頼された第三者機関である。CA は、" -"多くの公開鍵基盤 (PKI) スキームの特徴である。" - -msgid "Challenge-Handshake Authentication Protocol (CHAP)" -msgstr "Challenge-Handshake Authentication Protocol (CHAP)" - -msgid "Changes to these types of disk volumes are saved." -msgstr "この種類のディスクボリュームに変更すると、データが保存される。" - -msgid "" -"Checks for and deletes unused VMs; the component of Image service that " -"implements delayed delete." 
-msgstr "" -"未使用の仮想マシンを確認し、削除する。遅延削除を実装する、Image service のコ" -"ンポーネント。" - -msgid "" -"Checks for missing replicas and incorrect or corrupted objects in a " -"specified Object Storage account by running queries against the back-end " -"SQLite database." -msgstr "" -"バックエンドの SQLite データベースに問い合わせることにより、指定された " -"Object Storage のアカウントに、レプリカの欠損やオブジェクトの不整合・破損がな" -"いかを確認する。" - -msgid "" -"Checks for missing replicas or incorrect objects in specified Object Storage " -"containers through queries to the SQLite back-end database." -msgstr "" -"SQLite バックエンドデータベースへの問い合わせにより、指定した Object Storage " -"コンテナーにおいてレプリカの欠損やオブジェクトの不整合がないかを確認する。" - -msgid "Chef" -msgstr "Chef" - -msgid "" -"Choosing a host based on the existence of a GPU is currently unsupported in " -"OpenStack." -msgstr "GPU の有無によりホストを選択することは、現在 OpenStack で未サポート。" - -msgid "CirrOS" -msgstr "CirrOS" - -msgid "Cisco neutron plug-in" -msgstr "Cisco neutron プラグイン" - -msgid "" -"Cloud Auditing Data Federation (CADF) is a specification for audit event " -"data. CADF is supported by OpenStack Identity." -msgstr "" -"Cloud Auditing Data Federation (CADF) は、監査イベントデータの仕様である。" -"CADF は OpenStack Identity によりサポートされる。" - -msgid "Cloud Data Management Interface (CDMI)" -msgstr "" -"クラウドデータ管理インターフェース (CDMI:Cloud Data Management Interface)" - -msgid "Cloud Infrastructure Management Interface (CIMI)" -msgstr "Cloud Infrastructure Management Interface (CIMI)" - -msgid "Cloudbase-Init" -msgstr "Cloudbase-Init" - -msgid "Code name for the DNS service project for OpenStack." -msgstr "OpenStack の DNS サービスプロジェクトのコード名。" - -msgid "" -"Code name for the OpenStack project that provides the Containers Service." -msgstr "コンテナーサービスを提供する OpenStack プロジェクトのコード名。" - -msgid "Code name of the key management service for OpenStack." -msgstr "OpenStack の key management サービスのコード名。" - -msgid "" -"Collection of Compute components that represent the global state of the " -"cloud; talks to services, such as Identity authentication, Object Storage, " -"and node/storage workers through a queue." -msgstr "" -"クラウドの全体状況を表す Compute コンポーネント群。キュー経由で、Identity の" -"認証、Object Storage、ノード/ストレージワーカーなどのサービスと通信する。" - -msgid "" -"Collective name for the Object Storage object services, container services, " -"and account services." -msgstr "" -"Object Storage のオブジェクトサービス、コンテナーサービス、アカウントサービス" -"の集合名。" - -msgid "" -"Collective term for Object Storage components that provide additional " -"functionality." -msgstr "追加機能を提供する Object Storage のコンポーネントの総称。" - -msgid "" -"Collective term for a group of Object Storage components that processes " -"queued and failed updates for containers and objects." -msgstr "" -"キュー済みや失敗した、コンテナーやオブジェクトに対する更新を処理する、Object " -"Storage のコンポーネントのグループの総称。" - -msgid "" -"Combination of a URI and UUID used to access Image service VM images through " -"the image API." -msgstr "" -"Image API 経由で Image service の仮想マシンイメージにアクセスするために使用さ" -"れる、URI や UUID の組み合わせ。" - -msgid "Common Internet File System (CIFS)" -msgstr "Common Internet File System (CIFS)" - -msgid "" -"Community project that captures Compute AMQP communications; useful for " -"debugging." -msgstr "" -"Compute AMQP 通信をキャプチャーする、コミュニティーのプロジェクト。デバッグに" -"有用。" - -msgid "" -"Community project that uses shell scripts to quickly build complete " -"OpenStack development environments." -msgstr "" -"シェルスクリプトを使用して、完全な OpenStack 導入環境を迅速に構築するためのコ" -"ミュニティープロジェクト。" - -msgid "" -"Community project used to run automated tests against the OpenStack API." 
-msgstr "" -"OpenStack API に対して自動テストを実行するために使用されるコミュニティープロ" -"ジェクト。" - -msgid "" -"Companies that rent specialized applications that help businesses and " -"organizations provide additional services with lower cost." -msgstr "" -"企業や組織を支援する特定のアプリケーションを貸し出す会社が、より低いコストで" -"追加サービスを提供する。" - -msgid "" -"Component of Identity that provides a rule-management interface and a rule-" -"based authorization engine." -msgstr "" -"ルール管理インターフェースやルールベースの認可エンジンを提供する Identity の" -"コンポーネント。" - -msgid "Compute" -msgstr "Compute" - -msgid "Compute API" -msgstr "Compute API" - -msgid "Compute service" -msgstr "Compute サービス" - -msgid "" -"Computer that provides explicit services to the client software running on " -"that system, often managing a variety of computer operations." -msgstr "" -"そのシステムにおいて動作しているクライアントソフトウェアに具体的なサービスを" -"提供するコンピューター。さまざまなコンピューター処理を管理することもある。" - -msgid "" -"Configurable option within Object Storage to limit database writes on a per-" -"account and/or per-container basis." -msgstr "" -"アカウントごと、コンテナーごとにデータベースへの書き込みを制限するための、" -"Object Storage 内の設定オプション。" - -msgid "Configuration Management Database." -msgstr "構成管理データベース。" - -msgid "" -"Configuration setting within RabbitMQ that enables or disables message " -"acknowledgment. Enabled by default." -msgstr "" -"メッセージ ACK を有効化または無効化する、RabbitMQ 内の設定。デフォルトで有" -"効。" - -msgid "" -"Connected to by a direct consumer in RabbitMQCompute, the message can be " -"consumed only by the current connection." -msgstr "" -"RabbitMQ—Compute において直接利用者により接続される。メッセージは、現在の接続" -"だけにより使用される。" - -msgid "Containers service" -msgstr "Containers サービス" - -msgid "" -"Contains configuration information that Object Storage uses to reconfigure a " -"ring or to re-create it from scratch after a serious failure." -msgstr "" -"リングを再設定するため、深刻な障害の後に最初から再作成するために、Object " -"Storage が使用する設定情報を含む。" - -msgid "" -"Contains information about a user as provided by the identity provider. It " -"is an indication that a user has been authenticated." -msgstr "" -"認証プロバイダーにより提供されたとおり、ユーザーに関する情報を含む。ユーザー" -"が認証済みであることを意味する。" - -msgid "" -"Contains the locations of all Object Storage partitions within the ring." -msgstr "リング内にあるすべての Object Storage のパーティションの場所を含む。" - -msgid "Contains the output from a Linux VM console in Compute." -msgstr "Compute の Linux 仮想マシンコンソールからの出力を含む。" - -msgid "Contractual obligations that ensure the availability of a service." -msgstr "サービスの可用性を保証する契約上の義務。" - -msgid "" -"Converts an existing server to a different flavor, which scales the server " -"up or down. The original server is saved to enable rollback if a problem " -"occurs. All resizes must be tested and explicitly confirmed, at which time " -"the original server is removed." -msgstr "" -"既存のサーバーを別のフレーバーに変更する。サーバーをスケールアップまたはス" -"ケールダウンする。元のサーバーは、問題発生時にロールバックできるよう保存され" -"る。すべてのリサイズは、元のサーバーを削除するときに、テストされ、明示的に確" -"認される必要がある。" - -msgid "" -"Creates a full Object Storage development environment within a single VM." -msgstr "単一の仮想マシンに一通りの Object Storage 開発環境を作成すること。" - -msgid "Cross-Origin Resource Sharing (CORS)" -msgstr "Cross-Origin Resource Sharing (CORS)" - -msgid "Crowbar" -msgstr "Crowbar" - -msgid "Custom modules that extend some OpenStack core APIs." 
-msgstr "いくつかの OpenStack コア API を拡張するカスタムモジュール。" - -msgid "D" -msgstr "D" - -msgid "DAC" -msgstr "DAC" - -msgid "DAC (discretionary access control)" -msgstr "DAC (任意アクセス制御)" - -msgid "DHCP" -msgstr "DHCP" - -msgid "DHCP (Dynamic Host Configuration Protocol)" -msgstr "DHCP (Dynamic Host Configuration Protocol)" - -msgid "DHCP agent" -msgstr "DHCP エージェント" - -msgid "DHTML (Dynamic HyperText Markup Language)" -msgstr "DHTML (Dynamic HyperText Markup Language)" - -msgid "DNS" -msgstr "DNS" - -msgid "DNS (Domain Name Server, Service or System)" -msgstr "DNS" - -msgid "DNS (Domain Name System, Server or Service)" -msgstr "DNS (ドメインネームシステム、サーバー、サービス)" - -msgid "" -"DNS helps navigate the Internet by translating the IP address into an " -"address that is easier to remember. For example, translating 111.111.111.1 " -"into www.yahoo.com." -msgstr "" -"DNS は、IP アドレスを人間が覚えやすいアドレスに変換することにより、インター" -"ネットを参照しやすくする。例えば、111.111.111.1 を www.yahoo.com に変換する。" - -msgid "DNS record" -msgstr "DNS レコード" - -msgid "DNS records" -msgstr "DNS レコード" - -msgid "DNS service" -msgstr "DNS サービス" - -msgid "DRTM" -msgstr "DRTM" - -msgid "DRTM (dynamic root of trust measurement)" -msgstr "DRTM (dynamic root of trust measurement)" - -msgid "" -"Daemon that provides DNS, DHCP, BOOTP, and TFTP services for virtual " -"networks." -msgstr "" -"仮想ネットワーク向けに DNS、DHCP、BOOTP、TFTP サービスを提供するデーモン。" - -msgid "" -"Data that is only known to or accessible by a user and used to verify that " -"the user is who he says he is. Credentials are presented to the server " -"during authentication. Examples include a password, secret key, digital " -"certificate, and fingerprint." -msgstr "" -"ユーザーのみが知っている、またはアクセスできるデータ。ユーザーが正当であるこ" -"とを検証するために使用される。クレデンシャルは、認証中にサーバーに提示され" -"る。例えば、パスワード、秘密鍵、電子証明書、フィンガープリントなどがある。" - -msgid "Database service" -msgstr "Database サービス" - -msgid "Debian" -msgstr "Debian" - -msgid "" -"Defines resources for a cell, including CPU, storage, and networking. Can " -"apply to the specific services within a cell or a whole cell." -msgstr "" -"CPU、ストレージ、ネットワークを含むセルのリソースを定義する。1セルやセル全体" -"に含まれる特定のサービスに適用可能。" - -msgid "" -"Denial of service (DoS) is a short form for denial-of-service attack. This " -"is a malicious attempt to prevent legitimate users from using a service." -msgstr "" -"DoS は、サービス妨害攻撃の省略形である。正当なユーザーがサービスを使用するこ" -"とを妨害するための悪意のある試み。" - -msgid "" -"Depending on context, the core API is either the OpenStack API or the main " -"API of a specific core project, such as Compute, Networking, Image service, " -"and so on." -msgstr "" -"コア API は、文脈に応じて OpenStack API または特定のコアプロジェクトのメイン " -"API を意味する。コアプロジェクトは、Compute、Networking、Image service などが" -"ある。" - -msgid "" -"Describes the parameters of the various virtual machine images that are " -"available to users; includes parameters such as CPU, storage, and memory. " -"Alternative term for flavor." -msgstr "" -"ユーザが利用可能な様々な仮想マシンイメージのパラメーター(CPU、ストレージ、メ" -"モリ等を含む)を示す。フレーバーの別名。" - -msgid "Desktop-as-a-Service" -msgstr "Desktop-as-a-Service" - -msgid "" -"Determines whether back-end members of a VIP pool can process a request. A " -"pool can have several health monitors associated with it. When a pool has " -"several monitors associated with it, all monitors check each member of the " -"pool. All monitors must declare a member to be healthy for it to stay active." 
-msgstr "" -"仮想 IP プールのバックエンドメンバーがリクエストを処理できるかどうかを判断す" -"る。プールは、それに関連づけられた複数のヘルスモニターを持てる。すべてのモニ" -"ターは、プールのメンバーをお互いに確認する。すべてのモニターは、その稼働状況" -"の健全性であることをメンバーに宣言する必要がある。" - -msgid "DevStack" -msgstr "DevStack" - -msgid "" -"Device plugged into a PCI slot, such as a fibre channel or network card." -msgstr "" -"ファイバーチャネルやネットワークカードなどの PCI スロット内に挿入されるデバイ" -"ス。" - -msgid "Diablo" -msgstr "Diablo" - -msgid "" -"Disables server-side message acknowledgment in the Compute RabbitMQ. " -"Increases performance but decreases reliability." -msgstr "" -"Compute RabbitMQ において、サーバーサイドメッセージ交換を無効化する。性能を向" -"上されるが、信頼性を低下させる。" - -msgid "" -"Discretionary access control. Governs the ability of subjects to access " -"objects, while enabling users to make policy decisions and assign security " -"attributes. The traditional UNIX system of users, groups, and read-write-" -"execute permissions is an example of DAC." -msgstr "" -"任意アクセス制御。サブジェクトがオブジェクトにアクセスする機能を統制する。" -"ユーザーがポリシーを決定し、セキュリティー属性を割り当てられる。伝統的な " -"UNIX システムのユーザー、グループ、読み書き権限が、DAC の例である。" - -msgid "" -"Disk-based data storage generally represented as an iSCSI target with a file " -"system that supports extended attributes; can be persistent or ephemeral." -msgstr "" -"ディスクを用いたデータストレージ。一般的に、拡張属性をサポートするファイルシ" -"ステムを持つ、iSCSI ターゲットとして利用される。永続的なものと一時的なものが" -"ある。" - -msgid "" -"Disk-based virtual memory used by operating systems to provide more memory " -"than is actually available on the system." -msgstr "" -"システムにおいて実際に利用可能なメモリーより多くのメモリーをオペレーティング" -"システムにより使用されるディスクベースの仮想メモリー。" - -msgid "Distributed block storage system for QEMU, supported by OpenStack." -msgstr "" -"OpenStack によりサポートされる、QEMU 用の分散ブロックストレージシステム。" - -msgid "" -"Distributes partitions proportionately across Object Storage devices based " -"on the storage capacity of each device." -msgstr "" -"各デバイスのストレージキャパシティに基づき、Object Storage デバイスをまたがり" -"パーティションを比例分配する。" - -msgid "Django" -msgstr "Django" - -msgid "Domain Name System (DNS)" -msgstr "Domain Name System (DNS)" - -msgid "" -"Domain Name System. A hierarchical and distributed naming system for " -"computers, services, and resources connected to the Internet or a private " -"network. Associates a human-friendly names to IP addresses." -msgstr "" -"ドメインネームシステム。インターネットやプライベートネットワークに接続される" -"コンピューター、サービス、リソースの名前を管理する階層化分散システム。人間が" -"理解しやすい名前を IP アドレスに関連付ける。" - -msgid "Dynamic Host Configuration Protocol (DHCP)" -msgstr "動的ホスト設定プロトコル(DHCP)" - -msgid "" -"Dynamic Host Configuration Protocol. A network protocol that configures " -"devices that are connected to a network so that they can communicate on that " -"network by using the Internet Protocol (IP). The protocol is implemented in " -"a client-server model where DHCP clients request configuration data, such as " -"an IP address, a default route, and one or more DNS server addresses from a " -"DHCP server." -msgstr "" -"Dynamic Host Configuration Protocol。ネットワークに接続されたデバイスが、その" -"ネットワーク上で IP を使用して通信できるよう、ネットワークデバイスを設定する" -"ネットワークプロトコル。このプロトコルは、クライアントサイドモデルで実装され" -"ている。DHCP クライアントは、IP アドレス、デフォルトルート、1 つ以上の DNS " -"サーバーアドレス設定データを要求する。" - -msgid "Dynamic HyperText Markup Language (DHTML)" -msgstr "Dynamic HyperText Markup Language (DHTML)" - -msgid "Dynamic root of trust measurement." -msgstr "Dynamic root of trust measurement." 
- -msgid "E" -msgstr "E" - -msgid "EBS boot volume" -msgstr "EBS ブートボリューム" - -msgid "EC2" -msgstr "EC2" - -msgid "EC2 API" -msgstr "EC2 API" - -msgid "EC2 Compatibility API" -msgstr "EC2 互換API" - -msgid "EC2 access key" -msgstr "EC2 アクセスキー" - -msgid "EC2 compatibility API" -msgstr "EC2 互換 API" - -msgid "EC2 secret key" -msgstr "EC2 シークレットキー" - -msgid "ESXi" -msgstr "ESXi" - -msgid "ESXi hypervisor" -msgstr "ESXi ハイパーバイザー" - -msgid "ETag" -msgstr "ETag" - -msgid "" -"Each OpenStack release has a code name. Code names ascend in alphabetical " -"order: Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, " -"Icehouse, Juno, Kilo, Liberty, and Mitaka. Code names are cities or counties " -"near where the corresponding OpenStack design summit took place. An " -"exception, called the Waldon exception, is granted to elements of the state " -"flag that sound especially cool. Code names are chosen by popular vote." -msgstr "" -"各 OpenStack リリースはコード名を持つ。コード名はアルファベット順になります。" -"Austin, Bexar, Cactus, Diablo, Essex, Folsom, Grizzly, Havana, Icehouse, " -"Juno, Kilo、Liberty、Mitaka。コード名は、対応する OpenStack デザインサミット" -"が開催された場所の近くにある都市または国である。Waldon 例外と言われる例外は、" -"非常に格好良く聞こえる、状態フラグの要素に保証される。コード名は、一般的な投" -"票により選択される。" - -msgid "" -"Either a soft or hard reboot of a server. With a soft reboot, the operating " -"system is signaled to restart, which enables a graceful shutdown of all " -"processes. A hard reboot is the equivalent of power cycling the server. The " -"virtualization platform should ensure that the reboot action has completed " -"successfully, even in cases in which the underlying domain/VM is paused or " -"halted/stopped." -msgstr "" -"サーバーのソフトリブートまたはハードリブート。ソフトリブートの場合、オペレー" -"ティングに再起動のシグナルが送信されます。これにより、すべてのプロセスを穏や" -"かにシャットダウンできます。ハードリブートは、サーバーの強制再起動と同じで" -"す。仮想化プラットフォームは、ベースの仮想マシンが一時停止中の場合や停止中の" -"場合でも、きちんとリブート動作を正常に完了させるべきです。" - -msgid "Elastic Block Storage (EBS)" -msgstr "Elastic Block Storage (EBS)" - -msgid "Element of RabbitMQ that provides a response to an incoming MQ message." -msgstr "送信されてきた MQ メッセージに応答する RabbitMQ の要素。" - -msgid "" -"Enables Compute and Networking integration, which enables Networking to " -"perform network management for guest VMs." -msgstr "" -"Compute と Networking の統合を可能にする。Networking がゲスト仮想マシン用の" -"ネットワークを管理できるようになる。" - -msgid "" -"Enables Compute to communicate with NetApp storage devices through the " -"NetApp OnCommand Provisioning " -"Manager." -msgstr "" -"Compute が NetApp OnCommand " -"Provisioning Manager 経由で NetApp ス" -"トレージデバイスと通信できるようにする。" - -msgid "" -"Enables Networking to distribute incoming requests evenly between designated " -"instances." -msgstr "" -"Networking により、受信リクエストを指定されたインスタンス間で均等に分散できる" -"ようになる。" - -msgid "" -"Enables a Linux bridge to understand a Networking port, interface " -"attachment, and other abstractions." -msgstr "" -"Linux ブリッジが、Networking のポート、インターフェース接続、他の抽象化を理解" -"できるようにする。" - -msgid "Essex" -msgstr "Essex" - -msgid "" -"Essex is the code name for the fifth release of OpenStack. The design summit " -"took place in Boston, Massachusetts, US and Essex is a nearby city." 
-msgstr "" -"Essex は、OpenStack の 5 番目のリリースのコード名。デザインサミットは、アメリ" -"カ合衆国マサチューセッツ州ボストンで開催された。Essex は近くの都市。" - -msgid "Eucalyptus Kernel Image (EKI)" -msgstr "Eucalyptus Kernel Image (EKI)" - -msgid "Eucalyptus Machine Image (EMI)" -msgstr "Eucalyptus Machine Image (EMI)" - -msgid "Eucalyptus Ramdisk Image (ERI)" -msgstr "Eucalyptus Ramdisk Image (ERI)" - -msgid "" -"Extension to iptables that allows creation of firewall rules that match " -"entire \"sets\" of IP addresses simultaneously. These sets reside in indexed " -"data structures to increase efficiency, particularly on systems with a large " -"quantity of rules." -msgstr "" -"連続する IP アドレスの全体に一致するファイアウォールルールを作成できる、" -"iptables の拡張。これらのセットは、効率化するためにインデックス化されたデータ" -"構造、とくに大量のルールを持つシステムにあります。" - -msgid "F" -msgstr "F" - -msgid "FWaaS" -msgstr "FWaaS" - -msgid "" -"Facility in Compute that allows each virtual machine instance to have more " -"than one VIF connected to it." -msgstr "" -"各仮想マシンインスタンスが複数の仮想インターフェースに接続できるようになる、" -"Compute における機能。" - -msgid "" -"Facility in Compute that enables a virtual machine instance to have more " -"than one VIF connected to it." -msgstr "" -"各仮想マシンインスタンスが複数の仮想インターフェースに接続できるようになる、" -"Compute における機能。" - -msgid "FakeLDAP" -msgstr "FakeLDAP" - -msgid "" -"Feature in modern Ethernet networks that supports frames up to approximately " -"9000 bytes." -msgstr "約 9000 バイトまでのフレームをサポートする最近のイーサネット上の機能。" - -msgid "" -"Feature of certain network interface drivers that combines many smaller " -"received packets into a large packet before delivery to the kernel IP stack." -msgstr "" -"カーネルの IP スタックに届ける前に、多くの小さな受信パケットを大きなパケット" -"に結合する、特定のネットワークインターフェースドライバーの機能。" - -msgid "Fedora" -msgstr "Fedora" - -msgid "Fibre Channel" -msgstr "ファイバーチャネル" - -msgid "Fibre Channel over Ethernet (FCoE)" -msgstr "Fibre Channel over Ethernet (FCoE)" - -msgid "" -"File system option that enables storage of additional information beyond " -"owner, group, permissions, modification time, and so on. The underlying " -"Object Storage file system must support extended attributes." -msgstr "" -"所有者、グループ、パーミッション、変更時間など以外の追加情報を保存できるよう" -"にする、ファイルシステムのオプション。Object Storage のバックエンドのファイル" -"システムは、拡張属性をサポートする必要がある。" - -msgid "" -"Filtering tool for a Linux bridging firewall, enabling filtering of network " -"traffic passing through a Linux bridge. Used in Compute along with " -"arptables, iptables, and ip6tables to ensure isolation of network " -"communications." -msgstr "" -"Linux ブリッジのファイアウォール用のフィルタリングツール。Linux ブリッジを通" -"過するネットワーク通信のフィルタリングできる。ネットワーク通信を分離するため" -"に、OpenStack Compute において arptables、iptables、ip6tables と一緒に使用さ" -"れる。" - -msgid "Firewall-as-a-Service (FWaaS)" -msgstr "Firewall-as-a-Service (FWaaS)" - -msgid "Flat Manager" -msgstr "Flat マネージャー" - -msgid "FlatDHCP Manager" -msgstr "FlatDHCP マネージャー" - -msgid "Folsom" -msgstr "Folsom" - -msgid "" -"Folsom is the code name for the sixth release of OpenStack. The design " -"summit took place in San Francisco, California, US and Folsom is a nearby " -"city." -msgstr "" -"Folsom は、OpenStack の 6 番目のリリースのコード名である。デザインサミット" -"が、アメリカ合衆国カリフォルニア州サンフランシスコで開催された。Folsom は近郊" -"の都市である。" - -msgid "" -"For IaaS, ability for a regular (non-privileged) account to manage a virtual " -"infrastructure component such as networks without involving an administrator." 
-msgstr "" -"IaaS の場合、管理者が介することなく、通常の (特権を持たない) ユーザーがネット" -"ワークなどの仮想インフラのコンポーネントを管理する機能。" - -msgid "FormPost" -msgstr "FormPost" - -msgid "G" -msgstr "G" - -msgid "" -"Generally, extra properties on an Image service image to which only cloud " -"administrators have access. Limits which user roles can perform CRUD " -"operations on that property. The cloud administrator can configure any image " -"property as protected." -msgstr "" -"クラウド管理者のみがアクセスできる、Image service のイメージの追加プロパ" -"ティー。どのユーザーロールがそのプロパティーにおいて CRUD 操作を実行できるか" -"を制限する。クラウド管理者は、保護されたイメージのプロパティーをすべて設定で" -"きる。" - -msgid "" -"Gives guest VMs exclusive access to a PCI device. Currently supported in " -"OpenStack Havana and later releases." -msgstr "" -"ゲスト仮想マシンが PCI デバイスに排他的にアクセスされる。OpenStack Havana 以" -"降でサポートされる。" - -msgid "Glossary" -msgstr "用語集" - -msgid "GlusterFS" -msgstr "GlusterFS" - -msgid "Governance service" -msgstr "Governance service" - -msgid "Graphic Interchange Format (GIF)" -msgstr "Graphic Interchange Format (GIF)" - -msgid "Graphics Processing Unit (GPU)" -msgstr "Graphics Processing Unit (GPU)" - -msgid "Green Threads" -msgstr "Green Threads" - -msgid "Grizzly" -msgstr "Grizzly" - -msgid "Group" -msgstr "グループ" - -msgid "H" -msgstr "H" - -msgid "Hadoop" -msgstr "Hadoop" - -msgid "Hadoop Distributed File System (HDFS)" -msgstr "Hadoop Distributed File System (HDFS)" - -msgid "Havana" -msgstr "Havana" - -msgid "Heat Orchestration Template (HOT)" -msgstr "Heat Orchestration Template (HOT)" - -msgid "Heat input in the format native to OpenStack." -msgstr "OpenStack 固有形式の Heat の入力データ。" - -msgid "" -"High-availability mode for legacy (nova) networking. Each compute node " -"handles NAT and DHCP and acts as a gateway for all of the VMs on it. A " -"networking failure on one compute node doesn't affect VMs on other compute " -"nodes." -msgstr "" -"レガシーネットワーク (nova) の高可用性モード。各コンピュートノードは、NAT と " -"DHCP を処理し、すべての仮想マシンのゲートウェイとして動作する。あるコンピュー" -"トノードにおけるネットワーク障害は、他のコンピュートノードにある仮想マシンに" -"影響しません。" - -msgid "" -"High-performance 64-bit file system created by Silicon Graphics. Excels in " -"parallel I/O operations and data consistency." -msgstr "" -"Silicon Graphics 社により作成された、高性能な 64 ビットファイルシステム。並" -"列 I/O 処理とデータ一貫性に優れる。" - -msgid "Host Bus Adapter (HBA)" -msgstr "Host Bus Adapter (HBA)" - -msgid "Hyper-V" -msgstr "Hyper-V" - -msgid "Hypertext Transfer Protocol (HTTP)" -msgstr "Hypertext Transfer Protocol (HTTP)" - -msgid "Hypertext Transfer Protocol Secure (HTTPS)" -msgstr "Hypertext Transfer Protocol Secure (HTTPS)" - -msgid "I" -msgstr "I" - -msgid "ICMP" -msgstr "ICMP" - -msgid "ID number" -msgstr "ID 番号" - -msgid "IDS" -msgstr "IDS" - -msgid "IDS (Intrusion Detection System)" -msgstr "IDS (Intrusion Detection System)" - -msgid "INI" -msgstr "INI" - -msgid "IOPS" -msgstr "IOPS" - -msgid "" -"IOPS (Input/Output Operations Per Second) are a common performance " -"measurement used to benchmark computer storage devices like hard disk " -"drives, solid state drives, and storage area networks." 
-msgstr "" -"IOPS (Input/Output Operations Per Second) は、ハードディスク、SSD、SAN などの" -"ストレージデバイスをベンチマークするために使用される、一般的なパフォーマンス" -"指標である。" - -msgid "IP Address Management (IPAM)" -msgstr "IP Address Management (IPAM)" - -msgid "IP address" -msgstr "IP アドレス" - -msgid "IP addresses" -msgstr "IP アドレス" - -msgid "IPL" -msgstr "IPL" - -msgid "IPL (Initial Program Loader)" -msgstr "IPL (Initial Program Loader)" - -msgid "IPMI" -msgstr "IPMI" - -msgid "IPMI (Intelligent Platform Management Interface)" -msgstr "IPMI (Intelligent Platform Management Interface)" - -msgid "IQN" -msgstr "IQN" - -msgid "ISO9660" -msgstr "ISO9660" - -msgid "ISO9660 format" -msgstr "ISO9660 形式" - -msgid "IaaS" -msgstr "IaaS" - -msgid "IaaS (Infrastructure-as-a-Service)" -msgstr "IaaS (Infrastructure-as-a-Service)" - -msgid "Icehouse" -msgstr "Icehouse" - -msgid "Identity" -msgstr "Identity" - -msgid "Identity API" -msgstr "Identity API" - -msgid "Identity back end" -msgstr "Identity バックエンド" - -msgid "Identity service" -msgstr "Identity サービス" - -msgid "Identity service API" -msgstr "Identity service API" - -msgid "" -"If Object Storage finds objects, containers, or accounts that are corrupt, " -"they are placed in this state, are not replicated, cannot be read by " -"clients, and a correct copy is re-replicated." -msgstr "" -"Object Storage が壊れたオブジェクト、コンテナー、アカウントを見つけた際に、そ" -"のデータはこの状態にセットされる。この状態にセットされたデータは、複製され" -"ず、クライアントが読み出すこともできなくなり、正しいコピーが再複製される。" - -msgid "" -"If a requested resource such as CPU time, disk storage, or memory is not " -"available in the parent cell, the request is forwarded to its associated " -"child cells. If the child cell can fulfill the request, it does. Otherwise, " -"it attempts to pass the request to any of its children." -msgstr "" -"CPU 時間、ディスクストレージ、メモリ等の要求されたリソースが親セルで利用不可" -"の場合、リクエストは親セルに紐付けられた子セルに転送される。子セルがリクエス" -"トに対応可能な場合、子セルはそのリクエストを処理する。対応不可の場合、そのリ" -"クエストを自分の子セルに渡そうとする。" - -msgid "" -"If a requested resource, such as CPU time, disk storage, or memory, is not " -"available in the parent cell, the request is forwarded to associated child " -"cells." -msgstr "" -"要求されたリソース(CPU時間、ディスクストレージ、メモリ)が親セルで利用不可の" -"場合、そのリクエストは紐付けられた子セルに転送される。" - -msgid "Image API" -msgstr "Image API" - -msgid "Image service" -msgstr "Image サービス" - -msgid "Image service API" -msgstr "Image サービス API" - -msgid "" -"Impassable limits for guest VMs. Settings include total RAM size, maximum " -"number of vCPUs, and maximum disk size." -msgstr "" -"ゲスト仮想マシンの超えられない制限。合計メモリー容量、最大仮想 CPU 数、最大" -"ディスク容量の設定。" - -msgid "" -"In Compute and Block Storage, the ability to set resource limits on a per-" -"project basis." -msgstr "" -"プロジェクト単位を基本として使用でき" -"るリソース上限を設定できる、Compute と Block Storage の機能。" - -msgid "" -"In Compute, conductor is the process that proxies database requests from the " -"compute process. Using conductor improves security because compute nodes do " -"not need direct access to the database." -msgstr "" -"Compute において、コンピュートプロセスからのデータベース要求をプロキシーする" -"処理。コンダクターを使用することにより、コンピュートノードがデータベースに直" -"接アクセスする必要がなくなるので、セキュリティーを向上できる。" - -msgid "" -"In Compute, the support that enables associating DNS entries with floating " -"IP addresses, nodes, or cells so that hostnames are consistent across " -"reboots." -msgstr "" -"Compute において、ホスト名が再起動後も同じになるよう、DNS エンティティーを " -"Floating IP アドレス、ノード、セルに関連付けできる。" - -msgid "" -"In Object Storage, tools to test and ensure dispersion of objects and " -"containers to ensure fault tolerance." 
-msgstr "" -"Object Storage で、フォールトトレラントの確認の為に、オブジェクトとコンテナ" -"の分散をテスト、確認するツール。" - -msgid "" -"In OpenStack Identity, entities represent individual API consumers and are " -"owned by a specific domain. In OpenStack Compute, a user can be associated " -"with roles, projects, or both." -msgstr "" -"OpenStack Identity では、エンティティーは個々の API 利用者を表す、特定のドメ" -"インに属する。OpenStack Compute では、ユーザーはロール、プロジェクトもしくは" -"その両者と関連付けることができる。" - -msgid "" -"In OpenStack, the API version for a project is part of the URL. For example, " -"example.com/nova/v1/foobar." -msgstr "" -"OpenStack では、プロジェクトの API バージョンが URL の一部となる。例: " -"example.com/nova/v1/foobar。" - -msgid "" -"In a high-availability setup with an active/active configuration, several " -"systems share the load together and if one fails, the load is distributed to " -"the remaining systems." -msgstr "" -"アクティブ/アクティブ設定を用いた高可用構成の場合、複数のシステムが処理を一緒" -"に分担する。また、あるシステムが故障した場合、処理が残りのシステムに分散され" -"る。" - -msgid "" -"In a high-availability setup with an active/passive configuration, systems " -"are set up to bring additional resources online to replace those that have " -"failed." -msgstr "" -"アクティブ/パッシブ設定を用いた高可用性セットアップでは、故障したシステムを置" -"き換えるために、システムが追加リソースをオンラインにするようセットアップされ" -"る。" - -msgid "" -"In the context of Object Storage, this is a process that is not terminated " -"after an upgrade, restart, or reload of the service." -msgstr "" -"Object Storage の文脈において、サービスの更新、再起動、再読み込みの後に終了し" -"ないプロセス。" - -msgid "" -"In the context of the Identity service, the worker process that provides " -"access to the admin API." -msgstr "Identity の領域で、管理 API へのアクセスを提供するワーカープロセス。" - -msgid "" -"Information that consists solely of ones and zeroes, which is the language " -"of computers." -msgstr "1 と 0 だけから構成される情報。コンピューターの言語。" - -msgid "" -"Infrastructure-as-a-Service. IaaS is a provisioning model in which an " -"organization outsources physical components of a data center, such as " -"storage, hardware, servers, and networking components. A service provider " -"owns the equipment and is responsible for housing, operating and maintaining " -"it. The client typically pays on a per-use basis. IaaS is a model for " -"providing cloud services." -msgstr "" -"Infrastructure-as-a-Service。IaaS は、ストレージ、ハードウェア、サーバー、" -"ネットワークなど、データセンターの物理コンポーネントをアウトソースする組織の" -"配備モデル。サーバープロバイダーは、設備を所有し、ハウジング、運用、メンテナ" -"ンスに責任を持つ。クライアントは、一般的に使用量に応じて費用を払う。IaaS は、" -"クラウドサービスを提供するモデル。" - -msgid "Initial Program Loader." -msgstr "Initial Program Loader。初期プログラムローダー。" - -msgid "" -"Intelligent Platform Management Interface. IPMI is a standardized computer " -"system interface used by system administrators for out-of-band management of " -"computer systems and monitoring of their operation. In layman's terms, it is a way to manage a computer " -"using a direct network connection, whether it is turned on or not; " -"connecting to the hardware rather than an operating system or login shell." -msgstr "" -"Intelligent Platform Management Interface。IPMI は、コンピューターシステムの" -"アウトオブバンド管理、運用監視のため" -"に、システム管理者により使用される標準的なコンピューターシステムインター" -"フェース。平たく言うと、電源状態によらず、ネットワークの直接通信を使用してコ" -"ンピューターを管理する方法。オペレーティングシステムやログインシェルではな" -"く、ハードウェアに接続する。" - -msgid "" -"Interactions and processes that are obfuscated from the user, such as " -"Compute volume mount, data transmission to an iSCSI target by a daemon, or " -"Object Storage object integrity checks." 
-msgstr "" -"Compute のボリュームのマウント、デーモンによる iSCSI ターゲットへのデータ転" -"送、Object Storage のオブジェクトの完全性検査など、ユーザーから見えにくい操作" -"や処理。" - -msgid "" -"Interface within Networking that enables organizations to create custom plug-" -"ins for advanced features, such as QoS, ACLs, or IDS." -msgstr "" -"組織が QoS、ACL、IDS などの高度な機能向けのカスタムプラグインを作成できるよう" -"にする、Networking 内のインターフェース。" - -msgid "Internet Control Message Protocol (ICMP)" -msgstr "Internet Control Message Protocol (ICMP)" - -msgid "" -"Internet Control Message Protocol, used by network devices for control " -"messages. For example, uses ICMP to test connectivity." -msgstr "" -"インターネット制御メッセージプロトコル。制御メッセージ用にネットワークデバイ" -"スにより使用される。例えば、 は接続性をテストするために ICMP " -"を使用する。" - -msgid "Internet Service Provider (ISP)" -msgstr "Internet Service Provider (ISP)" - -msgid "Internet Small Computer System Interface (iSCSI)" -msgstr "Internet Small Computer System Interface (iSCSI)" - -msgid "Internet protocol (IP)" -msgstr "インターネットプロトコル (IP)" - -msgid "Intrusion Detection System." -msgstr "侵入検知システム。" - -msgid "J" -msgstr "J" - -msgid "Java" -msgstr "Java" - -msgid "JavaScript" -msgstr "JavaScript" - -msgid "JavaScript Object Notation (JSON)" -msgstr "JavaScript Object Notation (JSON)" - -msgid "Jenkins" -msgstr "Jenkins" - -msgid "Juno" -msgstr "Juno" - -msgid "K" -msgstr "K" - -msgid "Kerberos" -msgstr "Kerberos" - -msgid "Kickstart" -msgstr "Kickstart" - -msgid "Kilo" -msgstr "Kilo" - -msgid "L" -msgstr "L" - -msgid "LBaaS" -msgstr "LBaaS" - -msgid "" -"LBaaS feature that provides availability monitoring using the ping command, TCP, and HTTP/HTTPS GET." -msgstr "" -"ping コマンド、TCP、HTTP/HTTPS GET を使用してモニタリング" -"する機能を提供する LBaaS の機能。" - -msgid "Launchpad" -msgstr "Launchpad" - -msgid "Layer-2 (L2) agent" -msgstr "L2 エージェント" - -msgid "Layer-2 network" -msgstr "L2 ネットワーク" - -msgid "Layer-3 (L3) agent" -msgstr "L3 エージェント" - -msgid "Layer-3 network" -msgstr "L3 ネットワーク" - -msgid "Liberty" -msgstr "Liberty" - -msgid "" -"Licensed under the Apache License, Version 2.0 (the \"License\"); you may " -"not use this file except in compliance with the License. You may obtain a " -"copy of the License at" -msgstr "" -"Licensed under the Apache License, Version 2.0 (the \"License\"); you may " -"not use this file except in compliance with the License. You may obtain a " -"copy of the License at" - -msgid "Lightweight Directory Access Protocol (LDAP)" -msgstr "Lightweight Directory Access Protocol (LDAP)" - -msgid "Linux Bridge" -msgstr "Linux ブリッジ" - -msgid "Linux Bridge neutron plug-in" -msgstr "Linux Bridge neutron プラグイン" - -msgid "Linux bridge" -msgstr "Linux ブリッジ" - -msgid "Linux containers (LXC)" -msgstr "Linux コンテナー (LXC)" - -msgid "" -"Linux kernel feature that provides independent virtual networking instances " -"on a single host with separate routing tables and interfaces. Similar to " -"virtual routing and forwarding (VRF) services on physical network equipment." -msgstr "" -"別々のルーティングテーブルとインターフェースを持つ単一のホストにおいて、独立" -"した仮想ネットワークインターフェースを提供する Linux カーネル機能。物理ネット" -"ワーク環境における仮想ルーティングおよびフォワーディング (VRF) サービスと似て" -"いる。" - -msgid "" -"Linux kernel security module that provides the mechanism for supporting " -"access control policies." -msgstr "" -"アクセス制御ポリシーをサポートするための機構を提供する Linux カーネルセキュリ" -"ティーモジュール。" - -msgid "Lists allowed commands within the Compute rootwrap facility." -msgstr "Compute rootwrap 機能内で許可されるコマンドの一覧。" - -msgid "" -"Lists containers in Object Storage and stores container information in the " -"account database." 
-msgstr "" -"Object Storage にあるコンテナーを一覧表示し、コンテナーの情報をアカウントデー" -"タベースに保存する。" - -msgid "Load-Balancer-as-a-Service (LBaaS)" -msgstr "Load-Balancer-as-a-Service (LBaaS)" - -msgid "Logical Volume Manager (LVM)" -msgstr "論理ボリュームマネージャー (LVM)" - -msgid "" -"Logical groupings of related code, such as the Block Storage volume manager " -"or network manager." -msgstr "" -"Block Storage のボリュームマネージャーやネットワークマネージャーなど、関連す" -"るコードの論理的なグループ。" - -msgid "Logical subdivision of an IP network." -msgstr "IP ネットワークの論理分割。" - -msgid "" -"Lower power consumption CPU often found in mobile and embedded devices. " -"Supported by OpenStack." -msgstr "" -"モバイル機器や組み込みデバイスによく利用される低消費電力 CPU。OpenStack はサ" -"ポートしている。" - -msgid "M" -msgstr "M" - -msgid "" -"MD5 hash of an object within Object Storage, used to ensure data integrity." -msgstr "" -"Object Storage 内のオブジェクトの MD5 ハッシュ。データの完全性を確認するため" -"に使用される。" - -msgid "Maps Object Storage partitions to physical storage devices." -msgstr "Object Storage パーティションの物理ストレージデバイスへの対応付け" - -msgid "" -"Massively scalable distributed storage system that consists of an object " -"store, block store, and POSIX-compatible distributed file system. Compatible " -"with OpenStack." -msgstr "" -"オブジェクトストア、ブロックストア、および POSIX 互換分散ファイルシステムから" -"構成される大規模スケール可能分散ストレージシステム。OpenStack 互換。" - -msgid "" -"Maximum frame or packet size for a particular network medium. Typically 1500 " -"bytes for Ethernet networks." -msgstr "" -"特定のネットワークメディア向けの最大フレームやパケットサイズ。一般的に、イー" -"サネット向けは 1500 バイト。" - -msgid "" -"Mechanism for highly-available multi-host routing when using OpenStack " -"Networking (neutron)." -msgstr "" -"OpenStack Networking (neutron) の使用時、高可用なマルチホストルーティングのた" -"めの機構。" - -msgid "" -"Mechanism in IP networks to detect end-to-end MTU and adjust packet size " -"accordingly." -msgstr "" -"エンド間の MTU を検出し、パケットサイズを適切に調整するための IP ネットワーク" -"における機構。" - -msgid "" -"Message exchange that is cleared when the service restarts. Its data is not " -"written to persistent storage." -msgstr "" -"サービスの再起動時に削除されるメッセージ交換。このデータは永続ストレージに書" -"き込まれない。" - -msgid "" -"Message queue software supported by OpenStack. An alternative to RabbitMQ. " -"Also spelled 0MQ." -msgstr "" -"OpenStack によりサポートされるメッセージキューソフトウェア。RabbitMQ の代替。" -"0MQ とも表記。" - -msgid "" -"Message queue software supported by OpenStack; an alternative to RabbitMQ." -msgstr "" -"OpenStack によりサポートされるメッセージキューソフトウェア。RabbitMQ の代替。" - -msgid "" -"Message queue that is cleared when the service restarts. Its data is not " -"written to persistent storage." -msgstr "" -"サービスの再起動時に削除されるメッセージキュー。このデータは永続ストレージに" -"書き込まれない。" - -msgid "Message service" -msgstr "Message サービス" - -msgid "Meta-Data Server (MDS)" -msgstr "Meta-Data Server (MDS)" - -msgid "Metadata agent" -msgstr "メタデータエージェント" - -msgid "" -"Method to access VM instance consoles using a web browser. Supported by " -"Compute." -msgstr "" -"Web ブラウザーを使用して仮想マシンインスタンスのコンソールにアクセスする方" -"法。Compute によりサポートされる。" - -msgid "Mitaka" -msgstr "Mitaka" - -msgid "Modular Layer 2 (ML2) neutron plug-in" -msgstr "Modular Layer 2 (ML2) neutron プラグイン" - -msgid "" -"Modular system that allows the underlying message queue software of Compute " -"to be changed. For example, from RabbitMQ to ZeroMQ or Qpid." 
-msgstr "" -"Compute が利用するメッセージキューソフトウェアを変更できるようにする仕組み。" -"例えば、 RabbitMQ を ZeroMQ や Qpid に変更できる。" - -msgid "Monitor (LBaaS)" -msgstr "モニター (LBaaS)" - -msgid "Monitor (Mon)" -msgstr "モニター (Mon)" - -msgid "Monitoring" -msgstr "Monitoring" - -msgid "MultiNic" -msgstr "MultiNic" - -msgid "N" -msgstr "N" - -msgid "NAT" -msgstr "NAT" - -msgid "NTP" -msgstr "NTP" - -msgid "Name for the Compute component that manages VMs." -msgstr "仮想マシンを管理する Compute のコンポーネントの名称。" - -msgid "Nebula" -msgstr "Nebula" - -msgid "NetApp volume driver" -msgstr "NetApp ボリュームドライバー" - -msgid "Network Address Translation (NAT)" -msgstr "ネットワークアドレス変換 (NAT)" - -msgid "" -"Network Address Translation; Process of modifying IP address information " -"while in transit. Supported by Compute and Networking." -msgstr "" -"ネットワークアドレス変換。IP アドレス情報を転送中に変更する処理。Compute と " -"Networking によりサポートされる。" - -msgid "Network File System (NFS)" -msgstr "Network File System (NFS)" - -msgid "Network Time Protocol (NTP)" -msgstr "Network Time Protocol (NTP)" - -msgid "" -"Network Time Protocol; Method of keeping a clock for a host or node correct " -"via communication with a trusted, accurate time source." -msgstr "" -"ネットワーク時刻プロトコル。信頼された、正確な時刻源と通信することにより、ホ" -"ストやノードの時刻を正確に保つ方法。" - -msgid "" -"Network traffic between a user or client (north) and a server (south), or " -"traffic into the cloud (south) and out of the cloud (north). See also east-" -"west traffic." -msgstr "" -"ユーザーやクライアント (ノース)、とサーバー (サウス) 間のネットワーク通信、ク" -"ラウド (サウス) とクラウド外 (ノース) 内の通信。イースト・サウス通信も参照。" - -msgid "" -"Network traffic between servers in the same cloud or data center. See also " -"north-south traffic." -msgstr "" -"同じクラウドやデータセンターにあるサーバー間のネットワーク通信。ノース・サウ" -"ス通信も参照。" - -msgid "Networking API" -msgstr "Networking API" - -msgid "" -"New users are assigned to this tenant if no tenant is specified when a user " -"is created." -msgstr "" -"ユーザーを作成したときに、テナントを指定していない場合、新規ユーザーはこのテ" -"ナントに割り当てられる。" - -msgid "Newton" -msgstr "Newton" - -msgid "Nexenta volume driver" -msgstr "Nexenta ボリュームドライバー" - -msgid "No ACK" -msgstr "No ACK" - -msgid "Nova API" -msgstr "Nova API" - -msgid "" -"Number that is unique to every computer system on the Internet. Two versions " -"of the Internet Protocol (IP) are in use for addresses: IPv4 and IPv6." -msgstr "" -"インターネットにあるすべてのコンピューターシステムを一意にする番号。Internet " -"Protocol (IP) は、IPv4 と IPv6 の 2 つのバージョンがアドレス付けのために使用" -"中です。" - -msgid "Numbers" -msgstr "数字" - -msgid "O" -msgstr "O" - -msgid "Object Storage" -msgstr "オブジェクトストレージ" - -msgid "Object Storage API" -msgstr "Object Storage API" - -msgid "Object Storage Device (OSD)" -msgstr "Object Storage Device (OSD)" - -msgid "" -"Object Storage middleware that uploads (posts) an image through a form on a " -"web page." -msgstr "" -"Web ページのフォームからイメージをアップロード (投稿) する、Object Storage の" -"ミドルウェア。" - -msgid "" -"Object storage service by Amazon; similar in function to Object Storage, it " -"can act as a back-end store for Image service VM images." -msgstr "" -"Amazon により提供されるオブジェクトストレージ。Object Storage の機能に似てい" -"る。Image service の仮想マシンイメージのバックエンドとして利用できる。" - -msgid "Ocata" -msgstr "Ocata" - -msgid "Oldie" -msgstr "Oldie" - -msgid "" -"On the Internet, separates a website from other sites. Often, the domain " -"name has two or more parts that are separated by dots. For example, yahoo." -"com, usa.gov, harvard.edu, or mail.yahoo.com." -msgstr "" -"インターネット上で Web サイトを他のサイトから分離する。ドメイン名はよく、ドッ" -"トにより区切られた 2 つ以上の部分を持つ。例えば、yahoo.com、usa.gov、harvard." 
-"edu、mail.yahoo.com。" - -msgid "" -"One of the RPC primitives used by the OpenStack message queue software. " -"Sends a message and does not wait for a response." -msgstr "" -"OpenStack メッセージキューソフトウェアにより使用される RPC プリミティブの 1 " -"つ。メッセージを送信し、応答を待たない。" - -msgid "" -"One of the RPC primitives used by the OpenStack message queue software. " -"Sends a message and waits for a response." -msgstr "" -"OpenStack のメッセージキューソフトウェアにより使用される、RPC プリミティブの " -"1 つ。メッセージを送信し、応答を待つ。" - -msgid "One of the VM image disk formats supported by Image service." -msgstr "" -"Image service によりサポートされる、仮想マシンイメージディスク形式の 1 つ。" - -msgid "" -"One of the VM image disk formats supported by Image service; an unstructured " -"disk image." -msgstr "" -"Image service によりサポートされる仮想マシンイメージのディスク形式の 1 つ。" - -msgid "" -"One of the default roles in the Compute RBAC system and the default role " -"assigned to a new user." -msgstr "" -"Compute RBAC システムにあるデフォルトのロールの 1 つ。新規ユーザーに割り当て" -"られるデフォルトのロール。" - -msgid "" -"One of the default roles in the Compute RBAC system. Enables a user to add " -"other users to a project, interact with VM images that are associated with " -"the project, and start and stop VM instances." -msgstr "" -"Compute RBAC システムにおけるデフォルトのロールの 1 つ。ユーザーが他のユー" -"ザーをプロジェクトに追加でき、プロジェクトに関連付けられた仮想マシンイメージ" -"を操作でき、仮想マシンインスタンスを起動および終了できるようになる。" - -msgid "" -"One of the default roles in the Compute RBAC system. Enables the user to " -"allocate publicly accessible IP addresses to instances and change firewall " -"rules." -msgstr "" -"Compute RBAC システムにおけるデフォルトのロールの 1 つ。ユーザーが、パブリッ" -"クにアクセス可能な IP アドレスをインスタンスに割り当てられ、ファイアウォール" -"ルールを変更できるようになる。" - -msgid "" -"One of the default roles in the Compute RBAC system. Grants complete system " -"access." -msgstr "" -"Compute RBAC システムにおけるデフォルトのロールの 1 つ。システムの完全なアク" -"セス権を付与する。" - -msgid "" -"One of the hypervisors supported by OpenStack, generally used for " -"development purposes." -msgstr "" -"OpenStack がサポートするハイパーバイザーの一つ。一般に、開発目的で使用され" -"る。" - -msgid "One of the hypervisors supported by OpenStack." -msgstr "OpenStack によりサポートされるハイパーバイザーの一つ。" - -msgid "One of the supported response formats in OpenStack." -msgstr "OpenStack でサポートされる応答形式の 1 つ。" - -msgid "Open Cloud Computing Interface (OCCI)" -msgstr "Open Cloud Computing Interface (OCCI)" - -msgid "Open Virtualization Format (OVF)" -msgstr "Open Virtualization Format (OVF)" - -msgid "" -"Open source GUI and CLI tools used for remote console access to VMs. " -"Supported by Compute." -msgstr "" -"仮想マシンへのリモートコンソールアクセスに使用される、オープンソースの GUI / " -"CUI ツール。" - -msgid "" -"Open source tool used to access remote hosts through an encrypted " -"communications channel, SSH key injection is supported by Compute." -msgstr "" -"暗号化した通信チャネル経由でリモートホストにアクセスするために使用されるオー" -"プンソースのツール。SSH 鍵インジェクションが Compute によりサポートされる。" - -msgid "Open vSwitch" -msgstr "Open vSwitch" - -msgid "Open vSwitch (OVS) agent" -msgstr "Open vSwitch (OVS) エージェント" - -msgid "" -"Open vSwitch is a production quality, multilayer virtual switch licensed " -"under the open source Apache 2.0 license. It is designed to enable massive " -"network automation through programmatic extension, while still supporting " -"standard management interfaces and protocols (for example NetFlow, sFlow, " -"SPAN, RSPAN, CLI, LACP, 802.1ag)." 
-msgstr "" -"Open vSwitch は、商用品質、複数階層の仮想スイッチ。オープンソースの Apache " -"2.0 license に基づき許諾される。標準的な管理インターフェースやプロトコルと使" -"用ながら、プログラム拡張により大規模なネットワーク自動化を実現できるよう設計" -"されている (例えば、NetFlow、sFlow、SPAN、RSPAN、CLI、LACP、802.1ag)。" - -msgid "Open vSwitch neutron plug-in" -msgstr "Open vSwitch neutron プラグイン" - -msgid "OpenLDAP" -msgstr "OpenLDAP" - -msgid "OpenStack" -msgstr "OpenStack" - -msgid "" -"OpenStack Networking agent that provides DHCP services for virtual networks." -msgstr "" -"仮想ネットワーク向けに DHCP サービスを提供する OpenStack Networking エージェ" -"ント。" - -msgid "" -"OpenStack Networking agent that provides layer-2 connectivity for virtual " -"networks." -msgstr "" -"仮想ネットワーク向けに L2 接続性を提供する OpenStack Networking エージェン" -"ト。" - -msgid "" -"OpenStack Networking agent that provides layer-3 (routing) services for " -"virtual networks." -msgstr "" -"仮想ネットワーク向けに L3 (ルーティング) サービスを提供する OpenStack " -"Networking エージェント。" - -msgid "" -"OpenStack Networking agent that provides metadata services for instances." -msgstr "" -"インスタンスにメタデータサービスを提供する OpenStack Networking エージェン" -"ト。" - -msgid "OpenStack code name" -msgstr "OpenStack コード名" - -msgid "OpenStack glossary" -msgstr "OpenStack 用語集" - -msgid "" -"OpenStack is a cloud operating system that controls large pools of compute, " -"storage, and networking resources throughout a data center, all managed " -"through a dashboard that gives administrators control while empowering their " -"users to provision resources through a web interface. OpenStack is an open " -"source project licensed under the Apache License 2.0." -msgstr "" -"OpenStack は、データセンター全体のコンピュートリソース、ストレージリソース、" -"ネットワークリソースの大規模なプールを制御する、クラウドオペレーティングシス" -"テム。管理者はすべてダッシュボードから制御できる。ユーザーは Web インター" -"フェースからリソースを配備できる。Apache License 2.0 に基づき許諾されるオープ" -"ンソースのプロジェクト。" - -msgid "" -"OpenStack project that aims to make cloud services easier to consume and " -"integrate with application development process by automating the source-to-" -"image process, and simplifying app-centric deployment. The project name is " -"solum." -msgstr "" -"クラウドサービスをより簡単に利用し、アプリケーション開発プロセスと統合するこ" -"とを目的とする OpenStack プロジェクト。ソースからイメージまでの手順を自動化" -"し、アプリケーション中心の開発を単純化します。プロジェクト名は solum。" - -msgid "" -"OpenStack project that aims to produce an OpenStack messaging service that " -"affords a variety of distributed application patterns in an efficient, " -"scalable and highly-available manner, and to create and maintain associated " -"Python libraries and documentation. The code name for the project is zaqar." -msgstr "" -"効率的、拡張可能、高可用な方法で、さまざまな分散アプリケーションのパターンを" -"提供する、OpenStack messaging service を開発することを目指している OpenStack " -"プロジェクト。また、関連する Python ライブラリーやドキュメントを作成してメン" -"テナンスする。このプロジェクトのコード名は zaqar。" - -msgid "" -"OpenStack project that produces a secret storage and generation system " -"capable of providing key management for services wishing to enable " -"encryption features. The code name of the project is barbican." -msgstr "" -"暗号化機能を有効化したいサービスに鍵管理機能を提供する機能を持つ、シークレッ" -"トストレージと生成システムを開発する OpenStack プロジェクト。このプロジェクト" -"の名前は barbican。" - -msgid "" -"OpenStack project that produces a set of Python libraries containing code " -"shared by OpenStack projects." -msgstr "" -"OpenStack プロジェクトに共有されるコードを含む Python ライブラリー群を作成す" -"る OpenStack プロジェクト。" - -msgid "OpenStack project that provides a Clustering service." -msgstr "クラスタリングサービスを提供する OpenStack プロジェクト。" - -msgid "OpenStack project that provides a Monitoring service." -msgstr "モニタリングサービスを提供する OpenStack プロジェクト。" - -msgid "" -"OpenStack project that provides a Software Development Lifecycle Automation " -"service." 
-msgstr "" -"ソフトウェア開発ライフサイクル自動化サービスを提供する OpenStack プロジェク" -"ト。" - -msgid "OpenStack project that provides a dashboard, which is a web interface." -msgstr "" -"ダッシュボードを提供する OpenStack プロジェクト。Web インターフェース。" - -msgid "" -"OpenStack project that provides a framework for performance analysis and " -"benchmarking of individual OpenStack components as well as full production " -"OpenStack cloud deployments. The code name of the project is rally." -msgstr "" -"各 OpenStack コンポーネント、本番の OpenStack 環境のパフォーマンス分析とベン" -"チマーク向けにフレームワークを提供する OpenStack プロジェクト。このプロジェク" -"トの名前は rally。" - -msgid "OpenStack project that provides a message service to applications." -msgstr "" -"メッセージサービスをアプリケーションに提供する OpenStack のプロジェクト。" - -msgid "" -"OpenStack project that provides a scalable data-processing stack and " -"associated management interfaces." -msgstr "" -"スケールアウト可能なデータ処理基盤と関連する管理インターフェースを提供する、" -"OpenStack のプロジェクト。" - -msgid "" -"OpenStack project that provides a scalable data-processing stack and " -"associated management interfaces. The code name for the project is sahara." -msgstr "" -"スケールアウト可能なデータ処理基盤と関連する管理インターフェースを提供する、" -"OpenStack のプロジェクト。プロジェクトのコード名は sahara です。" - -msgid "" -"OpenStack project that provides a set of services for management of " -"application containers in a multi-tenant cloud environment. The code name of " -"the project name is magnum." -msgstr "" -"マルチテナントクラウド環境において、アプリケーションコンテナーの管理サービス" -"を提供する、OpenStack のプロジェクト。プロジェクトのコード名は magnum です。" - -msgid "" -"OpenStack project that provides a simple YAML-based language to write " -"workflows, tasks and transition rules, and a service that allows to upload " -"them, modify, run them at scale and in a highly available manner, manage and " -"monitor workflow execution state and state of individual tasks. The code " -"name of the project is mistral." -msgstr "" -"ワークフロー、タスク、状態遷移ルールを書くための YAML ベースの言語を提供し、" -"それらをアップロード、編集できるサービス、それらを大規模かつ高可用に実行でき" -"るサービス、ワークフローの実行状態および個々のタスクの状態を管理および監視で" -"きるサービスを提供する OpenStack プロジェクト。このプロジェクトのコード名は " -"mistral。" - -msgid "OpenStack project that provides an Application catalog." -msgstr "アプリケーションカタログを提供する OpenStack のプロジェクト。" - -msgid "" -"OpenStack project that provides an application catalog service so that users " -"can compose and deploy composite environments on an application abstraction " -"level while managing the application lifecycle. The code name of the project " -"is murano." -msgstr "" -"ユーザーがアプリケーションのライフサイクルを管理しながら、アプリケーションの" -"抽象的なレベルで合成環境を作成して配備できるよう、アプリケーションカタログ" -"サービスを提供する OpenStack プロジェクト。このプロジェクトのコード名は " -"murano。" - -msgid "" -"OpenStack project that provides backup restore and disaster recovery as a " -"service." -msgstr "" -"バックアップリストアとディザスターリカバリーをサービスとして提供する " -"OpenStack プロジェクト。" - -msgid "OpenStack project that provides compute services." -msgstr "コンピュートサービスを提供する OpenStack プロジェクト。" - -msgid "OpenStack project that provides database services to applications." -msgstr "" -"データベースサービスをアプリケーションに提供する OpenStack のプロジェクト。" - -msgid "" -"OpenStack project that provides scalable, on demand, self service access to " -"authoritative DNS services, in a technology-agnostic manner. The code name " -"for the project is designate." -msgstr "" -"技術によらない方法で、権威 DNS サービスへの拡張可能、オンデマンド、セルフサー" -"ビスのアクセスを提供する OpenStack プロジェクト。このプロジェクトのコード名" -"は designate。" - -msgid "" -"OpenStack project that provides shared file systems as service to " -"applications." -msgstr "" -"共有ファイルシステムをアプリケーションに提供する OpenStack のプロジェクト。" - -msgid "OpenStack project that provides the Benchmark service." 
-msgstr "Benchmark service を提供する OpenStack プロジェクト。" - -msgid "OpenStack project that provides the Governance service." -msgstr "Governance service を提供する OpenStack プロジェクト。" - -msgid "OpenStack project that provides the Workflow service." -msgstr "ワークフローサービスを提供する OpenStack プロジェクト。" - -msgid "" -"OpenStack project that provisions bare metal, as opposed to virtual, " -"machines." -msgstr "" -"マシンを仮想とみなして、ベアメタルに展開する OpenStack のプロジェクト。" - -msgid "" -"OpenStack project that provisions bare metal, as opposed to virtual, " -"machines. The code name for the project is ironic." -msgstr "" -"マシンを仮想とみなして、ベアメタルに展開する OpenStack のプロジェクト。このプ" -"ロジェクトのコード名は ironic です。" - -msgid "" -"OpenStack project to provide Governance-as-a-Service across any collection " -"of cloud services in order to monitor, enforce, and audit policy over " -"dynamic infrastructure. The code name for the project is congress." -msgstr "" -"動的なインフラストラクチャー全体でポリシーを監視、強制、監査するために、さま" -"ざまなクラウドサービス群にわたり、Governance as a Service を提供する " -"OpenStack プロジェクト。このプロジェクトのコード名は congress。" - -msgid "OpenStack supports accessing the Amazon EC2 API through Compute." -msgstr "" -"OpenStack は、Compute 経由で Amazon EC2 API へのアクセスをサポートする。" - -msgid "" -"OpenStack supports encryption technologies such as HTTPS, SSH, SSL, TLS, " -"digital certificates, and data encryption." -msgstr "" -"OpenStack は、HTTPS、SSH、SSL、TLS、電子証明書、データ暗号化などの暗号化技術" -"をサポートします。" - -msgid "" -"OpenStack-on-OpenStack program. The code name for the OpenStack Deployment " -"program." -msgstr "" -"OpenStack-on-OpenStack プログラム。OpenStack Deployment プログラムのコード" -"名。" - -msgid "" -"Opens all objects for an object server and verifies the MD5 hash, size, and " -"metadata for each object." -msgstr "" -"あるオブジェクトサーバー用の全オブジェクトを開き、各オブジェクトの MD5 ハッ" -"シュ、サイズ、メタデータを検証する。" - -msgid "" -"Organizes and stores objects in Object Storage. Similar to the concept of a " -"Linux directory but cannot be nested. Alternative term for an Image service " -"container format." -msgstr "" -"Object Storage でオブジェクトを整理して保存する。Linux のディレクトリと似てい" -"るが、入れ子にできない。Image service のコンテナー形式の別名。" - -msgid "Oslo" -msgstr "Oslo" - -msgid "P" -msgstr "P" - -msgid "PCI passthrough" -msgstr "PCI パススルー" - -msgid "" -"Pages that use HTML, JavaScript, and Cascading Style Sheets to enable users " -"to interact with a web page or show simple animation." -msgstr "" -"ユーザーが Web ページと通信したり、簡単なアニメーションを表示したりするため" -"に、HTML、JavaScript、CSS を使用するページ。" - -msgid "" -"Passed to API requests and used by OpenStack to verify that the client is " -"authorized to run the requested operation." -msgstr "" -"クライアントが要求した操作を実行する権限を持つことを検証するために、API リク" -"エストに渡され、OpenStack により使用される。" - -msgid "" -"Passes requests from clients to the appropriate workers and returns the " -"output to the client after the job completes." -msgstr "" -"クライアントからのリクエストを適切なワーカーに渡す。ジョブ完了後、出力をクラ" -"イアントに返す。" - -msgid "Physical host dedicated to running compute nodes." -msgstr "コンピュートノード実行専用の物理ホスト。" - -msgid "Platform-as-a-Service (PaaS)" -msgstr "Platform-as-a-Service (PaaS)" - -msgid "" -"Point in time since the last container and accounts database sync among " -"nodes within Object Storage." -msgstr "" -"最新のコンテナーとアカウントのデータベースが Object Storage 内のノード間で同" -"期された基準時間。" - -msgid "" -"Principal communications protocol in the internet protocol suite for " -"relaying datagrams across network boundaries." 
-msgstr "" -"ネットワーク境界を越えてデータグラムを中継するための、インターネットプロトコ" -"ルにおける中心的な通信プロトコル。" - -msgid "" -"Processes client requests for VMs, updates Image service metadata on the " -"registry server, and communicates with the store adapter to upload VM images " -"from the back-end store." -msgstr "" -"仮想マシンに対するクライアントリクエスト、レジストリーサーバーにおける Image " -"サービスのメタデータの更新、バックエンドストアから仮想マシンイメージをアップ" -"ロードするためのストアアダプターを用いた通信を処理する。" - -msgid "Programming language used extensively in OpenStack." -msgstr "OpenStack において幅広く使用されるプログラミング言語。" - -msgid "" -"Project name for OpenStack Network Information Service. To be merged with " -"Networking." -msgstr "" -"OpenStack Network Information Service のプロジェクト名。Networking と統合予" -"定。" - -msgid "" -"Projects represent the base unit of “ownership” in OpenStack, in that all " -"resources in OpenStack should be owned by a specific project. In OpenStack " -"Identity, a project must be owned by a specific domain." -msgstr "" -"プロジェクトは OpenStack における「所有権」の基本的な単位で、OpenStack におけ" -"るあらゆるリソースは何らかのテナントに属する。 OpenStack Identity では、プロ" -"ジェクトは特定のドメインに何らかのドメインに属する。" - -msgid "" -"Protocol that encapsulates a wide variety of network layer protocols inside " -"virtual point-to-point links." -msgstr "" -"仮想のポイントツーポイントリンク内で、さまざまなネットワーク層のプロトコルを" -"カプセル化するプロトコル。" - -msgid "" -"Provided by Compute in the form of cloudpipes, specialized instances that " -"are used to create VPNs on a per-project basis." -msgstr "" -"Compute では cloudpipe の形で提供される。 cloudpipe では、特別なインスタンス" -"を使って、プロジェクト毎を基本として " -"VPN が作成される。" - -msgid "Provided in Compute through the system usage data facility." -msgstr "システム使用状況データ機能経由で Compute において提供される。" - -msgid "" -"Provides a method of allocating space on mass-storage devices that is more " -"flexible than conventional partitioning schemes." -msgstr "" -"伝統的なパーティションスキーマよりも柔軟に、大規模ストレージデバイスに領域を" -"割り当てる方式を提供する。" - -msgid "" -"Provides a predefined list of actions that the user can perform, such as " -"start or stop VMs, reset passwords, and so on. Supported in both Identity " -"and Compute and can be configured using the horizon dashboard." -msgstr "" -"仮想マシンの起動や停止、パスワードの初期化など、ユーザーが実行できる操作の事" -"前定義済み一覧を提供する。Identity と Compute においてサポートされる。ダッ" -"シュボードを使用して設定できる。" - -msgid "" -"Provides an interface to the underlying Open vSwitch service for the " -"Networking plug-in." -msgstr "" -"Networking のプラグインに対して、バックエンドの Open vSwitch サービスへのイン" -"ターフェースを提供する。" - -msgid "" -"Provides data redundancy and fault tolerance by creating copies of Object " -"Storage objects, accounts, and containers so that they are not lost when the " -"underlying storage fails." -msgstr "" -"Object Storage のオブジェクト、アカウント、コンテナーのコピーを作成すること" -"で、データ冗長性や耐障害性を実現する。これにより、バックエンドのストレージが" -"故障した場合でもデータは失わない。" - -msgid "" -"Provides logical partitioning of Compute resources in a child and parent " -"relationship. Requests are passed from parent cells to child cells if the " -"parent cannot provide the requested resource." -msgstr "" -"親子関係で Compute リソースの論理パーティションを提供する。親セルが要求された" -"リソースを提供できない場合、親セルからのリクエストは子セルに渡される。" - -msgid "Provides support for NexentaStor devices in Compute." -msgstr "Compute において NexentaStor デバイスのサポートを提供する。" - -msgid "Provides support for Open vSwitch in Networking." -msgstr "Networking で Open vSwitch のサポートを提供する。" - -msgid "Provides support for VMware NSX in Neutron." -msgstr "Neutron における VMware NSX サポートを提供する。" - -msgid "" -"Provides support for new and specialized types of back-end storage for the " -"Block Storage volume manager." 
-msgstr "" -"Block Storage のボリュームマネージャーに対して、新しい特別な種類のバックエン" -"ドストレージのサポートを提供する。" - -msgid "" -"Provides to the consumer the ability to deploy applications through a " -"programming language or tools supported by the cloud platform provider. An " -"example of Platform-as-a-Service is an Eclipse/Java programming platform " -"provided with no downloads required." -msgstr "" -"クラウドプラットフォームプロバイダーによりサポートされるプログラミング言語や" -"ツールを用いてアプリケーションを配備する機能を利用者に提供する。PaaS の例は、" -"ダウンロードする必要がない、Eclipse/Java プログラミングプラットフォームです。" - -msgid "Puppet" -msgstr "Puppet" - -msgid "Python" -msgstr "Python" - -msgid "Q" -msgstr "Q" - -msgid "QEMU Copy On Write 2 (QCOW2)" -msgstr "QEMU Copy On Write 2 (QCOW2)" - -msgid "QEMU is a generic and open source machine emulator and virtualizer." -msgstr "QEMUはオープンソースのマシンエミュレーターと仮想化ツールである。" - -msgid "Qpid" -msgstr "Qpid" - -msgid "Quick EMUlator (QEMU)" -msgstr "Quick EMUlator (QEMU)" - -msgid "R" -msgstr "R" - -msgid "RADOS Block Device (RBD)" -msgstr "RADOS Block Device (RBD)" - -msgid "RAM filter" -msgstr "RAM フィルター" - -msgid "RAM overcommit" -msgstr "RAM オーバーコミット" - -msgid "RESTful" -msgstr "RESTful" - -msgid "RESTful web services" -msgstr "RESTful Web サービス" - -msgid "RPC driver" -msgstr "RPC ドライバー" - -msgid "RPC drivers" -msgstr "RPC ドライバー" - -msgid "RXTX cap" -msgstr "RXTX キャップ" - -msgid "RXTX cap/quota" -msgstr "RXTX キャップ/クォータ" - -msgid "RXTX quota" -msgstr "RXTX クォータ" - -msgid "RabbitMQ" -msgstr "RabbitMQ" - -msgid "Rackspace Cloud Files" -msgstr "Rackspace Cloud Files" - -msgid "Recon" -msgstr "recon" - -msgid "Red Hat Enterprise Linux (RHEL)" -msgstr "Red Hat Enterprise Linux (RHEL)" - -msgid "" -"Reducing the size of files by special encoding, the file can be decompressed " -"again to its original content. OpenStack supports compression at the Linux " -"file system level but does not support compression for things such as Object " -"Storage objects or Image service VM images." -msgstr "" -"特別なエンコーディングによりファイル容量を減らすこと。このファイルは、元の内" -"容に展開できます。OpenStack は、Linux ファイルシステムレベルの圧縮をサポート" -"しますが、Object Storage のオブジェクトや Image service の仮想マシンイメージ" -"などの圧縮をサポートしません。" - -msgid "Released as open source by NASA in 2010 and is the basis for Compute." -msgstr "" -"2010 年に NASA によりオープンソースとしてリリースされた。Compute の基になっ" -"た。" - -msgid "" -"Released as open source by Rackspace in 2010; the basis for Object Storage." -msgstr "" -"Rackspace により 2010 年にオープンソースとして公開された。Object Storage の" -"ベース。" - -msgid "Reliable, Autonomic Distributed Object Store (RADOS)" -msgstr "Reliable, Autonomic Distributed Object Store (RADOS)" - -msgid "Remote Procedure Call (RPC)" -msgstr "Remote Procedure Call (RPC)" - -msgid "" -"Removes all data on the server and replaces it with the specified image. " -"Server ID and IP addresses remain the same." -msgstr "" -"サーバからすべてのデータを消去し、特定のイメージで置き換える。サーバのIDとIP" -"アドレスは変更されない。" - -msgid "Represents a virtual, isolated OSI layer-2 subnet in Networking." -msgstr "Networking における仮想の分離された OSI L-2 サブネットを表す。" - -msgid "Role Based Access Control (RBAC)" -msgstr "Role Based Access Control (RBAC)" - -msgid "Runs automated tests against the core OpenStack API; written in Rails." -msgstr "" -"コア OpenStack API に対して自動テストを実行する。Rails で書かれている。" - -msgid "S" -msgstr "S" - -msgid "S3" -msgstr "S3" - -msgid "S3 storage service" -msgstr "S3 ストレージサービス" - -msgid "SAML assertion" -msgstr "SAML アサーション" - -msgid "SELinux" -msgstr "SELinux" - -msgid "" -"SINA standard that defines a RESTful API for managing objects in the cloud, " -"currently unsupported in OpenStack." 
-msgstr "" -"クラウドにあるオブジェクトを管理するための RESTful API を定義する SINA 標準。" -"現在 OpenStack ではサポートされていない。" - -msgid "SPICE" -msgstr "SPICE" - -msgid "SPICE (Simple Protocol for Independent Computing Environments)" -msgstr "SPICE (Simple Protocol for Independent Computing Environments)" - -msgid "SQL-Alchemy" -msgstr "SQL-Alchemy" - -msgid "SQLite" -msgstr "SQLite" - -msgid "SUSE Linux Enterprise Server (SLES)" -msgstr "SUSE Linux Enterprise Server (SLES)" - -msgid "See API endpoint." -msgstr "API エンドポイントを参照。" - -msgid "See access control list." -msgstr "「アクセス制御リスト」参照。" - -msgid "Service Level Agreement (SLA)" -msgstr "サービス水準合意 (SLA; Service Level Agreement)" - -msgid "" -"Set of bits that make up a single character; there are usually 8 bits to a " -"byte." -msgstr "1 つの文字を構成するビットの組。通常は 8 ビットで 1 バイトになる。" - -msgid "" -"Setting for the Compute RabbitMQ message delivery mode; can be set to either " -"transient or persistent." -msgstr "" -"Compute RabbitMQ メッセージ配信モード用設定。transient(一時)又は " -"persistent(永続)のいずれかを設定できる。" - -msgid "Shared File Systems API" -msgstr "Shared File Systems API" - -msgid "Shared File Systems service" -msgstr "Shared File Systems サービス" - -msgid "Sheepdog" -msgstr "Sheepdog" - -msgid "Simple Cloud Identity Management (SCIM)" -msgstr "Simple Cloud Identity Management (SCIM)" - -msgid "Single-root I/O Virtualization (SR-IOV)" -msgstr "Single-root I/O Virtualization (SR-IOV)" - -msgid "SmokeStack" -msgstr "SmokeStack" - -msgid "" -"Soft limit on the amount of network traffic a Compute VM instance can send " -"and receive." -msgstr "" -"Compute の仮想マシンインスタンスが送受信できるネットワーク通信量のソフト制" -"限。" - -msgid "Software Development Lifecycle Automation service" -msgstr "ソフトウェア開発ライフサイクル自動化サービス" - -msgid "" -"Software component providing the actual implementation for Networking APIs, " -"or for Compute APIs, depending on the context." -msgstr "" -"利用形態に応じた、Networking API や Compute API の具体的な実装を提供するソフ" -"トウェアコンポーネント。" - -msgid "" -"Software that arbitrates and controls VM access to the actual underlying " -"hardware." -msgstr "VM のアクセスを実際の下位ハードウェアに仲介して制御するソフトウェア。" - -msgid "" -"Software that enables multiple VMs to share a single physical NIC within " -"Compute." -msgstr "" -"複数の仮想マシンが Compute 内で単一の物理 NIC を共有するためのソフトウェア。" - -msgid "" -"Software that runs on a host or node and provides the features and functions " -"of a hardware-based network switch." -msgstr "" -"ホストやノードで実行され、ハードウェアのネットワークスイッチの機能を提供する" -"ソフトウェア。" - -msgid "SolidFire Volume Driver" -msgstr "SolidFire Volume Driver" - -msgid "" -"Special tenant that contains all services that are listed in the catalog." -msgstr "カタログに一覧化される全サービスを含む特別なテナント。" - -msgid "" -"Specification for managing identity in the cloud, currently unsupported by " -"OpenStack." -msgstr "" -"クラウドで認証情報を管理するための仕様。現在、OpenStack によりサポートされて" -"いない。" - -msgid "" -"Specifies additional requirements when Compute determines where to start a " -"new instance. Examples include a minimum amount of network bandwidth or a " -"GPU." -msgstr "" -"Compute が新しいインスタンスを起動する場所を判断するとき、追加の要件を指定す" -"る。例えば、ネットワーク帯域の最小量、GPU などがある。" - -msgid "" -"Specifies the authentication source used by Image service or Identity. In " -"the Database service, it refers to the extensions implemented for a data " -"store." -msgstr "" -"Image サービスや Identity サービスが使用する認証元を指定する。 Database サー" -"ビスでは、データストア用に実装された拡張を指す。" - -msgid "StackTach" -msgstr "StackTach" - -msgid "Standard for packaging VM images. Supported in OpenStack." 
-msgstr "仮想マシンイメージのパッケージ化の標準。OpenStack でサポートされる。" - -msgid "StaticWeb" -msgstr "StaticWeb" - -msgid "" -"Storage protocol similar in concept to TCP/IP; encapsulates SCSI commands " -"and data." -msgstr "" -"TCP/IP に似た概念のストレージプロトコル。SCSI コマンドとデータをカプセル化す" -"る。" - -msgid "" -"Storage protocol that encapsulates SCSI frames for transport over IP " -"networks." -msgstr "" -"IP ネットワーク上で転送するために、SCSI フレームをカプセル化するストレージプ" -"ロトコル。" - -msgid "Stores CephFS metadata." -msgstr "CephFS メタデータを格納する。" - -msgid "" -"String of text known only by the user; used along with an access key to make " -"requests to the Compute API." -msgstr "" -"ユーザーのみが知っているテキスト文字列。リクエストを Compute API に発行するた" -"めに、アクセスキーと一緒に使用される。" - -msgid "Subdivides physical CPUs. Instances can then use those divisions." -msgstr "" -"物理 CPU を分割する。インスタンスは、これらの分割したものを使用できる。" - -msgid "Supports interaction with VMware products in Compute." -msgstr "Compute で VMware 製品の操作をサポートする。" - -msgid "T" -msgstr "T" - -msgid "TempAuth" -msgstr "TempAuth" - -msgid "TempURL" -msgstr "TempURL" - -msgid "Tempest" -msgstr "Tempest" - -msgid "Tenant API" -msgstr "テナント API" - -msgid "" -"Term for an Object Storage process that runs for a long time. Can indicate a " -"hung process." -msgstr "" -"長時間動作している Object Storage のプロセスを指す用語。ハングしたプロセスを" -"意味する可能性もある。" - -msgid "" -"Term used in the OSI network architecture for the data link layer. The data " -"link layer is responsible for media access control, flow control and " -"detecting and possibly correcting errors that may occur in the physical " -"layer." -msgstr "" -"OSI ネットワークアーキテクチャーにおけるデータリンク層に使用される用語。デー" -"タリンク層は、メディアアクセス制御、フロー制御、物理層で発生する可能性のある" -"エラー検知、できる限りエラー訂正に責任を持つ。" - -msgid "" -"Term used in the OSI network architecture for the network layer. The network " -"layer is responsible for packet forwarding including routing from one node " -"to another." -msgstr "" -"OSI ネットワークアーキテクチャーにおけるネットワーク層に使用される用語。ネッ" -"トワーク層は、パケット転送、あるノードから別のノードへのルーティングに責任を" -"持つ。" - -msgid "" -"The nova-network worker daemon; provides services such as " -"giving an IP address to a booting nova instance." -msgstr "" -"nova-network のワーカーデーモン。起動中の nova インスタン" -"スに IP アドレスを提供するなどのサービスを提供する。" - -msgid "" -"The nova-api daemon provides " -"access to nova services. Can communicate with other APIs, such as the Amazon " -"EC2 API." -msgstr "" -"nova-api デーモンは nova サービス" -"へのアクセスを提供する。Amazon EC2 API など、他の API と通信できる。" - -msgid "" -"The API used to access the OpenStack Identity service provided through " -"keystone." -msgstr "" -"Keystone が提供する OpenStack Identity サービスアクセスに使用される API。" - -msgid "The Amazon commercial block storage product." -msgstr "Amazon のブロックストレージの商用製品。" - -msgid "The Amazon commercial compute product, similar to Compute." -msgstr "Amazon の商用コンピュート製品。Compute と似ている。" - -msgid "" -"The Apache Software Foundation supports the Apache community of open-source " -"software projects. These projects provide software products for the public " -"good." -msgstr "" -"The Apache Software Foundation は、オープンソースソフトウェアプロジェクトの " -"Apache コミュニティーをサポートする。これらのプロジェクトは、公共財のためにソ" -"フトウェア製品を提供する。" - -msgid "The Block Storage driver for the SolidFire iSCSI storage appliance." -msgstr "" -"SolidFire iSCSI ストレージアプライアンス向けの Block Storage ドライバー。" - -msgid "" -"The Border Gateway Protocol is a dynamic routing protocol that connects " -"autonomous systems. Considered the backbone of the Internet, this protocol " -"connects disparate networks to form a larger network." 
-msgstr "" -"Border Gateway Protocol は、自律システムを接続する、動的ルーティングプロトコ" -"ルである。インターネットのバックボーンと比べて、このプロトコルは、より大きな" -"ネットワークを形成するために、異なるネットワークを接続する。" - -msgid "The Ceph storage daemon." -msgstr "Ceph ストレージデーモン。" - -msgid "" -"The Compute RabbitMQ message exchange that remains active when the server " -"restarts." -msgstr "" -"サーバーの再起動時に有効なままになる Compute の RabbitMQ メッセージ交換。" - -msgid "" -"The Compute VM scheduling algorithm that attempts to start a new VM on the " -"host with the least amount of load." -msgstr "" -"新規仮想マシンを合計負荷の最も低いホストで起動しようとする、Compute 仮想マシ" -"ンスケジューリングアルゴリズム。" - -msgid "" -"The Compute component that chooses suitable hosts on which to start VM " -"instances." -msgstr "" -"仮想マシンインスタンスを起動するために適切なホストを選択する Compute のコン" -"ポーネント。" - -msgid "" -"The Compute component that contains a list of the current capabilities of " -"each host within the cell and routes requests as appropriate." -msgstr "" -"セル内にある各ホストの現在のキャパシティー一覧を持ち、リクエストを適切にルー" -"ティングする、Compute のコンポーネント。" - -msgid "" -"The Compute component that gives IP addresses to authorized nodes and " -"assumes DHCP, DNS, and routing configuration and services are provided by " -"something else." -msgstr "" -"認可されたノードに IP アドレスを割り当てる Compute のコンポーネント。DHCP、" -"DNS、ルーティングの設定とサービスが別の何かにより提供されることを仮定してい" -"る。" - -msgid "" -"The Compute component that manages various network components, such as " -"firewall rules, IP address allocation, and so on." -msgstr "" -"ファイアウォールのルール、IP アドレスの割り当てなど、さまざまなネットワークの" -"コンポーネントを管理する、Compute のコンポーネント。" - -msgid "" -"The Compute component that provides dnsmasq (DHCP, DNS, BOOTP, TFTP) and " -"radvd (routing) services." -msgstr "" -"dnsmasq (DHCP、DNS、BOOTP、TFTP) や radvd (ルーティング) のサービスを提供す" -"る Compute のコンポーネント。" - -msgid "" -"The Compute component that runs on each compute node and manages the VM " -"instance lifecycle, including run, reboot, terminate, attach/detach volumes, " -"and so on. Provided by the nova-compute daemon." -msgstr "" -"各ノードで動作し、仮想マシンインスタンスのライフサイクル (実行、再起動、終" -"了、ボリュームの接続や切断など) を管理する、Compute のコンポーネント。" -"nova-compute デーモンにより提供さ" -"れる。" - -msgid "" -"The Compute direct exchanges, fanout exchanges, and topic exchanges use this " -"key to determine how to process a message; processing varies depending on " -"exchange type." -msgstr "" -"Compute の直接交換、ファンアウト交換、トピック交換は、このキーを使用して、" -"メッセージを処理する方法を判断する。処理内容は交換形式に応じて変化する。" - -msgid "" -"The Compute scheduling method that attempts to fill a host with VMs rather " -"than starting new VMs on a variety of hosts." -msgstr "" -"様々なホスト上で新しい VM を起動するよりも、なるべく一つのホストを埋めようと" -"する Compute スケジューリング手法。" - -msgid "" -"The Compute service can send alerts through its notification system, which " -"includes a facility to create custom notification drivers. Alerts can be " -"sent to and displayed on the horizon dashboard." -msgstr "" -"Compute のサービスは、通知システム経由で警告を送信できる。カスタム通知ドライ" -"バーを作成する機能がある。警告は、送信したり、ダッシュボードに表示したりでき" -"る。" - -msgid "" -"The Compute service provides accounting information through the event " -"notification and system usage data facilities." -msgstr "" -"Compute サービスは、イベント通知やシステム使用状況データ機能からアカウンティ" -"ング情報を提供する。" - -msgid "The Compute setting that enables or disables RAM overcommitment." -msgstr "RAM オーバーコミットを有効化または無効化する Compute の設定。" - -msgid "The Identity component that provides high-level authorization services." -msgstr "高レベルの認可サービスを提供する Identity のコンポーネント。" - -msgid "The Identity service component that provides authentication services." 
-msgstr "認証サービスを提供する Identity のコンポーネント。" - -msgid "" -"The Identity service endpoint template that contains services available to " -"all tenants." -msgstr "" -"すべてのテナントが利用可能なサービスを含む、Identity のエンドポイントテンプ" -"レート。" - -msgid "The Image service API endpoint for management of VM images." -msgstr "仮想マシンイメージの管理用の Image service API エンドポイント。" - -msgid "" -"The Network Controller provides virtual networks to enable compute servers " -"to interact with each other and with the public network. All machines must " -"have a public and private network interface. A VLAN network is a private " -"network interface, which is controlled by the vlan_interface option with VLAN managers." -msgstr "" -"ネットワークコントローラーは、コンピュートのサーバーが、お互いに通信したり、" -"パブリックなネットワークと通信したりできるようにするために、仮想ネットワーク" -"を提供する。すべてのマシンは、パブリックネットワークインターフェースとプライ" -"ベートネットワークインターフェースを持つ必要がある。VLAN ネットワークは、" -"VLAN マネージャーを用いた vlan_interface オプションにより" -"制御される、プライベートネットワークインターフェースである。" - -msgid "" -"The Network Controller provides virtual networks to enable compute servers " -"to interact with each other and with the public network. All machines must " -"have a public and private network interface. A private network interface can " -"be a flat or VLAN network interface. A flat network interface is controlled " -"by the flat_interface with flat managers. A VLAN network interface is " -"controlled by the vlan_interface option with VLAN " -"managers." -msgstr "" -"ネットワークコントローラーは、コンピュートサーバー間、およびコンピュートサー" -"バーとパブリックネットワークとの通信を行う仮想ネットワークを用意する。すべて" -"の物理マシンにはパブリック側とプライベート側のネットワークインタフェースが必" -"要。プライベートネットワークインターフェースは、フラットネットワークまたは " -"VLAN ネットワークインターフェースにできる。フラットネットワークインターフェー" -"スは、フラットマネージャーを用いて flat_interface により制御される。VLAN ネッ" -"トワークインターフェースは、VLAN マネージャーの vlan_interface オプションにより制御される。" - -msgid "" -"The Network Controller provides virtual networks to enable compute servers " -"to interact with each other and with the public network. All machines must " -"have a public and private network interface. The public network interface is " -"controlled by the public_interface option." -msgstr "" -"compute サーバーがパブリックネットワークと相互通信できるよう、ネットワークコ" -"ントローラーが仮想ネットワークを提供する。全マシンにはパブリックとプライベー" -"トのネットワークインターフェースがなければならない。パブリックネットワークイ" -"ンターフェースは public_interface オプションにより制御され" -"る。" - -msgid "" -"The Object Storage back-end process that creates and manages object replicas." -msgstr "" -"オブジェクトの複製を作成および管理する Object Storage のバックエンドプロセ" -"ス。" - -msgid "" -"The Object Storage component that provides container services, such as " -"create, delete, list, and so on." -msgstr "" -"作成、削除、一覧表示などのコンテナーサービスを提供する Object Storage のコン" -"ポーネント。" - -msgid "" -"The Object Storage context of an account. Do not confuse with a user account " -"from an authentication service, such as Active Directory, /etc/passwd, " -"OpenLDAP, OpenStack Identity, and so on." -msgstr "" -"Object Storage のアカウントのコンテキスト。Active Directory、/etc/passwd、" -"OpenLDAP、OpenStack Identity などの認証サービスのユーザーアカウントと混同しな" -"いこと。" - -msgid "" -"The OpenStack configuration files use an INI format to describe options and " -"their values. It consists of sections and key value pairs." -msgstr "" -"OpenStack 設定ファイルは、オプションやその値を記述するために、INI 形式を使用" -"する。セクションとキーバリューペアから構成される。" - -msgid "" -"The OpenStack core project that enables management of volumes, volume " -"snapshots, and volume types. The project name of Block Storage is cinder." 
-msgstr "" -"ボリューム、ボリュームのスナップショット、ボリューム種別を管理する、" -"OpenStack のコアプロジェクト。Block Storage のプロジェクト名は cinder。" - -msgid "" -"The OpenStack core project that provides a central directory of users mapped " -"to the OpenStack services they can access. It also registers endpoints for " -"OpenStack services. It acts as a common authentication system. The project " -"name of Identity is keystone." -msgstr "" -"ユーザーがアクセスできる OpenStack サービスに対応付けられた、ユーザーの中央" -"ディレクトリーを提供する、OpenStack コアプロジェクト。OpenStack サービスのエ" -"ンドポイントも登録する。一般的な認証システムとして動作する。Identity のプロ" -"ジェクト名は keystone。" - -msgid "" -"The OpenStack core project that provides compute services. The project name " -"of Compute service is nova." -msgstr "" -"コンピュートサービスを提供する OpenStack のコアプロジェクト。Compute のプロ" -"ジェクト名は nova。" - -msgid "" -"The OpenStack core project that provides eventually consistent and redundant " -"storage and retrieval of fixed digital content. The project name of " -"OpenStack Object Storage is swift." -msgstr "" -"結果整合性(eventually consistent)、ストレージ冗長化、静的デジタルコンテンツ" -"取得、といった機能を提供する、OpenStack のコアプロジェクト。OpenStack Object " -"Storage のプロジェクト名は swift。" - -msgid "" -"The OpenStack project that OpenStack project that implements clustering " -"services and libraries for the management of groups of homogeneous objects " -"exposed by other OpenStack services. The project name of Clustering service " -"is senlin." -msgstr "" -"クラスタリングサービスと、他の OpenStack サービスにより公開された均質なオブ" -"ジェクトグループを管理するためのライブラリーを実現する OpenStack プロジェク" -"ト。このプロジェクトのコード名は senlin。" - -msgid "" -"The OpenStack project that provides a multi-tenant, highly scalable, " -"performant, fault-tolerant Monitoring-as-a-Service solution for metrics, " -"complex event processing, and logging. It builds an extensible platform for " -"advanced monitoring services that can be used by both operators and tenants " -"to gain operational insight and visibility, ensuring availability and " -"stability. The project name is monasca." -msgstr "" -"マルチテナントで、高いスケーラビリティーを持ち、高性能で、耐障害性のある、" -"Monitoring-as-a-Service ソリューションを提供する OpenStack プロジェクト。 計" -"測情報、複合イベント処理 (complex event processing)、ログ監視が対象。オペレー" -"ター、テナントの両者が利用できる、高度なモニタリングサービスに対応できる拡張" -"性のあるプラットフォームを開発しており、可用性と安定性を確保しながら、運用上" -"の問題の特定や可視化を実現できる。プロジェクト名は monasca。" - -msgid "" -"The OpenStack project that provides integrated tooling for backing up, " -"restoring, and recovering file systems, instances, or database backups. The " -"project name is freezer." -msgstr "" -"ファイルシステム、インスタンス、データベースバックアップのバックアップ、リス" -"トア、リカバリー用の統合ツールを提供する OpenStack プロジェクト。プロジェクト" -"名は freezer。" - -msgid "The POSIX-compliant file system provided by Ceph." -msgstr "Ceph により提供される POSIX 互換ファイルシステム。" - -msgid "" -"The SCSI disk protocol tunneled within Ethernet, supported by Compute, " -"Object Storage, and Image service." -msgstr "" -"イーサネット内でトンネルされる SCSI ディスクプロトコル。Compute、Object " -"Storage、Image service によりサポートされる。" - -msgid "" -"The Simple Protocol for Independent Computing Environments (SPICE) provides " -"remote desktop access to guest virtual machines. It is an alternative to " -"VNC. SPICE is supported by OpenStack." -msgstr "" -"Simple Protocol for Independent Computing Environments (SPICE) は、ゲスト仮想" -"マシンに対するリモートデスクトップアクセスを提供する。VNC の代替品。SPICE は " -"OpenStack によりサポートされる。" - -msgid "The Xen administrative API, which is supported by Compute." -msgstr "Xen 管理 API。Compute によりサポートされる。" - -msgid "" -"The ability to encrypt data at the file system, disk partition, or whole-" -"disk level. Supported within Compute VMs." 
-msgstr "" -"ファイルシステム、ディスクパーティション、ディスク全体を暗号化する機能。" -"Compute の仮想マシン内でサポートされる。" - -msgid "" -"The ability to start new VM instances based on the actual memory usage of a " -"host, as opposed to basing the decision on the amount of RAM each running " -"instance thinks it has available. Also known as RAM overcommit." -msgstr "" -"実行中の各インスタンスが利用可能と考えている RAM 量に基づく判断をベースにする" -"代わりに、ホスト上の実際のメモリ使用量をベースにした、新しい VM インスタンス" -"を起動する機能。" - -msgid "" -"The ability to start new VM instances based on the actual memory usage of a " -"host, as opposed to basing the decision on the amount of RAM each running " -"instance thinks it has available. Also known as memory overcommit." -msgstr "" -"実行中の各インスタンスが利用可能と考えている RAM 量に基づく判断をベースにする" -"代わりに、ホスト上の実際のメモリ使用量をベースにした、新しい VM インスタンス" -"を起動する機能。" - -msgid "" -"The ability within Compute to move running virtual machine instances from " -"one host to another with only a small service interruption during switchover." -msgstr "" -"切り替え中のわずかなサービス中断のみで、実行中の仮想マシンをあるホストから別" -"のホストに移動する、Compute 内の機能。" - -msgid "" -"The act of verifying that a user, process, or client is authorized to " -"perform an action." -msgstr "" -"ユーザー、プロセス、クライアントが操作を実行する権限を持つかどうかを確認する" -"こと。" - -msgid "" -"The amount of available data used by communication resources, such as the " -"Internet. Represents the amount of data that is used to download things or " -"the amount of data available to download." -msgstr "" -"インターネットなどの通信リソースにより使用される、利用可能なデータ量。何かを" -"ダウンロードするために使用されるデータの合計量、またはダウンロードするために" -"利用可能なデータの合計量を表す。" - -msgid "" -"The amount of time it takes for a new Object Storage object to become " -"accessible to all clients." -msgstr "" -"Object Storage の新規オブジェクトがすべてのクライアントからアクセス可能になる" -"までにかかる時間。" - -msgid "" -"The association between an Image service VM image and a tenant. Enables " -"images to be shared with specified tenants." -msgstr "" -"Image service の仮想マシンイメージとテナント間の関連。イメージを特別なテナン" -"トと共有できるようになる。" - -msgid "" -"The back-end store used by Image service to store VM images, options include " -"Object Storage, local file system, S3, or HTTP." -msgstr "" -"仮想マシンイメージを保存するために、Image service により使用されるバックエン" -"ドストア。オプションとして、Object Storage、ローカルファイルシステム、S3、" -"HTTP がある。" - -msgid "" -"The code name for the eighth release of OpenStack. The design summit took " -"place in Portland, Oregon, US and Havana is an unincorporated community in " -"Oregon." -msgstr "" -"OpenStack の 8 番目のリリースのコード名。デザインサミットがアメリカ合衆国オレ" -"ゴン州ポートランドで開催された。Havana は、オレゴン州の非法人コミュニティーで" -"ある。" - -msgid "" -"The code name for the eleventh release of OpenStack. The design summit took " -"place in Paris, France. Due to delays in the name selection, the release was " -"known only as K. Because k is the unit symbol for kilo " -"and the reference artifact is stored near Paris in the Pavillon de Breteuil " -"in Sèvres, the community chose Kilo as the release name." -msgstr "" -"OpenStack の 11 番目のリリースのコード名。デザインサミットは、フランスのパリ" -"で開催された。名前選定の遅れにより、このリリースは K のみで知られていた。" -"k はキロを表す単位記号であり、その原器がパリ近郊の " -"Pavillon de Breteuil in Sèvres に保存されているので、コミュニティーはリリース" -"名として Kilo を選択した" - -msgid "" -"The code name for the fifteenth release of OpenStack. The design summit will " -"take place in Barcelona, Spain. Ocata is a beach north of Barcelona." -msgstr "" -"OpenStack の 14 番目のリリースのコード名。デザインサミットは、スペインのバル" -"セロナで開催される。Ocata はバルセロナ北部のビーチ。" - -msgid "" -"The code name for the fourteenth release of OpenStack. The design summit " -"will take place in Austin, Texas, US. The release is named after \"Newton " -"House\" which is located at 1013 E. 
Ninth St., Austin, TX. which is listed " -"on the National Register of Historic Places." -msgstr "" -"OpenStack の 14 番目のリリースのコード名。デザインサミットは、アメリカ合衆国" -"テキサス州オースチンで開催される。リリースの名前は、テキサス州オースチンの " -"1013 E. Ninth St. にある「Newton House」にちなんでいる。これはアメリカ合衆国" -"国家歴史登録財に登録されている。" - -msgid "" -"The code name for the initial release of OpenStack. The first design summit " -"took place in Austin, Texas, US." -msgstr "" -"OpenStack の初期リリースのコード名。最初のデザインサミットは、アメリカ合衆国" -"テキサス州オースチンで開催された。" - -msgid "" -"The code name for the ninth release of OpenStack. The design summit took " -"place in Hong Kong and Ice House is a street in that city." -msgstr "" -"OpenStack の 9 番目のリリースのコード名。デザインサミットは、香港で開催され" -"た。Ice House は、その近くにある通りである。" - -msgid "" -"The code name for the seventh release of OpenStack. The design summit took " -"place in San Diego, California, US and Grizzly is an element of the state " -"flag of California." -msgstr "" -"OpenStack の 7 番目のリリースのコード名。デザインサミットがアメリカ合衆国カリ" -"フォルニア州サンディエゴで開催された。Grizzly は、カリフォルニア州の州旗に使" -"われている。" - -msgid "" -"The code name for the tenth release of OpenStack. The design summit took " -"place in Atlanta, Georgia, US and Juno is an unincorporated community in " -"Georgia." -msgstr "" -"OpenStack の 10 番目のリリースのコード名。デザインサミットはアメリカ合衆国" -"ジョージア州アトランタにて開催された。Juno は、ジョージア州の非公式コミュニ" -"ティー。" - -msgid "" -"The code name for the thirteenth release of OpenStack. The design summit " -"took place in Tokyo, Japan. Mitaka is a city in Tokyo." -msgstr "" -"OpenStack の 13 番目のリリースのコード名。デザインサミットは、日本の東京で開" -"催された。三鷹は、東京にある都市です。" - -msgid "" -"The code name for the twelfth release of OpenStack. The design summit took " -"place in Vancouver, Canada and Liberty is the name of a village in the " -"Canadian province of Saskatchewan." -msgstr "" -"OpenStack の 12 番目のリリースのコード名。デザインサミットは、カナダのバン" -"クーバーで開催された。Liberty は、サスカチュワン州にある村の名前。" - -msgid "The collaboration site for OpenStack." -msgstr "OpenStack 用コラボレーションサイト。" - -msgid "" -"The cooperative threading model used by Python; reduces race conditions and " -"only context switches when specific library calls are made. Each OpenStack " -"service is its own thread." -msgstr "" -"Python により使用される協調スレッドモデル。特定のライブラリーコールが発行され" -"るときの競合状態とコンテキストスイッチを減らす。各 OpenStack サービスは自身の" -"スレッドである。" - -msgid "The current state of a guest VM image." -msgstr "ゲスト仮想マシンイメージの現在の状態。" - -msgid "" -"The current status of a VM image in Image service, not to be confused with " -"the status of a running instance." -msgstr "" -"Image service における仮想マシンイメージの現在の状態。実行中のインスタンスの" -"状態と混同しないこと。" - -msgid "" -"The daemon, worker, or service that a client communicates with to access an " -"API. API endpoints can provide any number of services, such as " -"authentication, sales data, performance meters, Compute VM commands, census " -"data, and so on." -msgstr "" -"クライアントが API にアクセスするために通信するデーモン、ワーカーまたはサービ" -"ス。API エンドポイントは、認証、売上データ、パフォーマンス統計、Compute 仮想" -"マシンコマンド、センサスデータなどのような数多くのサービスを提供できます。" - -msgid "The default message queue software used by OpenStack." -msgstr "OpenStackでデフォルトで採用されているメッセージキューのソフトウェア。" - -msgid "" -"The default panel that is displayed when a user accesses the horizon " -"dashboard." -msgstr "" -"ユーザーがダッシュボードにアクセスした際に表示されるデフォルトのパネル。" - -msgid "The fibre channel protocol tunneled within Ethernet." -msgstr "イーサネットでトンネルされるファイバーチャネルプロトコル。" - -msgid "" -"The main virtual communication line used by all AMQP messages for inter-" -"cloud communications within Compute." 
-msgstr "" -"Compute 内でクラウド内通信のためにすべての AMQP メッセージにより使用されるメ" -"インの仮想通信ライン。" - -msgid "" -"The method of storage used by horizon to track client sessions, such as " -"local memory, cookies, a database, or memcached." -msgstr "" -"クライアントのセッションを管理するために、horizon により使用される保存方法。" -"ローカルメモリー、クッキー、データベース、memcached など。" - -msgid "" -"The method that a service uses for persistent storage, such as iSCSI, NFS, " -"or local disk." -msgstr "" -"サービスが、iSCSI、NFS、ローカルディスクなどの永続ストレージを使用する方式。" - -msgid "" -"The method used by the Compute RabbitMQ for intra-service communications." -msgstr "内部サービス通信のために Compute RabbitMQ により使用される方法。" - -msgid "The most common web server software currently used on the Internet." -msgstr "" -"現在インターネットにおいて使用されている最も一般的な Web サーバーソフトウェ" -"ア。" - -msgid "The number of replicas of the data in an Object Storage ring." -msgstr "Object Storage リングにおけるデータ複製数。" - -msgid "" -"The open standard messaging protocol used by OpenStack components for intra-" -"service communications, provided by RabbitMQ, Qpid, or ZeroMQ." -msgstr "" -"インフラサービス通信のために OpenStack コンポーネントにより使用されるオープン" -"な標準メッセージングプロトコル。RabbitMQ、Qpid、ZeroMQ により提供される。" - -msgid "" -"The persistent data store used to save and retrieve information for a " -"service, such as lists of Object Storage objects, current state of guest " -"VMs, lists of user names, and so on. Also, the method that the Image service " -"uses to get and store VM images. Options include Object Storage, local file " -"system, S3, and HTTP." -msgstr "" -"Object Storage のオブジェクトの一覧、ゲスト仮想マシンの現在の状態、ユーザー名" -"の一覧など、サービスに関する情報を保存および取得するために使用される永続デー" -"タストア。また、Image service が仮想マシンイメージを取得および保存するために" -"使用する方式。Object Storage、ローカルファイルシステム、S3、HTTP などの選択肢" -"がある。" - -msgid "" -"The person responsible for planning and maintaining an OpenStack " -"installation." -msgstr "OpenStack インストールを計画し、管理する責任者。" - -msgid "" -"The point where a user interacts with a service; can be an API endpoint, the " -"horizon dashboard, or a command-line tool." -msgstr "" -"ユーザーがサービスと通信する箇所。API エンドポイント、ダッシュボード、コマン" -"ドラインツールの可能性がある。" - -msgid "" -"The practice of placing one packet type within another for the purposes of " -"abstracting or securing data. Examples include GRE, MPLS, or IPsec." -msgstr "" -"データを抽象化やセキュア化する目的で、あるパケット形式を別の形式の中に入れる" -"ための方法。例えば、GRE、MPLS、IPsec などがある。" - -msgid "" -"The practice of utilizing a secondary environment to elastically build " -"instances on-demand when the primary environment is resource constrained." -msgstr "" -"主環境がリソース制限されたとき、要求時に応じてインスタンスを伸縮自在に構築す" -"るために、副環境を利用する慣習。" - -msgid "" -"The primary load balancing configuration object. Specifies the virtual IP " -"address and port where client traffic is received. Also defines other " -"details such as the load balancing method to be used, protocol, and so on. " -"This entity is sometimes known in load-balancing products as a virtual " -"server, vserver, or listener." -msgstr "" -"主たる負荷分散の設定オブジェクト。クライアント通信を受け付ける仮想 IP とポー" -"トを指定する。使用する負荷分散方式、プロトコルなどの詳細も定義する。このエン" -"ティティは、virtual server、vserver、listener のような負荷分散製品においても" -"知られている。" - -msgid "" -"The process associating a Compute floating IP address with a fixed IP " -"address." -msgstr "" -"Compute の Floating IP アドレスと Fixed IP アドレスを関連づけるプロセス。" - -msgid "" -"The process of automating IP address allocation, deallocation, and " -"management. Currently provided by Compute, melange, and Networking." -msgstr "" -"IP アドレスの割り当て、割り当て解除、管理を自動化するプロセス。現在、" -"Compute、melange、Networking により提供される。" - -msgid "" -"The process of connecting a VIF or vNIC to a L2 network in Networking. 
In " -"the context of Compute, this process connects a storage volume to an " -"instance." -msgstr "" -"Networking において、仮想インターフェースや仮想 NIC を L2 ネットワークに接続" -"するプロセス。Compute の文脈では、ストレージボリュームをインスタンスに接続す" -"るプロセス。" - -msgid "" -"The process of copying data to a separate physical device for fault " -"tolerance and performance." -msgstr "" -"別の物理デバイスにデータをコピーする処理。耐障害性や性能のために行われる。" - -msgid "" -"The process of distributing Object Storage partitions across all drives in " -"the ring; used during initial ring creation and after ring reconfiguration." -msgstr "" -"リング内のすべてのドライブにわたり、Object Storage のパーティションを分散させ" -"る処理。初期リング作成中、リング再設定後に使用される。" - -msgid "" -"The process of filtering incoming network traffic. Supported by Compute." -msgstr "" -"入力ネットワーク通信をフィルタリングする処理。Compute によりサポートされる。" - -msgid "" -"The process of finding duplicate data at the disk block, file, and/or object " -"level to minimize storage usecurrently unsupported within OpenStack." -msgstr "" -"ディスク使用を最小化するために、ディスクブロック、ファイル、オブジェクトレベ" -"ルにあるデータの重複を見つけるプロセス。現在 OpenStack 内では未サポート。" - -msgid "" -"The process of migrating one or all virtual machine (VM) instances from one " -"host to another, compatible with both shared storage live migration and " -"block migration." -msgstr "" -"1つまたは全ての仮想マシン(VM)インスタンスをあるホストから別のホストにマイ" -"グレーションする処理。共有ストレージのライブマイグレーションとブロックマイグ" -"レーション両方と互換がある。" - -msgid "The process of moving a VM instance from one host to another." -msgstr "VM インスタンスをあるホストから別のホストに移動させる処理。" - -msgid "" -"The process of putting a file into a virtual machine image before the " -"instance is started." -msgstr "" -"インスタンスが起動する前に、仮想マシンイメージ中にファイルを配置する処理。" - -msgid "" -"The process of removing the association between a floating IP address and a " -"fixed IP address. Once this association is removed, the floating IP returns " -"to the address pool." -msgstr "" -"Floating IP アドレスと Fixed IP アドレスの関連付けを解除する処理。この関連付" -"けが解除されると、Floating IP はアドレスプールに戻されます。" - -msgid "" -"The process of removing the association between a floating IP address and " -"fixed IP and thus returning the floating IP address to the address pool." -msgstr "" -"Floating IP アドレスと Fixed IP の関連付けを削除する処理。これにより、" -"Floating IP アドレスをアドレスプールに返す。" - -msgid "" -"The process of spreading client requests between two or more nodes to " -"improve performance and availability." -msgstr "" -"パフォーマンスや可用性を向上するために、2 つ以上のノード間でクライアントリク" -"エストを分散する処理。" - -msgid "" -"The process of taking a floating IP address from the address pool so it can " -"be associated with a fixed IP on a guest VM instance." -msgstr "" -"アドレスプールから Floating IP アドレスを取得するプロセス。ゲスト仮想マシンイ" -"ンスタンスに Fixed IP を関連付けられるようにする。" - -msgid "" -"The process that confirms that the user, process, or client is really who " -"they say they are through private key, secret token, password, fingerprint, " -"or similar method." -msgstr "" -"ユーザー、プロセスまたはクライアントが、秘密鍵、秘密トークン、パスワード、指" -"紋または同様の方式により示されている主体と本当に同じであることを確認するプロ" -"セス。" - -msgid "" -"The project name for the Telemetry service, which is an integrated project " -"that provides metering and measuring facilities for OpenStack." -msgstr "" -"Telemetry サービスのプロジェクト名。OpenStack 向けにメータリングと計測機能を" -"提供する、統合プロジェクト。" - -msgid "The project that provides OpenStack Identity services." -msgstr "OpenStack Identity サービスを提供するプロジェクト。" - -msgid "" -"The protocol by which layer-3 IP addresses are resolved into layer-2 link " -"local addresses." 
-msgstr "L3 IP プロトコルが L2 リンクローカルアドレスに解決されるプロトコル。" - -msgid "" -"The router advertisement daemon, used by the Compute VLAN manager and " -"FlatDHCP manager to provide routing services for VM instances." -msgstr "" -"ルーター通知デーモン。仮想マシンインスタンスにルーティングサービスを提供する" -"ために、Compute の VLAN マネージャーと FlatDHCP マネージャーにより使用され" -"る。" - -msgid "" -"The software package used to provide AMQP messaging capabilities within " -"Compute. Default package is RabbitMQ." -msgstr "" -"Compute 内で AMQP メッセージング機能を提供するために使用されるソフトウェア" -"パッケージ。標準のパッケージは RabbitMQ。" - -msgid "" -"The source used by Identity service to retrieve user information; an " -"OpenLDAP server, for example." -msgstr "" -"ユーザー情報を取得するために、Identity により使用されるソース。例えば、" -"OpenLDAP。" - -msgid "" -"The step in the Compute scheduling process when hosts that cannot run VMs " -"are eliminated and not chosen." -msgstr "" -"VM を実行できないホストを排除し、選択されないようにする Compute のスケジュー" -"リング処理の段階。" - -msgid "" -"The storage method used by the Identity service catalog service to store and " -"retrieve information about API endpoints that are available to the client. " -"Examples include an SQL database, LDAP database, or KVS back end." -msgstr "" -"クライアントが利用可能な API エンドポイントに関する情報を保存、取得するのに、" -"Identity サービスのカタログサービスが使用する保存方式。SQL データベース、" -"LDAP データベース、KVS バックエンドなどがある。" - -msgid "" -"The sum of each cost used when deciding where to start a new VM instance in " -"Compute." -msgstr "" -"Compute で新しい仮想マシンを起動する場所を判断するときに使用される各コストの" -"合計。" - -msgid "The tenant who owns an Image service virtual machine image." -msgstr "Image サービスの仮想マシンイメージを所有するテナント。" - -msgid "" -"The transfer of data, usually in the form of files, from one computer to " -"another." -msgstr "" -"あるコンピューターから他のコンピューターへのデータの転送。通常はファイルの形" -"式。" - -msgid "" -"The underlying format that a disk image for a VM is stored as within the " -"Image service back-end store. For example, AMI, ISO, QCOW2, VMDK, and so on." -msgstr "" -"仮想マシンのディスクイメージが Image service のバックエンドストア内で保存され" -"る、バックエンドの形式。AMI、ISO、QCOW2、VMDK などがある。" - -msgid "" -"The universal measurement of how quickly data is transferred from place to " -"place." -msgstr "" -"データがある場所から別の場所にどのくらい速く転送されるかの普遍的な計測基準。" - -msgid "" -"The web-based management interface for OpenStack. An alternative name for " -"horizon." -msgstr "OpenStack 用 Web ベース管理インターフェース。Horizon の別名。" - -msgid "" -"This glossary offers a list of terms and definitions to define a vocabulary " -"for OpenStack-related concepts." -msgstr "" -"この用語集は、OpenStack 関連の概念の語彙を定義するために、用語や定義の一覧を" -"提供します。" - -msgid "" -"To add to OpenStack glossary, clone the openstack/" -"openstack-manuals repository and update the source file " -"doc/glossary/glossary-terms.xml through the OpenStack " -"contribution process." -msgstr "" -"OpenStack 用語集に追加する場合、OpenStack の貢献プロセスに沿って、openstack/openstack-manuals リポジトリー をク" -"ローンし、ソースファイル doc/glossary/glossary-terms.xml を更新してください。" - -msgid "" -"To add to this glossary follow the OpenStack Documentation Contributor Guide." -msgstr "" -"この用語集に追加する場合、 OpenStack Documentation Contributor Guide を参照" -"してください。" - -msgid "" -"Tool used for maintaining Address Resolution Protocol packet filter rules in " -"the Linux kernel firewall modules. Used along with iptables, ebtables, and " -"ip6tables in Compute to provide firewall services for VMs." -msgstr "" -"Linux カーネルファイアウォールモジュールで ARP パケットフィルタールールを維持" -"するために使用されるツール。仮想マシン向けのファイアウォールサービスを提供す" -"るために、Compute で iptables、ebtables、ip6tables と一緒に使用される。" - -msgid "" -"Tool used in OpenStack development to ensure correctly ordered testing of " -"changes in parallel." 
-msgstr "" -"OpenStack 開発で使用されているツールで、変更のテストを正しい順番を保証しなが" -"ら並列に実行する。" - -msgid "Tool used to run jobs automatically for OpenStack development." -msgstr "OpenStack 開発のためにジョブを自動的に実行するために使用されるツール。" - -msgid "" -"Tool used to set up, maintain, and inspect the tables of IPv6 packet filter " -"rules in the Linux kernel. In OpenStack Compute, ip6tables is used along " -"with arptables, ebtables, and iptables to create firewalls for both nodes " -"and VMs." -msgstr "" -"Linux カーネルで IPv6 パケットフィルタールールのテーブルをセットアップ、維" -"持、検査するために使用されるツール。OpenStack Compute では、ノードと仮想マシ" -"ンの両方に対するファイアウォールを作成するために、ip6tables が arptables、" -"ebtables、iptables と一緒に使用される。" - -msgid "Torpedo" -msgstr "Torpedo" - -msgid "TripleO" -msgstr "TripleO" - -msgid "" -"Type of Compute scheduler that evenly distributes instances among available " -"hosts." -msgstr "" -"利用可能なホスト間でインスタンスを平等に分散させる、Compute のスケジューラー" -"の一種。" - -msgid "U" -msgstr "U" - -msgid "UUID for each Compute or Image service VM flavor or instance type." -msgstr "" -"Compute や Image service の仮想マシンの各フレーバーやインスタンスタイプの " -"UUID。" - -msgid "UUID used by Image service to uniquely identify each VM image." -msgstr "" -"各仮想マシンイメージを一意に識別するために Image service により使用される " -"UUID。" - -msgid "Ubuntu" -msgstr "Ubuntu" - -msgid "" -"Under the Compute distributed scheduler, this is calculated by looking at " -"the capabilities of each host relative to the flavor of the VM instance " -"being requested." -msgstr "" -"Compute の分散スケジューラーにおいて、要求している仮想マシンインスタンスのフ" -"レーバーに関連する、各ホストのキャパシティーにより計算される。" - -msgid "" -"Unique ID applied to each storage volume under the Block Storage control." -msgstr "" -"Block Storage の管理下にある各ストレージボリュームに適用される一意な ID。" - -msgid "Unique ID assigned to each Networking VIF." -msgstr "各 Networking VIF に割り当てられる一意な ID。" - -msgid "" -"Unique ID assigned to each Object Storage request; used for debugging and " -"tracing." -msgstr "" -"Object Storage の各リクエストに割り当てられる一意な ID。デバッグやトレースに使用される。" - -msgid "" -"Unique ID assigned to each guest VM instance." -msgstr "" -"各ゲスト仮想マシンインスタンスに割り" -"当てられる一意な ID。" - -msgid "" -"Unique ID assigned to each network segment within Networking. Same as " -"network UUID." -msgstr "" -"Networking 内の各ネットワークセグメントに割り当てられる一意な ID。ネットワー" -"ク UUID と同じ。" - -msgid "Unique ID assigned to each request sent to Compute." -msgstr "Compute に送られる各リクエストに割り振られる一意な ID。" - -msgid "" -"Unique ID assigned to each service that is available in the Identity service " -"catalog." -msgstr "" -"Identity のサービスカタログで利用可能な各サービスに割り当てられる一意な ID。" - -msgid "" -"Unique ID assigned to each tenant within the Identity service. The project " -"IDs map to the tenant IDs." -msgstr "" -"Identity 内で各テナントに割り当てられる一意な ID。プロジェクト ID は、テナン" -"ト ID に対応付けられる。" - -msgid "Unique ID for a Networking VIF or vNIC in the form of a UUID." -msgstr "Networking 仮想インターフェースや vNIC 用の一意な UUID 形式の ID。" - -msgid "" -"Unique ID for a Networking network segment." -msgstr "" -"Networking のネットワークセグメントの" -"一意な ID。" - -msgid "Unique ID for a Networking port." -msgstr "Networking ポートのユニーク ID。" - -msgid "" -"Unique numeric ID associated with each user in Identity, conceptually " -"similar to a Linux or LDAP UID." -msgstr "" -"Identity で各ユーザーと関連付けられた一意な数値 ID。概念として、Linux や " -"LDAP の UID を同じ。" - -msgid "Uniquely ID for an Object Storage object." -msgstr "Object Storage オブジェクト用の一意な ID。" - -msgid "" -"Unless required by applicable law or agreed to in writing, software " -"distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT " -"WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the " -"License for the specific language governing permissions and limitations " -"under the License." -msgstr "" -"Unless required by applicable law or agreed to in writing, software " -"distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT " -"WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the " -"License for the specific language governing permissions and limitations " -"under the License." - -msgid "" -"Use this glossary to get definitions of OpenStack-related words and phrases." -msgstr "" -"OpenStack 関連の用語や言い回しの定義を確認するために、この用語集を使用してく" -"ださい。" - -msgid "" -"Used along with an EC2 access key when communicating with the Compute EC2 " -"API; used to digitally sign each request." -msgstr "" -"Compute EC2 API 利用時に EC2 アクセスキーと一緒に使用される。各リクエストを電" -"子署名するために使用される。" - -msgid "Used along with an EC2 secret key to access the Compute EC2 API." -msgstr "Compute EC2 API にアクセスするために、EC2 秘密鍵と一緒に使用される。" - -msgid "Used along with an EKI to create an EMI." -msgstr "EMI を作成するために、EKI と一緒に使用する。" - -msgid "Used along with an ERI to create an EMI." -msgstr "EMI を作成するために、ERI と一緒に使用する。" - -msgid "" -"Used along with arptables and ebtables, iptables create firewalls in " -"Compute. iptables are the tables provided by the Linux kernel firewall " -"(implemented as different Netfilter modules) and the chains and rules it " -"stores. Different kernel modules and programs are currently used for " -"different protocols: iptables applies to IPv4, ip6tables to IPv6, arptables " -"to ARP, and ebtables to Ethernet frames. Requires root privilege to " -"manipulate." -msgstr "" -"Compute においてファイアウォールを作成する、arptables、ebtables、iptables と" -"一緒に使用される。iptables は、Linux カーネルファイアウォール (別の " -"Netfilter モジュール) により提供されるテーブル、それを保存するチェインやルー" -"ル。複数のカーネルモジュールとプログラムが、別々のプロトコルに対して使用され" -"る。iptables は IPv4、ip6tables は IPv6、arptables は ARP、ebtables は " -"Ethernet フレームに適用される。操作すうために root 権限が必要になる。" - -msgid "" -"Used by Image service to obtain images on the local host rather than re-" -"downloading them from the image server each time one is requested." -msgstr "" -"イメージが要求されたときに、イメージサーバーから再ダウンロードするのではな" -"く、ローカルホストにあるイメージを取得するために、Image service により使用さ" -"れる。" - -msgid "" -"Used by Object Storage devices to determine which storage devices are " -"suitable for the job. Devices are weighted by size." -msgstr "" -"どのストレージデバイスがジョブに対して適切であるかを判断するために、Object " -"Storage デバイスにより使用される。デバイスは容量により重み付けされる。" - -msgid "" -"Used by Object Storage to determine the location of an object in the ring. " -"Maps objects to partitions." -msgstr "" -"リング内でオブジェクトの場所を判断するために、Object Storage により使用され" -"る。オブジェクトをパーティションに対応付ける。" - -msgid "" -"Used by Object Storage to determine which partition data should reside on." -msgstr "" -"パーティションデータが配置されるべき場所を決めるために、Object Storage により" -"使用される。" - -msgid "Used by Object Storage to push object replicas." -msgstr "" -"オブジェクトの複製をプッシュするために Object Storage により使用される。" - -msgid "" -"Used to mark Object Storage objects that have been deleted; ensures that the " -"object is not updated on another node after it has been deleted." -msgstr "" -"Object Storage のオブジェクトが削除済みであることを示す印をつけるために使用さ" -"れる。オブジェクトの削除後、他のノードにおいて更新されないことを保証する。" - -msgid "" -"Used to restrict communications between hosts and/or nodes, implemented in " -"Compute using iptables, arptables, ip6tables, and ebtables." -msgstr "" -"ホストノード間の通信を制限する為に使用される。iptables, arptables, " -"ip6tables, ebtables を使用して Compute により実装される。" - -msgid "Used to track segments of a large object within Object Storage." 
-msgstr "Object Storage 内で大きなオブジェクトを管理するために使用される。" - -msgid "User Mode Linux (UML)" -msgstr "User Mode Linux (UML)" - -msgid "User-defined alphanumeric string in Compute; the name of a project." -msgstr "Compute でユーザーが定義した英数文字列。プロジェクトの名前。" - -msgid "" -"Users of Object Storage interact with the service through the proxy server, " -"which in turn looks up the location of the requested data within the ring " -"and returns the results to the user." -msgstr "" -"Object Storage のユーザーは、リング中にあるリクエストされたデータの場所を参照" -"してユーザに結果を返すプロキシサーバーを介して、このサービスに通信する。" - -msgid "V" -msgstr "V" - -msgid "VIF UUID" -msgstr "VIF UUID" - -msgid "VIP" -msgstr "仮想 IP" - -msgid "VLAN" -msgstr "VLAN" - -msgid "VLAN manager" -msgstr "VLAN マネージャー" - -msgid "VLAN network" -msgstr "VLAN ネットワーク" - -msgid "VM Remote Control (VMRC)" -msgstr "VM Remote Control (VMRC)" - -msgid "VM disk (VMDK)" -msgstr "VM disk (VMDK)" - -msgid "VM image" -msgstr "仮想マシンイメージ" - -msgid "VM image container format supported by Image service." -msgstr "Image service によりサポートされる仮想マシンイメージのコンテナー形式。" - -msgid "VMware API" -msgstr "VMware API" - -msgid "VMware NSX Neutron plug-in" -msgstr "VMware NSX Neutron プラグイン" - -msgid "VNC proxy" -msgstr "VNC プロキシ" - -msgid "VXLAN" -msgstr "VXLAN" - -msgid "Virtual Central Processing Unit (vCPU)" -msgstr "仮想CPU (vCPU)" - -msgid "Virtual Disk Image (VDI)" -msgstr "Virtual Disk Image (VDI)" - -msgid "Virtual Hard Disk (VHD)" -msgstr "Virtual Hard Disk (VHD)" - -msgid "Virtual Network Computing (VNC)" -msgstr "Virtual Network Computing (VNC)" - -msgid "Virtual Network InterFace (VIF)" -msgstr "仮想ネットワークインタフェース (VIF)" - -msgid "" -"Virtual network type that uses neither VLANs nor tunnels to segregate tenant " -"traffic. Each flat network typically requires a separate underlying physical " -"interface defined by bridge mappings. However, a flat network can contain " -"multiple subnets." -msgstr "" -"テナントの通信を分離するために、VLAN もトンネルも使用しない仮想ネットワーク方" -"式。各フラットネットワークは、一般的にブリッジマッピングにより定義された、" -"バックエンドに専用の物理インターフェースを必要とする。しかしながら、フラット" -"ネットワークは複数のサブネットを含められる。" - -msgid "VirtualBox" -msgstr "VirtualBox" - -msgid "" -"Virtualization API library used by OpenStack to interact with many of its " -"supported hypervisors." -msgstr "" -"多くのサポートハイパーバイザーと通信するために、OpenStack により使用される仮" -"想化 API ライブラリー。" - -msgid "Volume API" -msgstr "Volume API" - -msgid "" -"Volume that does not save the changes made to it and reverts to its original " -"state when the current user relinquishes control." -msgstr "" -"変更が保存されないボリューム。現在のユーザーが制御を解放したとき、元の状態に" -"戻される。" - -msgid "W" -msgstr "W" - -msgid "" -"WSGI middleware component of Object Storage that serves container data as a " -"static web page." -msgstr "" -"コンテナーデータを静的 Web ページとして取り扱う Object Storage の WSGI ミドル" -"ウェアコンポーネント。" - -msgid "" -"Within RabbitMQ and Compute, it is the messaging interface that is used by " -"the scheduler service to receive capability messages from the compute, " -"volume, and network nodes." 
-msgstr "" -"RabbitMQ と Compute の中で、コンピュートノード、ボリュームノード、ネットワー" -"クノードからのメッセージを受け付ける機能のために、スケジューラーサービスによ" -"り使用されるメッセージングインターフェース。" - -msgid "Workflow service" -msgstr "Workflow サービス" - -msgid "X" -msgstr "X" - -msgid "XFS" -msgstr "XFS" - -msgid "Xen" -msgstr "Xen" - -msgid "Xen API" -msgstr "Xen API" - -msgid "Xen Cloud Platform (XCP)" -msgstr "Xen Cloud Platform (XCP)" - -msgid "Xen Storage Manager Volume Driver" -msgstr "Xen Storage Manager Volume Driver" - -msgid "" -"Xen is a hypervisor using a microkernel design, providing services that " -"allow multiple computer operating systems to execute on the same computer " -"hardware concurrently." -msgstr "" -"Xen は、マイクロカーネル設計を使用したハイパーバイザー。複数のコンピューター" -"オペレーティングシステムを同じコンピューターハードウェアで同時に実行できるよ" -"うになるサービスを提供する。" - -msgid "XenServer" -msgstr "XenServer" - -msgid "XenServer hypervisor" -msgstr "XenServer ハイパーバイザー" - -msgid "Y" -msgstr "Y" - -msgid "Z" -msgstr "Z" - -msgid "ZeroMQ" -msgstr "ZeroMQ" - -msgid "Zuul" -msgstr "Zuul" - -msgid "absolute limit" -msgstr "絶対制限" - -msgid "access control list" -msgstr "アクセス制御リスト" - -msgid "access control list (ACL)" -msgstr "アクセス制御リスト (ACL)" - -msgid "access key" -msgstr "アクセスキー" - -msgid "account" -msgstr "アカウント" - -msgid "account auditor" -msgstr "account auditor" - -msgid "account database" -msgstr "アカウントデータベース" - -msgid "account reaper" -msgstr "account reaper" - -msgid "account server" -msgstr "account server" - -msgid "account service" -msgstr "account service" - -msgid "accounting" -msgstr "アカウンティング" - -msgid "accounts" -msgstr "アカウント" - -msgid "active/active configuration" -msgstr "アクティブ/アクティブ設定" - -msgid "active/passive configuration" -msgstr "アクティブ/パッシブ設定" - -msgid "address pool" -msgstr "アドレスプール" - -msgid "admin API" -msgstr "管理 API" - -msgid "admin server" -msgstr "管理サーバー" - -msgid "alert" -msgstr "アラート" - -msgid "alerts" -msgstr "アラート" - -msgid "allocate" -msgstr "確保" - -msgid "allocate, definition of" -msgstr "確保, 定義" - -msgid "applet" -msgstr "アプレット" - -msgid "application server" -msgstr "アプリケーションサーバー" - -msgid "application servers" -msgstr "アプリケーションサーバー" - -msgid "arptables" -msgstr "arptables" - -msgid "associate" -msgstr "割り当て" - -msgid "associate, definition of" -msgstr "割り当て, 定義" - -msgid "attach" -msgstr "接続" - -msgid "attach, definition of" -msgstr "接続, 定義" - -msgid "attachment (network)" -msgstr "アタッチ(ネットワーク)" - -msgid "auditing" -msgstr "監査" - -msgid "auditor" -msgstr "auditor" - -msgid "auth node" -msgstr "認可ノード" - -msgid "authentication" -msgstr "認証" - -msgid "authentication token" -msgstr "認証トークン" - -msgid "authentication tokens" -msgstr "認証トークン" - -msgid "authorization" -msgstr "認可" - -msgid "authorization node" -msgstr "認可ノード" - -msgid "auto declare" -msgstr "自動宣言" - -msgid "availability zone" -msgstr "アベイラビリティゾーン" - -msgid "back end" -msgstr "バックエンド" - -msgid "back-end catalog" -msgstr "バックエンドカタログ" - -msgid "back-end interactions" -msgstr "バックエンド操作" - -msgid "back-end store" -msgstr "バックエンドストア" - -msgid "backup restore and disaster recovery as a service" -msgstr "backup restore and disaster recovery as a service" - -msgid "bandwidth" -msgstr "帯域" - -msgid "barbican" -msgstr "barbican" - -msgid "bare" -msgstr "bare" - -msgid "bare, definition of" -msgstr "bare, 定義" - -msgid "base image" -msgstr "ベースイメージ" - -msgid "basics of" -msgstr "basics of" - -msgid "binary" -msgstr "バイナリ" - -msgid "bit" -msgstr "ビット" - -msgid "bits per second (BPS)" -msgstr "bps" - -msgid "bits, definition of" -msgstr "ビット, 定義" - -msgid "block device" -msgstr "ブロックデバイス" - -msgid "block migration" -msgstr 
"ブロックマイグレーション" - -msgid "bootable disk image" -msgstr "ブータブルディスクイメージ" - -msgid "browser" -msgstr "ブラウザー" - -msgid "browsers, definition of" -msgstr "ブラウザー, 定義" - -msgid "builder file" -msgstr "ビルダーファイル" - -msgid "builder files" -msgstr "ビルダーファイル" - -msgid "bursting" -msgstr "超過利用" - -msgid "button class" -msgstr "ボタンクラス" - -msgid "button classes" -msgstr "ボタンクラス" - -msgid "byte" -msgstr "バイト" - -msgid "bytes, definition of" -msgstr "バイト, 定義" - -msgid "cache pruner" -msgstr "cache pruner" - -msgid "cache pruners" -msgstr "cache pruner" - -msgid "capability" -msgstr "キャパシティ" - -msgid "capacity cache" -msgstr "capacity cache" - -msgid "capacity updater" -msgstr "capacity updater" - -msgid "catalog" -msgstr "カタログ" - -msgid "catalog service" -msgstr "カタログサービス" - -msgid "ceilometer" -msgstr "ceilometer" - -msgid "cell" -msgstr "セル" - -msgid "cell forwarding" -msgstr "セルフォワーディング" - -msgid "cell manager" -msgstr "セルマネージャー" - -msgid "cell managers" -msgstr "セルマネージャー" - -msgid "cells" -msgstr "セル" - -msgid "certificate authority" -msgstr "認証局" - -msgid "certificate authority (Compute)" -msgstr "認証局 (Compute)" - -msgid "chance scheduler" -msgstr "チャンススケジューラー" - -msgid "changes since" -msgstr "changes since" - -msgid "child cell" -msgstr "子セル" - -msgid "child cells" -msgstr "子セル" - -msgid "cinder" -msgstr "cinder" - -msgid "cloud architect" -msgstr "クラウドアーキテクト" - -msgid "cloud computing" -msgstr "クラウドコンピューティング" - -msgid "cloud controller" -msgstr "クラウドコントローラー" - -msgid "cloud controller node" -msgstr "クラウドコントローラーノード" - -msgid "cloud controller nodes" -msgstr "クラウドコントローラーノード" - -msgid "cloud controllers" -msgstr "クラウドコントローラー" - -msgid "cloud-init" -msgstr "cloud-init" - -msgid "cloudadmin" -msgstr "cloudadmin" - -msgid "cloudpipe" -msgstr "cloudpipe" - -msgid "cloudpipe image" -msgstr "cloudpipe イメージ" - -msgid "code name" -msgstr "コード名" - -msgid "command filter" -msgstr "コマンドフィルター" - -msgid "command filters" -msgstr "コマンドフィルター" - -msgid "community project" -msgstr "コミュニティープロジェクト" - -msgid "community projects" -msgstr "コミュニティープロジェクト" - -msgid "compression" -msgstr "圧縮" - -msgid "compute controller" -msgstr "コンピュートコントローラー" - -msgid "compute host" -msgstr "コンピュートホスト" - -msgid "compute node" -msgstr "コンピュートノード" - -msgid "compute nodes" -msgstr "コンピュートノード" - -msgid "compute worker" -msgstr "コンピュートワーカー" - -msgid "concatenated object" -msgstr "連結オブジェクト" - -msgid "concatenated objects" -msgstr "連結オブジェクト" - -msgid "conductor" -msgstr "コンダクター" - -msgid "conductors" -msgstr "コンダクター" - -msgid "congress" -msgstr "congress" - -msgid "consistency window" -msgstr "一貫性ウインドウ" - -msgid "console log" -msgstr "コンソールログ" - -msgid "console logs" -msgstr "コンソールログ" - -msgid "container" -msgstr "コンテナー" - -msgid "container auditor" -msgstr "コンテナーオーディター" - -msgid "container auditors" -msgstr "コンテナーオーディター" - -msgid "container database" -msgstr "コンテナーデータベース" - -msgid "container databases" -msgstr "コンテナーデータベース" - -msgid "container format" -msgstr "コンテナーフォーマット" - -msgid "container server" -msgstr "コンテナーサーバー" - -msgid "container servers" -msgstr "コンテナーサーバー" - -msgid "container service" -msgstr "コンテナーサービス" - -msgid "containers" -msgstr "コンテナー" - -msgid "content delivery network (CDN)" -msgstr "コンテンツ配信ネットワーク (CDN)" - -msgid "controller node" -msgstr "コントローラーノード" - -msgid "controller nodes" -msgstr "コントローラーノード" - -msgid "core API" -msgstr "コアAPI" - -msgid "cost" -msgstr "コスト" - -msgid "credentials" -msgstr "クレデンシャル" - -msgid "current workload" -msgstr "カレントワークロード" - -msgid "customer" -msgstr "カスタマー" - -msgid "customers" -msgstr "カスタマー" - 
-msgid "customization module" -msgstr "カスタムモジュール" - -msgid "daemon" -msgstr "デーモン" - -msgid "daemons" -msgstr "デーモン" - -msgid "data" -msgstr "データ" - -msgid "data encryption" -msgstr "データ暗号化" - -msgid "data store" -msgstr "データストア" - -msgid "data store, definition of" -msgstr "データストア, 定義" - -msgid "database ID" -msgstr "データベース ID" - -msgid "database replicator" -msgstr "データベースレプリケーター" - -msgid "database replicators" -msgstr "データベースレプリケーター" - -msgid "databases" -msgstr "データベース" - -msgid "deallocate" -msgstr "割り当て解除" - -msgid "deallocate, definition of" -msgstr "割り当て解除, 定義" - -msgid "deduplication" -msgstr "重複排除" - -msgid "default panel" -msgstr "デフォルトパネル" - -msgid "default panels" -msgstr "デフォルトパネル" - -msgid "default tenant" -msgstr "デフォルトテナント" - -msgid "default tenants" -msgstr "デフォルトテナント" - -msgid "default token" -msgstr "デフォルトトークン" - -msgid "default tokens" -msgstr "デフォルトトークン" - -msgid "definition of" -msgstr "定義" - -msgid "definitions of" -msgstr "定義" - -msgid "delayed delete" -msgstr "遅延削除" - -msgid "delivery mode" -msgstr "デリバリーモード" - -msgid "denial of service (DoS)" -msgstr "サービス妨害 (DoS)" - -msgid "deprecated auth" -msgstr "非推奨認証" - -msgid "designate" -msgstr "designate" - -msgid "developer" -msgstr "developer" - -msgid "device ID" -msgstr "デバイス ID" - -msgid "device weight" -msgstr "デバイスウェイト" - -msgid "direct consumer" -msgstr "直接使用者" - -msgid "direct consumers" -msgstr "直接使用者" - -msgid "direct exchange" -msgstr "直接交換" - -msgid "direct exchanges" -msgstr "直接交換" - -msgid "direct publisher" -msgstr "直接発行者" - -msgid "direct publishers" -msgstr "直接発行者" - -msgid "disassociate" -msgstr "関連付け解除" - -msgid "disk encryption" -msgstr "ディスク暗号化" - -msgid "disk format" -msgstr "ディスクフォーマット" - -msgid "dispersion" -msgstr "dispersion" - -msgid "distributed virtual router (DVR)" -msgstr "分散仮想ルーター (DVR)" - -msgid "dnsmasq" -msgstr "dnsmasq" - -msgid "domain" -msgstr "ドメイン" - -msgid "domain, definition of" -msgstr "ドメイン, 定義" - -msgid "download" -msgstr "ダウンロード" - -msgid "download, definition of" -msgstr "ダウンロード, 定義" - -msgid "drivers" -msgstr "ドライバー" - -msgid "durable exchange" -msgstr "永続交換" - -msgid "durable queue" -msgstr "永続キュー" - -msgid "east-west traffic" -msgstr "イースト・ウエスト通信" - -msgid "ebtables" -msgstr "ebtables" - -msgid "encapsulation" -msgstr "カプセル化" - -msgid "encryption" -msgstr "暗号化" - -msgid "encryption, definition of" -msgstr "暗号化, 定義" - -msgid "endpoint" -msgstr "エンドポイント" - -msgid "endpoint registry" -msgstr "エンドポイントレジストリ" - -msgid "endpoint template" -msgstr "エンドポイントテンプレート" - -msgid "endpoint templates" -msgstr "エンドポイントテンプレート" - -msgid "endpoints" -msgstr "エンドポイント" - -msgid "entity" -msgstr "エンティティー" - -msgid "entity, definition of" -msgstr "エンティティー, 定義" - -msgid "ephemeral image" -msgstr "一時イメージ" - -msgid "ephemeral images" -msgstr "一時イメージ" - -msgid "ephemeral volume" -msgstr "一時ボリューム" - -msgid "euca2ools" -msgstr "euca2ools" - -msgid "evacuate" -msgstr "退避" - -msgid "evacuation, definition of" -msgstr "退避, 定義" - -msgid "exchange" -msgstr "交換" - -msgid "exchange type" -msgstr "交換種別" - -msgid "exchange types" -msgstr "交換種別" - -msgid "exclusive queue" -msgstr "排他キュー" - -msgid "exclusive queues" -msgstr "排他キュー" - -msgid "extended attributes (xattr)" -msgstr "拡張属性 (xattr)" - -msgid "extension" -msgstr "エクステンション" - -msgid "extensions" -msgstr "拡張" - -msgid "external network" -msgstr "外部ネットワーク" - -msgid "external network, definition of" -msgstr "外部ネットワーク, 定義" - -msgid "extra specs" -msgstr "拡張仕様" - -msgid "extra specs, definition of" -msgstr "拡張仕様, 定義" - -msgid "fan-out exchange" -msgstr 
"ファンアウト交換" - -msgid "federated identity" -msgstr "連合認証" - -msgid "fill-first scheduler" -msgstr "充填優先スケジューラー" - -msgid "filter" -msgstr "フィルター" - -msgid "filtering" -msgstr "フィルタリング" - -msgid "firewall" -msgstr "ファイアウォール" - -msgid "firewalls" -msgstr "ファイアウォール" - -msgid "fixed" -msgstr "固定" - -msgid "fixed IP address" -msgstr "Fixed IP アドレス" - -msgid "fixed IP addresses" -msgstr "Fixed IP アドレス" - -msgid "flat mode injection" -msgstr "フラットモードインジェクション" - -msgid "flat network" -msgstr "フラットネットワーク" - -msgid "flavor" -msgstr "フレーバー" - -msgid "flavor ID" -msgstr "フレーバー ID" - -msgid "floating" -msgstr "Floating" - -msgid "floating IP address" -msgstr "Floating IP アドレス" - -msgid "freezer" -msgstr "freezer" - -msgid "front end" -msgstr "フロントエンド" - -msgid "front end, definition of" -msgstr "フロントエンド, 定義" - -msgid "gateway" -msgstr "ゲートウェイ" - -msgid "generic receive offload (GRO)" -msgstr "generic receive offload (GRO)" - -msgid "generic routing encapsulation (GRE)" -msgstr "generic routing encapsulation (GRE)" - -msgid "glance" -msgstr "glance" - -msgid "glance API server" -msgstr "glance API サーバー" - -msgid "glance registry" -msgstr "Glance レジストリ" - -msgid "global endpoint template" -msgstr "グローバルエンドポイントテンプレート" - -msgid "golden image" -msgstr "ゴールデンイメージ" - -msgid "guest OS" -msgstr "ゲスト OS" - -msgid "handover" -msgstr "handover" - -msgid "hard reboot" -msgstr "ハードリブート" - -msgid "hard vs. soft" -msgstr "ハード対ソフト" - -msgid "health monitor" -msgstr "ヘルスモニター" - -msgid "heat" -msgstr "heat" - -msgid "high availability (HA)" -msgstr "高可用性" - -msgid "horizon" -msgstr "Horizon" - -msgid "horizon plug-in" -msgstr "horizon プラグイン" - -msgid "horizon plug-ins" -msgstr "horizon プラグイン" - -msgid "host" -msgstr "ホスト" - -msgid "host aggregate" -msgstr "ホストアグリゲート" - -msgid "hosts, definition of" -msgstr "ホスト, 定義" - -msgid "http://www.apache.org/licenses/LICENSE-2.0" -msgstr "http://www.apache.org/licenses/LICENSE-2.0" - -msgid "hybrid cloud" -msgstr "ハイブリッドクラウド" - -msgid "hyperlink" -msgstr "ハイパーリンク" - -msgid "hypervisor" -msgstr "ハイパーバイザー" - -msgid "hypervisor pool" -msgstr "ハイパーバイザープール" - -msgid "hypervisor pools" -msgstr "ハイパーバイザープール" - -msgid "hypervisors" -msgstr "ハイパーバイザー" - -msgid "iSCSI" -msgstr "iSCSI" - -msgid "iSCSI Qualified Name (IQN)" -msgstr "iSCSI Qualified Name (IQN)" - -msgid "" -"iSCSI Qualified Name (IQN) is the format most commonly used for iSCSI names, " -"which uniquely identify nodes in an iSCSI network. All IQNs follow the " -"pattern iqn.yyyy-mm.domain:identifier, where 'yyyy-mm' is the year and month " -"in which the domain was registered, 'domain' is the reversed domain name of " -"the issuing organization, and 'identifier' is an optional string which makes " -"each IQN under the same domain unique. For example, 'iqn.2015-10.org." -"openstack.408ae959bce1'." 
-msgstr "" -"iSCSI Qualified Name (IQN) は iSCSI の名前として最も広く使われている形式で、 " -"iSCSI ネットワークで一意にノードを識別するのに使われます。すべての IQN は " -"iqn.yyyy-mm.domain:identifier という形式です。ここで、 'yyyy-mm' はそのドメイ" -"ンが登録された年と月、 'domain' は発行組織の登録されたドメイン名、 " -"'identifier' は同じドメイン内の各 IQN 番号を一意なものにするためのオプション" -"文字列です。例えば 'iqn.2015-10.org.openstack.408ae959bce1'" - -msgid "iSCSI protocol" -msgstr "iSCSI プロトコル" - -msgid "identity provider" -msgstr "識別情報プロバイダー" - -msgid "image" -msgstr "イメージ" - -msgid "image ID" -msgstr "イメージ ID" - -msgid "image UUID" -msgstr "イメージ UUID" - -msgid "image cache" -msgstr "イメージキャッシュ" - -msgid "image membership" -msgstr "イメージメンバーシップ" - -msgid "image owner" -msgstr "イメージ所有者" - -msgid "image registry" -msgstr "イメージレジストリー" - -msgid "image status" -msgstr "イメージ状態" - -msgid "image store" -msgstr "イメージストア" - -msgid "images" -msgstr "イメージ" - -msgid "incubated project" -msgstr "インキュベートプロジェクト" - -msgid "incubated projects" -msgstr "育成プロジェクト" - -msgid "ingress filtering" -msgstr "イングレスフィルタリング" - -msgid "injection" -msgstr "インジェクション" - -msgid "instance" -msgstr "インスタンス" - -msgid "instance ID" -msgstr "インスタンス ID" - -msgid "instance UUID" -msgstr "インスタンス UUID" - -msgid "instance state" -msgstr "インスタンス状態" - -msgid "instance tunnels network" -msgstr "インスタンストンネルネットワーク" - -msgid "instance type" -msgstr "インスタンスタイプ" - -msgid "instance type ID" -msgstr "インスタンスタイプ ID" - -msgid "instances" -msgstr "インスタンス" - -msgid "interface" -msgstr "インターフェース" - -msgid "interface ID" -msgstr "インターフェース ID" - -msgid "ip6tables" -msgstr "ip6tables" - -msgid "ipset" -msgstr "ipset" - -msgid "iptables" -msgstr "iptables" - -msgid "ironic" -msgstr "ironic" - -msgid "itsec" -msgstr "itsec" - -msgid "jumbo frame" -msgstr "ジャンボフレーム" - -msgid "kernel-based VM (KVM)" -msgstr "kernel-based VM (KVM)" - -msgid "kernel-based VM (KVM) hypervisor" -msgstr "kernel-based VM (KVM) ハイパーバイザー" - -msgid "keystone" -msgstr "keystone" - -msgid "large object" -msgstr "ラージオブジェクト" - -msgid "libvirt" -msgstr "libvirt" - -msgid "live migration" -msgstr "ライブマイグレーション" - -msgid "load balancer" -msgstr "負荷分散装置" - -msgid "load balancing" -msgstr "負荷分散" - -msgid "magnum" -msgstr "magnum" - -msgid "management API" -msgstr "マネジメント API" - -msgid "management network" -msgstr "管理ネットワーク" - -msgid "manager" -msgstr "マネージャー" - -msgid "manifest" -msgstr "マニフェスト" - -msgid "manifest object" -msgstr "マニフェストオブジェクト" - -msgid "manifest objects" -msgstr "マニフェストオブジェクト" - -msgid "manifests" -msgstr "マニフェスト" - -msgid "manila" -msgstr "manila" - -msgid "maximum transmission unit (MTU)" -msgstr "最大転送単位 (MTU)" - -msgid "mechanism driver" -msgstr "メカニズムドライバー" - -msgid "melange" -msgstr "melange" - -msgid "membership" -msgstr "メンバーシップ" - -msgid "membership list" -msgstr "メンバーシップリスト" - -msgid "membership lists" -msgstr "メンバーシップリスト" - -msgid "memcached" -msgstr "memcached" - -msgid "memory overcommit" -msgstr "メモリーオーバーコミット" - -msgid "message broker" -msgstr "メッセージブローカー" - -msgid "message brokers" -msgstr "メッセージブローカー" - -msgid "message bus" -msgstr "メッセージバス" - -msgid "message queue" -msgstr "メッセージキュー" - -msgid "messages" -msgstr "メッセージ" - -msgid "migration" -msgstr "マイグレーション" - -msgid "mistral" -msgstr "mistral" - -msgid "monasca" -msgstr "monasca" - -msgid "multi-factor authentication" -msgstr "多要素認証" - -msgid "multi-host" -msgstr "マルチホスト" - -msgid "multinic" -msgstr "マルチ NIC" - -msgid "murano" -msgstr "murano" - -msgid "netadmin" -msgstr "netadmin" - -msgid "network" -msgstr "Network" - -msgid "network ID" -msgstr "ネットワーク ID" - -msgid "network IDs" -msgstr "ネットワーク ID" - -msgid "network UUID" 
-msgstr "ネットワーク UUID" - -msgid "network controller" -msgstr "ネットワークコントローラー" - -msgid "network controllers" -msgstr "ネットワークコントローラー" - -msgid "network manager" -msgstr "ネットワークマネージャー" - -msgid "network managers" -msgstr "ネットワークマネージャー" - -msgid "network namespace" -msgstr "ネットワーク名前空間" - -msgid "network node" -msgstr "ネットワークノード" - -msgid "network nodes" -msgstr "ネットワークノード" - -msgid "network segment" -msgstr "ネットワークセグメント" - -msgid "network segments" -msgstr "ネットワークセグメント" - -msgid "network worker" -msgstr "ネットワークワーカー" - -msgid "network workers" -msgstr "ネットワークワーカー" - -msgid "networks" -msgstr "ネットワーク" - -msgid "neutron" -msgstr "neutron" - -msgid "neutron API" -msgstr "neutron API" - -msgid "neutron manager" -msgstr "neutron マネージャー" - -msgid "neutron plug-in" -msgstr "neutron プラグイン" - -msgid "neutron plug-in for" -msgstr "neutron プラグイン" - -msgid "node" -msgstr "node" - -msgid "nodes" -msgstr "ノード" - -msgid "non-durable exchange" -msgstr "非永続交換" - -msgid "non-durable exchanges" -msgstr "非永続交換" - -msgid "non-durable queue" -msgstr "非永続キュー" - -msgid "non-durable queues" -msgstr "非永続キュー" - -msgid "non-persistent volume" -msgstr "非永続ボリューム" - -msgid "north-south traffic" -msgstr "ノース・サウス通信" - -msgid "nova" -msgstr "nova" - -msgid "nova-network" -msgstr "nova-network" - -msgid "object" -msgstr "オブジェクト" - -msgid "object auditor" -msgstr "オブジェクトオーディター" - -msgid "object auditors" -msgstr "オブジェクトオーディター" - -msgid "object expiration" -msgstr "オブジェクト有効期限" - -msgid "object hash" -msgstr "オブジェクトハッシュ" - -msgid "object path hash" -msgstr "オブジェクトパスハッシュ" - -msgid "object replicator" -msgstr "オブジェクトレプリケーター" - -msgid "object replicators" -msgstr "オブジェクトレプリケーター" - -msgid "object server" -msgstr "オブジェクトサーバー" - -msgid "object servers" -msgstr "オブジェクトサーバー" - -msgid "object versioning" -msgstr "オブジェクトバージョニング" - -msgid "objects" -msgstr "オブジェクト" - -msgid "openSUSE" -msgstr "openSUSE" - -msgid "operator" -msgstr "運用者" - -msgid "orphan" -msgstr "orphan" - -msgid "orphans" -msgstr "orphan" - -msgid "parent cell" -msgstr "親セル" - -msgid "parent cells" -msgstr "親セル" - -msgid "partition" -msgstr "パーティション" - -msgid "partition index" -msgstr "パーティションインデックス" - -msgid "partition index value" -msgstr "パーティションインデックス値" - -msgid "partition shift value" -msgstr "パーティションシフト値" - -msgid "partitions" -msgstr "パーティション" - -msgid "path MTU discovery (PMTUD)" -msgstr "path MTU discovery (PMTUD)" - -msgid "pause" -msgstr "一時停止" - -msgid "persistent message" -msgstr "永続メッセージ" - -msgid "persistent messages" -msgstr "永続メッセージ" - -msgid "persistent volume" -msgstr "永続ボリューム" - -msgid "personality file" -msgstr "パーソナリティーファイル" - -msgid "plug-in" -msgstr "プラグイン" - -msgid "plug-ins, definition of" -msgstr "プラグイン, 定義" - -msgid "policy service" -msgstr "ポリシーサービス" - -msgid "pool" -msgstr "プール" - -msgid "pool member" -msgstr "プールメンバー" - -msgid "port" -msgstr "ポート" - -msgid "port UUID" -msgstr "ポート UUID" - -msgid "ports" -msgstr "ポート" - -msgid "preseed" -msgstr "preseed" - -msgid "preseed, definition of" -msgstr "preseed, 定義" - -msgid "private" -msgstr "プライベート" - -msgid "private IP address" -msgstr "プライベート IP アドレス" - -msgid "private image" -msgstr "プライベートイメージ" - -msgid "private network" -msgstr "プライベートネットワーク" - -msgid "private networks" -msgstr "プライベートネットワーク" - -msgid "project" -msgstr "プロジェクト" - -msgid "project ID" -msgstr "プロジェクト ID" - -msgid "project VPN" -msgstr "プロジェクト VPN" - -msgid "projects" -msgstr "プロジェクト" - -msgid "promiscuous mode" -msgstr "プロミスキャスモード" - -msgid "protected property" -msgstr "保護プロパティー" - -msgid "provider" -msgstr "プロバイダー" - -msgid "proxy 
node" -msgstr "プロキシノード" - -msgid "proxy nodes" -msgstr "プロキシノード" - -msgid "proxy server" -msgstr "プロキシサーバー" - -msgid "proxy servers" -msgstr "プロキシサーバー" - -msgid "public" -msgstr "パブリック" - -msgid "public API" -msgstr "パブリック API" - -msgid "public APIs" -msgstr "パブリック API" - -msgid "public IP address" -msgstr "パブリック IP アドレス" - -msgid "public image" -msgstr "パブリックイメージ" - -msgid "public images" -msgstr "パブリックイメージ" - -msgid "public key authentication" -msgstr "公開鍵認証" - -msgid "public network" -msgstr "パブリックネットワーク" - -msgid "quarantine" -msgstr "隔離" - -msgid "queues" -msgstr "キュー" - -msgid "quota" -msgstr "クォータ" - -msgid "quotas" -msgstr "クォータ" - -msgid "radvd" -msgstr "radvd" - -msgid "rally" -msgstr "rally" - -msgid "rate limit" -msgstr "レートリミット" - -msgid "rate limits" -msgstr "レートリミット" - -msgid "raw" -msgstr "raw" - -msgid "raw format" -msgstr "raw 形式" - -msgid "rebalance" -msgstr "リバランス" - -msgid "rebalancing" -msgstr "リバランス" - -msgid "reboot" -msgstr "リブート" - -msgid "rebuild" -msgstr "リビルド" - -msgid "rebuilding" -msgstr "リビルド" - -msgid "record" -msgstr "レコード" - -msgid "record ID" -msgstr "レコード ID" - -msgid "record IDs" -msgstr "レコード ID" - -msgid "records" -msgstr "レコード" - -msgid "reference architecture" -msgstr "リファレンスアーキテクチャー" - -msgid "region" -msgstr "リージョン" - -msgid "registry" -msgstr "レジストリー" - -msgid "registry server" -msgstr "レジストリサーバー" - -msgid "registry servers" -msgstr "レジストリサーバー" - -msgid "replica" -msgstr "レプリカ" - -msgid "replica count" -msgstr "レプリカ数" - -msgid "replication" -msgstr "レプリケーション" - -msgid "replicator" -msgstr "レプリケーター" - -msgid "replicators" -msgstr "レプリケーター" - -msgid "request ID" -msgstr "リクエスト ID" - -msgid "request IDs" -msgstr "リクエスト ID" - -msgid "rescue image" -msgstr "レスキューイメージ" - -msgid "rescue images" -msgstr "レスキューイメージ" - -msgid "resize" -msgstr "リサイズ" - -msgid "resizing" -msgstr "リサイズ" - -msgid "ring" -msgstr "リング" - -msgid "ring builder" -msgstr "リングビルダー" - -msgid "ring builders" -msgstr "リングビルダー" - -msgid "rings" -msgstr "リング" - -msgid "role" -msgstr "ロール" - -msgid "role ID" -msgstr "ロール ID" - -msgid "roles" -msgstr "ロール" - -msgid "rootwrap" -msgstr "rootwrap" - -msgid "round-robin" -msgstr "ラウンドロビン" - -msgid "round-robin scheduler" -msgstr "ラウンドロビンスケジューラー" - -msgid "router" -msgstr "ルーター" - -msgid "routing key" -msgstr "ルーティングキー" - -msgid "routing keys" -msgstr "ルーティングキー" - -msgid "rsync" -msgstr "rsync" - -msgid "sahara" -msgstr "sahara" - -msgid "scheduler manager" -msgstr "スケジューラーマネージャー" - -msgid "schedulers" -msgstr "スケジューラー" - -msgid "scoped token" -msgstr "スコープ付きトークン" - -msgid "scoped tokens" -msgstr "スコープ付きトークン" - -msgid "scrubber" -msgstr "スクラバー" - -msgid "scrubbers" -msgstr "スクラバー" - -msgid "secret key" -msgstr "シークレットキー" - -msgid "secret keys" -msgstr "シークレットキー" - -msgid "secure shell (SSH)" -msgstr "secure shell (SSH)" - -msgid "security group" -msgstr "セキュリティーグループ" - -msgid "security groups" -msgstr "セキュリティーグループ" - -msgid "segmented object" -msgstr "分割オブジェクト" - -msgid "segmented objects" -msgstr "分割オブジェクト" - -msgid "self-service" -msgstr "セルフサービス" - -msgid "senlin" -msgstr "senlin" - -msgid "server" -msgstr "サーバー" - -msgid "server UUID" -msgstr "サーバー UUID" - -msgid "server image" -msgstr "サーバーイメージ" - -msgid "servers" -msgstr "サーバー" - -msgid "service" -msgstr "サービス" - -msgid "service ID" -msgstr "サービス ID" - -msgid "service catalog" -msgstr "サービスカタログ" - -msgid "service provider" -msgstr "サービスプロバイダー" - -msgid "service registration" -msgstr "サービス登録" - -msgid "service tenant" -msgstr "サービステナント" - -msgid "service token" -msgstr "サービストークン" - -msgid 
"services" -msgstr "サービス" - -msgid "session back end" -msgstr "セッションバックエンド" - -msgid "session persistence" -msgstr "セッション持続性" - -msgid "session storage" -msgstr "セッションストレージ" - -msgid "sessions" -msgstr "セッション" - -msgid "share" -msgstr "共有" - -msgid "share network" -msgstr "共有用ネットワーク" - -msgid "shared" -msgstr "shared" - -msgid "shared IP address" -msgstr "共有 IP アドレス" - -msgid "shared IP group" -msgstr "共有 IP グループ" - -msgid "shared IP groups" -msgstr "共有 IP グループ" - -msgid "shared storage" -msgstr "共有ストレージ" - -msgid "snapshot" -msgstr "スナップショット" - -msgid "soft reboot" -msgstr "ソフトリブート" - -msgid "solum" -msgstr "solum" - -msgid "spread-first" -msgstr "分散優先" - -msgid "spread-first scheduler" -msgstr "分散優先スケジューラー" - -msgid "stack" -msgstr "スタック" - -msgid "static" -msgstr "静的" - -msgid "static IP address" -msgstr "静的 IP アドレス" - -msgid "static IP addresses" -msgstr "静的 IP アドレス" - -msgid "storage" -msgstr "ストレージ" - -msgid "storage back end" -msgstr "ストレージバックエンド" - -msgid "storage manager" -msgstr "ストレージマネージャー" - -msgid "storage manager back end" -msgstr "ストレージマネージャーバックエンド" - -msgid "storage node" -msgstr "ストレージノード" - -msgid "storage nodes" -msgstr "ストレージノード" - -msgid "storage services" -msgstr "ストレージサービス" - -msgid "store" -msgstr "ストア" - -msgid "strategy" -msgstr "ストラテジー" - -msgid "subdomain" -msgstr "サブドメイン" - -msgid "subdomains" -msgstr "サブドメイン" - -msgid "subnet" -msgstr "サブネット" - -msgid "suspend" -msgstr "休止" - -msgid "suspend, definition of" -msgstr "休止, 定義" - -msgid "swap" -msgstr "スワップ" - -msgid "swap, definition of" -msgstr "スワップ, 定義" - -msgid "swauth" -msgstr "swauth" - -msgid "swift" -msgstr "swift" - -msgid "swift All in One (SAIO)" -msgstr "swift All in One (SAIO)" - -msgid "swift middleware" -msgstr "swift ミドルウェア" - -msgid "swift proxy server" -msgstr "swift プロキシサーバー" - -msgid "swift storage node" -msgstr "swift ストレージノード" - -msgid "swift storage nodes" -msgstr "swift ストレージノード" - -msgid "sync point" -msgstr "同期ポイント" - -msgid "sysadmin" -msgstr "sysadmin" - -msgid "system usage" -msgstr "システム使用状況" - -msgid "tenant" -msgstr "テナント" - -msgid "tenant ID" -msgstr "テナント ID" - -msgid "tenant endpoint" -msgstr "テナントエンドポイント" - -msgid "tenants" -msgstr "テナント" - -msgid "token" -msgstr "トークン" - -msgid "token services" -msgstr "トークンサービス" - -msgid "tokens" -msgstr "トークン" - -msgid "tombstone" -msgstr "tombstone" - -msgid "topic publisher" -msgstr "トピック発行者" - -msgid "transaction ID" -msgstr "トランザクション ID" - -msgid "transaction IDs" -msgstr "トランザクション ID" - -msgid "transient" -msgstr "一時" - -msgid "transient exchange" -msgstr "一時交換" - -msgid "transient exchanges" -msgstr "一時交換" - -msgid "transient message" -msgstr "一時メッセージ" - -msgid "transient messages" -msgstr "一時メッセージ" - -msgid "transient queue" -msgstr "一時キュー" - -msgid "transient queues" -msgstr "一時キュー" - -#. 
Put one translator per line, in the form of NAME , YEAR1, YEAR2 -msgid "translator-credits" -msgstr "KATO Tomoyuki , 2013-2015" - -msgid "trove" -msgstr "trove" - -msgid "under Image service" -msgstr "Image サービス" - -msgid "under cloud computing" -msgstr "アンダークラウドコンピューティング" - -msgid "unscoped token" -msgstr "スコープなしトークン" - -msgid "updater" -msgstr "アップデーター" - -msgid "updaters" -msgstr "アップデーター" - -msgid "user" -msgstr "ユーザー" - -msgid "user data" -msgstr "ユーザーデータ" - -msgid "users, definition of" -msgstr "ユーザー, 定義" - -msgid "vSphere" -msgstr "vSphere" - -msgid "virtual" -msgstr "仮想" - -msgid "virtual IP" -msgstr "仮想 IP" - -msgid "virtual VLAN" -msgstr "仮想 VLAN" - -msgid "virtual extensible LAN (VXLAN)" -msgstr "virtual extensible LAN (VXLAN)" - -msgid "virtual machine (VM)" -msgstr "仮想マシン (VM)" - -msgid "virtual network" -msgstr "仮想ネットワーク" - -msgid "virtual networking" -msgstr "仮想ネットワーク" - -msgid "virtual port" -msgstr "仮想ポート" - -msgid "virtual private network (VPN)" -msgstr "仮想プライベートネットワーク (VPN)" - -msgid "virtual server" -msgstr "仮想サーバー" - -msgid "virtual servers" -msgstr "仮想サーバー" - -msgid "virtual switch (vSwitch)" -msgstr "仮想スイッチ (vSwitch)" - -msgid "volume" -msgstr "ボリューム" - -msgid "volume ID" -msgstr "ボリューム ID" - -msgid "volume controller" -msgstr "ボリュームコントローラー" - -msgid "volume driver" -msgstr "ボリュームドライバー" - -msgid "volume manager" -msgstr "ボリュームマネージャー" - -msgid "volume node" -msgstr "ボリュームノード" - -msgid "volume plug-in" -msgstr "ボリュームプラグイン" - -msgid "volume worker" -msgstr "ボリュームワーカー" - -msgid "volume workers" -msgstr "ボリュームワーカー" - -msgid "weight" -msgstr "ウェイト" - -msgid "weighted cost" -msgstr "重み付けコスト" - -msgid "weighting" -msgstr "重み付け" - -msgid "worker" -msgstr "ワーカー" - -msgid "workers" -msgstr "ワーカー" - -msgid "zaqar" -msgstr "zaqar" diff --git a/doc/glossary/locale/zh_CN.po b/doc/glossary/locale/zh_CN.po deleted file mode 100644 index ddd66d7c..00000000 --- a/doc/glossary/locale/zh_CN.po +++ /dev/null @@ -1,1156 +0,0 @@ -# Translators: -# johnwoo_lee , 2015 -# -# -# OpenStack Infra , 2015. #zanata -msgid "" -msgstr "" -"Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2016-03-14 11:19+0000\n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" -"PO-Revision-Date: 2015-08-17 03:47+0000\n" -"Last-Translator: openstackjenkins \n" -"Language: zh-CN\n" -"Plural-Forms: nplurals=1; plural=0;\n" -"X-Generator: Zanata 3.7.3\n" -"Language-Team: Chinese (China)\n" - -msgid "6to4" -msgstr "6to4" - -msgid "A" -msgstr "A" - -msgid "A Java program that can be embedded into a web page." -msgstr "嵌入到web页面中的Java程序" - -msgid "A Linux distribution that is compatible with OpenStack." -msgstr "OpenStack兼容的一个Linux发行版。" - -msgid "" -"A SQLite database that contains Object Storage accounts and related metadata " -"and that the accounts server accesses." -msgstr "" -"基于SQLite的数据库,存储着对象存储账户及其相关的元数据信息,并支持账户服务的" -"访问。" - -msgid "" -"A collection of specifications used to access a service, application, or " -"program. Includes service calls, required parameters for each call, and the " -"expected return values." -msgstr "" -"用于访问服务、应用或程序的规范的集合,包括服务调用,每个调用返回的参数,以及" -"预料中的返回值。" - -msgid "" -"A core OpenStack project that provides a network connectivity abstraction " -"layer to OpenStack Compute." -msgstr "OpenStack核心项目之一,为OpenStack计算提供网络连接抽象层。" - -msgid "A core OpenStack project that provides block storage services for VMs." -msgstr "OpenStack核心项目,为虚拟机提供块存储服务。" - -msgid "A disk storage protocol tunneled within Ethernet." 
-msgstr "基于以太网实现的磁盘存储协议。" - -msgid "" -"A file system designed to aggregate NAS hosts, compatible with OpenStack." -msgstr "一种设计为聚合NAS主机的文件系统,兼容于OpenStack。" - -msgid "" -"A group of fixed and/or floating IP addresses that are assigned to a project " -"and can be used by or assigned to the VM instances in a project." -msgstr "一组可用的浮动IP地址,可被分配到项目中,即为项目中的虚拟机实例分配。" - -msgid "" -"A group of interrelated web development techniques used on the client-side " -"to create asynchronous web applications. Used extensively in horizon." -msgstr "用于客户端建立异步web应用的web开发技术。在Horizon项目中用到。" - -msgid "" -"A high availability system design approach and associated service " -"implementation ensures that a prearranged level of operational performance " -"will be met during a contractual measurement period. High availability " -"systems seeks to minimize system downtime and data loss." -msgstr "" -"高可用系统寻找最小系统宕机时间和数据丢失。高可用系统的设计方法和相关的服务实" -"现确保了经营业绩预先安排的水平将在合同测量期间得到满足。" - -msgid "" -"A hybrid cloud is a composition of two or more clouds (private, community or " -"public) that remain distinct entities but are bound together, offering the " -"benefits of multiple deployment models. Hybrid cloud can also mean the " -"ability to connect colocation, managed and/or dedicated services with cloud " -"resources." -msgstr "" -"混合云即是2个或多个云的组合(这些云可以是公有,私有,或者社区),它们彼此独立运" -"行但是是绑定到一起的,拥有多部署模式的优点。混合云还拥有连接托管云资源、被管" -"理云资源或专有的云资源的能力。" - -msgid "" -"A list of permissions attached to an object. An ACL specifies which users or " -"system processes have access to objects. It also defines which operations " -"can be performed on specified objects. Each entry in a typical ACL specifies " -"a subject and an operation. For instance, the ACL entry (Alice, " -"delete) for a file gives Alice permission to delete the file." -msgstr "" -"为对象附加的权限列表。一个ACL特指那些用户或系统进程可以访问这个对象。通常是用" -"针对特定的对象具有那些操作权限来定义。典型的ACL条目是一个主体加一个操作。例" -"如:ACL条目(Alice, delete),表示的意义就是为Alice赋予了删除文件" -"的权限。" - -msgid "" -"A mechanism that allows IPv6 packets to be transmitted over an IPv4 network, " -"providing a strategy for migrating to IPv6." -msgstr "一种可以在IPv4的网络中传输IPv6包的机制,提供了一种迁移到IPv6的策略。" - -msgid "" -"A minimal Linux distribution designed for use as a test image on clouds such " -"as OpenStack." -msgstr "在云环境(例如OpenStack)中用于测试镜像,按照最小的Linux发行版来设计。" - -msgid "" -"A piece of software that makes available another piece of software over a " -"network." -msgstr "一部分软件通过网络可用于另外一部分软件。" - -msgid "" -"A project that ports the shell script-based project named DevStack to Python." -msgstr "一个项目,将DevStack从shell脚本移植到Python。" - -msgid "" -"A subset of API calls that are accessible to authorized administrators and " -"are generally not accessible to end users or the public Internet. They can " -"exist as a separate service (keystone) or can be a subset of another API " -"(nova)." -msgstr "" -"一组可被经过认证的管理员访问的API调用,且不能被最终用户访问到,也不可在公网中" -"被访问。它以分离的服务(keystone)存在,或者是其他API(nova)的子集。" - -msgid "" -"A web framework used extensively in horizon." 
-msgstr "horizon项目所使用web框架。" - -msgid "ACL" -msgstr "ACL" - -msgid "API (application programming interface)" -msgstr "API(应用程序接口)" - -msgid "API endpoint" -msgstr "API 断点" - -msgid "API extension" -msgstr "API扩展" - -msgid "API extension plug-in" -msgstr "API扩展插件" - -msgid "API key" -msgstr "API键值" - -msgid "API server" -msgstr "API服务器" - -msgid "API token" -msgstr "API令牌" - -msgid "API version" -msgstr "API版本" - -msgid "ATA over Ethernet (AoE)" -msgstr "ATA以太网(AoE)" - -msgid "Active Directory" -msgstr "活动目录" - -msgid "Address Resolution Protocol (ARP)" -msgstr "地址解析协议(ARP)" - -msgid "Advanced Message Queuing Protocol (AMQP)" -msgstr "高级消息队列协议(AMQP)" - -msgid "Advanced RISC Machine (ARM)" -msgstr "先进精简指令集机器(ARM)" - -msgid "" -"All OpenStack core projects are provided under the terms of the Apache " -"License 2.0 license." -msgstr "所有的OpenStack核心项目均在Apache许可证2.0下提供。" - -msgid "Alternative term for a Networking plug-in or Networking API extension." -msgstr "和网络插件或网络API扩展相关的术语。" - -msgid "Alternative term for an API token." -msgstr "API令牌相关的术语。" - -msgid "Alternative term for an Amazon EC2 access key. See EC2 access key." -msgstr "从Amazon EC2的访问密钥学过来的,详细请看EC2访问密钥。" - -msgid "Amazon Kernel Image (AKI)" -msgstr "Amazon内核镜像(AKI)" - -msgid "Amazon Machine Image (AMI)" -msgstr "亚马逊机器镜像(AMI)" - -msgid "Amazon Ramdisk Image (ARI)" -msgstr "亚马逊内存盘镜像(ARI)" - -msgid "" -"An Object Storage worker that scans for and deletes account databases and " -"that the account server has marked for deletion." -msgstr "对象存储维护者从数据库中查找并删除账户,账户的服务器标注已删除。" - -msgid "An OpenStack core project that provides object storage services." -msgstr "OpenStack核心项目之一,提供对象存储服务。" - -msgid "" -"An OpenStack-supported hypervisor. KVM is a full virtualization solution for " -"Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-" -"V), ARM, IBM Power, and IBM zSeries. It consists of a loadable kernel " -"module, that provides the core virtualization infrastructure and a processor " -"specific module." -msgstr "" -"一种被OpenStack所支持的hypervisor。KVM是在linux实现的全虚拟话解决方案,在x86" -"下有CPU虚拟扩展 (Intel VT or AMD-V)所支持,其他平台诸如ARM,IBM Power,IBM " -"zSeries也支持KVM。它以模块的方式可被内核动态加载,提供了虚拟化基础设施的核心" -"和处理器特定的模块。" - -msgid "" -"An integrated project that aims to orchestrate multiple cloud applications " -"for OpenStack." -msgstr "一个集成的项目,目标是为OpenStack编排多种云应用。" - -msgid "" -"An operating system configuration management tool supporting OpenStack " -"deployments." -msgstr "操作系统配置管理工具,支持OpenStack的部署。" - -msgid "Anvil" -msgstr "Anvil" - -msgid "Any node running a daemon or worker that provides an API endpoint." -msgstr "任何提供API断点的运行着守护进程或任务的节点。" - -msgid "Apache" -msgstr "Apache" - -msgid "" -"Apache Hadoop is an open source software framework that supports data-" -"intensive distributed applications." -msgstr "一个开源软件框架,支持数据密集分布式处理。" - -msgid "Apache License 2.0" -msgstr "Apache许可证 2.0" - -msgid "Apache Web Server" -msgstr "Apache Web服务器" - -msgid "Application Programming Interface (API)" -msgstr "应用程序接口(API)" - -msgid "Application Service Provider (ASP)" -msgstr "应用服务提供商(ASP)" - -msgid "" -"Association of an interface ID to a logical port. Plugs an interface into a " -"port." -msgstr "将接口ID和逻辑端口关联起来。即将接口插入到端口。" - -msgid "Asynchronous JavaScript and XML (AJAX)" -msgstr "异步JavaScript和XML(AJAX)" - -msgid "Austin" -msgstr "Austin" - -msgid "" -"Authentication and identity service by Microsoft, based on LDAP. Supported " -"in OpenStack." 
-msgstr "由微软提供的认证和验证服务,基于LDAP,支持OpenStack。" - -msgid "B" -msgstr "B" - -msgid "Bexar" -msgstr "Bexar" - -msgid "Border Gateway Protocol (BGP)" -msgstr "边界网关协议(BGP)" - -msgid "C" -msgstr "C" - -msgid "CMDB" -msgstr "CMDB" - -msgid "CMDB (Configuration Management Database)" -msgstr "CMDB(配置管理数据库)" - -msgid "CentOS" -msgstr "CentOS" - -msgid "" -"Checks for missing replicas and incorrect or corrupted objects in a " -"specified Object Storage account by running queries against the back-end " -"SQLite database." -msgstr "" -"为指定的对象存储账户检查诸如丢失的副本,错误的或损坏的对象,支撑它的后端是" -"SQLite数据库。" - -msgid "Chef" -msgstr "Chef" - -msgid "CirrOS" -msgstr "CirrOS" - -msgid "" -"Companies that rent specialized applications that help businesses and " -"organizations provide additional services with lower cost." -msgstr "公司租用特定的应用程序,以低成本的方式,为业务和组织提供增值服务。" - -msgid "Compute" -msgstr "计算" - -msgid "Custom modules that extend some OpenStack core APIs." -msgstr "扩展OpenStack核心API的自定义模块。" - -msgid "D" -msgstr "D" - -msgid "DHCP" -msgstr "DHCP" - -msgid "DHCP agent" -msgstr "DHCP代理" - -msgid "DNS" -msgstr "DNS" - -msgid "" -"Denial of service (DoS) is a short form for denial-of-service attack. This " -"is a malicious attempt to prevent legitimate users from using a service." -msgstr "" -"拒绝服务(DoS)是拒绝服务攻击的缩写。这是一个恶意的企图阻止合法用户使用服务。" - -msgid "Desktop-as-a-Service" -msgstr "桌面即服务" - -msgid "Diablo" -msgstr "Diablo" - -msgid "Django" -msgstr "Django" - -msgid "E" -msgstr "E" - -msgid "EC2 compatibility API" -msgstr "EC2 兼容应用程序接口" - -msgid "Essex" -msgstr "Essex" - -msgid "F" -msgstr "F" - -msgid "Fedora" -msgstr "Fedora" - -msgid "Folsom" -msgstr "Folsom" - -msgid "G" -msgstr "G" - -msgid "Glossary" -msgstr "词汇表" - -msgid "GlusterFS" -msgstr "GlusterFS" - -msgid "Grizzly" -msgstr "Grizzly" - -msgid "H" -msgstr "H" - -msgid "Hadoop" -msgstr "Hadoop" - -msgid "Havana" -msgstr "Havana" - -msgid "Heat Orchestration Template (HOT)" -msgstr "Heat编排模板(HOT)" - -msgid "Heat input in the format native to OpenStack." -msgstr "OpenStack本地格式,用于Heat的输入。" - -msgid "Hyper-V" -msgstr "Hyper-V" - -msgid "I" -msgstr "I" - -msgid "ICMP" -msgstr "ICMP" - -msgid "IOPS" -msgstr "IOPS" - -msgid "" -"IOPS (Input/Output Operations Per Second) are a common performance " -"measurement used to benchmark computer storage devices like hard disk " -"drives, solid state drives, and storage area networks." -msgstr "" -"IOPS(每秒输入/输出操作)是一种常见的性能测量基准,针对于计算机存储设备,例如硬" -"盘,固态硬盘,存储区域网络等。" - -msgid "IP address" -msgstr "IP地址:" - -msgid "IP addresses" -msgstr "IP 地址" - -msgid "IPMI" -msgstr "IPMI" - -msgid "IaaS" -msgstr "IaaS" - -msgid "IaaS (Infrastructure-as-a-Service)" -msgstr "IaaS(基础设施即服务)" - -msgid "Icehouse" -msgstr "Icehouse" - -msgid "Identity" -msgstr "Identity" - -msgid "Identity service" -msgstr "认证服务" - -msgid "" -"Impassable limits for guest VMs. Settings include total RAM size, maximum " -"number of vCPUs, and maximum disk size." -msgstr "" -"客户虚拟机的硬性限制,设置包括总的内存大小,vCPU的最大数,以及最大虚拟磁盘大" -"小。" - -msgid "" -"In OpenStack, the API version for a project is part of the URL. For example, " -"example.com/nova/v1/foobar." -msgstr "" -"在OpenStack项目中,API版本是URL的一部分。例如example.com/nova/v1/" -"foobar。" - -msgid "" -"In a high-availability setup with an active/active configuration, several " -"systems share the load together and if one fails, the load is distributed to " -"the remaining systems." 
-msgstr "" -"在高可用步骤中配置双激活,多个系统共同承担负载,如果其中一个失效,负载会自动" -"分发到仍然正常运行的系统。" - -msgid "" -"In a high-availability setup with an active/passive configuration, systems " -"are set up to bring additional resources online to replace those that have " -"failed." -msgstr "在高可用设置主动/被动配置,系统会启动额外的资源来代替失效的节点。" - -msgid "" -"Infrastructure-as-a-Service. IaaS is a provisioning model in which an " -"organization outsources physical components of a data center, such as " -"storage, hardware, servers, and networking components. A service provider " -"owns the equipment and is responsible for housing, operating and maintaining " -"it. The client typically pays on a per-use basis. IaaS is a model for " -"providing cloud services." -msgstr "" -"基础设施即服务。IaaS是一种配置模式,将数据中心的物理组件,如存储、硬件、服务" -"器以及网络等以组织外包的方式提供。服务运营商提供设备,负责机房以及操作和维" -"护。用户只需要按需使用并付费即可。IaaS是云服务模式的一种。" - -msgid "J" -msgstr "J" - -msgid "Java" -msgstr "Java" - -msgid "Jenkins" -msgstr "Jenkins" - -msgid "Juno" -msgstr "Juno" - -msgid "K" -msgstr "K" - -msgid "Kilo" -msgstr "Kilo" - -msgid "L" -msgstr "L" - -msgid "Layer-2 network" -msgstr "2层网络" - -msgid "Layer-3 network" -msgstr "三层网络" - -msgid "" -"Licensed under the Apache License, Version 2.0 (the \"License\"); you may " -"not use this file except in compliance with the License. You may obtain a " -"copy of the License at" -msgstr "" -"基于Apache许可证,版本2.0(许可证);用户不得在非许可情形下使用这些文件。用户可" -"以从这里获得许可证" - -msgid "Linux Bridge" -msgstr "Linux 网桥" - -msgid "" -"Lists containers in Object Storage and stores container information in the " -"account database." -msgstr "列出对象存储的容器,且保存容器信息到账户数据库。" - -msgid "Load-Balancer-as-a-Service (LBaaS)" -msgstr "Load-Balancer-as-a-Service (LBaaS)" - -msgid "" -"Lower power consumption CPU often found in mobile and embedded devices. " -"Supported by OpenStack." -msgstr "" -"以低能耗著称的CPU,常用在移动电话或嵌入式设备中。OpenStack支持此类CPU。" - -msgid "M" -msgstr "M" - -msgid "N" -msgstr "N" - -msgid "Network Time Protocol (NTP)" -msgstr "网络时间协议(NTP)" - -msgid "Networking API" -msgstr "网络应用程序接口" - -msgid "Numbers" -msgstr "数字" - -msgid "O" -msgstr "O" - -msgid "Object Storage" -msgstr "对象存储" - -msgid "Open vSwitch" -msgstr "Open vSwitch" - -msgid "" -"Open vSwitch is a production quality, multilayer virtual switch licensed " -"under the open source Apache 2.0 license. It is designed to enable massive " -"network automation through programmatic extension, while still supporting " -"standard management interfaces and protocols (for example NetFlow, sFlow, " -"SPAN, RSPAN, CLI, LACP, 802.1ag)." -msgstr "" -"Open vSwitch是一款产品级的,多层的虚拟交换机,基于开源Apache2.0许可证分发。被" -"设计用于基于可编程扩展的大规模网络自动化,支持标准的管理接口和协议(例如" -"NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag)。" - -msgid "OpenStack" -msgstr "OpenStack" - -msgid "OpenStack glossary" -msgstr "OpenStack 词汇表" - -msgid "" -"OpenStack is a cloud operating system that controls large pools of compute, " -"storage, and networking resources throughout a data center, all managed " -"through a dashboard that gives administrators control while empowering their " -"users to provision resources through a web interface. OpenStack is an open " -"source project licensed under the Apache License 2.0." -msgstr "" -"OpenStack是一个云操作系统,通过数据中心可控制大型的计算、存储、网络等资源池。" -"所有的管理通过前端界面管理员就可以完成,同样也可以通过web接口让最终用户部署资" -"源。OpenStack是一个开放源代码的项目,基于Apeche许可证2.0发布。" - -msgid "OpenStack project that provides a dashboard, which is a web interface." -msgstr "提供web接口的仪表盘的OpenStack项目。" - -msgid "OpenStack project that provides compute services." 
-msgstr "OpenStack核心项目之一,提供计算服务。" - -msgid "OpenStack project that provides database services to applications." -msgstr "OpenStack为提供数据库服务的应用程序项目。" - -msgid "" -"OpenStack-on-OpenStack program. The code name for the OpenStack Deployment " -"program." -msgstr "OpenStack-on-OpenStack程序,为OpenStack开发程序使用的项目。" - -msgid "P" -msgstr "P" - -msgid "" -"Passed to API requests and used by OpenStack to verify that the client is " -"authorized to run the requested operation." -msgstr "用于API请求通过,以及OpenStack用于验证客户端要运行请求操作的认证。" - -msgid "Platform-as-a-Service (PaaS)" -msgstr "平台即服务(PaaS)" - -msgid "Q" -msgstr "Q" - -msgid "Qpid" -msgstr "Qpid" - -msgid "R" -msgstr "R" - -msgid "RESTful" -msgstr "RESTful" - -msgid "RabbitMQ" -msgstr "RabbitMQ" - -msgid "Red Hat Enterprise Linux (RHEL)" -msgstr "红帽企业Linux(RHEL)" - -msgid "S" -msgstr "S" - -msgid "S3" -msgstr "S3" - -msgid "S3 storage service" -msgstr "S3存储服务" - -msgid "SELinux" -msgstr "SELinux" - -msgid "SPICE" -msgstr "SPICE" - -msgid "SPICE (Simple Protocol for Independent Computing Environments)" -msgstr "SPICE(独立计算环境简单协议)" - -msgid "SUSE Linux Enterprise Server (SLES)" -msgstr "SUSE Linux Enterprise Server (SLES)" - -msgid "See access control list." -msgstr "参考访问控制列表。" - -msgid "Sheepdog" -msgstr "Sheepdog" - -msgid "T" -msgstr "T" - -msgid "" -"Term used in the OSI network architecture for the network layer. The network " -"layer is responsible for packet forwarding including routing from one node " -"to another." -msgstr "" -"来自OSI网络架构的术语,即网络层。网络层响应报文转发,包括从一个节点到其它节点" -"的路由。" - -msgid "" -"The Apache Software Foundation supports the Apache community of open-source " -"software projects. These projects provide software products for the public " -"good." -msgstr "" -"Apache软件基金会支持的Apache社区开源软件项目。这些项目提供软件产品并提倡公" -"开。" - -msgid "" -"The Border Gateway Protocol is a dynamic routing protocol that connects " -"autonomous systems. Considered the backbone of the Internet, this protocol " -"connects disparate networks to form a larger network." -msgstr "" -"边界网关协议是连接到自主系统的动态路由协议。想想下互联网的背后,此协议连接不" -"同的网络以形成更大的网络。" - -msgid "" -"The Compute service can send alerts through its notification system, which " -"includes a facility to create custom notification drivers. Alerts can be " -"sent to and displayed on the horizon dashboard." -msgstr "" -"计算服务可通过它的通知系统发送警告,包括自定义的通知。警告被发送并显示到" -"horizon仪表盘。" - -msgid "" -"The Compute service provides accounting information through the event " -"notification and system usage data facilities." -msgstr "计算服务通过事件通知和系统使用量数据提供账单信息。" - -msgid "" -"The OpenStack core project that enables management of volumes, volume " -"snapshots, and volume types. The project name of Block Storage is cinder." -msgstr "" -"OpenStack核心项目,它管理卷、卷快照,以及卷类型。块存储的项目名称叫做cinder。" - -msgid "" -"The OpenStack core project that provides compute services. The project name " -"of Compute service is nova." -msgstr "OpenStack核心项目,提供计算服务。项目名称为nova。" - -msgid "" -"The OpenStack core project that provides eventually consistent and redundant " -"storage and retrieval of fixed digital content. The project name of " -"OpenStack Object Storage is swift." -msgstr "" -"OpenStack核心项目之一,提供一致性的、冗余的存储、可恢复的数字内容。OpenStack" -"对象存储的项目名称是swift。" - -msgid "The most common web server software currently used on the Internet." -msgstr "在当下互联网上最常见的web服务器软件。" - -msgid "" -"The open standard messaging protocol used by OpenStack components for intra-" -"service communications, provided by RabbitMQ, Qpid, or ZeroMQ." 
-msgstr "" -"用于OpenStack组件的内部服务通信的开放标准消息协议,由RabbitMQ,Qpid或ZeroMQ提" -"供。" - -msgid "" -"The practice of utilizing a secondary environment to elastically build " -"instances on-demand when the primary environment is resource constrained." -msgstr "当主要环境的资源出现瓶颈时,第二环境按照需求动态构建实例。" - -msgid "" -"The process associating a Compute floating IP address with a fixed IP " -"address." -msgstr "将计算的浮动IP地址关联到固定IP地址的进程。" - -msgid "" -"The process of connecting a VIF or vNIC to a L2 network in Networking. In " -"the context of Compute, this process connects a storage volume to an " -"instance." -msgstr "" -"在网络中,是连接虚拟网卡或虚拟网络接口到2层网络的过程。在计算的上下文中,此过" -"程变为将存储卷给虚拟机实例。" - -msgid "" -"The process of taking a floating IP address from the address pool so it can " -"be associated with a fixed IP on a guest VM instance." -msgstr "将浮动IP从地址池中取出分配给客户虚拟机实例的进程。" - -msgid "" -"The project name for the Telemetry service, which is an integrated project " -"that provides metering and measuring facilities for OpenStack." -msgstr "是遥测服务的项目名称,此服务整合OpenStack中提供计量和测量的项目。" - -msgid "The project that provides OpenStack Identity services." -msgstr "OpenStack验证服务的项目。" - -msgid "" -"The protocol by which layer-3 IP addresses are resolved into layer-2 link " -"local addresses." -msgstr "将三层IP地址解析为二层链路地址的协议。" - -msgid "" -"The web-based management interface for OpenStack. An alternative name for " -"horizon." -msgstr "OpenStack基于web的管理接口。项目名称为horizon。" - -msgid "" -"This glossary offers a list of terms and definitions to define a vocabulary " -"for OpenStack-related concepts." -msgstr "此词汇表列出了OpenStack相关的术语和定义词汇。" - -msgid "" -"Tool used for maintaining Address Resolution Protocol packet filter rules in " -"the Linux kernel firewall modules. Used along with iptables, ebtables, and " -"ip6tables in Compute to provide firewall services for VMs." -msgstr "" -"在Linux内核防火墙模块用于维护地址解析协议包过滤的工具。在计算服务中使用" -"iptables,ebtables,ip6tables为虚拟机提供防火墙服务。" - -msgid "TripleO" -msgstr "TripleO" - -msgid "U" -msgstr "U" - -msgid "Ubuntu" -msgstr "Ubuntu" - -msgid "" -"Use this glossary to get definitions of OpenStack-related words and phrases." -msgstr "此词汇表定义了和OpenStack相关的词和短语。" - -msgid "User Mode Linux (UML)" -msgstr "用户模式Linux (UML)" - -msgid "V" -msgstr "V" - -msgid "VIP" -msgstr "vip" - -msgid "VLAN" -msgstr "VLAN" - -msgid "VLAN network" -msgstr "VLAN 网络" - -msgid "W" -msgstr "W" - -msgid "X" -msgstr "X" - -msgid "Xen" -msgstr "Xen" - -msgid "Xen Cloud Platform (XCP)" -msgstr "Xen 云平台(XCP)" - -msgid "" -"Xen is a hypervisor using a microkernel design, providing services that " -"allow multiple computer operating systems to execute on the same computer " -"hardware concurrently." 
-msgstr "" -"Xen是一种hypervisor,使用微内核设计,提供在同一台硬件计算机并行的运行多个计算" -"机操作系统的服务。" - -msgid "XenServer" -msgstr "XenServer" - -msgid "Y" -msgstr "Y" - -msgid "Z" -msgstr "Z" - -msgid "ZeroMQ" -msgstr "ZeroMQ" - -msgid "absolute limit" -msgstr "绝对限制" - -msgid "access control list" -msgstr "访问控制列表" - -msgid "access control list (ACL)" -msgstr "访问控制列表(ACL)" - -msgid "access key" -msgstr "访问密钥" - -msgid "account" -msgstr "账户" - -msgid "account auditor" -msgstr "账户审计" - -msgid "account database" -msgstr "账户数据库" - -msgid "account reaper" -msgstr "账户回收" - -msgid "account server" -msgstr "账户服务器" - -msgid "account service" -msgstr "账户服务" - -msgid "accounting" -msgstr "账单" - -msgid "accounts" -msgstr "账户" - -msgid "active/active configuration" -msgstr "双激活配置" - -msgid "active/passive configuration" -msgstr "主动/被动配置" - -msgid "address pool" -msgstr "地址池" - -msgid "admin API" -msgstr "管理 API" - -msgid "admin server" -msgstr "管理服务" - -msgid "alert" -msgstr "警告" - -msgid "alerts" -msgstr "警告" - -msgid "allocate" -msgstr "分配" - -msgid "allocate, definition of" -msgstr "分配,定义为" - -msgid "applet" -msgstr "applet" - -msgid "application server" -msgstr "应用服务器" - -msgid "application servers" -msgstr "应用服务器" - -msgid "associate, definition of" -msgstr "关联的定义是" - -msgid "attach" -msgstr "附加" - -msgid "attach, definition of" -msgstr "附加定义为" - -msgid "attachment (network)" -msgstr "附加(网络)" - -msgid "auditing" -msgstr "审计" - -msgid "bandwidth" -msgstr "带宽" - -msgid "bare" -msgstr "裸" - -msgid "bursting" -msgstr "暴发" - -msgid "ceilometer" -msgstr "ceilometer" - -msgid "cell" -msgstr "单元" - -msgid "cells" -msgstr "单元" - -msgid "cinder" -msgstr "cinder" - -msgid "cloud-init" -msgstr "cloud-init" - -msgid "compute host" -msgstr "计算主机" - -msgid "container" -msgstr "容器" - -msgid "current workload" -msgstr "当前负载" - -msgid "definition of" -msgstr "定义为" - -msgid "denial of service (DoS)" -msgstr "拒绝服务(DoS)" - -msgid "east-west traffic" -msgstr "东西向流量" - -msgid "endpoint" -msgstr "端点" - -msgid "endpoints" -msgstr "断点" - -msgid "firewall" -msgstr "防火墙" - -msgid "flat network" -msgstr "扁平化网络" - -msgid "gateway" -msgstr "网关" - -msgid "glance" -msgstr "glance" - -msgid "heat" -msgstr "heat" - -msgid "high availability (HA)" -msgstr "高可用(HA)" - -msgid "horizon" -msgstr "horizon" - -msgid "host" -msgstr "主机" - -msgid "http://www.apache.org/licenses/LICENSE-2.0" -msgstr "http://www.apache.org/licenses/LICENSE-2.0" - -msgid "hybrid cloud" -msgstr "混合云" - -msgid "image" -msgstr "镜像" - -msgid "image ID" -msgstr "镜像ID" - -msgid "image UUID" -msgstr "镜像UUID" - -msgid "images" -msgstr "镜像" - -msgid "instance" -msgstr "云主机" - -msgid "instance ID" -msgstr "实例ID" - -msgid "instances" -msgstr "实例" - -msgid "interface ID" -msgstr "接口 ID" - -msgid "iptables" -msgstr "iptables" - -msgid "kernel-based VM (KVM)" -msgstr "基于内核的虚拟机(KVM)" - -msgid "kernel-based VM (KVM) hypervisor" -msgstr "基于内核的虚拟机(KVM) hypervisor" - -msgid "keystone" -msgstr "keystone" - -msgid "libvirt" -msgstr "libvirt" - -msgid "management network" -msgstr "管理网络" - -msgid "memory overcommit" -msgstr "内存溢出" - -msgid "network" -msgstr "网络" - -msgid "network ID" -msgstr "网络ID" - -msgid "neutron" -msgstr "neutron" - -msgid "node" -msgstr "节点" - -msgid "north-south traffic" -msgstr "南北向流量" - -msgid "nova" -msgstr "nova" - -msgid "nova-network" -msgstr "nova-network" - -msgid "object" -msgstr "对象" - -msgid "objects" -msgstr "对象" - -msgid "openSUSE" -msgstr "openSUSE" - -msgid "pause" -msgstr "暂停" - -msgid "pool" -msgstr "池" - -msgid "port" -msgstr "端口" - -msgid "project" -msgstr "项目" - -msgid 
"project ID" -msgstr "项目ID" - -msgid "raw" -msgstr "raw" - -msgid "role" -msgstr "角色" - -msgid "role ID" -msgstr "角色ID" - -msgid "router" -msgstr "路由" - -msgid "rsync" -msgstr "rsync" - -msgid "server" -msgstr "服务器" - -msgid "servers" -msgstr "服务器" - -msgid "service" -msgstr "服务" - -msgid "shared" -msgstr "共享的" - -msgid "snapshot" -msgstr "快照" - -msgid "stack" -msgstr "栈" - -msgid "static IP address" -msgstr "静态IP 地址" - -msgid "subnet" -msgstr "子网" - -msgid "swift" -msgstr "swift" - -msgid "tenant" -msgstr "租户" - -msgid "token" -msgstr "token" - -#. Put one translator per line, in the form of NAME , YEAR1, YEAR2 -msgid "translator-credits" -msgstr "translator-credits" - -msgid "trove" -msgstr "trove" - -msgid "user" -msgstr "用户" - -msgid "vSphere" -msgstr "vSphere" - -msgid "virtual network" -msgstr "虚拟网络" - -msgid "volume" -msgstr "卷" - -msgid "volume ID" -msgstr "卷 ID" diff --git a/doc/openstack-ops/acknowledgements.xml b/doc/openstack-ops/acknowledgements.xml deleted file mode 100644 index 7caacb28..00000000 --- a/doc/openstack-ops/acknowledgements.xml +++ /dev/null @@ -1,79 +0,0 @@ - - - - - - - - -GET'> -PUT'> -POST'> -DELETE'> -]> - - Acknowledgments - The OpenStack Foundation supported the creation of this book with - plane tickets to Austin, lodging (including one adventurous evening - without power after a windstorm), and delicious food. For about USD - $10,000, we could collaborate intensively for a week in the same room at - the Rackspace Austin office. The authors are all members of the - OpenStack Foundation, which you can join. Go to the Foundation web - site at http://openstack.org/join. - We want to acknowledge our excellent host Rackers at Rackspace in - Austin: - - - Emma Richards of Rackspace Guest Relations took excellent care - of our lunch orders and even set aside a pile of sticky notes - that had fallen off the walls. - - - Betsy Hagemeier, a Fanatical Executive Assistant, took care of - a room reshuffle and helped us settle in for the week. - - - The Real Estate team at Rackspace in Austin, also known as - "The Victors," were super responsive. - - - Adam Powell in Racker IT supplied us with bandwidth each day - and second monitors for those of us needing more screens. - - - On Wednesday night we had a fun happy hour with the Austin - OpenStack Meetup group and Racker Katie Schmidt took great care - of our group. - - - We also had some excellent input from outside of the room: - - - Tim Bell from CERN gave us feedback on the outline before we - started and reviewed it mid-week. - - - Sébastien Han has written excellent blogs and generously gave - his permission for re-use. - - - Oisin Feeley read it, made some edits, and provided emailed - feedback right when we asked. - - - Inside the book sprint room with us each day was our book - sprint facilitator Adam Hyde. Without his tireless support and - encouragement, we would have thought a book of this scope was - impossible in five days. Adam has proven the book sprint - method effectively again and again. He creates both tools and - faith in collaborative authoring at www.booksprints.net. - We couldn't have pulled it off without so much supportive help and - encouragement. - diff --git a/doc/openstack-ops/app_crypt.xml b/doc/openstack-ops/app_crypt.xml deleted file mode 100644 index 9f33a945..00000000 --- a/doc/openstack-ops/app_crypt.xml +++ /dev/null @@ -1,572 +0,0 @@ - - -%openstack; -]> - - Tales From the Cryp^H^H^H^H Cloud - - Herein lies a selection of tales from OpenStack cloud operators. 
Read, - and learn from their wisdom. - -
- Double VLAN - I was on-site in Kelowna, British Columbia, Canada - setting up a new OpenStack cloud. The deployment was fully - automated: Cobbler deployed the OS on the bare metal, - bootstrapped it, and Puppet took over from there. I had - run the deployment scenario so many times in practice and - took for granted that everything was working. - On my last day in Kelowna, I was in a conference call - from my hotel. In the background, I was fooling around on - the new cloud. I launched an instance and logged in. - Everything looked fine. Out of boredom, I ran - ps aux and - all of a sudden the instance locked up. - Thinking it was just a one-off issue, I terminated the - instance and launched a new one. By then, the conference - call ended and I was off to the data center. - At the data center, I was finishing up some tasks and remembered - the lock-up. I logged into the new instance and ran ps - aux again. It worked. Phew. I decided to run it one - more time. It locked up. - After reproducing the problem several times, I came to - the unfortunate conclusion that this cloud did indeed have - a problem. Even worse, my time was up in Kelowna and I had - to return to Calgary. - Where do you even begin troubleshooting something like - this? An instance that just randomly locks up when a command is - issued. Is it the image? Nope—it happens on all images. - Is it the compute node? Nope—all nodes. Is the instance - locked up? No! New SSH connections work just fine! - We reached out for help. A networking engineer suggested - it was an MTU issue. Great! MTU! Something to go on! - What's MTU and why would it cause a problem? - MTU is the maximum transmission unit. It specifies the - maximum number of bytes that the interface accepts for - each packet. If two interfaces have two different MTUs, - bytes might get chopped off and weird things happen—such - as random session lockups. - - Not all packets have a size of 1500. Running the ls - command over SSH might only create a single packet of - less than 1500 bytes. However, running a command with - heavy output, such as ps aux, - requires several packets of 1500 bytes. - - OK, so where is the MTU issue coming from? Why haven't - we seen this in any other deployment? What's new in this - situation? Well, new data center, new uplink, new - switches, new model of switches, new servers, first time - using this model of servers… so, basically everything was - new. Wonderful. We toyed around with raising the MTU in - various areas: the switches, the NICs on the compute - nodes, the virtual NICs in the instances, we even had the - data center raise the MTU for our uplink interface. Some - changes worked, some didn't. This line of troubleshooting - didn't feel right, though. We shouldn't have to be - changing the MTU in these areas. - As a last resort, our network admin (Alvaro) and I - sat down with four terminal windows, a pencil, and a piece - of paper. In one window, we ran ping. In the second - window, we ran tcpdump on the cloud - controller. In the third, tcpdump on - the compute node. And the fourth had tcpdump - on the instance. For background, this cloud was a - multi-node, non-multi-host setup. - One cloud controller acted as a gateway to all compute - nodes. VlanManager was used for the network config. This - means that the cloud controller and all compute nodes had - a different VLAN for each OpenStack project. We used the - -s option of ping to change the packet size. 
- We watched as sometimes packets would fully return, - sometimes they'd only make it out and never back in, and - sometimes the packets would stop at a random point. We - changed tcpdump to start displaying the - hex dump of the packet. We pinged between every - combination of outside, controller, compute, and - instance. - Finally, Alvaro noticed something. When a packet from - the outside hits the cloud controller, it should not be - configured with a VLAN. We verified this as true. When the - packet went from the cloud controller to the compute node, - it should only have a VLAN if it was destined for an - instance. This was still true. When the ping reply was - sent from the instance, it should be in a VLAN. True. When - it came back to the cloud controller and on its way out to - the Internet, it should no longer have a VLAN. - False. Uh oh. It looked as though the VLAN part of the - packet was not being removed. - That made no sense. - While bouncing this idea around in our heads, I was - randomly typing commands on the compute node: - $ ip a -… -10: vlan100@vlan20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br100 state UP -… - - "Hey Alvaro, can you run a VLAN on top of a - VLAN?" - "If you did, you'd add an extra 4 bytes to the - packet…" - Then it all made sense… - $ grep vlan_interface /etc/nova/nova.conf -vlan_interface=vlan20 - - In nova.conf, vlan_interface - specifies what interface OpenStack should attach all VLANs - to. The correct setting should have been: - vlan_interface=bond0 - as this was the server's bonded NIC. - vlan20 is the VLAN that the data center gave us for - outgoing Internet access. It's a valid VLAN and - is also attached to bond0. - By mistake, I configured OpenStack to attach all tenant - VLANs to vlan20 instead of bond0, thereby stacking one VLAN - on top of another. This added an extra 4 bytes to - each packet and caused packets of 1504 bytes to be sent - out, which caused problems when they arrived at an - interface that only accepted 1500. - As soon as this setting was fixed, everything - worked. 
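If you suspect a similar MTU or VLAN-stacking problem, the whole investigation above compresses into a few commands. This is a minimal sketch, not the procedure we used verbatim; the target address 10.0.0.5 and the interface names are placeholders for your own environment:

# An ICMP payload of 1472 bytes + 28 bytes of headers = a 1500-byte packet,
# so the second ping fails anywhere a stray 4-byte VLAN tag is added.
$ ping -M do -c 3 -s 1472 10.0.0.5
$ ping -M do -c 3 -s 1476 10.0.0.5

# Look for a VLAN interface accidentally stacked on another VLAN interface:
$ ip -d link show | grep vlan

# Confirm which interface nova attaches tenant VLANs to:
$ grep vlan_interface /etc/nova/nova.conf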
-
- "The Issue" - At the end of August 2012, a post-secondary school in - Alberta, Canada migrated its infrastructure to an - OpenStack cloud. As luck would have it, within the first - day or two of it running, one of their servers just - disappeared from the network. Blip. Gone. - After restarting the instance, everything was back up - and running. We reviewed the logs and saw that at some - point, network communication stopped and then everything - went idle. We chalked this up to a random - occurrence. - A few nights later, it happened again. - We reviewed both sets of logs. The one thing that stood - out the most was DHCP. At the time, OpenStack, by default, - set DHCP leases for one minute (it's now two minutes). - This means that every instance - contacts the cloud controller (DHCP server) to renew its - fixed IP. For some reason, this instance could not renew - its IP. We correlated the instance's logs with the logs on - the cloud controller and put together a - conversation: - - - Instance tries to renew IP. - - - Cloud controller receives the renewal request - and sends a response. - - - Instance "ignores" the response and re-sends the - renewal request. - - - Cloud controller receives the second request and - sends a new response. - - - Instance begins sending a renewal request to - 255.255.255.255 since it hasn't - heard back from the cloud controller. - - - The cloud controller receives the - 255.255.255.255 request and sends - a third response. - - - The instance finally gives up. - - - With this information in hand, we were sure that the - problem had to do with DHCP. We thought that for some - reason, the instance wasn't getting a new IP address and - with no IP, it shut itself off from the network. - A quick Google search turned up this: DHCP lease errors in VLAN mode - (https://lists.launchpad.net/openstack/msg11696.html) - which further supported our DHCP theory. - An initial idea was to just increase the lease time. If - the instance only renewed once every week, the chances of - this problem happening would be tremendously smaller than - every minute. This didn't solve the problem, though. It - was just covering the problem up. - We decided to have tcpdump run on this - instance and see if we could catch it in action again. - Sure enough, we did. - The tcpdump looked very, very weird. In - short, it looked as though network communication stopped - before the instance tried to renew its IP. Since there is - so much DHCP chatter from a one minute lease, it's very - hard to confirm it, but even with only milliseconds - difference between packets, if one packet arrives first, - it arrived first, and if that packet reported network - issues, then it had to have happened before DHCP. - Additionally, this instance in question was responsible - for a very, very large backup job each night. While "The - Issue" (as we were now calling it) didn't happen exactly - when the backup happened, it was close enough (a few - hours) that we couldn't ignore it. - Further days go by and we catch The Issue in action more - and more. We find that dhclient is not running after The - Issue happens. Now we're back to thinking it's a DHCP - issue. Running /etc/init.d/networking restart - brings everything back up and running. - Ever have one of those days where all of the sudden you - get the Google results you were looking for? Well, that's - what happened here. 
I was looking for information on - dhclient and why it dies when it can't renew its lease, and - all of a sudden I found a bunch of OpenStack and dnsmasq - discussions that were identical to the problem we were - seeing! - - Problem with Heavy Network IO and Dnsmasq - (http://www.gossamer-threads.com/lists/openstack/operators/18197) - - - instances losing IP address while running, due to No - DHCPOFFER - (http://www.gossamer-threads.com/lists/openstack/dev/14696) - Seriously, Google. - This bug report was the key to everything: - KVM images lose connectivity with bridged - network - (https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/997978) - It was funny to read the report. It was full of people - who had some strange network problem but didn't quite - explain it in the same way. - So it was a qemu/kvm bug. - Around the same time that I found the bug report, a co-worker - was able to successfully reproduce The Issue! How? He used - iperf to spew a ton of bandwidth at an instance. Within 30 - minutes, the instance just disappeared from the - network. - Armed with a patched qemu and a way to reproduce, we set - out to see if we had finally solved The Issue. After 48 - hours straight of hammering the instance with bandwidth, - we were confident. The rest is history. You can search the - bug report for "joe" to find my comments and actual - tests. 
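For anyone wanting to try the same reproduction, the sketch below shows the idea; it assumes iperf is installed on both ends and uses 10.0.0.5 as a stand-in for the instance's address:

# On the instance: accept as much traffic as the sender can generate.
$ iperf -s

# From another host: hammer the instance with bandwidth for 30 minutes.
$ iperf -c 10.0.0.5 -t 1800

# On the instance, watch for dhclient dying mid-test:
$ watch -n 10 'pgrep dhclient || echo "dhclient is gone"'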
-
- Disappearing Images - At the end of 2012, Cybera (a nonprofit with a mandate - to oversee the development of cyberinfrastructure in - Alberta, Canada) deployed an updated OpenStack cloud for - their DAIR project - (http://www.canarie.ca/en/dair-program/about). A few days - into production, a compute node locked up. Upon rebooting - the node, I checked to see what instances were hosted on - that node so I could boot them on behalf of the customer. - Luckily, there was only one instance. - The nova reboot command wasn't working, so - I used virsh, but it immediately came back - with an error saying it was unable to find the backing - disk. In this case, the backing disk is the Glance image - that is copied to - /var/lib/nova/instances/_base when the - image is used for the first time. Why couldn't it find it? - I checked the directory and sure enough it was - gone. - I reviewed the nova database and saw the - instance's entry in the nova.instances table. - The image that the instance was using matched what virsh - was reporting, so no inconsistency there. - I checked Glance and noticed that this image was a - snapshot that the user created. At least that was good - news—this user would have been the only user - affected. - Finally, I checked StackTach and reviewed the user's events. They - had created and deleted several snapshots—most likely - experimenting. Although the timestamps didn't match up, my - conclusion was that they launched their instance and then deleted - the snapshot and it was somehow removed from - /var/lib/nova/instances/_base. None of that - made sense, but it was the best I could come up with. - It turns out the reason that this compute node locked up - was a hardware issue. We removed it from the DAIR cloud - and called Dell to have it serviced. Dell arrived and - began working. Somehow or other (or a fat finger), a - different compute node was bumped and rebooted. - Great. - When this node fully booted, I ran through the same - scenario of seeing what instances were running so I could - turn them back on. There were a total of four. Three - booted and one gave an error. It was the same error as - before: unable to find the backing disk. Seriously, - what? - Again, it turns out that the image was a snapshot. The - three other instances that successfully started were - standard cloud images. Was it a problem with snapshots? - That didn't make sense. - A note about DAIR's architecture: - /var/lib/nova/instances is a shared NFS - mount. This means that all compute nodes have access to - it, which includes the _base directory. - Another centralized area is /var/log/rsyslog - on the cloud controller. This directory collects all - OpenStack logs from all compute nodes. I wondered if there - were any entries for the file that virsh was - reporting: - dair-ua-c03/nova.log:Dec 19 12:10:59 dair-ua-c03 -2012-12-19 12:10:59 INFO nova.virt.libvirt.imagecache -[-] Removing base file: -/var/lib/nova/instances/_base/7b4783508212f5d242cbf9ff56fb8d33b4ce6166_10 - - - Ah-hah! So OpenStack was deleting it. But why? - A feature was introduced in Essex to periodically check - and see if there were any _base files not in use. - If there - were, OpenStack Compute would delete them. This idea sounds innocent - enough and has some good qualities to it. But how did this - feature end up turned on? It was disabled by default in - Essex. As it should be. It was turned on by default in Folsom - (https://bugs.launchpad.net/nova/+bug/1029674). 
I cannot - emphasize enough that: - - Actions which delete things should not be - enabled by default. - - Disk space is cheap these days. Data recovery is - not. - Secondly, DAIR's shared - /var/lib/nova/instances directory - contributed to the problem. Since all compute nodes have - access to this directory, all compute nodes periodically - review the _base directory. If there is only one instance - using an image, and the node that the instance is on is - down for a few minutes, it won't be able to mark the image - as still in use. Therefore, the image seems like it's not - in use and is deleted. When the compute node comes back - online, the instance hosted on that node is unable to - start. -
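The cleanup behavior is controlled from nova.conf. The option names below are the ones used by the libvirt image cache in the Folsom/Grizzly era; treat this as a hedged sketch and verify names and defaults against your release's configuration reference:

# /etc/nova/nova.conf -- stop nova from deleting _base files outright
remove_unused_base_images=False

# Or, if cleanup stays on, raise the minimum ages (in seconds) so a
# briefly offline compute node can't cause an in-use image to be reaped:
remove_unused_original_minimum_age_seconds=86400
remove_unused_resized_minimum_age_seconds=3600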
-
- The Valentine's Day Compute Node Massacre - Although the title of this story is much more dramatic - than the actual event, I don't think, or hope, that I'll - have the opportunity to use "Valentine's Day Massacre" - again in a title. - This past Valentine's Day, I received an alert that a - compute node was no longer available in the cloud—meaning, - $ nova service-list - showed this particular node in a down state. - I logged into the cloud controller and was able to both - ping and SSH into the problematic compute node, which - seemed very odd. Usually if I receive this type of alert, - the compute node has totally locked up and would be - inaccessible. - After a few minutes of troubleshooting, I saw the - following details: - - - A user recently tried launching a CentOS - instance on that node - - - This user was the only user on the node (new - node) - - - The load shot up to 8 right before I received - the alert - - - The bonded 10gb network device (bond0) was in a - DOWN state - - - The 1gb NIC was still alive and active - - - I looked at the status of both NICs in the bonded pair - and saw that neither was able to communicate with the - switch port. Seeing as each NIC in the bond is - connected to a separate switch, I thought that the chance - of a switch port dying on each switch at the same time was - quite improbable. I concluded that the 10gb dual port NIC - had died and needed to be replaced. I created a ticket for the - hardware support department at the data center where the - node was hosted. I felt lucky that this was a new node and - no one else was hosted on it yet. - An hour later I received the same alert, but for another - compute node. Crap. OK, now there's definitely a problem - going on. Just like the original node, I was able to log - in by SSH. The bond0 NIC was DOWN but the 1gb NIC was - active. - And the best part: the same user had just tried creating - a CentOS instance. What? - I was totally confused at this point, so I texted our - network admin to see if he was available to help. He - logged in to both switches and immediately saw the - problem: the switches detected spanning tree packets - coming from the two compute nodes and immediately shut the - ports down to prevent spanning tree loops: -Feb 15 01:40:18 SW-1 Stp: %SPANTREE-4-BLOCK_BPDUGUARD: Received BPDU packet on Port-Channel35 with BPDU guard enabled. Disabling interface. (source mac fa:16:3e:24:e7:22) -Feb 15 01:40:18 SW-1 Ebra: %ETH-4-ERRDISABLE: bpduguard error detected on Port-Channel35. -Feb 15 01:40:18 SW-1 Mlag: %MLAG-4-INTF_INACTIVE_LOCAL: Local interface Port-Channel35 is link down. MLAG 35 is inactive. -Feb 15 01:40:18 SW-1 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-Channel35 (Server35), changed state to down -Feb 15 01:40:19 SW-1 Stp: %SPANTREE-6-INTERFACE_DEL: Interface Port-Channel35 has been removed from instance MST0 -Feb 15 01:40:19 SW-1 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet35 (Server35), changed state to down - - He re-enabled the switch ports and the two compute nodes - immediately came back to life. - Unfortunately, this story has an open ending... we're - still looking into why the CentOS image was sending out - spanning tree packets. Further, we're researching a proper - way to prevent this from happening. It's a bigger - issue than one might think. While it's extremely important - for switches to prevent spanning tree loops, it's very - problematic to have an entire compute node cut off from the - network when this happens. 
If a compute node is hosting - 100 instances and one of them sends a spanning tree - packet, that instance has effectively DDoS'd the other 99 - instances. - This is an ongoing and hot topic in networking - circles—especially with the rise of virtualization and virtual - switches. -
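Two quick checks narrow down this kind of failure before you page the network admin: the kernel's view of the bond, and the switch's err-disabled list. A rough sketch, with interface names assumed:

# Kernel's view of the bond and the state of each slave NIC:
$ cat /proc/net/bonding/bond0
$ ip link show bond0

# Link state of an individual member (eth2 is a placeholder):
$ ethtool eth2 | grep "Link detected"

# On an Arista-style switch (matching the logs above), list ports
# that bpduguard has shut down; syntax varies by vendor:
switch# show interfaces status errdisabled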
-
- Down the Rabbit Hole - Users being able to retrieve console logs from running - instances is a boon for support—many times they can - figure out what's going on inside their instance and fix - it without bothering you. Unfortunately, - sometimes overzealous logging of failures can cause - problems of its own. - A report came in: VMs were launching slowly, or not at - all. Cue the standard checks—nothing in Nagios, but - there was a spike in network traffic towards the current master of - our RabbitMQ cluster. Investigation started, but soon the - other parts of the queue cluster were leaking memory like - a sieve. Then the alert came in—the master Rabbit server - went down and connections failed over to the slave. - At that time, our control services were hosted by - another team and we didn't have much debugging information - to determine what was going on with the master, and we - could not reboot it. That team noted that it failed without - alert, but managed to reboot it. After an hour, the - cluster had returned to its normal state and we went home - for the day. - Continuing the diagnosis the next morning was kick-started - by another identical failure. We quickly got the - message queue running again, and tried to work out why - Rabbit was suffering from so much network traffic. - Enabling debug logging on - nova-api quickly brought - understanding. A tail -f - /var/log/nova/nova-api.log was scrolling by - faster than we'd ever seen before. CTRL+C on that and we - could plainly see the contents of a system log spewing - failures over and over again - a system log from one of - our users' instances. - After finding the instance ID we headed over to - /var/lib/nova/instances to find the - console.log: - -adm@cc12:/var/lib/nova/instances/instance-00000e05# wc -l console.log -92890453 console.log -adm@cc12:/var/lib/nova/instances/instance-00000e05# ls -sh console.log -5.5G console.log - Sure enough, the user had been periodically refreshing - the console log page on the dashboard and the 5.5 GB file was - traversing the Rabbit cluster to get to the - dashboard. - We called them and asked them to stop for a while, and - they were happy to abandon the horribly broken VM. After - that, we started monitoring the size of console - logs. - To this day, the issue - (https://bugs.launchpad.net/nova/+bug/832507) doesn't have - a permanent resolution, but we look forward to the - discussion at the next summit. 
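The monitoring we added afterward amounts to one scheduled command. A minimal sketch; the 100 MB threshold is arbitrary and the path matches the story:

# Flag oversized instance console logs before they get big enough
# to drag down the message queue:
$ find /var/lib/nova/instances -name console.log -size +100M -exec ls -sh {} \;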
-
- Havana Haunted by the Dead - Felix Lee of Academia Sinica Grid Computing Centre in Taiwan - contributed this story. - I just upgraded OpenStack from Grizzly to Havana 2013.2-2 using - the RDO repository and everything was running pretty - well—except the EC2 API. - I noticed that the API would suffer from a heavy load and - respond slowly to particular EC2 requests such as - RunInstances. - Output from /var/log/nova/nova-api.log on - Havana: - 2014-01-10 09:11:45.072 129745 INFO nova.ec2.wsgi.server -[req-84d16d16-3808-426b-b7af-3b90a11b83b0 -0c6e7dba03c24c6a9bce299747499e8a 7052bd6714e7460caeb16242e68124f9] -117.103.103.29 "GET -/services/Cloud?AWSAccessKeyId=[something]&Action=RunInstances&ClientToken=[something]&ImageId=ami-00000001&InstanceInitiatedShutdownBehavior=terminate... -HTTP/1.1" status: 200 len: 1109 time: 138.5970151 - - This request took over two minutes to process, but executed - quickly on another co-existing Grizzly deployment using the same - hardware and system configuration. - Output from /var/log/nova/nova-api.log on - Grizzly: - 2014-01-08 11:15:15.704 INFO nova.ec2.wsgi.server -[req-ccac9790-3357-4aa8-84bd-cdaab1aa394e -ebbd729575cb404081a45c9ada0849b7 8175953c209044358ab5e0ec19d52c37] -117.103.103.29 "GET -/services/Cloud?AWSAccessKeyId=[something]&Action=RunInstances&ClientToken=[something]&ImageId=ami-00000007&InstanceInitiatedShutdownBehavior=terminate... -HTTP/1.1" status: 200 len: 931 time: 3.9426181 - - While monitoring system resources, I noticed - a significant increase in memory consumption while the EC2 API - processed this request. I thought it wasn't handling memory - properly—possibly not releasing memory. If the API received - several of these requests, memory consumption quickly grew until - the system ran out of RAM and began using swap. Each node has - 48 GB of RAM and the "nova-api" process would consume all of it - within minutes. Once this happened, the entire system would become - unusably slow until I restarted the - nova-api service. - So, I found myself wondering what changed in the EC2 API on - Havana that might cause this to happen. Was it a bug or a normal - behavior that I now need to work around? - After digging into the nova (OpenStack Compute) code, I noticed two areas in - api/ec2/cloud.py potentially impacting my - system: - - instances = self.compute_api.get_all(context, - search_opts=search_opts, - sort_dir='asc') - - sys_metas = self.compute_api.get_all_system_metadata( - context, search_filts=[{'key': ['EC2_client_token']}, - {'value': [client_token]}]) - - Since my database contained many records—over 1 million - metadata records and over 300,000 instance records in "deleted" - or "errored" states—each search took a long time. I decided to clean - up the database by first archiving a copy for backup and then - performing some deletions using the MySQL client. For example, I - ran the following SQL command to remove rows of instances deleted - for over a year: - mysql> delete from nova.instances where deleted=1 and terminated_at < (NOW() - INTERVAL 1 YEAR); - Performance increased greatly after deleting the old records and - my new deployment continues to behave well. -
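If you attempt the same cleanup, archive before you purge, and test against a copy of the database first. A hedged sketch of the backup-then-purge sequence Felix describes (credentials omitted for brevity):

# Archive the table before touching it:
$ mysqldump nova instances > nova-instances-backup.sql

# Then purge instance records soft-deleted more than a year ago:
$ mysql nova -e "DELETE FROM instances WHERE deleted=1 \
    AND terminated_at < (NOW() - INTERVAL 1 YEAR);"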
-
diff --git a/doc/openstack-ops/app_roadmaps.xml b/doc/openstack-ops/app_roadmaps.xml deleted file mode 100644 index 2efd1278..00000000 --- a/doc/openstack-ops/app_roadmaps.xml +++ /dev/null @@ -1,708 +0,0 @@ - - - Working with Roadmaps - - The good news: OpenStack has unprecedented transparency when it comes - to providing information about what's coming up. The bad news: each release - moves very quickly. The purpose of this appendix is to highlight some of the - useful pages to track, and take an educated guess at what is coming up in - the next release and perhaps further afield. - Kilo - - upcoming release of - - OpenStack community - - working with roadmaps - - release cycle - - - OpenStack follows a six-month release cycle, typically releasing in - April/May and October/November each year. At the start of each cycle, the - community gathers in a single location for a design summit. At the summit, - the features for the coming releases are discussed, prioritized, and - planned. The release cycle diagram below shows an example release - cycle, with dates for milestone releases, code freeze, and string freeze, - along with an example of when the summit occurs. Milestones are - interim releases within the cycle that are available as packages for - download and testing. Code freeze puts a stop to adding new features - to the release. String freeze puts a stop to changing any strings - within the source code, so that translations can be completed. -
- Release cycle diagram - - - - - - -
- -
- Information Available to You - - There are several good sources of information available that you can - use to track your OpenStack development desires. - OpenStack community - - working with roadmaps - - information available - - - Release notes are maintained on the OpenStack wiki, and also shown - here: - - - - - Series - - Status - - Releases - - Date - - - - - - Liberty - - Under Development - - - 2015.2 - - Oct 2015 - - - - Kilo - - Current stable release, security-supported - - - 2015.1 - - Apr 30, 2015 - - - Juno - - Security-supported - - - 2014.2 - - Oct 16, 2014 - - - - Icehouse - - End-of-life - - 2014.1 - - Apr 17, 2014 - - - - 2014.1.1 - - Jun 9, 2014 - - - - 2014.1.2 - - Aug 8, 2014 - - - - 2014.1.3 - - Oct 2, 2014 - - - - Havana - - End-of-life - - 2013.2 - - - Oct 17, 2013 - - - - - 2013.2.1 - - Dec 16, 2013 - - - - 2013.2.2 - - Feb 13, 2014 - - - - 2013.2.3 - - Apr 3, 2014 - - - - 2013.2.4 - - Sep 22, 2014 - - - Grizzly - - End-of-life - - 2013.1 - - - Apr 4, 2013 - - - - - 2013.1.1 - - May 9, 2013 - - - - - 2013.1.2 - - Jun 6, 2013 - - - - - 2013.1.3 - - Aug 8, 2013 - - - - - 2013.1.4 - - Oct 17, 2013 - - - - 2013.1.5 - - Mar 20, 2015 - - - Folsom - - End-of-life - - 2012.2 - - - Sep 27, 2012 - - - - - 2012.2.1 - - Nov 29, 2012 - - - - - 2012.2.2 - - Dec 13, 2012 - - - - - 2012.2.3 - - Jan 31, 2013 - - - - - 2012.2.4 - - Apr 11, 2013 - - - - Essex - - End-of-life - - 2012.1 - - - Apr 5, 2012 - - - - - 2012.1.1 - - Jun 22, 2012 - - - - - 2012.1.2 - - Aug 10, 2012 - - - - - 2012.1.3 - - Oct 12, 2012 - - - - Diablo - - Deprecated - - 2011.3 - - - Sep 22, 2011 - - - - - 2011.3.1 - - Jan 19, 2012 - - - - Cactus - - Deprecated - - 2011.2 - - - Apr 15, 2011 - - - - Bexar - - Deprecated - - 2011.1 - - - Feb 3, 2011 - - - - Austin - - Deprecated - - 2010.1 - - - Oct 21, 2010 - - - - - Here are some other resources: - - - - A breakdown of - current features under development, with their target - milestone - - - - A list of all - features, including those not yet under development - - - - Rough-draft design - discussions ("etherpads") from the last design summit - - - - List of individual - code changes under review - - -
- -
- Influencing the Roadmap - - OpenStack truly welcomes your ideas (and contributions) and highly - values feedback from real-world users of the software. By learning a - little about the process that drives feature development, you can - participate and perhaps get the additions you desire. - OpenStack community - - working with roadmaps - - influencing - - - Feature requests typically start their life in Etherpad, a - collaborative editing tool, which is used to take coordinating notes at a - design summit session specific to the feature. This then leads to the - creation of a blueprint on the Launchpad site for the particular project, - which is used to describe the feature more formally. Blueprints are then - approved by project team members, and development can begin. - - Therefore, the fastest way to get your feature request up for - consideration is to create an Etherpad with your ideas and propose a - session to the design summit. If the design summit has already passed, you - may also create a blueprint directly. Read this blog post about how to work with - blueprints from the perspective of Victoria Martínez, a developer - intern. - - The roadmap for the next release as it is developed can be seen at - Releases. - - To determine the potential features going into future releases, or - to look at features implemented previously, take a look at the existing - blueprints such as OpenStack Compute (nova) - Blueprints, OpenStack - Identity (keystone) Blueprints, and release notes. - - Aside from the direct-to-blueprint pathway, there is another very - well-regarded mechanism to influence the development roadmap: the user - survey. The survey allows you to - provide details of your deployments and needs, anonymously by default. - Each cycle, the user committee analyzes the results and produces a report, - including providing specific information to the technical committee and - project team leads. -
- -
- Aspects to Watch - - You want to keep an eye on the areas improving within OpenStack. The - best way to "watch" roadmaps for each project is to look at the blueprints - that are being approved for work on milestone releases. You can also learn - from PTL webinars that follow the OpenStack summits twice a - year. - OpenStack community - - working with roadmaps - - aspects to watch - - -
- Driver Quality Improvements - - A major quality push has occurred across drivers and plug-ins in - Block Storage, Compute, and Networking. In particular, developers of - Compute and Networking drivers that require proprietary or hardware - products are now required to provide an automated external testing - system for use during the development process. -
- -
- Easier Upgrades - - One of the most requested features since OpenStack began (for - components other than Object Storage, which tends to "just work"): - easier upgrades. In all recent releases, internal messaging communication - is versioned, meaning services can theoretically drop back to - backward-compatible behavior. This allows you to run later versions of - some components, while keeping older versions of others. - - In addition, database migrations are now tested with the Turbo - Hipster tool. This tool tests database migration performance on copies of - real-world user databases. - - These changes have facilitated the first proper OpenStack upgrade - guide, found elsewhere in this guide, and upgrades will continue to - improve in the next release. - Kilo - - upgrades in - -
- -
- Deprecation of Nova Network - - With the introduction of the full software-defined networking - stack provided by OpenStack Networking (neutron) in the Folsom release, - development effort on the initial networking code that remains part of - the Compute component has gradually lessened. While many still use - nova-network in production, there has been a - long-term plan to remove the code in favor of the more flexible and - full-featured OpenStack Networking. - nova - - deprecation of - - - An attempt was made to deprecate nova-network - during the Havana release, which was aborted due to the lack of - equivalent functionality (such as the FlatDHCP multi-host - high-availability mode mentioned in this guide), lack of a migration - path between versions, insufficient testing, and the simplicity - nova-network offers for the more straightforward use cases it - traditionally supported. Though significant effort has been made to - address these concerns, nova-network was not - deprecated in the Juno release. In addition, to a limited degree, - patches to nova-network have again begun to be - accepted, such as adding a per-network settings feature and SR-IOV - support in Juno. - Juno - - nova network deprecation - - - This leaves you with an important decision point when designing - your cloud. OpenStack Networking is robust enough to use with a small - number of limitations (performance issues in some scenarios, only basic - high availability of layer 3 systems) and provides many more features than - nova-network. However, if you do not have the more - complex use cases that can benefit from fuller software-defined - networking capabilities, or are uncomfortable with the new concepts - introduced, nova-network may continue to be a viable - option for the next 12 months. - - Similarly, if you have an existing cloud and are looking to - upgrade from nova-network to OpenStack Networking, - you should have the option to delay the upgrade for this period of time. - However, each release of OpenStack brings significant new innovation, - and regardless of your networking methodology, it is likely best - to begin planning for an upgrade within a reasonable timeframe of each - release. - - As mentioned, there's currently no way to cleanly migrate from - nova-network to neutron. We recommend that you keep a - future migration in mind, and consider what that process might involve, - for when a proper migration path is released. -
-
- -
- Distributed Virtual Router - - One of the long-time complaints surrounding OpenStack Networking was - the lack of high availability for the layer 3 components. The Juno release - introduced Distributed Virtual Router (DVR), which aims to solve this - problem. - Early indications are that it does do this well for a base set of - scenarios, such as using the ML2 plug-in with Open vSwitch, one flat - external network and VXLAN tenant networks. However, it does appear that - there are problems with the use of VLANs, IPv6, Floating IPs, high - north-south traffic scenarios and large numbers of compute nodes. It is - expected these will improve significantly with the next release, but - bug reports on specific issues are highly desirable. -
- - -
- Replacement of Open vSwitch Plug-in with <phrase - role="keep-together">Modular Layer 2</phrase> - - The Modular Layer 2 plug-in is a framework allowing OpenStack - Networking to simultaneously utilize the variety of layer-2 networking - technologies found in complex real-world data centers. It currently works - with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents and is - intended to replace and deprecate the monolithic plug-ins associated with - those L2 agents. -
- -
- New API Versions - - The third version of the Compute API was broadly discussed and - worked on during the Havana and Icehouse release cycles. Current - discussions indicate that the V2 API will remain for many releases, and - the next iteration of the API will be denoted v2.1 and have similar - properties to the existing v2.0, rather than an entirely new v3 API. - This is a great time to evaluate all the APIs and provide comments - while the next-generation APIs are being defined. A new working group was - formed specifically to improve - OpenStack APIs and create design guidelines, which you are welcome to join. - Kilo - - Compute V3 API - -
- -
- OpenStack on OpenStack (TripleO) - - This project continues to improve, and you may consider using it for - greenfield deployments, though - according to the latest user survey results it has yet to see widespread - uptake. -
- -
- Data processing service for OpenStack (sahara) - - A dedicated team has been making solid progress on a - Hadoop-as-a-Service project, a much-requested answer to big data - problems. -
- -
- Bare metal Deployment (ironic) - - The bare-metal deployment project has been widely lauded, and development - continues. The Juno release brought the OpenStack bare metal driver into the Compute - project, with the aim of deprecating the existing bare-metal driver in - Kilo. - If you are a current user of the bare metal driver, a particular blueprint to follow is - Deprecate the bare metal driver. - Kilo - - Compute bare-metal deployment - -
- -
- Database as a Service (trove) - - The OpenStack community has had a database-as-a-service tool in - development for some time, and we saw the first integrated - release of it in Icehouse. From its release it was able to deploy database - servers out of the box in a highly available way, initially supporting only - MySQL. Juno introduced support for Mongo (including clustering), PostgreSQL and - Couchbase, in addition to replication functionality for MySQL. In Kilo, - more advanced clustering capability was delivered, in addition to better - integration with other OpenStack components such as Networking. - - Juno - - database-as-a-service tool - -
- -
- Message Service (zaqar) - - A service that provides message queues and notifications was released. -
- -
- DNS service (designate) - - A long-requested service that provides the ability to manipulate DNS - entries associated with OpenStack resources has gathered a following. The - designate project was also released. -
-
- Scheduler Improvements - - Both Compute and Block Storage rely on schedulers to determine where - to place virtual machines or volumes. In Havana, the Compute scheduler - underwent significant improvement, while in Icehouse it was the scheduler in - Block Storage that received a boost. Further down the track, an effort that - started this cycle aims to create a holistic scheduler covering both. - Some of the work that was done in Kilo can be found under the -Gantt -project. - Kilo - - scheduler improvements - - 
- Block Storage Improvements - - Block Storage is considered a stable project, with wide uptake - and a long track record of quality drivers. The team - has discussed many areas of work at the summits, - including better error reporting, automated discovery, and thin - provisioning features. -
- -
- Toward a Python SDK - - Though many successfully use the various python-*client code as an - effective SDK for interacting with OpenStack, consistency between the - projects and documentation availability waxes and wanes. To combat this, - an effort to improve the - experience has started. Cross-project development efforts in - OpenStack have a checkered history, such as the unified client project - having several false starts. However, the early signs for the SDK - project are promising, and we expect to see results during the Juno - cycle. -
-
-
diff --git a/doc/openstack-ops/app_usecases.xml b/doc/openstack-ops/app_usecases.xml deleted file mode 100644 index 75285551..00000000 --- a/doc/openstack-ops/app_usecases.xml +++ /dev/null @@ -1,300 +0,0 @@ - - - Use Cases - - This appendix contains a small selection of use cases from the - community, with more technical detail than usual. Further examples can be - found on the OpenStack - website. - -
- NeCTAR - - Who uses it: researchers from the Australian publicly funded - research sector. Use is across a wide variety of disciplines, with the - purpose of instances ranging from running simple web servers to using - hundreds of cores for high-throughput computing. - NeCTAR Research Cloud - - use cases - - NeCTAR - - OpenStack community - - use cases - - NeCTAR - - -
- Deployment - - Using OpenStack Compute cells, the NeCTAR Research Cloud spans - eight sites with approximately 4,000 cores per site. - - Each site runs a different configuration, as resource - cells in an OpenStack Compute cells setup. Some - sites span multiple data centers, some use off-compute-node storage with - a shared file system, and some use on-compute-node storage with a - non-shared file system. Each site deploys the Image service with an - Object Storage back end. A central Identity, dashboard, and - Compute API service are used. A login to the dashboard triggers a SAML - login with Shibboleth, which creates an account - in the Identity service with an SQL back end. An Object Storage Global - Cluster is used across several sites. - - Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per - core and approximately 40 GB of ephemeral storage per core. - - All sites are based on Ubuntu 14.04, with KVM as the hypervisor. - The OpenStack version in use is typically the current stable version, - with 5 to 10 percent back-ported code from trunk and modifications. -
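For readers unfamiliar with Compute cells (v1), the per-site wiring reduces to a small nova.conf stanza on each cell. The sketch below is illustrative only; the cell name is invented, and NeCTAR's actual configuration may differ:

# /etc/nova/nova.conf on a child (compute) cell -- illustrative values
[cells]
enable=True
name=site-a          # hypothetical cell name
cell_type=compute    # the top-level API cell uses cell_type=api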
- -
- Resources - - - - OpenStack.org case - study - - - - NeCTAR-RC - GitHub - - - - NeCTAR - website - - -
-
- -
- MIT CSAIL - - Who uses it: researchers from the MIT Computer Science and - Artificial Intelligence Lab. - CSAIL (Computer Science and Artificial Intelligence - Lab) - - MIT CSAIL (Computer Science and Artificial Intelligence - Lab) - - use cases - - MIT CSAIL - - OpenStack community - - use cases - - MIT CSAIL - - -
- Deployment - - The CSAIL cloud is currently 64 physical nodes with a total of 768 - physical cores and 3,456 GB of RAM. Persistent data storage is largely - outside the cloud on NFS, with cloud resources focused on compute - resources. There are more than 130 users in more than 40 projects, - typically running 2,000–2,500 vCPUs in 300 to 400 instances. - - We initially deployed on Ubuntu 12.04 with the Essex release of - OpenStack using FlatDHCP multi-host networking. - - The software stack is still Ubuntu 12.04 LTS, but now with - OpenStack Havana from the Ubuntu Cloud Archive. KVM is the hypervisor, - deployed using FAI - and Puppet for configuration management. The FAI and Puppet combination - is used lab-wide, not only for OpenStack. There is a single cloud - controller node, which also acts as the network controller, with the - remainder of the server hardware dedicated to compute nodes. - - Host aggregates and instance-type extra specs are used to provide - two different resource allocation ratios. The default resource - allocation ratios we use are 4:1 CPU and 1.5:1 RAM. Compute-intensive - workloads use instance types that require non-oversubscribed hosts where - cpu_ratio and ram_ratio are both - set to 1.0. Since we have hyper-threading enabled on our compute nodes, - this provides one vCPU per CPU thread, or two vCPUs per physical - core. - - With our upgrade to Grizzly in August 2013, we moved to OpenStack - Networking, neutron (quantum at the time). Compute nodes have - two gigabit network interfaces and a separate management card for IPMI - management. One network interface is used for node-to-node - communications. The other is used as a trunk port for OpenStack-managed - VLANs. The controller node uses two bonded 10g network interfaces for - its public IP communications. Big pipes are used here because images are - served over this port, and it is also used to connect to iSCSI storage, - back-ending the image storage and database. The controller node also has - a gigabit interface that is used in trunk mode for OpenStack-managed - VLAN traffic. This port handles traffic to the dhcp-agent and - metadata-proxy. - - We approximate the older nova-network - multi-host HA setup by using "provider VLAN networks" that connect - instances directly to existing publicly addressable networks and use - existing physical routers as their default gateway. This means that if - our network controller goes down, running instances still have their - network available, and no single Linux host becomes a traffic - bottleneck. We are able to do this because we have a sufficient supply - of IPv4 addresses to cover all of our instances and thus don't need NAT - and don't use floating IP addresses. We provide a single generic public - network to all projects and additional existing VLANs on a - project-by-project basis as needed. Individual projects are also allowed - to create their own private GRE-based networks. -
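One standard way to wire up this kind of two-tier allocation scheme is host aggregates plus matching flavor extra specs. A hedged sketch with invented aggregate and flavor names, not CSAIL's actual commands:

# Group the non-oversubscribed hosts into an aggregate and tag it:
$ nova aggregate-create dedicated-hosts
$ nova aggregate-set-metadata dedicated-hosts allocation=dedicated

# Point a flavor at that aggregate via a matching extra spec
# (requires the AggregateInstanceExtraSpecsFilter scheduler filter):
$ nova flavor-key m1.dedicated set aggregate_instance_extra_specs:allocation=dedicated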
- -
- Resources - - - - CSAIL - homepage - - -
-
- -
- DAIR - - Who uses it: DAIR is an integrated virtual environment that - leverages the CANARIE network to develop and test new information - communication technology (ICT) and other digital technologies. It combines - such digital infrastructure as advanced networking and cloud computing and - storage to create an environment for developing and testing innovative ICT - applications, protocols, and services; performing at-scale experimentation - for deployment; and facilitating a faster time to market. - DAIR - - use cases - - DAIR - - OpenStack community - - use cases - - DAIR - - -
- Deployment - - DAIR is hosted at two different data centers across Canada: one in - Alberta and the other in Quebec. It consists of a cloud controller at - each location, although one is designated the "master" controller that - is in charge of central authentication and quotas. This is done through - custom scripts and light modifications to OpenStack. DAIR is currently - running Havana. - - For Object Storage, each region has a swift environment. - - A NetApp appliance is used in each region for both block storage - and instance storage. There are future plans to move the instances off - the NetApp appliance and onto a distributed file system such as - Ceph or GlusterFS. - - VlanManager is used extensively for network management. All - servers have two bonded 10GbE NICs that are connected to two redundant - switches. DAIR is set up to use single-node networking where the cloud - controller is the gateway for all instances on all compute nodes. - Internal OpenStack traffic (for example, storage traffic) does not go - through the cloud controller. -
- -
- Resources - - - - DAIR - homepage - - -
-
- -
- CERN - - Who uses it: researchers at CERN (European Organization for Nuclear - Research) conducting high-energy physics research. - CERN (European Organization for Nuclear Research) - - use cases - - CERN - - OpenStack community - - use cases - - CERN - - -
- Deployment - - The environment is largely based on Scientific Linux 6, which is - Red Hat compatible. We use KVM as our primary hypervisor, although tests - are ongoing with Hyper-V on Windows Server 2008. - - We use the Puppet Labs OpenStack modules to configure Compute, - Image service, Identity, and dashboard. Puppet is used widely for - instance configuration, and Foreman is used as a GUI for reporting and - instance provisioning. - - Users and groups are managed through Active Directory and imported - into the Identity service using LDAP. CLIs are available for nova - and Euca2ools. - - There are three clouds currently running at CERN, totaling about - 4,700 compute nodes, with approximately 120,000 cores. The CERN IT cloud - aims to expand to 300,000 cores by 2015. - 
- -
- Resources - - - - “OpenStack in - Production: A tale of 3 OpenStack Clouds” - - - - “Review of CERN - Data Centre Infrastructure” - - - - “CERN Cloud - Infrastructure User Guide” - - -
-
-
diff --git a/doc/openstack-ops/bk_ops_guide.xml b/doc/openstack-ops/bk_ops_guide.xml deleted file mode 100644 index 4745fceb..00000000 --- a/doc/openstack-ops/bk_ops_guide.xml +++ /dev/null @@ -1,63 +0,0 @@ - - - - - - - -]> - - OpenStack Operations Guide - - OpenStack Ops Guide - - - - - - - - OpenStack - Foundation - - - - 2014 - OpenStack Foundation - - OpenStack - - - - Copyright details are filled - in by the template. - - - - This book provides information about - designing and operating OpenStack - clouds. - - - - - - - - - - - - - - - - - - diff --git a/doc/openstack-ops/callouts/1.pdf b/doc/openstack-ops/callouts/1.pdf deleted file mode 100644 index e2e678f8..00000000 Binary files a/doc/openstack-ops/callouts/1.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/1.png b/doc/openstack-ops/callouts/1.png deleted file mode 100644 index 7d473430..00000000 Binary files a/doc/openstack-ops/callouts/1.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/10.pdf b/doc/openstack-ops/callouts/10.pdf deleted file mode 100644 index 4fa51cef..00000000 Binary files a/doc/openstack-ops/callouts/10.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/10.png b/doc/openstack-ops/callouts/10.png deleted file mode 100644 index 997bbc82..00000000 Binary files a/doc/openstack-ops/callouts/10.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/2.pdf b/doc/openstack-ops/callouts/2.pdf deleted file mode 100644 index 32dd2852..00000000 Binary files a/doc/openstack-ops/callouts/2.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/2.png b/doc/openstack-ops/callouts/2.png deleted file mode 100644 index 5d09341b..00000000 Binary files a/doc/openstack-ops/callouts/2.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/3.pdf b/doc/openstack-ops/callouts/3.pdf deleted file mode 100644 index c5f4f06f..00000000 Binary files a/doc/openstack-ops/callouts/3.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/3.png b/doc/openstack-ops/callouts/3.png deleted file mode 100644 index ef7b7004..00000000 Binary files a/doc/openstack-ops/callouts/3.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/4.pdf b/doc/openstack-ops/callouts/4.pdf deleted file mode 100644 index 5df555d9..00000000 Binary files a/doc/openstack-ops/callouts/4.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/4.png b/doc/openstack-ops/callouts/4.png deleted file mode 100644 index adb8364e..00000000 Binary files a/doc/openstack-ops/callouts/4.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/5.pdf b/doc/openstack-ops/callouts/5.pdf deleted file mode 100644 index 96c1ba28..00000000 Binary files a/doc/openstack-ops/callouts/5.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/5.png b/doc/openstack-ops/callouts/5.png deleted file mode 100644 index 4d7eb460..00000000 Binary files a/doc/openstack-ops/callouts/5.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/6.pdf b/doc/openstack-ops/callouts/6.pdf deleted file mode 100644 index 99b454b0..00000000 Binary files a/doc/openstack-ops/callouts/6.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/6.png b/doc/openstack-ops/callouts/6.png deleted file mode 100644 index 0ba694af..00000000 Binary files a/doc/openstack-ops/callouts/6.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/7.pdf b/doc/openstack-ops/callouts/7.pdf deleted file mode 100644 index 2e6827f8..00000000 Binary files a/doc/openstack-ops/callouts/7.pdf and /dev/null 
differ diff --git a/doc/openstack-ops/callouts/7.png b/doc/openstack-ops/callouts/7.png deleted file mode 100644 index 472e96f8..00000000 Binary files a/doc/openstack-ops/callouts/7.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/8.pdf b/doc/openstack-ops/callouts/8.pdf deleted file mode 100644 index 159ac764..00000000 Binary files a/doc/openstack-ops/callouts/8.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/8.png b/doc/openstack-ops/callouts/8.png deleted file mode 100644 index 5e60973c..00000000 Binary files a/doc/openstack-ops/callouts/8.png and /dev/null differ diff --git a/doc/openstack-ops/callouts/9.pdf b/doc/openstack-ops/callouts/9.pdf deleted file mode 100644 index 60a2e70a..00000000 Binary files a/doc/openstack-ops/callouts/9.pdf and /dev/null differ diff --git a/doc/openstack-ops/callouts/9.png b/doc/openstack-ops/callouts/9.png deleted file mode 100644 index a0676d26..00000000 Binary files a/doc/openstack-ops/callouts/9.png and /dev/null differ diff --git a/doc/openstack-ops/ch_arch_cloud_controller.xml b/doc/openstack-ops/ch_arch_cloud_controller.xml deleted file mode 100644 index d75bbaf9..00000000 --- a/doc/openstack-ops/ch_arch_cloud_controller.xml +++ /dev/null @@ -1,787 +0,0 @@ - - - - - Designing for Cloud Controllers and <phrase - role="keep-together">Cloud Management</phrase> - - OpenStack is designed to be massively horizontally scalable, which - allows all services to be distributed widely. However, to simplify this - guide, we have decided to discuss services of a more central nature, using - the concept of a cloud controller. A cloud controller - is just a conceptual simplification. In the real world, you design an - architecture for your cloud controller that enables high availability so - that if any node fails, another can take over the required tasks. In - reality, cloud controller tasks are spread out across more than a single - node. - design considerations - - cloud controller services - - cloud controllers - - concept of - - - The cloud controller provides the central management system for - OpenStack deployments. Typically, the cloud controller manages - authentication and sends messaging to all the systems through a message - queue. - - For many deployments, the cloud controller is a single node. However, - to have high availability, you have to take a few considerations into - account, which we'll cover in this chapter. 
- - The cloud controller manages the following services for the - cloud: - cloud controllers - - services managed by - - - - - Databases - - - Tracks current information about users and instances, for - example, in a database, typically one database instance managed per - service - - - - - Message queue services - - - All AMQP (Advanced Message Queuing Protocol) messages for services - are received and sent according to the queue broker - Advanced Message Queuing Protocol (AMQP) - - - - - - Conductor services - - - Proxy requests to a database - - - - - Authentication and authorization for identity management - - - Indicates which users can do what actions on certain cloud - resources; quota management is spread out among services, - however - authentication - - - - - - Image-management services - - - Stores and serves images with metadata on each, for launching in - the cloud - - - - - Scheduling services - - - Indicates which resources to use first; for example, spreading - out where instances are launched based on an algorithm - - - - - User dashboard - - - Provides a web-based front end for users to consume OpenStack - cloud services - - - - - API endpoints - - - Offers each service's REST API access, where the API endpoint - catalog is managed by the Identity service - - - - - For our example, the cloud controller has a collection of - nova-* components that represent the global state of the cloud; - talks to services such as authentication; maintains information about the - cloud in a database; communicates with all compute nodes and storage - workers through a queue; and provides API access. - Each service running on a designated cloud controller may be broken out into - separate nodes for scalability or availability. - storage - - storage workers - - workers - - - As another example, you could use pairs of servers for a collective - cloud controller—one active, one standby—for redundant nodes providing a - given set of related services, such as: - - - - Front end web for API requests, the scheduler for choosing which - compute node to boot an instance on, Identity services, and the - dashboard - - - - Database and message queue server (such as MySQL, RabbitMQ) - - - - Image service for image management - - - - Now that you see the myriad designs for controlling your cloud, read - more about the further considerations to help with your design - decisions. -
- Hardware Considerations - - A cloud controller's hardware can be the same as a compute node, - though you may want to further specify based on the size and type of cloud - that you run. - hardware - - design considerations - - design considerations - - hardware considerations - - - It's also possible to use virtual machines for all or some of the - services that the cloud controller manages, such as the message queuing. - In this guide, we assume that all services are running directly on the - cloud controller. - - The table below contains common - considerations to review when sizing hardware for the cloud controller - design. - cloud controllers - - hardware sizing considerations - - Active Directory - - dashboard - - - -
Cloud controller hardware sizing considerations
ConsiderationRamification
How many instances will run at once?Size your database server accordingly, and scale out - beyond one cloud controller if many instances will report status at - the same time; scheduling where a new instance starts up also needs - computing power.
How many compute nodes will run at once?Ensure that your messaging queue handles requests - successfully and size accordingly.
How many users will access the API?If many users will make multiple requests, make sure that - the CPU load for the cloud controller can handle it.
How many users will access the - dashboard versus the REST API - directly?The dashboard makes many requests, even more than the API - access, so add even more CPU if your dashboard is the main interface - for your users.
How many nova-api services do you run at once - for your cloud?You need to size the controller with a core per - service.
How long does a single instance run?Starting instances and deleting instances is demanding on - the compute node but also demanding on the controller node because - of all the API queries and scheduling needs.
Does your authentication system also verify - externally?External systems such as LDAP or Active - Directory require network connectivity between the cloud - controller and an external authentication system. Also ensure that - the cloud controller has the CPU power to keep up with - requests.
-
- -
- Separation of Services

  While our example contains all central services in a single location, it is possible and indeed often a good idea to separate services onto different physical servers. The following table lists deployment scenarios we've seen and their justifications.
Deployment scenarios
Scenario | Justification
Run glance-* servers on the swift-proxy server. | This deployment felt that the spare I/O on the Object Storage proxy server was sufficient and that the Image Delivery portion of glance benefited from being on physical hardware and having good connectivity to the Object Storage back end it was using.
Run a central dedicated database server. | This deployment used a central dedicated server to provide the databases for all services. This approach simplified operations by isolating database server updates and allowed for the simple creation of slave database servers for failover.
Run one VM per service. | This deployment ran central services on a set of servers running KVM. A dedicated VM was created for each service (nova-scheduler, rabbitmq, database, etc.). This assisted the deployment with scaling because administrators could tune the resources given to each virtual machine based on the load it received (something that was not well understood during installation).
Use an external load balancer. | This deployment had an expensive hardware load balancer in its organization. It ran multiple nova-api and swift-proxy servers on different physical servers and used the load balancer to switch between them.
- - One choice that always comes up is whether to virtualize. Some - services, such as nova-compute, swift-proxy and - swift-object servers, should not be virtualized. However, - control servers can often be happily virtualized—the performance penalty - can usually be offset by simply running more of the service. -
- -
- Database

  OpenStack Compute uses an SQL database to store and retrieve stateful information. MySQL is the popular database choice in the OpenStack community.

  Loss of the database leads to errors. As a result, we recommend that you cluster your database to make it fault tolerant. Configuring and maintaining a database cluster is done outside OpenStack and is determined by the database software you choose to use in your cloud environment. MySQL/Galera is a popular option for MySQL-based databases.
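  Each service is pointed at the database through its configuration file. Here is a minimal sketch for Compute, assuming a MySQL database named nova reachable at a hypothetical host called cloud.example.com; the option name and section vary by release (older releases use sql_connection in the [DEFAULT] section):

      # nova.conf -- hypothetical database connection for Compute
      [database]
      connection = mysql://nova:NOVA_DBPASS@cloud.example.com/nova

  With a Galera cluster, this hostname would typically point at a load-balanced virtual IP in front of the cluster rather than at a single database server.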
- -
- Message Queue

  Most OpenStack services communicate with each other using the message queue. For example, Compute communicates to block storage services and networking services through the message queue. Also, you can optionally enable notifications for any service. RabbitMQ, Qpid, and 0mq are all popular choices for a message-queue service. In general, if the message queue fails or becomes inaccessible, the cluster grinds to a halt and ends up in a read-only state, with information stuck at the point where the last message was sent. Accordingly, we recommend that you cluster the message queue. Be aware that clustered message queues can be a pain point for many OpenStack deployments. While RabbitMQ has native clustering support, there have been reports of issues when running it at a large scale. Other queuing solutions are available, such as 0mq and Qpid, but 0mq does not offer stateful queues. Qpid is the messaging system of choice for Red Hat and its derivatives; however, Qpid does not have native clustering capabilities and requires a supplemental service, such as Pacemaker or Corosync. For your message queue, you need to determine what level of data loss you are comfortable with and whether to use an OpenStack project's ability to retry across multiple MQ hosts in the event of a failure, such as using Compute's ability to do so.
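  If you do cluster RabbitMQ, Compute can be told about multiple brokers so that it retries another host when one fails. A minimal sketch, assuming two hypothetical broker hosts and mirrored (HA) queues; the option names shown here vary by release:

      # nova.conf -- hypothetical RabbitMQ cluster settings
      rabbit_hosts = rabbit1.example.com:5672,rabbit2.example.com:5672
      rabbit_retry_interval = 1
      rabbit_retry_backoff = 2
      rabbit_ha_queues = True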
- -
- Conductor Services - - In the previous version of OpenStack, all - nova-compute services required direct access to the - database hosted on the cloud controller. This was problematic for two - reasons: security and performance. With regard to security, if a compute - node is compromised, the attacker inherently has access to the database. - With regard to performance, nova-compute calls to the - database are single-threaded and blocking. This creates a performance - bottleneck because database requests are fulfilled serially rather than in - parallel. - conductors - - design considerations - - conductor services - - - The conductor service resolves both of these issues by acting as a - proxy for the nova-compute service. Now, instead of - nova-compute directly accessing the database, it - contacts the nova-conductor service, and - nova-conductor accesses the database on - nova-compute's behalf. Since - nova-compute no longer has direct access to the - database, the security issue is resolved. Additionally, - nova-conductor is a nonblocking service, so requests - from all compute nodes are fulfilled in parallel. - - - If you are using nova-network and multi-host - networking in your cloud environment, nova-compute - still requires direct access to the database. - multi-host networking - - - - The nova-conductor service is horizontally - scalable. To make nova-conductor highly available and - fault tolerant, just launch more instances of the - nova-conductor process, either on the same server or across - multiple servers. -
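  As a sketch, on releases that expose a workers option for the conductor, you can run several conductor processes on a single controller; the value below is illustrative only, and the option defaults to the number of CPUs on the host:

      # nova.conf -- run multiple conductor workers on the controller
      [conductor]
      workers = 8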
- -
- Application Programming Interface (API)

  All public access, whether direct, through a command-line client, or through the web-based dashboard, uses the API service. Find the API reference in the OpenStack API documentation.

  You must choose whether you want to support the Amazon EC2 compatibility APIs, or just the OpenStack APIs. One issue you might encounter when running both APIs is an inconsistent experience when referring to images and instances.

  For example, the EC2 API refers to instances using IDs that contain hexadecimal, whereas the OpenStack API uses names and digits. Similarly, the EC2 API tends to rely on DNS aliases for contacting virtual machines, as opposed to OpenStack, which typically lists IP addresses.

  If OpenStack is not set up in the right way, it is easy to end up with scenarios in which users are unable to contact their instances because they have only an incorrect DNS alias. Despite this, EC2 compatibility can assist users migrating to your cloud.

  As with databases and message queues, having more than one API server is a good thing. Traditional HTTP load-balancing techniques can be used to achieve a highly available nova-api service.
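  As an illustration of such load balancing, here is a minimal HAProxy sketch fronting two hypothetical nova-api servers; the server names and addresses are placeholders, and the syntax assumes a recent HAProxy version:

      # haproxy.cfg -- hypothetical load balancing for nova-api
      listen nova-api
          bind 0.0.0.0:8774
          balance roundrobin
          server api01 192.0.2.11:8774 check
          server api02 192.0.2.12:8774 check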
- -
- Extensions

  The API Specifications define the core actions, capabilities, and media types of the OpenStack API. A client can always depend on the availability of this core API, and implementers are always required to support it in its entirety. Requiring strict adherence to the core API allows clients to rely upon a minimal level of functionality when interacting with multiple implementations of the same API.

  The OpenStack Compute API is extensible. An extension adds capabilities to an API beyond those defined in the core. The introduction of new features, MIME types, actions, states, headers, parameters, and resources can all be accomplished by means of extensions to the core API. This allows the introduction of new features in the API without requiring a version change and allows the introduction of vendor-specific niche functionality.
- -
- Scheduling - - The scheduling services are responsible for determining the compute - or storage node where a virtual machine or block storage volume should be - created. The scheduling services receive creation requests for these - resources from the message queue and then begin the process of determining - the appropriate node where the resource should reside. This process is - done by applying a series of user-configurable filters against the - available collection of nodes. - schedulers - - design considerations - - design considerations - - scheduling - - - There are currently two schedulers: - nova-scheduler for virtual machines and - cinder-scheduler for block storage volumes. Both - schedulers are able to scale horizontally, so for high-availability - purposes, or for very large or high-schedule-frequency installations, you - should consider running multiple instances of each scheduler. The - schedulers all listen to the shared message queue, so no special load - balancing is required. -
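  The filter chain is selected in the Compute configuration. A minimal sketch follows; the exact option names and the set of available filters vary by release:

      # nova.conf -- hypothetical scheduler filter selection
      scheduler_default_filters = AvailabilityZoneFilter,RamFilter,CoreFilter,ComputeFilter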
- -
- Images

  The OpenStack Image service consists of two parts: glance-api and glance-registry. The former is responsible for the delivery of images; the compute node uses it to download images from the back end. The latter maintains the metadata information associated with virtual machine images and requires a database.

  The glance-api part is an abstraction layer that allows a choice of back end. Currently, it supports:

  OpenStack Object Storage
      Allows you to store images as objects.

  File system
      Uses any traditional file system to store the images as files.

  S3
      Allows you to fetch images from Amazon S3.

  HTTP
      Allows you to fetch images from a web server. You cannot write images by using this mode.

  If you have an OpenStack Object Storage service, we recommend using it as a scalable place to store your images. Otherwise, a file system with sufficient performance is a good alternative, or Amazon S3 if you do not need the ability to upload new images through OpenStack.
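  The back end is selected in the glance-api configuration. A minimal sketch for the Object Storage case, with a hypothetical Identity endpoint and credentials; option names vary by release:

      # glance-api.conf -- hypothetical Object Storage back end
      default_store = swift
      swift_store_auth_address = http://cloud.example.com:5000/v2.0/
      swift_store_user = service:glance
      swift_store_key = GLANCE_SWIFT_PASSWORD
      swift_store_create_container_on_put = True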
- -
- Dashboard - - The OpenStack dashboard (horizon) provides a web-based user - interface to the various OpenStack components. The dashboard includes an - end-user area for users to manage their virtual infrastructure and an - admin area for cloud operators to manage the OpenStack environment as a - whole. - dashboard - - design considerations - - dashboard - - - The dashboard is implemented as a Python web application that - normally runs in Apache httpd. - Therefore, you may treat it the same as any other web application, - provided it can reach the API servers (including their admin endpoints) - over the network. - Apache - -
- -
- Authentication and Authorization

  The concepts supporting OpenStack's authentication and authorization are derived from well-understood and widely used systems of a similar nature. Users have credentials they can use to authenticate, and they can be a member of one or more groups (known as projects or tenants, interchangeably).

  For example, a cloud administrator might be able to list all instances in the cloud, whereas a user can see only those in their current group. Resource quotas, such as the number of cores that can be used, disk space, and so on, are associated with a project.

  OpenStack Identity provides authentication decisions and user attribute information, which is then used by the other OpenStack services to perform authorization. The policy is set in the policy.json file. For information on how to configure these policies, see the OpenStack documentation.

  OpenStack Identity supports different plug-ins for authentication decisions and identity storage. Examples of these plug-ins include:

  - In-memory key-value store (a simplified internal storage structure)

  - SQL database (such as MySQL or PostgreSQL)

  - Memcached (a distributed memory object caching system)

  - LDAP (such as OpenLDAP or Microsoft's Active Directory)

  Many deployments use the SQL database; however, LDAP is also a popular choice for those with existing authentication infrastructure that needs to be integrated.
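  As a sketch of the LDAP case, the Identity service can be pointed at an existing directory in keystone.conf; the host, suffix, and bind account below are hypothetical, and driver paths vary by release:

      # keystone.conf -- hypothetical LDAP identity back end
      [identity]
      driver = keystone.identity.backends.ldap.Identity

      [ldap]
      url = ldap://ldap.example.com
      suffix = dc=example,dc=com
      user = cn=admin,dc=example,dc=com
      password = LDAP_PASSWORD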
- -
- Network Considerations

  Because the cloud controller handles so many different services, it must be able to handle the amount of traffic that hits it. For example, if you choose to host the OpenStack Image service on the cloud controller, the cloud controller should be able to support the transferring of the images at an acceptable speed.

  As another example, if you choose to use single-host networking where the cloud controller is the network gateway for all instances, then the cloud controller must support the total amount of traffic that travels between your cloud and the public Internet.

  We recommend that you use a fast NIC, such as 10 Gbps. You can also choose to use two 10 Gbps NICs and bond them together. While you might not be able to get a full bonded 20 Gbps speed, different transmission streams use different NICs. For example, if the cloud controller transfers two images, each image uses a different NIC and gets a full 10 Gbps of bandwidth.
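  As a sketch of such bonding on an Ubuntu-based controller, assuming the ifenslave package is installed and using hypothetical interface names and addresses; 802.3ad mode also requires matching support on the switch:

      # /etc/network/interfaces -- hypothetical bond of two 10 Gbps NICs
      auto bond0
      iface bond0 inet static
          address 203.0.113.10
          netmask 255.255.255.0
          bond-slaves eth0 eth1
          bond-mode 802.3ad
          bond-miimon 100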
-
diff --git a/doc/openstack-ops/ch_arch_compute_nodes.xml b/doc/openstack-ops/ch_arch_compute_nodes.xml deleted file mode 100644 index 52599901..00000000 --- a/doc/openstack-ops/ch_arch_compute_nodes.xml +++ /dev/null @@ -1,623 +0,0 @@ - - - - - Compute Nodes - - In this chapter, we discuss some of the choices you need to consider - when building out your compute nodes. Compute nodes form the resource core - of the OpenStack Compute cloud, providing the processing, memory, network - and storage resources to run instances. - -
- Choosing a CPU - - The type of CPU in your compute node is a very important choice. - First, ensure that the CPU supports virtualization by way of - VT-x for Intel chips and AMD-v - for AMD chips. - CPUs (central processing units) - - choosing - - compute nodes - - CPU choice - - - - Consult the vendor documentation to check for virtualization - support. For Intel, read “Does my processor support Intel® - Virtualization Technology?”. For AMD, read AMD - Virtualization. Note that your CPU may support virtualization but - it may be disabled. Consult your BIOS documentation for how to enable - CPU features. - virtualization technology - - AMD Virtualization - - Intel Virtualization Technology - - - - The number of cores that the CPU has also affects the decision. It's - common for current CPUs to have up to 12 cores. Additionally, if an Intel - CPU supports hyperthreading, those 12 cores are doubled to 24 cores. If - you purchase a server that supports multiple CPUs, the number of cores is - further multiplied. - cores - - hyperthreading - - multithreading - - - - - - Multithread Considerations - - Hyper-Threading is Intel's proprietary simultaneous multithreading - implementation used to improve parallelization on their CPUs. You might - consider enabling Hyper-Threading to improve the performance of - multithreaded applications. - - Whether you should enable Hyper-Threading on your CPUs depends - upon your use case. For example, disabling Hyper-Threading can be - beneficial in intense computing environments. We recommend that you do - performance testing with your local workload with both Hyper-Threading - on and off to determine what is more appropriate in your case. - CPUs (central processing units) - - enabling hyperthreading on - - -
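  To check from a running Linux system whether the CPU advertises the virtualization extensions discussed above, you can count the vmx (Intel) or svm (AMD) flags; a nonzero count means the flags are present, though the feature may still be disabled in the BIOS:

      $ egrep -c '(vmx|svm)' /proc/cpuinfo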
- -
- Choosing a Hypervisor

  A hypervisor provides software to manage virtual machine access to the underlying hardware. The hypervisor creates, manages, and monitors virtual machines. OpenStack Compute supports many hypervisors to various degrees, including:

  - KVM

  - LXC

  - QEMU

  - VMware ESX/ESXi

  - Xen

  - Hyper-V

  - Docker

  Probably the most important factor in your choice of hypervisor is your current usage or experience. Aside from that, there are practical concerns to do with feature parity, documentation, and the level of community experience.

  For example, KVM is the most widely adopted hypervisor in the OpenStack community. Besides KVM, more deployments run Xen, LXC, VMware, and Hyper-V than the others listed. However, each of these lacks some feature support, or the documentation on how to use it with OpenStack is out of date.

  The best information available to support your choice is found on the Hypervisor Support Matrix and in the configuration reference.

  It is also possible to run multiple hypervisors in a single deployment using host aggregates or cells. However, an individual compute node can run only a single hypervisor at a time.
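  The hypervisor is selected in the Compute configuration on each compute node. A minimal sketch for KVM, using the option names of the releases this guide covers (newer releases move the setting to virt_type in a [libvirt] section):

      # nova.conf -- hypothetical hypervisor selection for KVM
      compute_driver = libvirt.LibvirtDriver
      libvirt_type = kvm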
- -
- Instance Storage Solutions - - As part of the procurement for a compute cluster, you must specify - some storage for the disk on which the instantiated instance runs. There - are three main approaches to providing this temporary-style storage, and - it is important to understand the implications of the choice. - storage - - instance storage solutions - - instances - - storage solutions - - compute nodes - - instance storage solutions - - - They are: - - - - Off compute node storage—shared file system - - - - On compute node storage—shared file system - - - - On compute node storage—nonshared file system - - - - In general, the questions you should ask when selecting storage are - as follows: - - - - What is the platter count you can achieve? - - - - Do more spindles result in better I/O despite network - access? - - - - Which one results in the best cost-performance scenario you're - aiming for? - - - - How do you manage the storage operationally? - - - - Many operators use separate compute and storage hosts. Compute - services and storage services have different requirements, and compute - hosts typically require more CPU and RAM than storage hosts. Therefore, - for a fixed budget, it makes sense to have different configurations for - your compute nodes and your storage nodes. Compute nodes will be invested - in CPU and RAM, and storage nodes will be invested in block - storage. - - However, if you are more restricted in the number of physical hosts - you have available for creating your cloud and you want to be able to - dedicate as many of your hosts as possible to running instances, it makes - sense to run compute and storage on the same machines. - - We'll discuss the three main approaches to instance storage in the - next few sections. - - - -
- Off Compute Node Storage—Shared File System - - In this option, the disks storing the running instances are hosted - in servers outside of the compute nodes. - shared storage - - file systems - - shared - - - If you use separate compute and storage hosts, you can treat your - compute hosts as "stateless." As long as you don't have any instances - currently running on a compute host, you can take it offline or wipe it - completely without having any effect on the rest of your cloud. This - simplifies maintenance for the compute hosts. - - There are several advantages to this approach: - - - - If a compute node fails, instances are usually easily - recoverable. - - - - Running a dedicated storage system can be operationally - simpler. - - - - You can scale to any number of spindles. - - - - It may be possible to share the external storage for other - purposes. - - - - The main downsides to this approach are: - - - - Depending on design, heavy I/O usage from some instances can - affect unrelated instances. - - - - Use of the network can decrease performance. - - -
- -
- On Compute Node Storage—Shared File System - - In this option, each compute node is specified with a significant - amount of disk space, but a distributed file system ties the disks from - each compute node into a single mount. - - The main advantage of this option is that it scales to external - storage when you require additional storage. - - However, this option has several downsides: - - - - Running a distributed file system can make you lose your data - locality compared with nonshared storage. - - - - Recovery of instances is complicated by depending on multiple - hosts. - - - - The chassis size of the compute node can limit the number of - spindles able to be used in a compute node. - - - - Use of the network can decrease performance. - - -
- -
- On Compute Node Storage—Nonshared File System

  In this option, each compute node is specified with enough disks to store the instances it hosts.

  There are two main reasons why this is a good idea:

  - Heavy I/O usage on one compute node does not affect instances on other compute nodes.

  - Direct I/O access can increase performance.

  This has several downsides:

  - If a compute node fails, the instances running on that node are lost.

  - The chassis size of the compute node can limit the number of spindles able to be used in a compute node.

  - Migrations of instances from one node to another are more complicated and rely on features that may not continue to be developed.

  - If additional storage is required, this option does not scale.

  Running a shared file system on a storage system apart from the compute nodes is ideal for clouds where reliability and scalability are the most important factors. Running a shared file system on the compute nodes themselves may be best in a scenario where you have to deploy to preexisting servers for which you have little to no control over their specifications. Running a nonshared file system on the compute nodes themselves is a good option for clouds with high I/O requirements and low concern for reliability.
- -
- Issues with Live Migration - - We consider live migration an integral part of the operations of - the cloud. This feature provides the ability to seamlessly move - instances from one physical host to another, a necessity for performing - upgrades that require reboots of the compute hosts, but only works well - with shared storage. - storage - - live migration - - migration - - live migration - - compute nodes - - live migration - - - Live migration can also be done with nonshared storage, using a - feature known as KVM live block migration. While an - earlier implementation of block-based migration in KVM and QEMU was - considered unreliable, there is a newer, more reliable implementation of - block-based live migration as of QEMU 1.4 and libvirt 1.0.2 that is also - compatible with OpenStack. However, none of the authors of this guide - have first-hand experience using live block migration. - block migration - -
- -
- Choice of File System - - If you want to support shared-storage live migration, you need to - configure a distributed file system. - compute nodes - - file system choice - - file systems - - choice of - - storage - - file system choice - - - Possible options include: - - - - NFS (default for Linux) - - - - GlusterFS - - - - MooseFS - - - - Lustre - - - - We've seen deployments with all, and recommend that you choose the - one you are most familiar with operating. If you are not familiar with - any of these, choose NFS, as it is the easiest to set up and there is - extensive community knowledge about it. -
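  As a sketch of the NFS case, each compute node mounts a shared export over the instances directory; the server name and export path here are hypothetical:

      # /etc/fstab on each compute node -- hypothetical shared instance storage
      nfs.example.com:/srv/nova-instances  /var/lib/nova/instances  nfs4  defaults  0  0

  The mounted directory must be writable by the user that runs the Compute service on every node.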
-
- -
- Overcommitting

  OpenStack allows you to overcommit CPU and RAM on compute nodes. This allows you to increase the number of instances you can have running on your cloud, at the cost of reducing the performance of the instances. OpenStack Compute uses the following ratios by default:

  - CPU allocation ratio: 16:1

  - RAM allocation ratio: 1.5:1

  The default CPU allocation ratio of 16:1 means that the scheduler allocates up to 16 virtual cores per physical core. For example, if a physical node has 12 cores, the scheduler sees 192 available virtual cores. With typical flavor definitions of 4 virtual cores per instance, this ratio would provide 48 instances on a physical node.

  The formula for the number of virtual instances on a compute node is (OR × PC) / VC, where:

  OR
      CPU overcommit ratio (virtual cores per physical core)

  PC
      Number of physical cores

  VC
      Number of virtual cores per instance

  Similarly, the default RAM allocation ratio of 1.5:1 means that the scheduler allocates instances to a physical node as long as the total amount of RAM associated with the instances is less than 1.5 times the amount of RAM available on the physical node.

  For example, if a physical node has 48 GB of RAM, the scheduler allocates instances to that node until the sum of the RAM associated with the instances reaches 72 GB (such as nine instances, in the case where each instance has 8 GB of RAM).

  Regardless of the overcommit ratio, an instance cannot be placed on any physical node with fewer raw (pre-overcommit) resources than the instance flavor requires.

  You must select the appropriate CPU and RAM allocation ratio for your particular use case.
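  The ratios are set in the Compute configuration; as a minimal sketch, restating the defaults discussed above:

      # nova.conf -- default allocation ratios, shown explicitly
      cpu_allocation_ratio = 16.0
      ram_allocation_ratio = 1.5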
- -
- Logging

  Logging is detailed more fully in the logging and monitoring chapter of this guide. However, it is an important design consideration to take into account before commencing operations of your cloud.

  OpenStack produces a great deal of useful logging information; however, for the information to be useful for operations purposes, you should consider having a central logging server to send logs to, and a log parsing/analysis system (such as logstash).
- -
- Networking

  Networking in OpenStack is a complex, multifaceted challenge. See the networking chapters of this guide.
- -
- Conclusion - - Compute nodes are the workhorse of your cloud and the place where - your users' applications will run. They are likely to be affected by your - decisions on what to deploy and how you deploy it. Their requirements - should be reflected in the choices you make. -
-
diff --git a/doc/openstack-ops/ch_arch_examples.xml b/doc/openstack-ops/ch_arch_examples.xml deleted file mode 100644 index 75c33894..00000000 --- a/doc/openstack-ops/ch_arch_examples.xml +++ /dev/null @@ -1,49 +0,0 @@

  Architecture Examples

  To understand the possibilities that OpenStack offers, it's best to start with a basic architecture that has been tested in production environments. We offer two examples with basic pivots on the base operating system (Ubuntu and Red Hat Enterprise Linux) and the networking architecture. There are other differences between these two examples, and this guide provides reasons for each choice made.

  Because OpenStack is highly configurable, with many different back ends and network configuration options, it is difficult to write documentation that covers all possible OpenStack deployments. Therefore, this guide defines examples of architecture to simplify the task of documenting, as well as to provide the scope for this guide. Both of the offered architecture examples are currently running in production and serving users.

  As always, refer to the glossary if you are unclear about any of the terminology mentioned in these architecture examples.
- Parting Thoughts on Architecture Examples

  With so many considerations and options available, our hope is to provide a few clearly-marked and tested paths for your OpenStack exploration. If you're looking for additional ideas, check out the use cases appendix of this guide, the OpenStack Installation Guides, or the OpenStack User Stories page.
-
diff --git a/doc/openstack-ops/ch_arch_network_design.xml b/doc/openstack-ops/ch_arch_network_design.xml deleted file mode 100644 index eb7e2802..00000000 --- a/doc/openstack-ops/ch_arch_network_design.xml +++ /dev/null @@ -1,536 +0,0 @@ - - - - - Network Design - - OpenStack provides a rich networking environment, and this chapter - details the requirements and options to deliberate when designing your - cloud. - network design - - first steps - - design considerations - - network design - - - - If this is the first time you are deploying a cloud infrastructure - in your organization, after reading this section, your first conversations - should be with your networking team. Network usage in a running cloud is - vastly different from traditional network deployments and has the - potential to be disruptive at both a connectivity and a policy - level. - cloud computing - - vs. traditional deployments - - - - For example, you must plan the number of IP addresses that you need - for both your guest instances as well as management infrastructure. - Additionally, you must research and discuss cloud network connectivity - through proxy servers and firewalls. - - In this chapter, we'll give some examples of network implementations - to consider and provide information about some of the network layouts that - OpenStack uses. Finally, we have some brief notes on the networking services - that are essential for stable operation. - -
- Management Network - - A management network (a separate network for - use by your cloud operators) typically consists of a separate switch and - separate NICs (network interface cards), and is a recommended option. This - segregation prevents system administration and the monitoring of system - access from being disrupted by traffic generated by guests. - NICs (network interface cards) - - management network - - network design - - management network - - - Consider creating other private networks for communication between - internal components of OpenStack, such as the message queue and OpenStack - Compute. Using a virtual local area network (VLAN) works well for these - scenarios because it provides a method for creating multiple virtual - networks on a physical network. -
- -
- Public Addressing Options - - There are two main types of IP addresses for guest virtual machines: - fixed IPs and floating IPs. Fixed IPs are assigned to instances on boot, - whereas floating IP addresses can change their association between - instances by action of the user. Both types of IP addresses can be either - public or private, depending on your use case. - IP addresses - - public addressing options - - network design - - public addressing options - - - Fixed IP addresses are required, whereas it is possible to run - OpenStack without floating IPs. One of the most common use cases for - floating IPs is to provide public IP addresses to a private cloud, where - there are a limited number of IP addresses available. Another is for a - public cloud user to have a "static" IP address that can be reassigned - when an instance is upgraded or moved. - IP addresses - - static - - static IP addresses - - - Fixed IP addresses can be private for private clouds, or public for - public clouds. When an instance terminates, its fixed IP is lost. It is - worth noting that newer users of cloud computing may find their ephemeral - nature frustrating. - IP addresses - - fixed - - fixed IP addresses - -
- -
- IP Address Planning

  An OpenStack installation can potentially have many subnets (ranges of IP addresses) and different types of services in each. An IP address plan can assist with a shared understanding of network partition purposes and scalability. Control services can have public and private IP addresses, and as noted above, there are a couple of options for an instance's public addresses.

  An IP address plan might be broken down into the following sections:

  Subnet router
      Packets leaving the subnet go via this address, which could be a dedicated router or a nova-network service.

  Control services public interfaces
      Public access to swift-proxy, nova-api, glance-api, and horizon come to these addresses, which could be on one side of a load balancer or pointing at individual machines.

  Object Storage cluster internal communications
      Traffic among object/account/container servers and between these and the proxy server's internal interface uses this private network.

  Compute and storage communications
      If ephemeral or block storage is external to the compute node, this network is used.

  Out-of-band remote management
      If a dedicated remote access controller chip is included in servers, often these are on a separate network.

  In-band remote management
      Often, an extra (such as 1 Gbps) interface on compute or storage nodes is used for system administrators or monitoring tools to access the host instead of going through the public interface.

  Spare space for future growth
      Adding more public-facing control services or guest instance IPs should always be part of your plan.

  For example, take a deployment that has both OpenStack Compute and Object Storage, with private ranges 172.22.42.0/24 and 172.22.87.0/26 available. One way to segregate the space might be as follows:

      172.22.42.0/24:
      172.22.42.1   - 172.22.42.3   - subnet routers
      172.22.42.4   - 172.22.42.20  - spare for networks
      172.22.42.21  - 172.22.42.104 - Compute node remote access controllers (inc spare)
      172.22.42.105 - 172.22.42.188 - Compute node management interfaces (inc spare)
      172.22.42.189 - 172.22.42.208 - Swift proxy remote access controllers (inc spare)
      172.22.42.209 - 172.22.42.228 - Swift proxy management interfaces (inc spare)
      172.22.42.229 - 172.22.42.252 - Swift storage servers remote access controllers (inc spare)
      172.22.42.253 - 172.22.42.254 - spare

      172.22.87.0/26:
      172.22.87.1   - 172.22.87.3   - subnet routers
      172.22.87.4   - 172.22.87.24  - Swift proxy server internal interfaces (inc spare)
      172.22.87.25  - 172.22.87.63  - Swift object server internal interfaces (inc spare)

  A similar approach can be taken with public IP addresses, taking note that large, flat ranges are preferred for use with guest instance IPs. Take into account that for some OpenStack networking options, a public IP address in the range of a guest instance public IP address is assigned to the nova-compute host.
- -
- Network Topology

  OpenStack Compute with nova-network provides predefined network deployment models, each with its own strengths and weaknesses. The selection of a network manager changes your network topology, so the choice should be made carefully. You also have a choice between the tried-and-true legacy nova-network settings or the neutron project for OpenStack Networking. Both offer networking for launched instances with different implementations and requirements.

  For OpenStack Networking with the neutron project, typical configurations are documented with the idea that any setup you can configure with real hardware you can re-create with a software-defined equivalent. Each tenant can contain typical network elements such as routers, and services such as DHCP.

  The following table describes the networking deployment options for both legacy nova-network options and an equivalent neutron configuration.
Networking deployment options
Network deployment model | Strengths | Weaknesses | Neutron equivalent
Flat | Extremely simple topology. No DHCP overhead. | Requires file injection into the instance to configure network interfaces. | Configure a single bridge as the integration bridge (br-int) and connect it to a physical network interface with the Modular Layer 2 (ML2) plug-in, which uses Open vSwitch by default.
FlatDHCP | Relatively simple to deploy. Standard networking. Works with all guest operating systems. | Requires its own DHCP broadcast domain. | Configure DHCP agents and routing agents. Network Address Translation (NAT) performed outside of compute nodes, typically on one or more network nodes.
VlanManager | Each tenant is isolated to its own VLANs. | More complex to set up. Requires its own DHCP broadcast domain. Requires many VLANs to be trunked onto a single port. Standard VLAN number limitation. Switches must support 802.1q VLAN tagging. | Isolated tenant networks implement some form of isolation of layer 2 traffic between distinct networks. VLAN tagging is the key concept, where traffic is "tagged" with an ordinal identifier for the VLAN. Isolated network implementations may or may not include additional services like DHCP, NAT, and routing.
FlatDHCP Multi-host with high availability (HA) | Networking failure is isolated to the VMs running on the affected hypervisor. DHCP traffic can be isolated within an individual host. Network traffic is distributed to the compute nodes. | More complex to set up. Compute nodes typically need IP addresses accessible by external networks. Options must be carefully configured for live migration to work with networking services. | Configure neutron with multiple DHCP and layer-3 agents. Network nodes are not able to fail over to each other, so the controller runs networking services, such as DHCP. Compute nodes run the ML2 plug-in with support for agents such as Open vSwitch or Linux Bridge.
- - Both nova-network and neutron services provide - similar capabilities, such as VLAN between VMs. You also can provide - multiple NICs on VMs with either service. Further discussion - follows. - -
- VLAN Configuration Within OpenStack VMs - - VLAN configuration can be as simple or as complicated as desired. - The use of VLANs has the benefit of allowing each project its own subnet - and broadcast segregation from other projects. To allow OpenStack to - efficiently use VLANs, you must allocate a VLAN range (one for each - project) and turn each compute node switch port into a trunk - port. - networks - - VLAN - - VLAN network - - network design - - network topology - - VLAN with OpenStack VMs - - - For example, if you estimate that your cloud must support a - maximum of 100 projects, pick a free VLAN range that your network - infrastructure is currently not using (such as VLAN 200–299). You must - configure OpenStack with this range and also configure your switch ports - to allow VLAN traffic from that range. -
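  As a sketch of the nova-network side of this example, assuming the VLAN network manager, a starting VLAN of 200, and a hypothetical trunked interface:

      # nova.conf -- hypothetical VlanManager settings
      network_manager = nova.network.manager.VlanManager
      vlan_start = 200
      vlan_interface = eth1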
- -
- Multi-NIC Provisioning - - OpenStack Networking with neutron and - OpenStack Compute with nova-network have the ability to assign - multiple NICs to instances. For nova-network this can be done - on a per-request basis, with each additional NIC using up an - entire subnet or VLAN, reducing the total number of supported - projects. - MultiNic - - network design - - network topology - - multi-NIC provisioning - -
- -
- Multi-Host and Single-Host Networking

  The nova-network service has the ability to operate in a multi-host or single-host mode. Multi-host is when each compute node runs a copy of nova-network and the instances on that compute node use the compute node as a gateway to the Internet. The compute nodes also host the floating IPs and security groups for instances on that node. Single-host is when a central server—for example, the cloud controller—runs the nova-network service. All compute nodes forward traffic from the instances to the cloud controller. The cloud controller then forwards traffic to the Internet. The cloud controller hosts the floating IPs and security groups for all instances on all compute nodes in the cloud.

  There are benefits to both modes. Single-host has the downside of a single point of failure. If the cloud controller is not available, instances cannot communicate on the network. This is not true with multi-host, but multi-host requires that each compute node has a public IP address to communicate on the Internet. If you are not able to obtain a significant block of public IP addresses, multi-host might not be an option.
-
- -
- Services for Networking - - OpenStack, like any network application, has a number of standard - considerations to apply, such as NTP and DNS. - network design - - services for networking - - -
- NTP - - Time synchronization is a critical element to ensure continued - operation of OpenStack components. Correct time is necessary to avoid - errors in instance scheduling, replication of objects in the object - store, and even matching log timestamps for debugging. - networks - - Network Time Protocol (NTP) - - - All servers running OpenStack components should be able to access - an appropriate NTP server. You may decide to set up one locally or use - the public pools available from the Network Time Protocol - project. -
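  A minimal sketch of an NTP client configuration using the public pool; your site may prefer to point all nodes at an internal NTP server instead:

      # /etc/ntp.conf -- hypothetical client configuration
      server 0.pool.ntp.org iburst
      server 1.pool.ntp.org iburst
      server 2.pool.ntp.org iburst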
- -
- DNS - - OpenStack does not currently provide DNS services, aside from the - dnsmasq daemon, which resides on nova-network hosts. You - could consider providing a dynamic DNS service to allow instances to - update a DNS entry with new IP addresses. You can also consider making a - generic forward and reverse DNS mapping for instances' IP addresses, - such as vm-203-0-113-123.example.com. - DNS (Domain Name Server, Service or System) - - DNS service choices - -
-
- -
- Conclusion

  Armed with your IP address layout and knowledge of the topologies and services you can use, it's now time to prepare the network for your installation. Be sure to also check out the OpenStack Security Guide for tips on securing your network. We wish you a good relationship with your networking team!
-
diff --git a/doc/openstack-ops/ch_arch_provision.xml b/doc/openstack-ops/ch_arch_provision.xml deleted file mode 100644 index 6309baf4..00000000 --- a/doc/openstack-ops/ch_arch_provision.xml +++ /dev/null @@ -1,375 +0,0 @@ - - - - - Provisioning and Deployment - - A critical part of a cloud's scalability is the amount of effort that - it takes to run your cloud. To minimize the operational cost of running your - cloud, set up and use an automated deployment and configuration - infrastructure with a configuration management system, such as Puppet or - Chef. Combined, these systems greatly reduce manual effort and the chance - for operator error. - cloud computing - - minimizing costs of - - - This infrastructure includes systems to automatically install the - operating system's initial configuration and later coordinate the - configuration of all services automatically and centrally, which reduces - both manual effort and the chance for error. Examples include Ansible, - CFEngine, Chef, Puppet, and Salt. You can even use OpenStack to deploy - OpenStack, named TripleO (OpenStack On OpenStack). - Puppet - - Chef - - -
- Automated Deployment - - An automated deployment system installs and configures operating - systems on new servers, without intervention, after the absolute minimum - amount of manual work, including physical racking, MAC-to-IP assignment, - and power configuration. Typically, solutions rely on wrappers around PXE - boot and TFTP servers for the basic operating system install and then hand - off to an automated configuration management system. - deployment - - provisioning/deployment - - provisioning/deployment - - automated deployment - - - Both Ubuntu and Red Hat Enterprise Linux include mechanisms for - configuring the operating system, including preseed and kickstart, that - you can use after a network boot. Typically, these are used to - bootstrap an automated configuration system. Alternatively, you can use - an image-based approach for deploying the operating system, such as - systemimager. You can use both approaches with a virtualized - infrastructure, such as when you run VMs to separate your control - services and physical infrastructure. - - When you create a deployment plan, focus on a few vital areas - because they are very hard to modify post deployment. The next two - sections talk about configurations for: - - - - Disk partitioning and disk array setup for scalability - - - - Networking configuration just for PXE booting - - - -
- Disk Partitioning and RAID

  At the very base of any operating system are the hard drives on which the operating system (OS) is installed.

  You must complete the following configurations on the server's hard drives:

  - Partitioning, which provides greater flexibility for layout of operating system and swap space, as described below.

  - Adding to a RAID array (RAID stands for redundant array of independent disks), based on the number of disks you have available, so that you can add capacity as your cloud grows. Some options are described in more detail below.

  The simplest option to get started is to use one hard drive with two partitions:

  - File system to store files and directories, where all the data lives, including the root partition that starts and runs the system

  - Swap space to free up memory for processes, as an independent area of the physical disk used only for swapping and nothing else

  RAID is not used in this simplistic one-drive setup because generally for production clouds, you want to ensure that if one disk fails, another can take its place. Instead, for production, use more than one disk. The number of disks determines what types of RAID arrays to build.

  We recommend that you choose one of the following multiple disk options:

  Option 1
      Partition all drives in the same way in a horizontal fashion, as shown in the partition setup figure below.

      With this option, you can assign different partitions to different RAID arrays. You can allocate partition 1 of disk one and two to the /boot partition mirror. You can make partition 2 of all disks the root partition mirror. You can use partition 3 of all disks for a cinder-volumes LVM partition running on a RAID 10 array.
  Figure: Partition setup of drives
- - While you might end up with unused partitions, such as - partition 1 in disk three and four of this example, this option - allows for maximum utilization of disk space. I/O performance - might be an issue as a result of all disks being used for all - tasks. -
-
- - - Option 2 - - - Add all raw disks to one large RAID array, either hardware - or software based. You can partition this large array with the - boot, root, swap, and LVM areas. This option is simple to - implement and uses all partitions. However, disk I/O might - suffer. - - - - - Option 3 - - - Dedicate entire disks to certain partitions. For example, - you could allocate disk one and two entirely to the boot, root, - and swap partitions under a RAID 1 mirror. Then, allocate disk - three and four entirely to the LVM partition, also under a RAID 1 - mirror. Disk I/O should be better because I/O is focused on - dedicated tasks. However, the LVM partition is much - smaller. - - -
- You may find that you can automate the partitioning itself. For example, MIT uses Fully Automatic Installation (FAI) to do the initial PXE-based partition and then install using a combination of min/max and percentage-based partitioning.

  As with most architecture choices, the right answer depends on your environment. If you are using existing hardware, you know the disk density of your servers and can determine some decisions based on the options above. If you are going through a procurement process, your users' requirements also help you determine hardware purchases. Here are some examples from a private cloud providing web developers custom environments at AT&T. This example is from a specific deployment, so your existing hardware or procurement opportunity may vary from this. AT&T uses three types of hardware in its deployment:

  - Hardware for controller nodes, used for all stateless OpenStack API services. About 32–64 GB memory, small attached disk, one processor, varied number of cores, such as 6–12.

  - Hardware for compute nodes. Typically 256 or 144 GB memory, two processors, 24 cores. 4–6 TB direct attached storage, typically in a RAID 5 configuration.

  - Hardware for storage nodes. Typically for these, the disk space is optimized for the lowest cost per GB of storage while maintaining rack-space efficiency.

  Again, the right answer depends on your environment. You have to make your decision based on the trade-offs between space utilization, simplicity, and I/O performance.
- -
- Network Configuration

  Network configuration is a very large topic that spans multiple areas of this book. For now, make sure that your servers can PXE boot and successfully communicate with the deployment server.

  For example, you usually cannot configure NICs for VLANs when PXE booting. Additionally, you usually cannot PXE boot with bonded NICs. If you run into this scenario, consider using a simple 1 Gbps switch in a private network on which only your cloud communicates.
-
- -
- Automated Configuration

  The purpose of automatic configuration management is to establish and maintain the consistency of a system without using human intervention. You want to maintain consistency in your deployments so that you can have the same cloud every time, repeatably. Proper use of automatic configuration-management tools ensures that components of the cloud systems are in particular states, in addition to simplifying deployment and configuration change propagation.

  These tools also make it possible to test and roll back changes, as they are fully repeatable. Conveniently, a large body of work has been done by the OpenStack community in this space. Puppet, a configuration management tool, even provides official modules for OpenStack projects in an OpenStack infrastructure system known as Puppet OpenStack. Chef configuration management cookbooks for OpenStack are also maintained by the community. Additional configuration management systems include Juju, Ansible, and Salt. Also, PackStack is a command-line utility for Red Hat Enterprise Linux and derivatives that uses Puppet modules to support rapid deployment of OpenStack on existing servers over an SSH connection.

  An integral part of a configuration-management system is the item that it controls. You should carefully consider all of the items that you want, or do not want, to be automatically managed. For example, you may not want to automatically format hard drives with user data.
- -
- Remote Management - - In our experience, most operators don't sit right next to the - servers running the cloud, and many don't necessarily enjoy visiting the - data center. OpenStack should be entirely remotely configurable, but - sometimes not everything goes according to plan. - provisioning/deployment - - remote management - - - In this instance, having an out-of-band access into nodes running - OpenStack components is a boon. The IPMI protocol is the de facto standard - here, and acquiring hardware that supports it is highly recommended to - achieve that lights-out data center aim. - - In addition, consider remote power control as well. While IPMI - usually controls the server's power state, having remote access to the PDU - that the server is plugged into can really be useful for situations when - everything seems wedged. -
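  As a sketch of day-to-day out-of-band use with ipmitool, assuming a hypothetical management controller address and credentials:

      # Query and cycle a wedged node's power state over IPMI
      $ ipmitool -I lanplus -H 192.0.2.50 -U admin -P PASSWORD chassis power status
      $ ipmitool -I lanplus -H 192.0.2.50 -U admin -P PASSWORD chassis power cycle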
- -
- Parting Thoughts for Provisioning and Deploying OpenStack

  You can save time by understanding the use cases for the cloud you want to create. Use cases for OpenStack are varied. Some include object storage only; others require preconfigured compute resources to speed development-environment setup; and others need fast provisioning of compute resources that are already secured per tenant with private networks. Your users may have need for highly redundant servers to make sure their legacy applications continue to run. Perhaps a goal would be to architect these legacy applications so that they run on multiple instances in a cloudy, fault-tolerant way, but not make it a goal to add to those clusters over time. Your users may indicate that they need scaling considerations because of heavy Windows server use.

  You can save resources by looking at the best fit for the hardware you have in place already. You might have some high-density storage hardware available. You could format and repurpose those servers for OpenStack Object Storage. All of these considerations and input from users help you build your use case and your deployment plan.

  For further research about OpenStack deployment, investigate the supported and documented preconfigured, prepackaged installers for OpenStack from companies such as Canonical, Cisco, Cloudscaling, IBM, Metacloud, Mirantis, Piston, Rackspace, Red Hat, SUSE, and SwiftStack.
- -
- Conclusion - - The decisions you make with respect to provisioning and deployment - will affect your day-to-day, week-to-week, and month-to-month maintenance - of the cloud. Your configuration management will be able to evolve over - time. However, more thought and design need to be done for upfront choices - about deployment, disk partitioning, and network configuration. -
-
diff --git a/doc/openstack-ops/ch_arch_scaling.xml b/doc/openstack-ops/ch_arch_scaling.xml deleted file mode 100644 index 9d68ea89..00000000 --- a/doc/openstack-ops/ch_arch_scaling.xml +++ /dev/null @@ -1,716 +0,0 @@ - - -%openstack; -]> - - - - Scaling - - Whereas traditional applications required larger hardware to scale - ("vertical scaling"), cloud-based applications typically request more, - discrete hardware ("horizontal scaling"). If your cloud is successful, - eventually you must add resources to meet the increasing demand. - scaling - - vertical vs. horizontal - - - To suit the cloud paradigm, OpenStack itself is designed to be - horizontally scalable. Rather than switching to larger servers, you procure - more servers and simply install identically configured services. Ideally, - you scale out and load balance among groups of functionally identical - services (for example, compute nodes or nova-api nodes), - that communicate on a message bus. - -
- The Starting Point

Determining the scalability of your cloud and how to improve it is an exercise with many variables to balance. No one solution meets everyone's scalability goals. However, it is helpful to track a number of metrics. Since you can define virtual hardware templates, called "flavors" in OpenStack, you can start to make scaling decisions based on the flavors you'll provide. These templates define sizes for memory in RAM, root disk size, amount of ephemeral data disk space available, and number of cores, for starters.

The default OpenStack flavors are shown in .
OpenStack default flavors
Name | Virtual cores | Memory | Disk | Ephemeral
m1.tiny | 1 | 512 MB | 1 GB | 0 GB
m1.small | 1 | 2 GB | 10 GB | 20 GB
m1.medium | 2 | 4 GB | 10 GB | 40 GB
m1.large | 4 | 8 GB | 10 GB | 80 GB
m1.xlarge | 8 | 16 GB | 10 GB | 160 GB
The starting point for most is the core count of your cloud. By applying some ratios, you can gather information about:

- The number of virtual machines (VMs) you expect to run: ((overcommit fraction × cores) / virtual cores per instance)

- How much storage is required: (flavor disk size × number of instances)

You can use these ratios to determine how much additional infrastructure you need to support your cloud.

Here is an example using the ratios for gathering scalability information for the number of VMs expected as well as the storage needed. The following numbers support (200 / 2) × 16 = 1600 VM instances and require 80 TB of storage for /var/lib/nova/instances:

- 200 physical cores.

- Most instances are size m1.medium (two virtual cores, 50 GB of storage).

- Default CPU overcommit ratio (cpu_allocation_ratio in nova.conf) of 16:1.

Regardless of the overcommit ratio, an instance cannot be placed on any physical node with fewer raw (pre-overcommit) resources than the instance flavor requires.

However, you need more than the core count alone to estimate the load that the API services, database servers, and queue servers are likely to encounter. You must also consider the usage patterns of your cloud.

As a specific example, compare a cloud that supports a managed web-hosting platform with one running integration tests for a development project that creates one VM per code commit. In the former, the heavy work of creating a VM happens only every few months, whereas the latter puts constant heavy load on the cloud controller. You must consider your average VM lifetime, as a larger number generally means less load on the cloud controller.

Aside from the creation and termination of VMs, you must consider the impact of users accessing the service—particularly on nova-api and its associated database. Listing instances garners a great deal of information and, given the frequency with which users run this operation, a cloud with a large number of users can increase the load significantly. This can occur even without their knowledge—leaving the OpenStack dashboard instances tab open in the browser refreshes the list of VMs every 30 seconds.

After you consider these factors, you can determine how many cloud controller cores you require. A typical eight-core, 8 GB of RAM server is sufficient for up to a rack of compute nodes—given the above caveats.

You must also consider key hardware specifications for the performance of user VMs, as well as budget and performance needs, including storage performance (spindles/core), memory availability (RAM/core), network bandwidth (Gbps/core), and overall CPU performance (CPU/core).

For a discussion of metric tracking, including how to extract metrics from your cloud, see .
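If it helps to see the arithmetic in one place, here is a minimal shell sketch of the same calculation; the input values are the hypothetical numbers from the example above, not defaults:

#!/bin/bash
# Capacity sketch using the ratios above; replace the values with your own.
cores=200                # physical cores in the cloud
vcpus_per_instance=2     # m1.medium
overcommit=16            # cpu_allocation_ratio
disk_gb_per_instance=50  # 10 GB root + 40 GB ephemeral for m1.medium

instances=$(( cores / vcpus_per_instance * overcommit ))
storage_tb=$(( instances * disk_gb_per_instance / 1000 ))

echo "Supports ${instances} VM instances"                 # 1600
echo "Needs ${storage_tb} TB for /var/lib/nova/instances" # 80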
- -
- Adding Cloud Controller Nodes

You can facilitate the horizontal expansion of your cloud by adding nodes. Adding compute nodes is straightforward—they are easily picked up by the existing installation. However, you must consider some important points when you design your cluster to be highly available.

Recall that a cloud controller node runs several different services. You can install services that communicate only using the message queue internally—nova-scheduler and nova-console—on a new server for expansion. However, other integral parts require more care.

You should load balance user-facing services such as dashboard, nova-api, or the Object Storage proxy. Use any standard HTTP load-balancing method (DNS round robin, hardware load balancer, or software such as Pound or HAProxy). One caveat with dashboard is the VNC proxy, which uses the WebSocket protocol—something that an L7 load balancer might struggle with. See also Horizon session storage.

You can configure some services, such as nova-api and glance-api, to use multiple processes by changing a flag in their configuration file—allowing them to share work between multiple cores on the one machine.

Several options are available for MySQL load balancing, and the supported AMQP brokers have built-in clustering support. Information on how to configure these and many of the other services can be found in .
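To illustrate the HTTP load-balancing approach, here is a minimal HAProxy sketch that fronts two nova-api processes; the virtual IP, host names, and addresses are assumptions for illustration, not values from this guide:

# Fragment of /etc/haproxy/haproxy.cfg (sketch only; tune checks and timeouts)
listen nova-api
    bind 203.0.113.10:8774
    balance roundrobin
    option httpchk
    server controller1 10.0.0.11:8774 check
    server controller2 10.0.0.12:8774 check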
- -
- Segregating Your Cloud

You segregate your cloud when you want to offer users different regions: to address legal considerations for data storage, to provide redundancy across earthquake fault lines, or to offer low-latency API calls. Use one of the following OpenStack methods to segregate your cloud: cells, regions, availability zones, or host aggregates.

Each method provides different functionality and can be best divided into two groups:

- Cells and regions, which segregate an entire cloud and result in running separate Compute deployments.

- Availability zones and host aggregates, which merely divide a single Compute deployment.

 provides a comparison view of each segregation method currently provided by OpenStack Compute.
OpenStack segregation methods
 | Cells | Regions | Availability zones | Host aggregates
Use when you need | A single API endpoint for compute, or you require a second level of scheduling. | Discrete regions with separate API endpoints and no coordination between regions. | Logical separation within your nova deployment for physical isolation or redundancy. | To schedule a group of hosts with common features.
Example | A cloud with multiple sites where you can schedule VMs "anywhere" or on a particular site. | A cloud with multiple sites, where you schedule VMs to a particular site and you want a shared infrastructure. | A single-site cloud with equipment fed by separate power supplies. | Scheduling to hosts with trusted hardware support.
Overhead | Considered experimental. A new service, nova-cells. Each cell has a full nova installation except nova-api. | A different API endpoint for every region. Each region has a full nova installation. | Configuration changes to nova.conf. | Configuration changes to nova.conf.
Shared services | Keystone, nova-api | Keystone | Keystone, all nova services | Keystone, all nova services
- -
- Cells and Regions

OpenStack Compute cells are designed to allow running the cloud in a distributed fashion without having to use more complicated technologies, or be invasive to existing nova installations. Hosts in a cloud are partitioned into groups called cells. Cells are configured in a tree. The top-level cell ("API cell") has a host that runs the nova-api service, but no nova-compute services. Each child cell runs all of the other typical nova-* services found in a regular installation, except for the nova-api service. Each cell has its own message queue and database service and also runs nova-cells, which manages the communication between the API cell and child cells.

This allows for a single API server being used to control access to multiple cloud installations. Introducing a second level of scheduling (the cell selection), in addition to the regular nova-scheduler selection of hosts, provides greater flexibility to control where virtual machines are run.

Unlike having a single API endpoint, regions have a separate API endpoint per installation, allowing for a more discrete separation. Users wanting to run instances across sites have to explicitly select a region. However, the additional complexity of running a new service is not required.

The OpenStack dashboard (horizon) can be configured to use multiple regions. This can be configured through the AVAILABLE_REGIONS parameter.
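For example, a minimal sketch of the relevant horizon setting follows; the endpoint URLs and region names are placeholders:

# Fragment of horizon's local_settings.py (sketch; URLs and names are examples)
AVAILABLE_REGIONS = [
    ('https://region-one.example.com:5000/v2.0', 'RegionOne'),
    ('https://region-two.example.com:5000/v2.0', 'RegionTwo'),
]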
- -
- Availability Zones and Host Aggregates

You can use availability zones, host aggregates, or both to partition a nova deployment.

Availability zones are implemented through, and configured in a similar way to, host aggregates. However, you use them for different reasons.
- Availability zone - - This enables you to arrange OpenStack compute hosts into logical - groups and provides a form of physical isolation and redundancy from - other availability zones, such as by using a separate power supply or - network equipment. - availability zone - - - You define the availability zone in which a specified compute - host resides locally on each server. An availability zone is commonly - used to identify a set of servers that have a common attribute. For - instance, if some of the racks in your data center are on a separate - power source, you can put servers in those racks in their own - availability zone. Availability zones can also help separate different - classes of hardware. - - When users provision resources, they can specify from which - availability zone they want their instance to be built. This allows - cloud consumers to ensure that their application resources are spread - across disparate machines to achieve high availability in the event of - hardware failure. -
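For example, a user can request placement in a specific zone at boot time; in this sketch, the zone, image, and flavor names are placeholders:

$ nova boot --image ubuntu-14.04 --flavor m1.small \
  --availability-zone rack-a my-instance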
- -
- Host aggregates

This enables you to partition OpenStack Compute deployments into logical groups for load balancing and instance distribution. You can use host aggregates to further partition an availability zone. For example, you might use host aggregates to partition an availability zone into groups of hosts that either share common resources, such as storage and network, or have a special property, such as trusted computing hardware.

A common use of host aggregates is to provide information for use with the nova-scheduler. For example, you might use a host aggregate to group a set of hosts that share specific flavors or images.

The general case for this is setting key-value pairs in the aggregate metadata and matching key-value pairs in the flavor's extra_specs metadata. The AggregateInstanceExtraSpecsFilter in the filter scheduler will enforce that instances be scheduled only on hosts in aggregates that define the same key to the same value.

An advanced use of this general concept allows different flavor types to run with different CPU and RAM allocation ratios so that high-intensity computing loads and low-intensity development and testing systems can share the same cloud without either starving the high-use systems or wasting resources on low-utilization systems. This works by setting metadata in your host aggregates and matching extra_specs in your flavor types.

The first step is setting the aggregate metadata keys cpu_allocation_ratio and ram_allocation_ratio to a floating-point value. The filter scheduler's AggregateCoreFilter and AggregateRamFilter will use those values rather than the global defaults in nova.conf when scheduling to hosts in the aggregate. It is important to be cautious when using this feature, since each host can be in multiple aggregates but should have only one allocation ratio for each resource. It is up to you to avoid putting a host in multiple aggregates that define different values for the same resource.

This is the first half of the equation. To get flavor types that are guaranteed a particular ratio, you must set the extra_specs in the flavor type to the key-value pair you want to match in the aggregate. For example, if you define extra_specs cpu_allocation_ratio to "1.0", then instances of that type will run in aggregates only where the metadata key cpu_allocation_ratio is also defined as "1.0." In practice, it is better to define an additional key-value pair in the aggregate metadata to match on rather than match directly on cpu_allocation_ratio or ram_allocation_ratio. This allows better abstraction. For example, by defining a key overcommit and setting a value of "high," "medium," or "low," you could then tune the numeric allocation ratios in the aggregates without also needing to change all flavor types relating to them.

Previously, all services had an availability zone. Currently, only the nova-compute service has its own availability zone. Services such as nova-scheduler, nova-network, and nova-conductor have always spanned all availability zones.
When you run any of the following operations, the services appear in their own internal availability zone (CONF.internal_service_availability_zone):

- nova host-list (os-hosts)

- euca-describe-availability-zones verbose

The internal availability zone is hidden in euca-describe-availability-zones (nonverbose).

CONF.node_availability_zone has been renamed to CONF.default_availability_zone and is used only by the nova-api and nova-scheduler services.

CONF.node_availability_zone still works but is deprecated.
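Tying the aggregate metadata and extra_specs mechanics above together, a hedged command sketch follows; the aggregate, host, and flavor names are examples, and depending on your release the flavor key may need the aggregate_instance_extra_specs: scope prefix:

# Create the aggregate, add a host, and set the matching metadata
$ nova aggregate-create high-overcommit
$ nova aggregate-add-host high-overcommit compute-01
$ nova aggregate-set-metadata high-overcommit overcommit=high cpu_allocation_ratio=16.0
# Tag a flavor so AggregateInstanceExtraSpecsFilter schedules it to the aggregate
$ nova flavor-key m1.dev set overcommit=high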
-
-
- -
- Scalable Hardware - - While several resources already exist to help with deploying and - installing OpenStack, it's very important to make sure that you have your - deployment planned out ahead of time. This guide presumes that you have at - least set aside a rack for the OpenStack cloud but also offers suggestions - for when and what to scale. - -
- Hardware Procurement

“The Cloud” has been described as a volatile environment where servers can be created and terminated at will. While this may be true, it does not mean that your servers must be volatile. Ensuring that your cloud's hardware is stable and configured correctly means that your cloud environment remains up and running. Basically, put effort into creating a stable hardware environment so that you can host a cloud that users may treat as unstable and volatile.

OpenStack can be deployed on any hardware supported by an OpenStack-compatible Linux distribution.

Hardware does not have to be consistent, but it should at least have the same type of CPU to support instance migration.

The typical hardware recommended for use with OpenStack is the standard value-for-money offerings that most hardware vendors stock. It should be straightforward to divide your procurement into building blocks such as "compute," "object storage," and "cloud controller," and request as many of these as you need. Alternatively, if you are unable to spend more, existing servers are quite likely to be able to support OpenStack, provided they meet your performance requirements and support your virtualization technology.
- -
- Capacity Planning

OpenStack is designed to increase in size in a straightforward manner. Taking into account the considerations that we've mentioned in this chapter—particularly on the sizing of the cloud controller—it should be possible to procure additional compute or object storage nodes as needed. New nodes do not need to be the same specification, or even vendor, as existing nodes.

For compute nodes, nova-scheduler will take care of differences in sizing having to do with core count and RAM amounts; however, you should consider that the user experience changes with differing CPU speeds. When adding object storage nodes, a weight should be specified that reflects the capability of the node.

Monitoring the resource usage and user growth will enable you to know when to procure.  details some useful metrics.
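As a hedged example of specifying that weight, the swift ring builder takes it as the final argument of the add command; the builder file, device address, and weight below are placeholders:

$ swift-ring-builder object.builder add r1z2-10.0.0.21:6000/sdb1 150.0
$ swift-ring-builder object.builder rebalance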
- -
- Burn-in Testing

The chances of failure for a server's hardware are high at the start and the end of its life. As a result, much of the effort of dealing with hardware failures in production can be avoided by appropriate burn-in testing that attempts to trigger the early-stage failures. The general principle is to stress the hardware to its limits. Examples of burn-in tests include running a CPU or disk benchmark for several days.
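A minimal sketch of such a pass, assuming the stress-ng and badblocks utilities are installed on the node; the duration and device are examples, and the badblocks write test destroys data:

# Load every CPU for 48 hours
$ stress-ng --cpu 0 --timeout 48h
# Destructive write-mode surface scan of a data disk (erases /dev/sdb)
# badblocks -wsv /dev/sdb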
-
-
diff --git a/doc/openstack-ops/ch_arch_storage.xml b/doc/openstack-ops/ch_arch_storage.xml deleted file mode 100644 index 935add95..00000000 --- a/doc/openstack-ops/ch_arch_storage.xml +++ /dev/null @@ -1,955 +0,0 @@ - - - - - Storage Decisions - - Storage is found in many parts of the OpenStack stack, and the - differing types can cause confusion to even experienced cloud engineers. - This section focuses on persistent storage options you can configure with - your cloud. It's important to understand the distinction between ephemeral storage and persistent storage. - -
- Ephemeral Storage - - If you deploy only the OpenStack Compute Service (nova), your users - do not have access to any form of persistent storage by default. The disks - associated with VMs are "ephemeral," meaning that (from the user's point - of view) they effectively disappear when a virtual machine is - terminated. - storage - - ephemeral - -
- -
- Persistent Storage - - Persistent storage means that the storage resource outlives any - other resource and is always available, regardless of the state of a - running instance. - - Today, OpenStack clouds explicitly support three types of persistent - storage: object storage, block storage, - and file system storage. - - swift - - Object Storage API - - - persistent storage - - - objects - - persistent storage of - - - Object Storage - - Object Storage API - - - storage - - object storage - - - shared file system storage - shared file systems service - - - -
- Object Storage

With object storage, users access binary objects through a REST API. You may be familiar with Amazon S3, which is a well-known example of an object storage system. Object storage is implemented in OpenStack by the OpenStack Object Storage (swift) project. If your intended users need to archive or manage large datasets, you want to provide them with object storage. In addition, OpenStack can store your virtual machine (VM) images inside of an object storage system, as an alternative to storing the images on a file system.

OpenStack Object Storage provides a highly scalable, highly available storage solution by relaxing some of the constraints of traditional file systems. In designing and procuring for such a cluster, it is important to understand some key concepts about its operation. Essentially, this type of storage is built on the idea that all storage hardware fails, at every level, at some point. Infrequently encountered failures that would hamstring other storage systems, such as issues taking down RAID cards or entire servers, are handled gracefully with OpenStack Object Storage.

A good document describing the Object Storage architecture is found within the developer documentation—read this first. Once you understand the architecture, you should know what a proxy server does and how zones work. However, some important points are often missed at first glance.

When designing your cluster, you must consider durability and availability. Understand that the predominant source of these is the spread and placement of your data, rather than the reliability of the hardware. Consider the default value of the number of replicas, which is three. This means that before an object is marked as having been written, at least two copies exist—in case a single server fails to write, the third copy may or may not yet exist when the write operation initially returns. Altering this number increases the robustness of your data, but reduces the amount of storage you have available. Next, look at the placement of your servers. Consider spreading them widely throughout your data center's network and power-failure zones. Is a zone a rack, a server, or a disk?

Object Storage's network patterns might seem unfamiliar at first. Consider these main traffic flows:

- Among object, container, and account servers

- Between those servers and the proxies

- Between the proxies and your users

Object Storage is very "chatty" among servers hosting data—even a small cluster does megabytes/second of traffic, which is predominantly, “Do you have the object?”/“Yes I have the object!” Of course, if the answer to the aforementioned question is negative or the request times out, replication of the object begins.

Consider the scenario where an entire server fails and 24 TB of data needs to be transferred "immediately" to remain at three copies—this can put significant load on the network.

Another fact that's often forgotten is that when a new file is being uploaded, the proxy server must write out as many streams as there are replicas—giving a multiple of network traffic. For a three-replica cluster, 10 Gbps in means 30 Gbps out. Combining this with the previous high bandwidth demands of replication is what results in the recommendation that your private network be of significantly higher bandwidth than your public network need be. Also, OpenStack Object Storage communicates internally with unencrypted, unauthenticated rsync for performance—you do want the private network to be private.

The remaining point on bandwidth is the public-facing portion. The swift-proxy service is stateless, which means that you can easily add more and use HTTP load-balancing methods to share bandwidth and availability between them.

More proxies means more bandwidth, if your storage can keep up.
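To make the REST-based access pattern concrete, here is a user-level sketch with the standard swift client; the container and object names are placeholders:

$ swift upload my-archive dataset-2014.tar.gz
$ swift list my-archive
$ swift download my-archive dataset-2014.tar.gz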
- -
- Block Storage - - Block storage (sometimes referred to as volume storage) provides - users with access to block-storage devices. Users interact with block - storage by attaching volumes to their running VM instances. - volume storage - - block storage - - storage - - block storage - - - These volumes are persistent: they can be detached from one - instance and re-attached to another, and the data remains intact. Block - storage is implemented in OpenStack by the OpenStack Block Storage - (cinder) project, which supports multiple back ends in the form of - drivers. Your choice of a storage back end must be supported by a Block - Storage driver. - - Most block storage drivers allow the instance to have direct - access to the underlying storage hardware's block device. This helps - increase the overall read/write IO. However, support for utilizing files - as volumes is also well established, with full support for NFS, - GlusterFS and others. - - These drivers work a little differently than a traditional "block" - storage driver. On an NFS or GlusterFS file system, a single file is - created and then mapped as a "virtual" volume into the instance. This - mapping/translation is similar to how OpenStack utilizes QEMU's - file-based virtual machines stored in - /var/lib/nova/instances. -
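A brief sketch of the attach/detach workflow described above; the names and device path are placeholders, and exact client syntax varies by release:

$ cinder create --display-name my-volume 10
$ nova volume-attach my-instance <volume-uuid> /dev/vdc
$ nova volume-detach my-instance <volume-uuid>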
- -
- Shared File Systems Service

The Shared File Systems service provides a set of services for managing shared file systems in a multi-tenant cloud environment. Users interact with the Shared File Systems service by mounting remote file systems on their instances and then using those systems for file storage and exchange. The Shared File Systems service provides you with shares. A share is a remote, mountable file system. You can mount a share to, and access a share from, several hosts by several users at a time. With shares, users can also:

- Create a share, specifying its size, shared file system protocol, and visibility level.

- Create a share on either a share server or standalone, depending on the selected back-end mode, with or without using a share network.

- Specify access rules and security services for existing shares.

- Combine several shares in groups to keep data consistency inside the groups for safe group operations.

- Create a snapshot of a selected share or a share group for storing the existing shares consistently or creating new shares from that snapshot in a consistent way.

- Create a share from a snapshot.

- Set rate limits and quotas for specific shares and snapshots.

- View usage of share resources.

- Remove shares.

Like Block Storage, the Shared File Systems service is persistent. It can be:

- Mounted to any number of client machines.

- Detached from one instance and attached to another without data loss. During this process the data are safe unless the Shared File Systems service itself is changed or removed.

Shares are provided by the Shared File Systems service. In OpenStack, the Shared File Systems service is implemented by the Shared File Systems (manila) project, which supports multiple back ends in the form of drivers. The Shared File Systems service can be configured to provision shares from one or more back ends. Share servers are, mostly, virtual machines that export file shares via different protocols such as NFS, CIFS, GlusterFS, or HDFS.
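A minimal user-level sketch of creating and exposing a share with the manila client; the protocol, size, name, and CIDR are placeholders:

$ manila create NFS 1 --name my-share
$ manila access-allow my-share ip 10.0.0.0/24
$ manila list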
-
- -
- OpenStack Storage Concepts

 explains the different storage concepts provided by OpenStack.
OpenStack storage
 | Ephemeral storage | Block storage | Object storage | Shared File System storage
Used to… | Run operating system and scratch space | Add additional persistent storage to a virtual machine (VM) | Store data, including VM images | Add additional persistent storage to a virtual machine
Accessed through… | A file system | A block device that can be partitioned, formatted, and mounted (such as /dev/vdc) | The REST API | A Shared File Systems service share (either manila managed or an external one registered in manila) that can be partitioned, formatted, and mounted (such as /dev/vdc)
Accessible from… | Within a VM | Within a VM | Anywhere | Within a VM
Managed by… | OpenStack Compute (nova) | OpenStack Block Storage (cinder) | OpenStack Object Storage (swift) | OpenStack Shared File System Storage (manila)
Persists until… | VM is terminated | Deleted by user | Deleted by user | Deleted by user
Sizing determined by… | Administrator configuration of size settings, known as flavors | User specification in initial request | Amount of available physical storage | User specification in initial request; requests for extension; available user-level quotas; limitations applied by administrator
Encryption set by… | Parameter in nova.conf | Admin establishing encrypted volume type, then user selecting encrypted volume | Not yet available | The Shared File Systems service does not apply any additional encryption above what the share's back-end storage provides
Example of typical usage… | 10 GB first disk, 30 GB second disk | 1 TB disk | 10s of TBs of dataset storage | Depends completely on the size of back-end storage specified when a share was being created. In case of thin provisioning it can be partial space reservation (for more details, see the Capabilities and Extra-Specs specification)
- - - File-level Storage (for Live Migration) - - With file-level storage, users access stored data using the - operating system's file system interface. Most users, if they have used - a network storage solution before, have encountered this form of - networked storage. In the Unix world, the most common form of this is - NFS. In the Windows world, the most common form is called CIFS - (previously, SMB). - migration - - live migration - - storage - - file-level - - - OpenStack clouds do not present file-level storage to end users. - However, it is important to consider file-level storage for storing - instances under /var/lib/nova/instances when designing your - cloud, since you must have a shared file system if you want to support - live migration. - -
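For instance, a shared /var/lib/nova/instances can be provided by mounting the same NFS export on every compute node; in this sketch, the server name and export path are assumptions:

# /etc/fstab entry on each compute node
nfs-server:/srv/nova-instances  /var/lib/nova/instances  nfs4  defaults  0 0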
- -
- Choosing Storage Back Ends

Users will indicate different needs for their cloud use cases. Some may need fast access to many objects that do not change often, or want to set a time-to-live (TTL) value on a file. Others may access only storage that is mounted with the file system itself, but want it to be replicated instantly when starting a new instance. For other systems, ephemeral storage—storage that is released when a VM attached to it is shut down—is the preferred way. When you select storage back ends, ask the following questions on behalf of your users:

- Do my users need block storage?

- Do my users need object storage?

- Do I need to support live migration?

- Should my persistent storage drives be contained in my compute nodes, or should I use external storage?

- What is the platter count I can achieve? Do more spindles result in better I/O despite network access?

- Which one results in the best cost-performance scenario I'm aiming for?

- How do I manage the storage operationally?

- How redundant and distributed is the storage? What happens if a storage node fails? To what extent can it mitigate my data-loss disaster scenarios?

To deploy your storage by using only commodity hardware, you can use a number of open-source packages, as shown in .
Persistent file-based storage support
 | Object | Block | File-level*
Swift | yes | – | –
LVM | – | yes | –
Ceph | yes | yes | Experimental
Gluster | yes | yes | yes
NFS | – | yes | yes
ZFS | – | yes | –
Sheepdog | yes | yes | –

* This list of open source file-level shared storage solutions is not exhaustive; other open source solutions exist (MooseFS). Your organization may already have deployed a file-level shared storage solution that you can use.
- - - Storage Driver Support - - In addition to the open source technologies, there are a number of - proprietary solutions that are officially supported by OpenStack Block - Storage. - storage - - storage driver support - They are offered by the following vendors: - - - - IBM (Storwize family/SVC, XIV) - - - - NetApp - - - - Nexenta - - - - SolidFire - - - - You can find a matrix of the functionality provided by all of the - supported Block Storage drivers on the OpenStack wiki. - - - Also, you need to decide whether you want to support object storage - in your cloud. The two common use cases for providing object storage in a - compute cloud are: - - - - To provide users with a persistent storage mechanism - - - - As a scalable, reliable data store for virtual machine - images - - - -
- Commodity Storage Back-end Technologies - - This section provides a high-level overview of the differences - among the different commodity storage back end technologies. Depending on - your cloud user's needs, you can implement one or many of these - technologies in different combinations: - storage - - commodity storage - - - - - OpenStack Object Storage (swift) - - - The official OpenStack Object Store implementation. It is a - mature technology that has been used for several years in - production by Rackspace as the technology behind Rackspace Cloud - Files. As it is highly scalable, it is well-suited to managing - petabytes of storage. OpenStack Object Storage's advantages are - better integration with - OpenStack (integrates with OpenStack Identity, works with the - OpenStack dashboard interface) and better support for multiple - data center deployment through support of asynchronous eventual - consistency replication. - - Therefore, if you eventually plan on distributing your - storage cluster across multiple data centers, if you need unified - accounts for your users for both compute and object storage, or if - you want to control your object storage with the OpenStack - dashboard, you should consider OpenStack Object Storage. More - detail can be found about OpenStack Object Storage in the section - below. - accounts - - - - - - Ceph - Ceph - - - - A scalable storage solution that replicates data across - commodity storage nodes. Ceph was originally developed by one of - the founders of DreamHost and is currently used in production - there. - - Ceph was designed to expose different types of storage - interfaces to the end user: it supports object storage, block - storage, and file-system interfaces, although the file-system - interface is not yet considered production-ready. Ceph supports - the same API as swift for object storage and can be used as a - back end for cinder block storage as well as back-end storage for - glance images. Ceph supports "thin provisioning," implemented - using copy-on-write. - - This can be useful when booting from volume because a new - volume can be provisioned very quickly. Ceph also supports - keystone-based authentication (as of version 0.56), so it can be a - seamless swap in for the default OpenStack swift - implementation. - - Ceph's advantages are that it gives the administrator more - fine-grained control over data distribution and replication - strategies, enables you to consolidate your object and block - storage, enables very fast provisioning of boot-from-volume - instances using thin provisioning, and supports a distributed - file-system interface, though this interface is not yet recommended for use in - production deployment by the Ceph project. - - If you want to manage your object and block storage within a - single system, or if you want to support fast boot-from-volume, - you should consider Ceph. - - - - - Gluster - GlusterFS - - - - A distributed, shared file system. As of Gluster version - 3.3, you can use Gluster to consolidate your object storage and - file storage into one unified file and object storage solution, - which is called Gluster For OpenStack (GFO). GFO uses a customized - version of swift that enables Gluster to be used as the back-end - storage. - - The main reason to use GFO rather than regular swift is if - you also want to support a distributed file system, either to - support shared storage live migration or to provide it as a - separate service to your end users. 
If you want to manage your object and file storage within a single system, you should consider GFO.

- LVM

The Logical Volume Manager (LVM) is a Linux-based system that provides an abstraction layer on top of physical disks to expose logical volumes to the operating system. The LVM back end implements block storage as LVM logical partitions.

On each host that will house block storage, an administrator must initially create a volume group dedicated to Block Storage volumes. Blocks are created from LVM logical volumes.

LVM does not provide any replication. Typically, administrators configure RAID on nodes that use LVM as block storage to protect against failures of individual hard drives. However, RAID does not protect against a failure of the entire host.

- ZFS

The Solaris iSCSI driver for OpenStack Block Storage implements blocks as ZFS entities. ZFS is a file system that also has the functionality of a volume manager. This is unlike a Linux system, where there is a separation of volume manager (LVM) and file system (such as ext3, ext4, xfs, and btrfs). ZFS has a number of advantages over ext4, including improved data-integrity checking.

The ZFS back end for OpenStack Block Storage supports only Solaris-based systems, such as Illumos. While there is a Linux port of ZFS, it is not included in any of the standard Linux distributions, and it has not been tested with OpenStack Block Storage. As with LVM, ZFS does not provide replication across hosts on its own; you need to add a replication solution on top of ZFS if your cloud needs to be able to handle storage-node failures.

We don't recommend ZFS unless you have previous experience with deploying it, since the ZFS back end for Block Storage requires a Solaris-based operating system, and we assume that your experience is primarily with Linux-based systems.

- Sheepdog

Sheepdog is a userspace distributed storage system. Sheepdog scales to several hundred nodes and has powerful virtual disk management features such as snapshotting, cloning, rollback, and thin provisioning.

It is essentially an object storage system that manages disks and linearly aggregates their space and performance at hyperscale on commodity hardware. On top of its object store, Sheepdog provides an elastic volume service and an HTTP service. Sheepdog makes no assumptions about the kernel version and works with any file system that supports extended attributes (xattr).
-
- -
- Conclusion - - We hope that you now have some considerations in mind and questions - to ask your future cloud users about their storage use cases. As you can - see, your storage decisions will also influence your network design for - performance and security needs. Continue with us to make more informed - decisions about your OpenStack cloud design. -
-
diff --git a/doc/openstack-ops/ch_ops_advanced_configuration.xml b/doc/openstack-ops/ch_ops_advanced_configuration.xml deleted file mode 100644 index febf572f..00000000 --- a/doc/openstack-ops/ch_ops_advanced_configuration.xml +++ /dev/null @@ -1,266 +0,0 @@ - - - - - Advanced Configuration - - OpenStack is intended to work well across a variety of installation - flavors, from very small private clouds to large public clouds. To achieve - this, the developers add configuration options to their code that allow the - behavior of the various components to be tweaked depending on your needs. - Unfortunately, it is not possible to cover all possible deployments with the - default configuration values. - advanced configuration - - configuration options - - configuration options - - wide availability of - - - At the time of writing, OpenStack has more than 3,000 configuration - options. You can see them documented at the OpenStack configuration reference - guide. This chapter cannot hope to document all of these, but we do - try to introduce the important concepts so that you know where to go digging - for more information. - -
- Differences Between Various Drivers - - Many OpenStack projects implement a driver layer, and each of these - drivers will implement its own configuration options. For example, in - OpenStack Compute (nova), there are various hypervisor drivers - implemented—libvirt, xenserver, hyper-v, and vmware, for example. Not all - of these hypervisor drivers have the same features, and each has different - tuning requirements. - hypervisors - - differences between - - drivers - - differences between - - - - The currently implemented hypervisors are listed on the OpenStack documentation - website. You can see a matrix of the various features in - OpenStack Compute (nova) hypervisor drivers on the OpenStack wiki at - the Hypervisor support matrix - page. - - - The point we are trying to make here is that just because an option - exists doesn't mean that option is relevant to your driver choices. - Normally, the documentation notes which drivers the configuration applies - to. -
- -
- Implementing Periodic Tasks

Another common concept across various OpenStack projects is that of periodic tasks. Periodic tasks are much like cron jobs on traditional Unix systems, but they are run inside an OpenStack process. For example, when OpenStack Compute (nova) needs to work out what images it can remove from its local cache, it runs a periodic task to do this.

Periodic tasks are important to understand because of limitations in the threading model that OpenStack uses. OpenStack uses cooperative threading in Python, which means that if something long and complicated is running, it will block other tasks inside that process from running unless it voluntarily yields execution to another cooperative thread.

A tangible example of this is the nova-compute process. In order to manage the image cache with libvirt, nova-compute has a periodic process that scans the contents of the image cache. Part of this scan is calculating a checksum for each of the images and making sure that checksum matches what nova-compute expects it to be. However, images can be very large, and these checksums can take a long time to generate. At one point, before it was reported as a bug and fixed, nova-compute would block on this task and stop responding to RPC requests. This was visible to users as failure of operations such as spawning or deleting instances.

The takeaway from this is that if you observe an OpenStack process that appears to "stop" for a while and then continue to process normally, you should check whether periodic tasks are the problem. One way to do this is to disable the periodic tasks by setting their interval to zero. Additionally, you can configure how often these periodic tasks run—in some cases, it might make sense to run them at a different frequency from the default.

The frequency is defined separately for each periodic task. Therefore, to disable every periodic task in OpenStack Compute (nova), you would need to set a number of configuration options to zero. The current list of configuration options you would need to set to zero is:

- bandwidth_poll_interval

- sync_power_state_interval

- heal_instance_info_cache_interval

- host_state_interval

- image_cache_manager_interval

- reclaim_instance_interval

- volume_usage_poll_interval

- shelved_poll_interval

- shelved_offload_time

- instance_delete_interval

To set a configuration option to zero, include a line such as image_cache_manager_interval=0 in your nova.conf file.

This list will change between releases, so please refer to your configuration guide for up-to-date information.
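Putting that list together, a nova.conf fragment that disables all of these periodic tasks might look like the following sketch; verify each option against your release before relying on it:

[DEFAULT]
bandwidth_poll_interval = 0
sync_power_state_interval = 0
heal_instance_info_cache_interval = 0
host_state_interval = 0
image_cache_manager_interval = 0
reclaim_instance_interval = 0
volume_usage_poll_interval = 0
shelved_poll_interval = 0
shelved_offload_time = 0
instance_delete_interval = 0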
- -
- Specific Configuration Topics - - This section covers specific examples of configuration options you - might consider tuning. It is by no means an exhaustive list. - -
- Security Configuration for Compute, Networking, and - Storage - - The OpenStack - Security Guide provides a deep dive into securing an - OpenStack cloud, including SSL/TLS, key management, PKI and certificate - management, data transport and privacy concerns, and - compliance. - security issues - - configuration options - - configuration options - - security - -
- -
- High Availability - - The OpenStack High Availability - Guide offers suggestions for elimination of a single - point of failure that could cause system downtime. While it is not a - completely prescriptive document, it offers methods and techniques for - avoiding downtime and data loss. - high availability - - configuration options - - high availability - -
- -
- Enabling IPv6 Support

You can follow the progress being made on IPv6 support by watching the neutron IPv6 Subteam at work.

By modifying your configuration setup, you can set up IPv6 when using nova-network for networking, and a tested setup is documented for FlatDHCP and a multi-host configuration. The key is to make nova-network think a radvd command ran successfully. The entire configuration is detailed in a Cybera blog post, “An IPv6 enabled cloud”.
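The central switch, at least, is a single nova.conf flag; this minimal sketch omits the surrounding FlatDHCP settings that the blog post covers:

[DEFAULT]
use_ipv6 = True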
- -
- Geographical Considerations for Object Storage

Support for global clustering of object storage servers is available for all supported releases. You would implement these global clusters to ensure replication across geographic areas in case of a natural disaster and also to ensure that users can write or access their objects more quickly based on the closest data center. You configure a default region with one zone for each cluster, but be sure your network (WAN) can handle the additional request and response load between zones as you add more zones and build rings that span them. Refer to Geographically Distributed Clusters in the documentation for additional information.
-
-
diff --git a/doc/openstack-ops/ch_ops_backup_recovery.xml b/doc/openstack-ops/ch_ops_backup_recovery.xml deleted file mode 100644 index 42f8ae29..00000000 --- a/doc/openstack-ops/ch_ops_backup_recovery.xml +++ /dev/null @@ -1,289 +0,0 @@ - - - - - Backup and Recovery - - Standard backup best practices apply when creating your OpenStack - backup policy. For example, how often to back up your data is closely - related to how quickly you need to recover from data loss. - backup/recovery - - considerations - - - - If you cannot have any data loss at all, you should also focus on a - highly available deployment. The OpenStack High Availability - Guide offers suggestions for elimination of a single - point of failure that could cause system downtime. While it is not a - completely prescriptive document, it offers methods and techniques for - avoiding downtime and data loss. - data - - preventing loss of - - - - Other backup considerations include: - - - - How many backups to keep? - - - - Should backups be kept off-site? - - - - How often should backups be tested? - - - - Just as important as a backup policy is a recovery policy (or at least - recovery testing). - -
- What to Back Up - - While OpenStack is composed of many components and moving parts, - backing up the critical data is quite simple. - backup/recovery - - items included - - - This chapter describes only how to back up configuration files and - databases that the various OpenStack components need to run. This chapter - does not describe how to back up objects inside Object Storage or data - contained inside Block Storage. Generally these areas are left for users - to back up on their own. -
- -
- Database Backups

The example OpenStack architecture designates the cloud controller as the MySQL server. This MySQL server hosts the databases for nova, glance, cinder, and keystone. With all of these databases in one place, it's very easy to create a database backup:

# mysqldump --opt --all-databases > openstack.sql

If you want to back up only a single database, you can instead run:

# mysqldump --opt nova > nova.sql

where nova is the database you want to back up.

You can easily automate this process by creating a cron job that runs the following script once per day:

#!/bin/bash
backup_dir="/var/lib/backups/mysql"
filename="${backup_dir}/mysql-`hostname`-`eval date +%Y%m%d`.sql.gz"
# Dump the entire MySQL database
/usr/bin/mysqldump --opt --all-databases | gzip > $filename
# Delete backups older than 7 days
find $backup_dir -ctime +7 -type f -delete

This script dumps the entire MySQL database and deletes any backups older than seven days.
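One way to schedule the script is a system cron entry; this sketch assumes you saved the script as /usr/local/bin/mysql-backup.sh and made it executable:

# /etc/cron.d/openstack-db-backup: run the dump daily at 02:00 as root
0 2 * * * root /usr/local/bin/mysql-backup.sh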
- -
- File System Backups - - This section discusses which files and directories should be backed - up regularly, organized by service. - file systems - - backup/recovery of - - backup/recovery - - file systems - - -
- Compute - - The /etc/nova directory on both the cloud - controller and compute nodes should be regularly backed up. - cloud controllers - - file system backups and - - compute nodes - - backup/recovery of - - - /var/log/nova does not need to be backed up if you - have all logs going to a central area. It is highly recommended to use a - central logging server or back up the log directory. - - /var/lib/nova is another important directory to back - up. The exception to this is the /var/lib/nova/instances - subdirectory on compute nodes. This subdirectory contains the KVM images - of running instances. You would want to back up this directory only if - you need to maintain backup copies of all instances. Under most - circumstances, you do not need to do this, but this can vary from cloud - to cloud and your service levels. Also be aware that making a backup of - a live KVM instance can cause that instance to not boot properly if it - is ever restored from a backup. -
- -
- Image Catalog and Delivery

/etc/glance and /var/log/glance follow the same rules as their nova counterparts.

/var/lib/glance should also be backed up. Take special notice of /var/lib/glance/images. If you are using a file-based back end for glance, /var/lib/glance/images is where the images are stored, and care should be taken.

There are two ways to ensure stability with this directory. The first is to make sure this directory is run on a RAID array. If a disk fails, the directory is available. The second way is to use a tool such as rsync to replicate the images to another server:

# rsync -az --progress /var/lib/glance/images \
backup-server:/var/lib/glance/images/
- -
- Identity - - /etc/keystone and /var/log/keystone - follow the same rules as other components. - Identity - - backup/recovery - - - /var/lib/keystone, although it should not contain any - data being used, can also be backed up just in case. -
- -
- Block Storage - - /etc/cinder and /var/log/cinder follow - the same rules as other components. - Block Storage - - storage - - block storage - - - /var/lib/cinder should also be backed up. -
- -
- Object Storage

/etc/swift is very important to have backed up. This directory contains the swift configuration files as well as the ring files and ring builder files, which, if lost, render the data on your cluster inaccessible. A best practice is to copy the builder files to all storage nodes along with the ring files. In this way, multiple backup copies are spread throughout your storage cluster.
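A hedged sketch of pushing the ring and builder files out from the node where you run swift-ring-builder; the host names are placeholders:

$ for node in storage01 storage02 storage03
> do
>   rsync -az /etc/swift/*.builder /etc/swift/*.ring.gz ${node}:/etc/swift/
> done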
-
- -
- Recovering Backups

Recovering backups is a fairly simple process. To begin, first ensure that the service you are recovering is not running. For example, to do a full recovery of nova on the cloud controller, first stop all nova services:

# stop nova-api
# stop nova-cert
# stop nova-consoleauth
# stop nova-novncproxy
# stop nova-objectstore
# stop nova-scheduler

Now you can import a previously backed-up database:

# mysql nova < nova.sql

You can also restore backed-up nova directories:

# mv /etc/nova{,.orig}
# cp -a /path/to/backup/nova /etc/

Once the files are restored, start everything back up:

# start mysql
# for i in nova-api nova-cert nova-consoleauth nova-novncproxy \
    nova-objectstore nova-scheduler
> do
> start $i
> done

Other services follow the same process, with their respective directories and databases.
- -
- Summary - - Backup and subsequent recovery is one of the first tasks system - administrators learn. However, each system has different items that need - attention. By taking care of your database, image service, and appropriate - file system locations, you can be assured that you can handle any event - requiring recovery. -
-
diff --git a/doc/openstack-ops/ch_ops_customize.xml b/doc/openstack-ops/ch_ops_customize.xml deleted file mode 100644 index 0877f609..00000000 --- a/doc/openstack-ops/ch_ops_customize.xml +++ /dev/null @@ -1,1166 +0,0 @@ - - - - - Customization - - OpenStack might not do everything you need it to do out of the box. To - add a new feature, you can follow different paths. - customization - - paths available - - - To take the first path, you can modify the OpenStack code directly. - Learn how to contribute, - follow the code review - workflow, make your changes, and contribute them back to the upstream - OpenStack project. This path is recommended if the feature you need requires - deep integration with an existing project. The community is always open to - contributions and welcomes new functionality that follows the - feature-development guidelines. This path still requires you to use DevStack - for testing your feature additions, so this chapter walks you through the - DevStack environment. - OpenStack community - - customization and - - - For the second path, you can write new features and plug them in using - changes to a configuration file. If the project where your feature would - need to reside uses the Python Paste framework, you can create middleware - for it and plug it in through configuration. There may also be specific ways - of customizing a project, such as creating a new scheduler driver for - Compute or a custom tab for the dashboard. - - This chapter focuses on the second path for customizing OpenStack by - providing two examples for writing new features. The first example shows how - to modify Object Storage (swift) middleware to add a new feature, and the - second example provides a new scheduler feature for OpenStack Compute - (nova). To customize OpenStack this way you need a development environment. - The best way to get an environment up and running quickly is to run DevStack - within your cloud. - -
- Create an OpenStack Development Environment - - To create a development environment, you can use DevStack. DevStack - is essentially a collection of shell scripts and configuration files that - builds an OpenStack development environment for you. You use it to create - such an environment for developing a new feature. - customization - - development environment creation for - - development environments, creating - - DevStack - - development environment creation - - - You can find all of the documentation at the DevStack website. - - - To run DevStack on an instance in - your OpenStack cloud: - - - Boot an instance from the dashboard or the nova command-line - interface (CLI) with the following parameters: - - - - Name: devstack - - - - Image: Ubuntu 14.04 LTS - - - - Memory Size: 4 GB RAM - - - - Disk Size: minimum 5 GB - - - - If you are using the nova client, specify - --flavor 3 for the nova boot command to get - adequate memory and disk sizes. - - - - Log in and set up DevStack. Here's an example of the commands - you can use to set up DevStack on a virtual machine: - - - - Log in to the instance: - - $ ssh username@my.instance.ip.address - - - - Update the virtual machine's operating system: - - # apt-get -y update - - - - Install git: - - # apt-get -y install git - - - - Clone the - devstack repository: - - $ git clone https://git.openstack.org/openstack-dev/devstack - - - - Change to the devstack repository: - - $ cd devstack - - - - - - (Optional) If you've logged in to your instance as the root - user, you must create a "stack" user; otherwise you'll run into - permission issues. If you've logged in as a user other than root, you - can skip these steps: - - - - Run the DevStack script to create the stack user: - - # tools/create-stack-user.sh - - - - Give ownership of the devstack directory - to the stack user: - - # chown -R stack:stack /root/devstack - - - - Set some permissions you can use to view the DevStack screen - later: - - # chmod o+rwx /dev/pts/0 - - - - Switch to the stack user: - - $ su stack - - - - - - Edit the local.conf configuration file that - controls what DevStack will deploy. Copy the example - local.conf file at the end of this section (): - - $ vim local.conf - - - - Run the stack script that will install OpenStack: - - $ ./stack.sh - - - - When the stack script is done, you can open the screen session - it started to view all of the running OpenStack services: - - $ screen -r stack - - - - Press - Ctrl - - A - followed by 0 to go to the first - screen window. - - - - - - - The stack.sh script takes a while to run. - Perhaps you can take this opportunity to join the OpenStack - Foundation. - - - - Screen is a useful program for viewing - many related services at once. For more information, see the GNU screen quick - reference. - - - - - Now that you have an OpenStack development environment, you're free - to hack around without worrying about damaging your production deployment. - provides a working environment for running - OpenStack Identity, Compute, Block Storage, Image service, the OpenStack - dashboard, and Object Storage as the starting point. - - - local.conf - - -[[local|localrc]] -FLOATING_RANGE=192.168.1.224/27 -FIXED_RANGE=10.11.12.0/24 -FIXED_NETWORK_SIZE=256 -FLAT_INTERFACE=eth0 -ADMIN_PASSWORD=supersecret -DATABASE_PASSWORD=iheartdatabases -RABBIT_PASSWORD=flopsymopsy -SERVICE_PASSWORD=iheartksl -SERVICE_TOKEN=xyzpdqlazydog - -
- -
- Customizing Object Storage (Swift) Middleware - - OpenStack Object Storage, known as swift when reading the code, is - based on the Python Paste framework. The best - introduction to its architecture is A Do-It-Yourself Framework. - Because of the swift project's use of this framework, you are able to add - features to a project by placing some custom code in a project's pipeline - without having to change any of the core code. - Paste framework - - Python - - swift - - swift middleware - - Object Storage - - customization of - - customization - - Object Storage - - DevStack - - customizing Object Storage (swift) - - - Imagine a scenario where you have public access to one of your - containers, but what you really want is to restrict access to that to a - set of IPs based on a whitelist. In this example, we'll create a piece of - middleware for swift that allows access to a container from only a set of - IP addresses, as determined by the container's metadata items. Only those - IP addresses that you explicitly whitelist using the container's metadata - will be able to access the container. - - - This example is for illustrative purposes only. It should not be - used as a container IP whitelist solution without further development - and extensive security testing. - security issues - - middleware example - - - - When you join the screen session that stack.sh starts - with screen -r stack, you see a screen for each service - running, which can be a few or several, depending on how many services you - configured DevStack to run. - - The asterisk * indicates which screen window you are viewing. This - example shows we are viewing the key (for keystone) screen window: - - 0$ shell 1$ key* 2$ horizon 3$ s-proxy 4$ s-object 5$ s-container 6$ s-account - - The purpose of the screen windows are as follows: - - - - shell - - - A shell where you can get some work done - - - - - key* - - - The keystone service - - - - - horizon - - - The horizon dashboard web application - - - - - s-{name} - - - The swift services - - - - - - To create the middleware and plug it in through Paste - configuration: - - All of the code for OpenStack lives in /opt/stack. Go - to the swift directory in the shell screen and edit your - middleware module. - - - Change to the directory where Object Storage is - installed: - - $ cd /opt/stack/swift - - - - Create the ip_whitelist.py Python source code - file: - - $ vim swift/common/middleware/ip_whitelist.py - - - - Copy the code in into - ip_whitelist.py. The following code is a - middleware example that restricts access to a container based on IP - address as explained at the beginning of the section. Middleware - passes the request on to another application. This example uses the - swift "swob" library to wrap Web Server Gateway Interface (WSGI) - requests and responses into objects for swift to interact with. When - you're done, save and close the file. - - - ip_whitelist.py - - # vim: tabstop=4 shiftwidth=4 softtabstop=4 -# Copyright (c) 2014 OpenStack Foundation -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the -# License for the specific language governing permissions and limitations -# under the License. - -import socket - -from swift.common.utils import get_logger -from swift.proxy.controllers.base import get_container_info -from swift.common.swob import Request, Response - -class IPWhitelistMiddleware(object): - """ - IP Whitelist Middleware - - Middleware that allows access to a container from only a set of IP - addresses as determined by the container's metadata items that start - with the prefix 'allow'. E.G. allow-dev=192.168.0.20 - """ - - def __init__(self, app, conf, logger=None): - self.app = app - - if logger: - self.logger = logger - else: - self.logger = get_logger(conf, log_route='ip_whitelist') - - self.deny_message = conf.get('deny_message', "IP Denied") - self.local_ip = socket.gethostbyname(socket.gethostname()) - - def __call__(self, env, start_response): - """ - WSGI entry point. - Wraps env in swob.Request object and passes it down. - - :param env: WSGI environment dictionary - :param start_response: WSGI callable - """ - req = Request(env) - - try: - version, account, container, obj = req.split_path(1, 4, True) - except ValueError: - return self.app(env, start_response) - - container_info = get_container_info( - req.environ, self.app, swift_source='IPWhitelistMiddleware') - - remote_ip = env['REMOTE_ADDR'] - self.logger.debug("Remote IP: %(remote_ip)s", - {'remote_ip': remote_ip}) - - meta = container_info['meta'] - allow = {k:v for k,v in meta.iteritems() if k.startswith('allow')} - allow_ips = set(allow.values()) - allow_ips.add(self.local_ip) - self.logger.debug("Allow IPs: %(allow_ips)s", - {'allow_ips': allow_ips}) - - if remote_ip in allow_ips: - return self.app(env, start_response) - else: - self.logger.debug( - "IP %(remote_ip)s denied access to Account=%(account)s " - "Container=%(container)s. Not in %(allow_ips)s", locals()) - return Response( - status=403, - body=self.deny_message, - request=req)(env, start_response) - - -def filter_factory(global_conf, **local_conf): - """ - paste.deploy app factory for creating WSGI proxy apps. - """ - conf = global_conf.copy() - conf.update(local_conf) - - def ip_whitelist(app): - return IPWhitelistMiddleware(app, conf) - return ip_whitelist - - - There is a lot of useful information in env and - conf that you can use to decide what to do with the - request. To find out more about what properties are available, you can - insert the following log statement into the __init__ - method: - - self.logger.debug("conf = %(conf)s", locals()) - - and the following log statement into the __call__ - method: - - self.logger.debug("env = %(env)s", locals()) - - - - To plug this middleware into the swift Paste pipeline, you edit - one configuration file, - /etc/swift/proxy-server.conf: - - $ vim /etc/swift/proxy-server.conf - - - - Find the [filter:ratelimit] section in - /etc/swift/proxy-server.conf, and copy in the - following configuration section after it: - - [filter:ip_whitelist] -paste.filter_factory = swift.common.middleware.ip_whitelist:filter_factory -# You can override the default log routing for this filter here: -# set log_name = ratelimit -# set log_facility = LOG_LOCAL0 -# set log_level = INFO -# set log_headers = False -# set log_address = /dev/log -deny_message = You shall not pass! - - - - Find the [pipeline:main] section in - /etc/swift/proxy-server.conf, and add - ip_whitelist after ratelimit to the list like so. 
When - you're done, save and close the file: - - [pipeline:main] -pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit ip_whitelist ... - - - - Restart the swift proxy service to make swift - use your middleware. Start by switching to the - swift-proxy screen: - - - - Press - Ctrl - - A - followed by 3. - - - - Press - Ctrl - - C - to kill the service. - - - - Press Up Arrow to bring up the last command. - - - - Press Enter to run it. - - - - - - Test your middleware with the swift CLI. Start - by switching to the shell screen and finish by switching back to the - swift-proxy screen to check the log output: - - - - Press  - Ctrl - - A - followed by 0. - - - - Make sure you're in the devstack - directory: - - $ cd /root/devstack - - - - Source openrc to set up your environment variables for the - CLI: - - $ source openrc - - - - Create a container called - middleware-test: - - $ swift post middleware-test - - - - Press - Ctrl - - A - followed by 3 to check the log - output. - - - - - - Among the log statements you'll see the lines: - - proxy-server Remote IP: my.instance.ip.address (txn: ...) -proxy-server Allow IPs: set(['my.instance.ip.address']) (txn: ...) - - These two statements are produced by our middleware and show - that the request was sent from our DevStack instance and was - allowed. - - - - Test the middleware from outside DevStack on a remote machine - that has access to your DevStack instance: - - - - Install the keystone and swift - clients on your local machine: - - # pip install python-keystoneclient python-swiftclient - - - - Attempt to list the objects in the - middleware-test container: - - $ swift --os-auth-url=http://my.instance.ip.address:5000/v2.0/ \ ---os-region-name=RegionOne --os-username=demo:demo \ ---os-password=devstack list middleware-test -Container GET failed: http://my.instance.ip.address:8080/v1/AUTH_.../ - middleware-test?format=json 403 Forbidden   You shall not pass! - - - - - - Press - Ctrl - - A - followed by 3 to check the log output. - Look at the swift log statements again, and among the log statements, - you'll see the lines: - - proxy-server Authorizing from an overriding middleware (i.e: tempurl) (txn: ...) -proxy-server ... IPWhitelistMiddleware -proxy-server Remote IP: my.local.ip.address (txn: ...) -proxy-server Allow IPs: set(['my.instance.ip.address']) (txn: ...) -proxy-server IP my.local.ip.address denied access to Account=AUTH_... \ - Container=None. Not in set(['my.instance.ip.address']) (txn: ...) - - Here we can see that the request was denied because the remote - IP address wasn't in the set of allowed IPs. - - - - Back in your DevStack instance on the shell screen, add some - metadata to your container to allow the request from the remote - machine: - - - - Press - Ctrl - - A - followed by 0. - - - - Add metadata to the container to allow the IP: - - $ swift post --meta allow-dev:my.local.ip.address middleware-test - - - - Now try the command from Step 10 again and it succeeds. - There are no objects in the container, so there is nothing to - list; however, there is also no error to report. - - - - - - - Functional testing like this is not a replacement for proper unit - and integration testing, but it serves to get you started. - testing - - functional testing - - functional testing - - - - You can follow a similar pattern in other projects that use the - Python Paste framework. Simply create a middleware module and plug it in - through configuration. 
The middleware runs in sequence as part of that - project's pipeline and can call out to other services as necessary. No - project core code is touched. Look for a pipeline value in - the project's conf or ini configuration files in - /etc/<project> to identify projects that use - Paste. - - When your middleware is done, we encourage you to open source it and - let the community know on the OpenStack mailing list. Perhaps others need - the same functionality. They can use your code, provide feedback, and - possibly contribute. If enough support exists for it, perhaps you can - propose that it be added to the official swift middleware. -
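If you want to experiment with this pattern outside of swift first, the example above boils down to the following skeleton. This is a minimal sketch of a generic Paste filter, not code from any OpenStack project; MyMiddleware, my_option, and my_filter are placeholder names you would rename for your own project:

class MyMiddleware(object):
    """A bare-bones Paste filter that wraps the next app in the pipeline."""

    def __init__(self, app, conf):
        self.app = app
        # Options from the [filter:...] config section arrive in conf.
        self.my_option = conf.get('my_option', 'default')

    def __call__(self, env, start_response):
        # Inspect or modify the WSGI environment here, then hand the
        # request to the rest of the pipeline unchanged.
        return self.app(env, start_response)

def filter_factory(global_conf, **local_conf):
    """paste.deploy entry point, referenced from the configuration file."""
    conf = global_conf.copy()
    conf.update(local_conf)

    def my_filter(app):
        return MyMiddleware(app, conf)
    return my_filter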
- -
- Customizing the OpenStack Compute (nova) Scheduler - - Many OpenStack projects allow for customization of specific features - using a driver architecture. You can write a driver that conforms to a - particular interface and plug it in through configuration. For example, - you can easily plug in a new scheduler for Compute. The existing - schedulers for Compute are feature full and well documented at Scheduling. However, depending - on your user's use cases, the existing schedulers might not meet your - requirements. You might need to create a new scheduler. - customization - - OpenStack Compute (nova) Scheduler - - schedulers - - customization of - - DevStack - - customizing OpenStack Compute (nova) scheduler - - - To create a scheduler, you must inherit from the class - nova.scheduler.driver.Scheduler. Of the five methods that you - can override, you must override the two methods - marked with an asterisk (*) below: - - - - update_service_capabilities - - - - hosts_up - - - - group_hosts - - - - * schedule_run_instance - - - - * select_destinations - - - - To demonstrate customizing OpenStack, we'll create an example of a - Compute scheduler that randomly places an instance on a subset of hosts, - depending on the originating IP address of the request and the prefix of - the hostname. Such an example could be useful when you have a group of - users on a subnet and you want all of their instances to start within some - subset of your hosts. - - - This example is for illustrative purposes only. It should not be - used as a scheduler for Compute without further development and testing. - security issues - - scheduler example - - - - When you join the screen session that stack.sh starts - with screen -r stack, you are greeted with many screen - windows: - - 0$ shell*  1$ key  2$ horizon  ...  9$ n-api  ...  14$ n-sch ... - - - - shell - - - A shell where you can get some work done - - - - - key - - - The keystone service - - - - - horizon - - - The horizon dashboard web application - - - - - n-{name} - - - The nova services - - - - - n-sch - - - The nova scheduler service - - - - - - To create the scheduler and plug it in through - configuration: - - - The code for OpenStack lives in /opt/stack, so go - to the nova directory and edit your scheduler - module. Change to the directory where nova is - installed: - - $ cd /opt/stack/nova - - - - Create the ip_scheduler.py Python source - code file: - - $ vim nova/scheduler/ip_scheduler.py - - - - The code in is a driver that - will schedule servers to hosts based on IP address as explained at the - beginning of the section. Copy the code into - ip_scheduler.py. When you're done, save and close - the file. - - - ip_scheduler.py - - # vim: tabstop=4 shiftwidth=4 softtabstop=4 -# Copyright (c) 2014 OpenStack Foundation -# All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); you may -# not use this file except in compliance with the License. You may obtain -# a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT -# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the -# License for the specific language governing permissions and limitations -# under the License. 
- -""" -IP Scheduler implementation -""" - -import random - -from oslo.config import cfg - -from nova.compute import rpcapi as compute_rpcapi -from nova import exception -from nova.openstack.common import log as logging -from nova.openstack.common.gettextutils import _ -from nova.scheduler import driver - -CONF = cfg.CONF -CONF.import_opt('compute_topic', 'nova.compute.rpcapi') -LOG = logging.getLogger(__name__) - -class IPScheduler(driver.Scheduler): - """ - Implements Scheduler as a random node selector based on - IP address and hostname prefix. - """ - - def __init__(self, *args, **kwargs): - super(IPScheduler, self).__init__(*args, **kwargs) - self.compute_rpcapi = compute_rpcapi.ComputeAPI() - - def _filter_hosts(self, request_spec, hosts, filter_properties, - hostname_prefix): - """Filter a list of hosts based on hostname prefix.""" - - hosts = [host for host in hosts if host.startswith(hostname_prefix)] - return hosts - - def _schedule(self, context, topic, request_spec, filter_properties): - """Picks a host that is up at random.""" - - elevated = context.elevated() - hosts = self.hosts_up(elevated, topic) - if not hosts: - msg = _("Is the appropriate service running?") - raise exception.NoValidHost(reason=msg) - - remote_ip = context.remote_address - - if remote_ip.startswith('10.1'): - hostname_prefix = 'doc' - elif remote_ip.startswith('10.2'): - hostname_prefix = 'ops' - else: - hostname_prefix = 'dev' - - hosts = self._filter_hosts(request_spec, hosts, filter_properties, - hostname_prefix) - if not hosts: - msg = _("Could not find another compute") - raise exception.NoValidHost(reason=msg) - - host = random.choice(hosts) - LOG.debug("Request from %(remote_ip)s scheduled to %(host)s" % locals()) - - return host - - def select_destinations(self, context, request_spec, filter_properties): - """Selects random destinations.""" - num_instances = request_spec['num_instances'] - # NOTE(timello): Returns a list of dicts with 'host', 'nodename' and - # 'limits' as keys for compatibility with filter_scheduler. 
- dests = [] - for i in range(num_instances): - host = self._schedule(context, CONF.compute_topic, - request_spec, filter_properties) - host_state = dict(host=host, nodename=None, limits=None) - dests.append(host_state) - - if len(dests) < num_instances: - raise exception.NoValidHost(reason='') - return dests - - def schedule_run_instance(self, context, request_spec, - admin_password, injected_files, - requested_networks, is_first_time, - filter_properties, legacy_bdm_in_spec): - """Create and run an instance or instances.""" - instance_uuids = request_spec.get('instance_uuids') - for num, instance_uuid in enumerate(instance_uuids): - request_spec['instance_properties']['launch_index'] = num - try: - host = self._schedule(context, CONF.compute_topic, - request_spec, filter_properties) - updated_instance = driver.instance_update_db(context, - instance_uuid) - self.compute_rpcapi.run_instance(context, - instance=updated_instance, host=host, - requested_networks=requested_networks, - injected_files=injected_files, - admin_password=admin_password, - is_first_time=is_first_time, - request_spec=request_spec, - filter_properties=filter_properties, - legacy_bdm_in_spec=legacy_bdm_in_spec) - except Exception as ex: - # NOTE(vish): we don't reraise the exception here to make sure - # that all instances in the request get set to - # error properly - driver.handle_schedule_error(context, ex, instance_uuid, - request_spec) - - - There is a lot of useful information in context, - request_spec, and filter_properties that you - can use to decide where to schedule the instance. To find out more - about what properties are available, you can insert the following log - statements into the schedule_run_instance method of the - scheduler above: - - LOG.debug("context = %(context)s" % {'context': context.__dict__}) -LOG.debug("request_spec = %(request_spec)s" % locals()) -LOG.debug("filter_properties = %(filter_properties)s" % locals()) - - - - To plug this scheduler into nova, edit one configuration file, - /etc/nova/nova.conf: - - $ vim /etc/nova/nova.conf - - - - Find the scheduler_driver config and change it like - so: - - scheduler_driver=nova.scheduler.ip_scheduler.IPScheduler - - - - Restart the nova scheduler service to make nova use your - scheduler. Start by switching to the n-sch screen: - - - - Press - Ctrl - - A - followed by 9. - - - - Press - Ctrl - - A - followed by N until you reach the - n-sch screen. - - - - Press - Ctrl - - C - to kill the service. - - - - Press Up Arrow to bring up the last command. - - - - Press Enter to run it. - - - - - - Test your scheduler with the nova CLI. Start by switching - to the shell screen and finish by switching back to the - n-sch screen to check the log output: - - - - Press  - Ctrl - - A - followed by 0. - - - - Make sure you're in the devstack - directory: - - $ cd /root/devstack - - - - Source openrc to set up your - environment variables for the CLI: - - $ source openrc - - - - Put the image ID for the only installed image into an - environment variable: - - $ IMAGE_ID=`nova image-list | egrep cirros | egrep -v "kernel|ramdisk" | awk '{print $2}'` - - - - Boot a test server: - - $ nova boot --flavor 1 --image $IMAGE_ID scheduler-test - - - - - - Switch back to the n-sch screen. Among the log - statements, you'll see the line: - - 2014-01-23 19:57:47.262 DEBUG nova.scheduler.ip_scheduler \ -[req-... 
demo demo] Request from 162.242.221.84 \ -scheduled to devstack-havana \ -_schedule /opt/stack/nova/nova/scheduler/ip_scheduler.py:76 - - - - - Functional testing like this is not a replacement for proper unit - and integration testing, but it serves to get you started. - - - A similar pattern can be followed in other projects that use the - driver architecture. Simply create a module and class that conform to the - driver interface and plug it in through configuration. Your code runs when - that feature is used and can call out to other services as necessary. No - project core code is touched. Look for a "driver" value in the project's - .conf configuration files in - /etc/<project> to identify projects that use a driver - architecture. - - When your scheduler is done, we encourage you to open source it and - let the community know on the OpenStack mailing list. Perhaps others need - the same functionality. They can use your code, provide feedback, and - possibly contribute. If enough support exists for it, perhaps you can - propose that it be added to the official Compute schedulers. -
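As a quick aside, the core decision logic of such a scheduler can be exercised in isolation before it is wired into a running service. The snippet below is illustrative only; it lifts the IP-prefix selection out of IPScheduler._schedule above into a standalone function (pick_host and the sample host names are made up):

import random

def pick_host(remote_ip, hosts):
    # Same prefix rules as IPScheduler._schedule above.
    if remote_ip.startswith('10.1'):
        hostname_prefix = 'doc'
    elif remote_ip.startswith('10.2'):
        hostname_prefix = 'ops'
    else:
        hostname_prefix = 'dev'
    candidates = [h for h in hosts if h.startswith(hostname_prefix)]
    if not candidates:
        return None
    return random.choice(candidates)

hosts = ['doc01', 'doc02', 'ops01', 'dev01']
print(pick_host('10.1.0.5', hosts))    # doc01 or doc02, at random
print(pick_host('172.16.0.9', hosts))  # dev01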
- -
- Customizing the Dashboard (Horizon) - - The dashboard is based on the Python Django web application - framework. The best guide to customizing it has already been written and - can be found at Building on - Horizon. - Django - - Python - - dashboard - - DevStack - - customizing dashboard - - customization - - dashboard - -
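To give a flavor of what that guide covers, a custom panel starts out as a small Python class. The sketch below follows the plugin pattern documented for Horizon releases of this era; MyPanel and the mypanel slug are placeholder names, and views and templates still need to be added alongside it:

from django.utils.translation import ugettext_lazy as _

import horizon

class MyPanel(horizon.Panel):
    # The display name and URL slug for the panel.
    name = _("My Panel")
    slug = "mypanel"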
- -
- Conclusion - - When operating an OpenStack cloud, you may discover that your users - can be quite demanding. If OpenStack doesn't do what your users need, it - may be up to you to fulfill those requirements. This chapter provided you - with some options for customization and gave you the tools you need to get - started. -
-
diff --git a/doc/openstack-ops/ch_ops_lay_of_land.xml b/doc/openstack-ops/ch_ops_lay_of_land.xml deleted file mode 100644 index 0d625888..00000000 --- a/doc/openstack-ops/ch_ops_lay_of_land.xml +++ /dev/null @@ -1,797 +0,0 @@ - - - - - Lay of the Land - - This chapter helps you set up your working environment and use it to - take a look around your cloud. - -
- Using the OpenStack Dashboard for Administration - - As a cloud administrative user, you can use the OpenStack dashboard - to create and manage projects, users, images, and flavors. Users are - allowed to create and manage images within specified projects and to share - images, depending on the Image service configuration. Typically, the - policy configuration allows admin users only to set quotas and create and - manage services. The dashboard provides an Admin tab - with a System Panel and an Identity - tab. These interfaces give you access to system information - and usage as well as to settings for configuring what end users can do. - Refer to the OpenStack - Administrator Guide for detailed how-to information about using the - dashboard as an admin user. - working environment - - dashboard - - dashboard - -
- -
- Command-Line Tools - - We recommend using a combination of the OpenStack command-line - interface (CLI) tools and the OpenStack dashboard for administration. Some - users with a background in other cloud technologies may be using the EC2 - Compatibility API, which uses naming conventions somewhat different from - the native API. We highlight those differences. - working environment - - command-line tools - - - We strongly suggest that you install the command-line clients from - the Python Package - Index (PyPI) instead of from the distribution packages. The clients - are under heavy development, and it is very likely at any given time that - the version of the packages distributed by your operating-system vendor - are out of date. - command-line tools - - Python Package Index (PyPI) - - pip utility - - Python Package Index (PyPI) - - - The pip utility is used to manage package installation from the PyPI - archive and is available in the python-pip package in most Linux - distributions. Each OpenStack project has its own client, so depending on - which services your site runs, install some or all of the - following - neutron - - python-neutronclient - - swift - - python-swiftclient - - cinder - - keystone - - glance - - python-glanceclient - - nova - - python-novaclient - packages: - - - - python-novaclient (nova CLI) - - - - python-glanceclient (glance CLI) - - - - python-keystoneclient (keystone - CLI) - - - - python-cinderclient (cinder CLI) - - - - python-swiftclient (swift CLI) - - - - python-neutronclient (neutron CLI) - - - -
- Installing the Tools - - To install (or upgrade) a package from the PyPI archive with pip, - - command-line tools - - installing - as root: - - # pip install [--upgrade] <package-name> - - To remove the package: - - # pip uninstall <package-name> - - If you need even newer versions of the clients, pip can install - directly from the upstream git repository using the -e - flag. You must specify a name for the Python egg that is installed. For - example: - - # pip install -e \ - git+https://git.openstack.org/openstack/python-novaclient#egg=python-novaclient - - If you support the EC2 API on your cloud, you should also install - the euca2ools package or some other EC2 API tool so that you can get the - same view your users have. Using EC2 API-based tools is mostly out of - the scope of this guide, though we discuss getting credentials for use - with it. -
- -
- Administrative Command-Line Tools - - There are also several *-manage command-line - tools. These are installed with the project's services on the cloud - controller and do not need to be installed - *-manage command-line tools - - command-line tools - - administrative - separately: - - - - glance-manage - - - - keystone-manage - - - - cinder-manage - - - - Unlike the CLI tools mentioned above, the *-manage - tools must be run from the cloud controller, as root, because they need - read access to the config files such as /etc/nova/nova.conf - and to make queries directly against the database rather than against - the OpenStack API - endpoints. - API (application programming interface) - - API endpoint - - endpoints - - API endpoint - - - - The existence of the *-manage tools is a legacy - issue. It is a goal of the OpenStack project to eventually migrate all - of the remaining functionality in the *-manage tools into - the API-based tools. Until that day, you need to SSH into the - cloud controller node to perform some - maintenance operations that require one of the *-manage - tools. - cloud controller nodes - - command-line tools and - - -
- -
- Getting Credentials - - You must have the appropriate credentials if you want to use the - command-line tools to make queries against your OpenStack cloud. By far, - the easiest way to obtain authentication - credentials to use with command-line clients is to use the OpenStack - dashboard. Select Project, click the - Project tab, and click Access - & Security on the Compute - category. On the Access & Security page, - click the API Access tab to display - two buttons, Download OpenStack RC File and - Download EC2 Credentials, which let you generate - files that you can source in your shell to populate the environment - variables the command-line tools require to know where your service - endpoints and your authentication information are. The user you logged - in to the dashboard dictates the filename for the openrc file, such as - demo-openrc.sh. When logged in as admin, the file - is named admin-openrc.sh. - credentials - - authentication - - command-line tools - - getting credentials - - - The generated file looks something like this: - - #!/bin/bash - -# With the addition of Keystone, to use an openstack cloud you should -# authenticate against keystone, which returns a **Token** and **Service -# Catalog**. The catalog contains the endpoint for all services the -# user/tenant has access to--including nova, glance, keystone, swift. -# -# *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0. -# We use the 1.1 *compute api* -export OS_AUTH_URL=http://203.0.113.10:5000/v2.0 - -# With the addition of Keystone we have standardized on the term **tenant** -# as the entity that owns the resources. -export OS_TENANT_ID=98333aba48e756fa8f629c83a818ad57 -export OS_TENANT_NAME="test-project" - -# In addition to the owning entity (tenant), openstack stores the entity -# performing the action as the **user**. -export OS_USERNAME=demo - -# With Keystone you pass the keystone password. -echo "Please enter your OpenStack Password: " -read -s OS_PASSWORD_INPUT -export OS_PASSWORD=$OS_PASSWORD_INPUT - - - This does not save your password in plain text, which is a good - thing. But when you source or run the script, it prompts you for your - password and then stores your response in the environment variable - OS_PASSWORD. It is important to note that this does - require interactivity. It is possible to store a value directly in the - script if you require a noninteractive operation, but you then need to - be extremely cautious with the security and permissions of this - file. - passwords - - security issues - - passwords - - - - EC2 compatibility credentials can be downloaded by selecting - Project, then Compute - , then Access & Security, - then API Access to display the - Download EC2 Credentials button. Click - the button to generate a ZIP file with server x509 certificates and a - shell script fragment. Create a new directory in a secure location - because these are live credentials containing all the authentication - information required to access your cloud identity, unlike the default - user-openrc. Extract the ZIP file here. You should have - cacert.pem, cert.pem, - ec2rc.sh, and pk.pem. 
The - ec2rc.sh is similar to this: - access key - - - #!/bin/bash - -NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) ||\ -NOVARC=$(python -c 'import os,sys; \ -print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}") -NOVA_KEY_DIR=${NOVARC%/*} -export EC2_ACCESS_KEY=df7f93ec47e84ef8a347bbb3d598449a -export EC2_SECRET_KEY=ead2fff9f8a344e489956deacd47e818 -export EC2_URL=http://203.0.113.10:8773/services/Cloud -export EC2_USER_ID=42 # nova does not use user id, but bundling requires it -export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem -export EC2_CERT=${NOVA_KEY_DIR}/cert.pem -export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem -export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this - -alias ec2-bundle-image="ec2-bundle-image --cert $EC2_CERT --privatekey \ -$EC2_PRIVATE_KEY --user 42 --ec2cert $NOVA_CERT" -alias ec2-upload-bundle="ec2-upload-bundle -a $EC2_ACCESS_KEY -s \ -$EC2_SECRET_KEY --url $S3_URL --ec2cert $NOVA_CERT" - - To put the EC2 credentials into your environment, source the - ec2rc.sh file. -
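Once you have sourced an openrc file, the same environment variables can drive scripts as well as the CLI. Here is a short sketch, assuming the python-keystoneclient v2.0 bindings contemporary with this guide, that authenticates using the values the openrc file exported:

import os

from keystoneclient.v2_0 import client as ksclient

keystone = ksclient.Client(
    auth_url=os.environ['OS_AUTH_URL'],
    username=os.environ['OS_USERNAME'],
    password=os.environ['OS_PASSWORD'],
    tenant_name=os.environ['OS_TENANT_NAME'])

# The token can be reused for direct API calls, as shown in the cURL
# section later in this chapter.
print(keystone.auth_token)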
- -
- Inspecting API Calls - - The command-line tools can be made to show the OpenStack API - calls they make by passing the --debug flag to - them. - API (application programming interface) - - API calls, inspecting - - command-line tools - - inspecting API calls - For example: - - # nova --debug list - - This example shows the HTTP requests from the client and the - responses from the endpoints, which can be helpful in creating custom - tools written to the OpenStack API. - -
- Using cURL for further inspection

Underlying the use of the command-line tools is the OpenStack API, which
is a RESTful API that runs over HTTP. There may be cases where you want
to interact with the API directly or need to use it because of a
suspected bug in one of the CLI tools. The best way to do this is to use
a combination of cURL and another tool, such as jq, to parse the JSON
from the responses.

The first thing you must do is authenticate with the cloud using your
credentials to get an authentication token.

Your credentials are a combination of username, password, and tenant
(project). You can extract these values from the openrc.sh discussed
above. The token allows you to interact with your other service
endpoints without needing to reauthenticate for every request. Tokens
are typically good for 24 hours, and when the token expires, you are
alerted with a 401 (Unauthorized) response and you can request another
token.

Look at your OpenStack service catalog:

$ curl -s -X POST http://203.0.113.10:35357/v2.0/tokens \
-d '{"auth": {"passwordCredentials": {"username":"test-user", \
    "password":"test-password"}, \
    "tenantName":"test-project"}}' \
-H "Content-type: application/json" | jq .

Read through the JSON response to get a feel for how the catalog is
laid out.

To make working with subsequent requests easier, store the token in an
environment variable:

$ TOKEN=`curl -s -X POST http://203.0.113.10:35357/v2.0/tokens \
-d '{"auth": {"passwordCredentials": {"username":"test-user", \
    "password":"test-password"}, \
    "tenantName":"test-project"}}' \
-H "Content-type: application/json" | jq -r .access.token.id`

Now you can refer to your token on the command line as $TOKEN.

Pick a service endpoint from your service catalog, such as compute. Try
a request, for example, listing instances (servers):

$ curl -s \
-H "X-Auth-Token: $TOKEN" \
http://203.0.113.10:8774/v2/98333aba48e756fa8f629c83a818ad57/servers | jq .

To discover how API requests should be structured, read the OpenStack
API Reference. To chew through the responses using jq, see the jq
Manual.

The -s flag used in the cURL commands above is used to prevent the
progress meter from being shown. If you are having trouble running cURL
commands, you'll want to remove it. Likewise, to help you troubleshoot
cURL commands, you can include the -v flag to show you the verbose
output. There are many more extremely useful features in cURL; refer to
the man page for all the options.
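The same token request can also be scripted. The following sketch repeats the first cURL call using the Python requests library (our choice for illustration; any HTTP client works) and extracts the token the way the jq -r filter does:

import json

import requests

auth_body = {"auth": {"passwordCredentials": {"username": "test-user",
                                              "password": "test-password"},
                      "tenantName": "test-project"}}
response = requests.post("http://203.0.113.10:35357/v2.0/tokens",
                         data=json.dumps(auth_body),
                         headers={"Content-type": "application/json"})

# Equivalent of `jq -r .access.token.id`
token = response.json()["access"]["token"]["id"]
print(token)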
-
- -
- Servers and Services

As an administrator, you have a few ways to discover what your OpenStack
cloud looks like simply by using the OpenStack tools available. This
section gives you an idea of how to get an overview of your cloud, its
shape, size, and current state.

First, you can discover what servers belong to your OpenStack cloud by
running:

# nova service-list

The output looks like the following:

+----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host              | Zone | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
| 2  | nova-compute     | c01.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
| 3  | nova-compute     | c02.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
| 4  | nova-compute     | c03.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
| 5  | nova-compute     | c04.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
| 6  | nova-compute     | c05.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
| 7  | nova-conductor   | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
| 8  | nova-cert        | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:42.000000 | -               |
| 9  | nova-scheduler   | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
| 10 | nova-consoleauth | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:35.000000 | -               |
+----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+

The output shows five compute nodes (c01 through c05) and one cloud
controller. All of the services are in the up state, which indicates
they are running. If a service's state changes to down, that service is
no longer available, and you should troubleshoot why it is down.

If you are using cinder, run the following command to see a similar
listing:

# cinder-manage host list | sort

host              zone
c01.example.com   nova
c02.example.com   nova
c03.example.com   nova
c04.example.com   nova
c05.example.com   nova
cloud.example.com nova

With these two tables, you now have a good overview of what servers and
services make up your cloud.

You can also use the Identity service (keystone) to see what services
are available in your cloud as well as what endpoints have been
configured for the services.
- Identity - - displaying services and endpoints with - - - The following command requires you to have your shell environment - configured with the proper administrative variables: - - $ openstack catalog list - - -+----------+------------+---------------------------------------------------------------------------------+ -| Name | Type | Endpoints | -+----------+------------+---------------------------------------------------------------------------------+ -| nova | compute | RegionOne | -| | | publicURL: http://192.168.122.10:8774/v2/9faa845768224258808fc17a1bb27e5e | -| | | internalURL: http://192.168.122.10:8774/v2/9faa845768224258808fc17a1bb27e5e | -| | | adminURL: http://192.168.122.10:8774/v2/9faa845768224258808fc17a1bb27e5e | -| | | | -| cinderv2 | volumev2 | RegionOne | -| | | publicURL: http://192.168.122.10:8776/v2/9faa845768224258808fc17a1bb27e5e | -| | | internalURL: http://192.168.122.10:8776/v2/9faa845768224258808fc17a1bb27e5e | -| | | adminURL: http://192.168.122.10:8776/v2/9faa845768224258808fc17a1bb27e5e | -| | | | - - - - - The preceding output has been truncated to show only two services. - You will see one service entry for each service that your cloud - provides. Note how the endpoint domain can be different depending on the - endpoint type. Different endpoint domains per type are not required, but - this can be done for different reasons, such as endpoint privacy or - network traffic segregation. - - You can find the version of the Compute installation by using the - nova client command: # nova version-list -
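The service listing is also available programmatically. As a hedged sketch, assuming the python-novaclient v1.1 bindings of this era, the equivalent of nova service-list looks like this:

import os

from novaclient.v1_1 import client

nova = client.Client(os.environ['OS_USERNAME'],
                     os.environ['OS_PASSWORD'],
                     os.environ['OS_TENANT_NAME'],
                     os.environ['OS_AUTH_URL'])

# Mirrors `nova service-list`; requires admin credentials.
for svc in nova.services.list():
    print('%s %s %s %s' % (svc.binary, svc.host, svc.status, svc.state))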
- -
- Diagnose Your Compute Nodes - - You can obtain extra information about virtual machines that are - running—their CPU usage, the memory, the disk I/O or network I/O—per - instance, by running the nova diagnostics command - with - compute nodes - - diagnosing - - command-line tools - - compute node diagnostics - a server ID: - - $ nova diagnostics <serverID> - - The output of this command varies depending on the hypervisor - because hypervisors support different attributes. - hypervisors - - compute node diagnosis and - The following demonstrates the difference between the two - most popular hypervisors. Here is example output when the hypervisor is - Xen: +----------------+-----------------+ -| Property | Value | -+----------------+-----------------+ -| cpu0 | 4.3627 | -| memory | 1171088064.0000 | -| memory_target | 1171088064.0000 | -| vbd_xvda_read | 0.0 | -| vbd_xvda_write | 0.0 | -| vif_0_rx | 3223.6870 | -| vif_0_tx | 0.0 | -| vif_1_rx | 104.4955 | -| vif_1_tx | 0.0 | -+----------------+-----------------+While the - command should work with any hypervisor that is controlled through - libvirt (KVM, QEMU, or LXC), it has been tested only with KVM. - Here is the example output when the hypervisor is KVM: - - - - +------------------+------------+ -| Property | Value | -+------------------+------------+ -| cpu0_time | 2870000000 | -| memory | 524288 | -| vda_errors | -1 | -| vda_read | 262144 | -| vda_read_req | 112 | -| vda_write | 5606400 | -| vda_write_req | 376 | -| vnet0_rx | 63343 | -| vnet0_rx_drop | 0 | -| vnet0_rx_errors | 0 | -| vnet0_rx_packets | 431 | -| vnet0_tx | 4905 | -| vnet0_tx_drop | 0 | -| vnet0_tx_errors | 0 | -| vnet0_tx_packets | 45 | -+------------------+------------+ -
-
- -
- Network Inspection

To see which fixed IP networks are configured in your cloud, you can use
the nova command-line client to get the IP ranges:

$ nova network-list
+--------------------------------------+--------+--------------+
| ID                                   | Label  | Cidr         |
+--------------------------------------+--------+--------------+
| 3df67919-9600-4ea8-952e-2a7be6f70774 | test01 | 10.1.0.0/24  |
| 8283efb2-e53d-46e1-a6bd-bb2bdef9cb9a | test02 | 10.1.1.0/24  |
+--------------------------------------+--------+--------------+

The nova-manage tool, run as root on the cloud controller, can provide
some additional details:

# nova-manage network list
id IPv4        IPv6 start address DNS1 DNS2 VlanID project uuid
1  10.1.0.0/24 None 10.1.0.3      None None 300    2725bbd beacb3f2
2  10.1.1.0/24 None 10.1.1.3      None None 301    none    d0b1a796

This output shows that two networks are configured, each network
containing 256 addresses (a /24 subnet). The first network has been
assigned to a certain project, while the second network is still open
for assignment. You can assign this network manually; otherwise, it is
automatically assigned when a project launches its first instance.

To find out whether any floating IPs are available in your cloud, run:

# nova-manage floating list

2725bb...59f43f 1.2.3.4 None            nova vlan20
None            1.2.3.5 48a415...b010ff nova vlan20

Here, two floating IPs are available. The first has been allocated to a
project, while the other is unallocated.
- -
- Users and Projects - - To see a list of projects that have been added to the - cloud, - projects - - obtaining list of current - - user management - - listing users - - working environment - - users and projects - run: - - $ openstack project list - - +----------------------------------+--------------------+ -| ID | Name | -+----------------------------------+--------------------+ -| 422c17c0b26f4fbe9449f37a5621a5e6 | alt_demo | -| 5dc65773519248f3a580cfe28ba7fa3f | demo | -| 9faa845768224258808fc17a1bb27e5e | admin | -| a733070a420c4b509784d7ea8f6884f7 | invisible_to_admin | -| aeb3e976e7794f3f89e4a7965db46c1e | service | -+----------------------------------+--------------------+ - - To see a list of users, run: - - $ openstack user list - - +----------------------------------+----------+ -| ID | Name | -+----------------------------------+----------+ -| 5837063598694771aedd66aa4cddf0b8 | demo | -| 58efd9d852b74b87acc6efafaf31b30e | cinder | -| 6845d995a57a441f890abc8f55da8dfb | glance | -| ac2d15a1205f46d4837d5336cd4c5f5a | alt_demo | -| d8f593c3ae2b47289221f17a776a218b | admin | -| d959ec0a99e24df0b7cb106ff940df20 | nova | -+----------------------------------+----------+ - - - Sometimes a user and a group have a one-to-one mapping. This - happens for standard system accounts, such as cinder, glance, nova, and - swift, or when only one user is part of a group. - -
- -
- Running Instances - - To see a list of running instances, - instances - - list of running - - working environment - - running instances - run: - - $ nova list --all-tenants - - +-----+------------------+--------+-------------------------------------------+ -| ID | Name | Status | Networks | -+-----+------------------+--------+-------------------------------------------+ -| ... | Windows | ACTIVE | novanetwork_1=10.1.1.3, 199.116.232.39 | -| ... | cloud controller | ACTIVE | novanetwork_0=10.1.0.6; jtopjian=10.1.2.3 | -| ... | compute node 1 | ACTIVE | novanetwork_0=10.1.0.4; jtopjian=10.1.2.4 | -| ... | devbox | ACTIVE | novanetwork_0=10.1.0.3 | -| ... | devstack | ACTIVE | novanetwork_0=10.1.0.5 | -| ... | initial | ACTIVE | nova_network=10.1.7.4, 10.1.8.4 | -| ... | lorin-head | ACTIVE | nova_network=10.1.7.3, 10.1.8.3 | -+-----+------------------+--------+-------------------------------------------+ - - Unfortunately, this command does not tell you various details about - the running instances, such as what - compute node the instance is running on, what flavor the instance is, and - so on. You can use the following command to view details about individual - instances: - config drive - - - $ nova show <uuid> - - For example: # nova show 81db556b-8aa5-427d-a95c-2a9a6972f630+-------------------------------------+-----------------------------------+ -| Property | Value | -+-------------------------------------+-----------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-SRV-ATTR:host | c02.example.com | -| OS-EXT-SRV-ATTR:hypervisor_hostname | c02.example.com | -| OS-EXT-SRV-ATTR:instance_name | instance-00000029 | -| OS-EXT-STS:power_state | 1 | -| OS-EXT-STS:task_state | None | -| OS-EXT-STS:vm_state | active | -| accessIPv4 | | -| accessIPv6 | | -| config_drive | | -| created | 2013-02-13T20:08:36Z | -| flavor | m1.small (6) | -| hostId | ... | -| id | ... | -| image | Ubuntu 12.04 cloudimg amd64 (...) | -| key_name | jtopjian-sandbox | -| metadata | {} | -| name | devstack | -| novanetwork_0 network | 10.1.0.5 | -| progress | 0 | -| security_groups | [{u'name': u'default'}] | -| status | ACTIVE | -| tenant_id | ... | -| updated | 2013-02-13T20:08:59Z | -| user_id | ... | -+-------------------------------------+-----------------------------------+ - - This output shows that an instance named - devstack was created from an Ubuntu 12.04 image - using a flavor of m1.small and is hosted on the compute - node c02.example.com. -
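When you need this information in a script, for example to report on every tenant's instances, the listing can be reproduced with python-novaclient. This is a hedged sketch assuming the v1.1 bindings of this era; the all_tenants search option matches the --all-tenants CLI flag and requires admin credentials:

import os

from novaclient.v1_1 import client

nova = client.Client(os.environ['OS_USERNAME'],
                     os.environ['OS_PASSWORD'],
                     os.environ['OS_TENANT_NAME'],
                     os.environ['OS_AUTH_URL'])

# Mirrors `nova list --all-tenants`
for server in nova.servers.list(search_opts={'all_tenants': 1}):
    print('%s %s %s' % (server.id, server.name, server.status))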
- -
- Summary - - We hope you have enjoyed this quick tour of your working - environment, including how to interact with your cloud and extract useful - information. From here, you can use the OpenStack Administrator Guide - as your reference for all of the command-line functionality in your - cloud. -
-
diff --git a/doc/openstack-ops/ch_ops_log_monitor.xml b/doc/openstack-ops/ch_ops_log_monitor.xml deleted file mode 100644 index 6630c702..00000000 --- a/doc/openstack-ops/ch_ops_log_monitor.xml +++ /dev/null @@ -1,1055 +0,0 @@ - - - - - Logging and Monitoring - - As an OpenStack cloud is composed of so many different services, there - are a large number of log files. This chapter aims to assist you in locating - and working with them and describes other ways to track the status of your - deployment. - debugging - - logging/monitoring; maintenance/debugging - - -
- Where Are the Logs? - - Most services use the convention of writing their log files to - subdirectories of the /var/log directory, as listed in . - cloud controllers - - log information - - logging/monitoring - - log location - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
OpenStack log locations
Node type Service Log location
Cloud controller nova-* /var/log/nova
Cloud controller glance-* /var/log/glance
Cloud controller cinder-* /var/log/cinder
Cloud controller keystone-* /var/log/keystone
Cloud controller neutron-* /var/log/neutron
Cloud controller horizon /var/log/apache2/
All nodes misc (swift, dnsmasq) /var/log/syslog
Compute nodes libvirt /var/log/libvirt/libvirtd.log
Compute nodes Console (boot up messages) for VM instances: /var/lib/nova/instances/instance-<instance id>/console.log -
Block Storage nodes cinder-volume /var/log/cinder/cinder-volume.log -
-
- -
- Reading the Logs - - OpenStack services use the standard logging levels, at increasing - severity: DEBUG, INFO, AUDIT, WARNING, ERROR, CRITICAL, and TRACE. That - is, messages only appear in the logs if they are more "severe" than the - particular log level, with DEBUG allowing all log statements through. For - example, TRACE is logged only if the software has a stack trace, while - INFO is logged for every message including those that are only for - information. - logging/monitoring - - logging levels - - - To disable DEBUG-level logging, edit - /etc/nova/nova.conf as follows: - - debug=false - - Keystone is handled a little differently. To modify the logging - level, edit the /etc/keystone/logging.conf file and - look at the logger_root and handler_file - sections. - - Logging for horizon is configured in - /etc/openstack_dashboard/local_settings.py. - Because horizon is a Django web application, it follows the Django - Logging framework conventions. - - The first step in finding the source of an error is typically to - search for a CRITICAL, TRACE, or ERROR message in the log starting at the - bottom of the log file. - logging/monitoring - - reading log messages - - - Here is an example of a CRITICAL log message, with the corresponding - TRACE (Python traceback) immediately following: - - 2013-02-25 21:05:51 17409 CRITICAL cinder [-] Bad or unexpected response from the storage volume backend API: volume group - cinder-volumes doesn't exist -2013-02-25 21:05:51 17409 TRACE cinder Traceback (most recent call last): -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/bin/cinder-volume", line 48, in <module> -2013-02-25 21:05:51 17409 TRACE cinder service.wait() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 422, in wait -2013-02-25 21:05:51 17409 TRACE cinder _launcher.wait() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 127, in wait -2013-02-25 21:05:51 17409 TRACE cinder service.wait() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in wait -2013-02-25 21:05:51 17409 TRACE cinder return self._exit_event.wait() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait -2013-02-25 21:05:51 17409 TRACE cinder return hubs.get_hub().switch() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch -2013-02-25 21:05:51 17409 TRACE cinder return self.greenlet.switch() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main -2013-02-25 21:05:51 17409 TRACE cinder result = function(*args, **kwargs) -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 88, in run_server -2013-02-25 21:05:51 17409 TRACE cinder server.start() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 159, in start -2013-02-25 21:05:51 17409 TRACE cinder self.manager.init_host() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 95, - in init_host -2013-02-25 21:05:51 17409 TRACE cinder self.driver.check_for_setup_error() -2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 116, - in check_for_setup_error -2013-02-25 21:05:51 17409 TRACE 
cinder raise exception.VolumeBackendAPIException(data=exception_message) -2013-02-25 21:05:51 17409 TRACE cinder VolumeBackendAPIException: Bad or unexpected response from the storage volume - backend API: volume group cinder-volumes doesn't exist -2013-02-25 21:05:51 17409 TRACE cinder - - In this example, cinder-volumes failed to start - and has provided a stack trace, since its volume back end has been unable - to set up the storage volume—probably because the LVM volume that is - expected from the configuration does not exist. - - Here is an example error log: - - 2013-02-25 20:26:33 6619 ERROR nova.openstack.common.rpc.common [-] AMQP server on localhost:5672 is unreachable: - [Errno 111] ECONNREFUSED. Trying again in 23 seconds. - - In this error, a nova service has failed to connect to the RabbitMQ - server because it got a connection refused error. -
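Scanning from the bottom of a file for the most recent high-severity message, as described above, is easy to script. The helper below is purely illustrative and not part of any OpenStack tool:

import re
import sys

SEVERE = re.compile(r'\b(CRITICAL|ERROR|TRACE)\b')

def last_severe(path):
    """Return the most recent CRITICAL/ERROR/TRACE line in a log file."""
    with open(path) as f:
        for line in reversed(f.readlines()):
            if SEVERE.search(line):
                return line.rstrip()
    return None

print(last_severe(sys.argv[1]))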
- -
- Tracing Instance Requests

When an instance fails to behave properly, you will often have to trace
activity associated with that instance across the log files of various
nova-* services and across both the cloud controller and compute nodes.

The typical way is to trace the UUID associated with an instance across
the service logs.

Consider the following example:

$ nova list
+--------------------------------------+--------+--------+---------------------------+
| ID                                   | Name   | Status | Networks                  |
+--------------------------------------+--------+--------+---------------------------+
| faf7ded8-4a46-413b-b113-f19590746ffe | cirros | ACTIVE | novanetwork=192.168.100.3 |
+--------------------------------------+--------+--------+---------------------------+

Here, the ID associated with the instance is
faf7ded8-4a46-413b-b113-f19590746ffe. If you search for this string on
the cloud controller in the /var/log/nova-*.log files, it appears in
nova-api.log and nova-scheduler.log. If you search for this on the
compute nodes in /var/log/nova-*.log, it appears in nova-network.log
and nova-compute.log. If no ERROR or CRITICAL messages appear, the most
recent log entry that reports this may provide a hint about what has
gone wrong.
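A short script can do the same search in one pass over all the service logs on a node. This is an illustrative helper, equivalent to grepping /var/log/nova-*.log for the UUID:

import glob
import sys

uuid = sys.argv[1]  # for example, faf7ded8-4a46-413b-b113-f19590746ffe
for path in sorted(glob.glob('/var/log/nova-*.log')):
    with open(path) as f:
        for number, line in enumerate(f, 1):
            if uuid in line:
                print('%s:%d: %s' % (path, number, line.rstrip()))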
- -
- Adding Custom Logging Statements - - If there is not enough information in the existing logs, you may - need to add your own custom logging statements to the nova-* - services. - customization - - custom log statements - - logging/monitoring - - adding custom log statements - - - The source files are located in - /usr/lib/python2.7/dist-packages/nova. - - To add logging statements, the following line should be near the top - of the file. For most files, these should already be there: - - from nova.openstack.common import log as logging -LOG = logging.getLogger(__name__) - - To add a DEBUG logging statement, you would do: - - LOG.debug("This is a custom debugging statement") - - You may notice that all the existing logging messages are preceded - by an underscore and surrounded by parentheses, for example: - - LOG.debug(_("Logging statement appears here")) - - This formatting is used to support translation of logging messages - into different languages using the gettext internationalization - library. You don't need to do this for your own custom log messages. - However, if you want to contribute the code back to the OpenStack project - that includes logging statements, you must surround your log messages with - underscores and parentheses. -
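Putting those pieces together, a statement written with upstream contribution in mind looks like the following; the gettextutils import is the same one used by ip_scheduler.py in the customization chapter, and the snippet is meant to live inside a nova source file:

from nova.openstack.common.gettextutils import _
from nova.openstack.common import log as logging

LOG = logging.getLogger(__name__)

# Fine for local debugging only:
LOG.debug("This is a custom debugging statement")

# Marked for translation, as required for contributed code:
LOG.debug(_("Logging statement appears here"))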
- -
- RabbitMQ Web Management Interface or rabbitmqctl - - Aside from connection failures, RabbitMQ log files are generally not - useful for debugging OpenStack related issues. Instead, we recommend you - use the RabbitMQ web management interface. - RabbitMQ - - logging/monitoring - - RabbitMQ web management interface - Enable it on your cloud controller: - cloud controllers - - enabling RabbitMQ - - - # /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management - - # service rabbitmq-server restart - - The RabbitMQ web management interface is accessible on your cloud - controller at http://localhost:55672. - - - Ubuntu 12.04 installs RabbitMQ version 2.7.1, which uses port - 55672. RabbitMQ versions 3.0 and above use port 15672 instead. You can - check which version of RabbitMQ you have running on your local Ubuntu - machine by doing: - - $ dpkg -s rabbitmq-server | grep "Version:" -Version: 2.7.1-0ubuntu4 - - - An alternative to enabling the RabbitMQ web management interface is - to use the rabbitmqctl commands. For example, - rabbitmqctl list_queues| grep cinder displays any - messages left in the queue. If there are messages, it's a possible sign - that cinder services didn't connect properly to rabbitmq and might have to - be restarted. - - Items to monitor for RabbitMQ include the number of items in each of - the queues and the processing time statistics for the server. -
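To fold the rabbitmqctl check into your own monitoring, a small wrapper is enough. This sketch is ours rather than an OpenStack tool; it must run as root on the RabbitMQ host and assumes the tab-separated name/count lines that rabbitmqctl list_queues prints:

import subprocess

output = subprocess.check_output(['rabbitmqctl', 'list_queues'])
for line in output.splitlines():
    name, _, count = line.partition('\t')
    if 'cinder' in name and count.isdigit() and int(count) > 0:
        print('WARNING: %s message(s) waiting in %s' % (count, name))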
- -
- Centrally Managing Logs - - Because your cloud is most likely composed of many servers, you must - check logs on each of those servers to properly piece an event together. A - better solution is to send the logs of all servers to a central location - so that they can all be accessed from the same area. - logging/monitoring - - central log management - - - Ubuntu uses rsyslog as the default logging service. Since it is - natively able to send logs to a remote location, you don't have to install - anything extra to enable this feature, just modify the configuration file. - In doing this, consider running your logging over a management network or - using an encrypted VPN to avoid interception. - -
- rsyslog Client Configuration - - To begin, configure all OpenStack components to log to syslog in - addition to their standard log file location. Also configure each - component to log to a different syslog facility. This makes it easier to - split the logs into individual components on the central - server: - rsyslog - - - nova.conf: - - use_syslog=True -syslog_log_facility=LOG_LOCAL0 - - glance-api.conf and - glance-registry.conf: - - use_syslog=True -syslog_log_facility=LOG_LOCAL1 - - cinder.conf: - - use_syslog=True -syslog_log_facility=LOG_LOCAL2 - - keystone.conf: - - use_syslog=True -syslog_log_facility=LOG_LOCAL3 - - By default, Object Storage logs to syslog. - - Next, create /etc/rsyslog.d/client.conf with - the following line: - - *.* @192.168.1.10 - - This instructs rsyslog to send all logs to the IP listed. In this - example, the IP points to the cloud controller. -
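Under the hood, use_syslog and syslog_log_facility amount to what the Python standard library's SysLogHandler does. This standalone snippet (illustrative, not OpenStack code) sends one message to the local syslog daemon on facility LOCAL0, where the server rules in the next section can pick it up:

import logging
import logging.handlers

handler = logging.handlers.SysLogHandler(
    address='/dev/log',
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
log = logging.getLogger('nova-demo')
log.addHandler(handler)
log.setLevel(logging.INFO)

# Appears in syslog with facility local0, like nova's own messages.
log.info('hello from the management network')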
- -
rsyslog Server Configuration

Designate a server as the central logging server. The best practice is to choose a server that is solely dedicated to this purpose. Create a file called /etc/rsyslog.d/server.conf with the following contents:

# Enable UDP
$ModLoad imudp
# Listen on 192.168.1.10 only
$UDPServerAddress 192.168.1.10
# Port 514
$UDPServerRun 514

# Create logging templates for nova
$template NovaFile,"/var/log/rsyslog/%HOSTNAME%/nova.log"
$template NovaAll,"/var/log/rsyslog/nova.log"

# Log everything else to syslog.log
$template DynFile,"/var/log/rsyslog/%HOSTNAME%/syslog.log"
*.* ?DynFile

# Log various openstack components to their own individual file
local0.* ?NovaFile
local0.* ?NovaAll
& ~

This example configuration handles the nova service only. It first configures rsyslog to act as a server that listens on port 514. Next, it creates a series of logging templates. Logging templates control where received logs are stored. With this configuration, a nova log from c01.example.com goes to the following locations:

- /var/log/rsyslog/c01.example.com/nova.log
- /var/log/rsyslog/nova.log

Likewise, logs from c02.example.com go to:

- /var/log/rsyslog/c02.example.com/nova.log
- /var/log/rsyslog/nova.log

You therefore have an individual log file for each compute node as well as an aggregated log that contains nova logs from all nodes.
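The same template pattern extends naturally to the other facilities configured in the client section. A sketch of additional server.conf entries; the file names are chosen here for illustration:

# Sketch: per-service templates for the remaining facilities
$template GlanceFile,"/var/log/rsyslog/%HOSTNAME%/glance.log"
local1.* ?GlanceFile
& ~
$template CinderFile,"/var/log/rsyslog/%HOSTNAME%/cinder.log"
local2.* ?CinderFile
& ~
$template KeystoneFile,"/var/log/rsyslog/%HOSTNAME%/keystone.log"
local3.* ?KeystoneFile
& ~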
-
- -
Monitoring

There are two types of monitoring: watching for problems and watching usage trends. The former ensures that all services are up and running, creating a functional cloud. The latter involves monitoring resource usage over time in order to make informed decisions about potential bottlenecks and upgrades.

Nagios

Nagios is an open source monitoring service. It is capable of executing arbitrary commands to check the status of server and network services, remotely executing arbitrary commands directly on servers, and allowing servers to push notifications back in the form of passive monitoring. Nagios has been around since 1999. Although newer monitoring services are available, Nagios is a tried-and-true systems administration staple.
Process Monitoring

A basic type of alert monitoring is to simply check whether a required process is running. For example, ensure that the nova-api service is running on the cloud controller:

# ps aux | grep nova-api
nova 12786 0.0 0.0 37952 1312 ? Ss Feb11 0:00 su -s /bin/sh -c exec nova-api
--config-file=/etc/nova/nova.conf nova
nova 12787 0.0 0.1 135764 57400 ? S Feb11 0:01 /usr/bin/python
/usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova 12792 0.0 0.0 96052 22856 ? S Feb11 0:01 /usr/bin/python
/usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova 12793 0.0 0.3 290688 115516 ? S Feb11 1:23 /usr/bin/python
/usr/bin/nova-api --config-file=/etc/nova/nova.conf
nova 12794 0.0 0.2 248636 77068 ? S Feb11 0:04 /usr/bin/python
/usr/bin/nova-api --config-file=/etc/nova/nova.conf
root 24121 0.0 0.0 11688 912 pts/5 S+ 13:07 0:00 grep nova-api

You can create automated alerts for critical processes by using Nagios and NRPE. For example, to ensure that the nova-compute process is running on compute nodes, create an alert on your Nagios server that looks like this:

define service {
    host_name c01.example.com
    check_command check_nrpe_1arg!check_nova-compute
    use generic-service
    notification_period 24x7
    contact_groups sysadmins
    service_description nova-compute
}

Then on the actual compute node, create the following NRPE configuration:

command[check_nova-compute]=/usr/lib/nagios/plugins/check_procs -c 1: \
-a nova-compute

Nagios now checks that at least one nova-compute process is running at all times.
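If you are not running Nagios yet, a small shell loop can provide the same basic liveness check from cron. A minimal sketch; the service list is an assumption for a typical cloud controller and should be adjusted to each node's role:

#!/bin/bash
# Sketch: report any expected OpenStack process that is not running.
# The service list is an assumption for a cloud controller.
for svc in nova-api glance-api glance-registry keystone; do
    if ! pgrep -f "$svc" > /dev/null; then
        echo "CRITICAL: $svc is not running on $(hostname)"
    fi
done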
- -
Resource Alerting

Resource alerting provides notifications when one or more resources are critically low. While the monitoring thresholds should be tuned to your specific OpenStack environment, monitoring resource usage is not specific to OpenStack at all; any generic type of alert works fine.

Some of the resources that you want to monitor include:

- Disk usage
- Server load
- Memory usage
- Network I/O
- Available vCPUs

For example, to monitor disk capacity on a compute node with Nagios, add the following to your Nagios configuration:

define service {
    host_name c01.example.com
    check_command check_nrpe!check_all_disks!20% 10%
    use generic-service
    contact_groups sysadmins
    service_description Disk
}

On the compute node, add the following to your NRPE configuration:

command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c \
$ARG2$ -e

Nagios alerts you with a WARNING when any disk on the compute node is 80 percent full and with a CRITICAL alert when it is 90 percent full.
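Because resource alerting is not OpenStack specific, even a plain df wrapper works as a stopgap before a full monitoring system is in place. A sketch using the same 80/90 percent thresholds as the Nagios example:

#!/bin/bash
# Sketch: generic disk-usage alert mirroring the 80%/90% thresholds.
df -P | awk 'NR > 1 {
    sub("%", "", $5)
    if ($5 >= 90)      print "CRITICAL: " $6 " is " $5 "% full"
    else if ($5 >= 80) print "WARNING: "  $6 " is " $5 "% full"
}'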
- -
StackTach

StackTach is a tool that collects and reports the notifications sent by nova. Notifications are essentially the same as logs but can be much more detailed. Nearly all OpenStack components are capable of generating notifications when significant events occur. Notifications are messages placed on the OpenStack queue (generally RabbitMQ) for consumption by downstream systems. An overview of notifications can be found at System Usage Data.

To enable nova to send notifications, add the following to nova.conf:

notification_topics=monitor
notification_driver=messagingv2

Once nova is sending notifications, install and configure StackTach. StackTach workers for queue consumption and pipeline processing read these notifications from the RabbitMQ servers and store them in a database. Users can query instances, requests, and servers through the browser interface or the command-line tool, Stacky. Because StackTach is relatively new and constantly changing, installation instructions quickly become outdated. Refer to the StackTach Git repository for instructions as well as a demo video. Additional details on the latest developments can be found on the official StackTach page.
-
Logstash

Logstash is a high-performance indexing and search engine for logs. Logs from Jenkins test runs are sent to Logstash, where they are indexed and stored. Logstash facilitates reviewing logs from multiple sources in a single test run, searching for errors or particular events within a test run, and searching for log event trends across test runs.

A Logstash setup has four major layers:

- Log Pusher
- Log Indexer
- ElasticSearch
- Kibana

Each layer scales horizontally. As the number of logs grows, you can add more log pushers, more Logstash indexers, and more ElasticSearch nodes.

Logpusher is a pair of Python scripts that first listen to Jenkins build events and convert them into Gearman jobs. Gearman provides a generic application framework to farm out work to other machines or processes that are better suited to do the work. It allows you to do work in parallel, to load-balance processing, and to call functions between languages. Logpusher then runs Gearman jobs to push log files into Logstash. The Logstash indexer reads these log events, filters them to remove unwanted lines, collapses multiple events together, and parses useful information before shipping the events to ElasticSearch for storage and indexing. Kibana is a Logstash-oriented web client for ElasticSearch.
- -
OpenStack Telemetry

An integrated OpenStack project (code-named ceilometer) collects metering and event data relating to OpenStack services. Data collected by the Telemetry service could be used for billing. Depending on the deployment configuration, the collected data may also be accessible to users. The Telemetry service provides a REST API. You can read more about the module in the OpenStack Administrator Guide or in the developer documentation.
- -
OpenStack-Specific Resources

Resources such as memory, disk, and CPU are generic resources that all servers (even non-OpenStack servers) have and are important to the overall health of the server. When dealing with OpenStack specifically, these resources are important for a second reason: ensuring that enough are available to launch instances. There are a few ways you can see OpenStack resource usage. The first is through the nova command:

# nova usage-list

This command displays a list of how many instances a tenant has running and some light usage statistics about the combined instances. This command is useful for a quick overview of your cloud, but it doesn't provide much detail.

Next, the nova database contains two tables that store usage information: nova.quotas and nova.quota_usages. If a tenant's quota is different from the default quota settings, its quota is stored in the nova.quotas table. For example:

mysql> select project_id, resource, hard_limit from quotas;
+----------------------------------+-----------------------------+------------+
| project_id                       | resource                    | hard_limit |
+----------------------------------+-----------------------------+------------+
| 628df59f091142399e0689a2696f5baa | metadata_items              | 128        |
| 628df59f091142399e0689a2696f5baa | injected_file_content_bytes | 10240      |
| 628df59f091142399e0689a2696f5baa | injected_files              | 5          |
| 628df59f091142399e0689a2696f5baa | gigabytes                   | 1000       |
| 628df59f091142399e0689a2696f5baa | ram                         | 51200      |
| 628df59f091142399e0689a2696f5baa | floating_ips                | 10         |
| 628df59f091142399e0689a2696f5baa | instances                   | 10         |
| 628df59f091142399e0689a2696f5baa | volumes                     | 10         |
| 628df59f091142399e0689a2696f5baa | cores                       | 20         |
+----------------------------------+-----------------------------+------------+

The nova.quota_usages table keeps track of how many resources the tenant currently has in use:

mysql> select project_id, resource, in_use from quota_usages where project_id like '628%';
+----------------------------------+--------------+--------+
| project_id                       | resource     | in_use |
+----------------------------------+--------------+--------+
| 628df59f091142399e0689a2696f5baa | instances    | 1      |
| 628df59f091142399e0689a2696f5baa | ram          | 512    |
| 628df59f091142399e0689a2696f5baa | cores        | 1      |
| 628df59f091142399e0689a2696f5baa | floating_ips | 1      |
| 628df59f091142399e0689a2696f5baa | volumes      | 2      |
| 628df59f091142399e0689a2696f5baa | gigabytes    | 12     |
| 628df59f091142399e0689a2696f5baa | images       | 1      |
+----------------------------------+--------------+--------+

By comparing a tenant's hard limit with its current resource usage, you can see its usage percentage. For example, if this tenant is using 1 floating IP out of 10, then it is using 10 percent of its floating IP quota.
Rather than doing the calculation manually, you can use SQL or the scripting language of your choice to create a formatted report:

+-----------------------------+------+-------+--------+
| some_tenant                                          |
+-----------------------------+------+-------+--------+
| Resource                    | Used | Limit | Usage  |
+-----------------------------+------+-------+--------+
| cores                       | 1    | 20    | 5 %    |
| floating_ips                | 1    | 10    | 10 %   |
| gigabytes                   | 12   | 1000  | 1 %    |
| images                      | 1    | 4     | 25 %   |
| injected_file_content_bytes | 0    | 10240 | 0 %    |
| injected_file_path_bytes    | 0    | 255   | 0 %    |
| injected_files              | 0    | 5     | 0 %    |
| instances                   | 1    | 10    | 10 %   |
| key_pairs                   | 0    | 100   | 0 %    |
| metadata_items              | 0    | 128   | 0 %    |
| ram                         | 512  | 51200 | 1 %    |
| reservation_expire          | 0    | 86400 | 0 %    |
| security_group_rules        | 0    | 20    | 0 %    |
| security_groups             | 0    | 10    | 0 %    |
| volumes                     | 2    | 10    | 20 %   |
+-----------------------------+------+-------+--------+

The preceding information was generated by using a custom script that can be found on GitHub.

Note: This script is specific to a certain OpenStack installation and must be modified to fit your environment. However, the logic should easily be transferable.
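The core of such a report can be approximated with a single query joining the two tables. A sketch run through the mysql client; the tenant ID matches the earlier examples, credentials are assumed to be available (for example, via ~/.my.cnf), and tenants still on the default quotas will not appear in nova.quotas:

#!/bin/bash
# Sketch: per-resource usage percentage for one tenant, joining
# nova.quotas against nova.quota_usages.
TENANT=628df59f091142399e0689a2696f5baa
mysql nova -e "
    SELECT q.resource,
           IFNULL(u.in_use, 0)                             AS used,
           q.hard_limit                                    AS hard_limit,
           ROUND(IFNULL(u.in_use, 0) / q.hard_limit * 100) AS pct
    FROM quotas q
    LEFT JOIN quota_usages u
      ON u.project_id = q.project_id AND u.resource = q.resource
    WHERE q.project_id = '$TENANT';"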
- -
Intelligent Alerting

Intelligent alerting can be thought of as a form of continuous integration for operations. For example, you can easily check whether the Image service is up and running by ensuring that the glance-api and glance-registry processes are running or by seeing whether glance-api is responding on port 9292.

But how can you tell whether images are being successfully uploaded to the Image service? Maybe the disk on which the Image service stores images is full, or the S3 back end is down. You could naturally check this by doing a quick image upload:

#!/bin/bash
#
# assumes that reasonable credentials have been stored at
# /root/openrc

. /root/openrc
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name='cirros image' --is-public=true \
--container-format=bare --disk-format=qcow2 < cirros-0.3.4-x86_64-disk.img

By taking this script and rolling it into an alert for your monitoring system (such as Nagios), you now have an automated way of ensuring that image uploads to the Image Catalog are working.

Note: You must remove the image after each test. Even better, test whether you can successfully delete an image from the Image service.

Intelligent alerting takes considerably more time to plan and implement than the other alerts described in this chapter. A good outline for implementing intelligent alerting is:

- Review common actions in your cloud.
- Create ways to automatically test these actions.
- Roll these tests into an alerting system.

Some other examples of intelligent alerting include:

- Can instances launch and be destroyed?
- Can users be created?
- Can objects be stored and deleted?
- Can volumes be created and destroyed?
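To act on the cleanup note above, the test can look up and delete the image it just uploaded, which also verifies that deletion works. A sketch using the same CLI as the upload script; the awk parsing assumes the tabular glance image-list output:

#!/bin/bash
# Sketch: clean up the test image and treat a failed delete as an
# alert condition in its own right.
. /root/openrc
IMAGE_ID=$(glance image-list | \
    awk -F'|' '/cirros image/ {gsub(/ /, "", $2); print $2; exit}')
if [ -z "$IMAGE_ID" ]; then
    echo "CRITICAL: test image not found in the Image service"
elif ! glance image-delete "$IMAGE_ID"; then
    echo "CRITICAL: could not delete test image $IMAGE_ID"
fi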
- -
Trending

Trending can give you great insight into how your cloud is performing day to day. You can learn, for example, if a busy day was simply a rare occurrence or if you should start adding new compute nodes.

Trending takes a slightly different approach than alerting. While alerting is interested in a binary result (whether a check succeeds or fails), trending records the current state of something at a certain point in time. Once enough points in time have been recorded, you can see how the value has changed over time.

All of the alert types mentioned earlier can also be used for trend reporting. Some other trend examples include:

- The number of instances on each compute node
- The types of flavors in use
- The number of volumes in use
- The number of Object Storage requests each hour
- The number of nova-api requests each hour
- The I/O statistics of your storage services

As an example, recording nova-api usage can allow you to track the need to scale your cloud controller. By keeping an eye on nova-api requests, you can determine whether you need to spawn more nova-api processes or go as far as introducing an entirely new server to run nova-api. To get an approximate count of the requests, look for standard INFO messages in /var/log/nova/nova-api.log:

# grep INFO /var/log/nova/nova-api.log | wc -l

You can obtain further statistics by looking for the number of successful requests:

# grep " 200 " /var/log/nova/nova-api.log | wc -l

By running this command periodically and keeping a record of the result, you can create a trending report over time that shows whether your nova-api usage is increasing, decreasing, or holding steady.

A tool such as collectd can be used to store this information. While collectd is out of the scope of this book, a good starting point would be to use collectd to store the result as a COUNTER data type. More information can be found in collectd's documentation.
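Before wiring up collectd, you can start gathering the raw numbers with a cron job that appends a timestamped count to a CSV file. A minimal sketch; the file locations are assumptions:

#!/bin/bash
# Sketch: record hourly nova-api request counts for later trending.
# Run from cron once an hour; file locations are assumptions.
LOG=/var/log/nova/nova-api.log
OUT=/var/lib/trending/nova-api.csv
echo "$(date +%Y-%m-%dT%H:%M),$(grep -c INFO "$LOG"),$(grep -c ' 200 ' "$LOG")" >> "$OUT"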
-
- -
- Summary - - For stable operations, you want to detect failure promptly and - determine causes efficiently. With a distributed system, it's even more - important to track the right items to meet a service-level target. - Learning where these logs are located in the file system or API gives you - an advantage. This chapter also showed how to read, interpret, and - manipulate information from OpenStack services so that you can monitor - effectively. -
-
diff --git a/doc/openstack-ops/ch_ops_maintenance.xml b/doc/openstack-ops/ch_ops_maintenance.xml deleted file mode 100644 index 95b88882..00000000 --- a/doc/openstack-ops/ch_ops_maintenance.xml +++ /dev/null @@ -1,1492 +0,0 @@

Maintenance, Failures, and Debugging

Downtime, whether planned or unscheduled, is a certainty when running a cloud. This chapter aims to provide useful information for dealing proactively, or reactively, with these occurrences.
- - - Cloud Controller and Storage Proxy Failures and Maintenance - - The cloud controller and storage proxy are very similar to each - other when it comes to expected and unexpected downtime. One of each - server type typically runs in the cloud, which makes them very noticeable - when they are not running. - - For the cloud controller, the good news is if your cloud is using - the FlatDHCP multi-host HA network mode, existing instances and volumes - continue to operate while the cloud controller is offline. For the storage - proxy, however, no storage traffic is possible until it is back up and - running. - -
Planned Maintenance

One way to plan for cloud controller or storage proxy maintenance is to simply do it off-hours, such as at 1 a.m. or 2 a.m. This strategy affects fewer users. If your cloud controller or storage proxy is too important to be unavailable at any point in time, you must look into high-availability options.
- -
Rebooting a Cloud Controller or Storage Proxy

In most cases, simply issue the reboot command. The operating system cleanly shuts down services and then automatically reboots. If you want to be very thorough, run your backup jobs just before you reboot.
- -
After a Cloud Controller or Storage Proxy Reboots

After a cloud controller reboots, ensure that all required services were successfully started. The following commands use ps and grep to determine if nova, glance, keystone, and cinder are currently running:

# ps aux | grep nova-
# ps aux | grep glance-
# ps aux | grep keystone
# ps aux | grep cinder

Also check that all services are functioning. The following set of commands sources the openrc file, then runs some basic glance, nova, and openstack commands. If the commands work as expected, you can be confident that those services are in working condition:

# source openrc
# glance index
# nova list
# openstack project list

For the storage proxy, ensure that the Object Storage service has resumed:

# ps aux | grep swift

Also check that it is functioning:

# swift stat
- -
Total Cloud Controller Failure

The cloud controller could completely fail if, for example, its motherboard goes bad. Users will immediately notice the loss of a cloud controller since it provides core functionality to your cloud environment. If your infrastructure monitoring does not alert you that your cloud controller has failed, your users definitely will. Unfortunately, this is a rough situation. The cloud controller is an integral part of your cloud. If you have only one controller, many services will be missing if it goes down.

To avoid this situation, create a highly available cloud controller cluster. This is outside the scope of this document, but you can read more in the OpenStack High Availability Guide.

The next best approach is to use a configuration-management tool, such as Puppet, to automatically build a cloud controller. This should not take more than 15 minutes if you have a spare server available. After the controller rebuilds, restore any backups you have taken.

Also, in practice, the nova-compute services on the compute nodes do not always reconnect cleanly to rabbitmq hosted on the controller when it comes back up after a long reboot; a restart of the nova services on the compute nodes is then required.
-
- -
- - - Compute Node Failures and Maintenance - - Sometimes a compute node either crashes unexpectedly or requires a - reboot for maintenance reasons. - -
Planned Maintenance

If you need to reboot a compute node due to planned maintenance (such as a software or hardware upgrade), first ensure that all hosted instances have been moved off the node. If your cloud is utilizing shared storage, use the nova live-migration command. First, get a list of instances that need to be moved:

# nova list --host c01.example.com --all-tenants

Next, migrate them one by one (a scripted version follows at the end of this section):

# nova live-migration <uuid> c02.example.com

If you are not using shared storage, you can use the --block-migrate option:

# nova live-migration --block-migrate <uuid> c02.example.com

After you have migrated all instances, ensure that the nova-compute service has stopped:

# stop nova-compute

If you use a configuration-management system, such as Puppet, that ensures the nova-compute service is always running, you can temporarily move the init files:

# mkdir /root/tmp
# mv /etc/init/nova-compute.conf /root/tmp
# mv /etc/init.d/nova-compute /root/tmp

Next, shut down your compute node, perform your maintenance, and turn the node back on. You can reenable the nova-compute service by undoing the previous commands:

# mv /root/tmp/nova-compute.conf /etc/init
# mv /root/tmp/nova-compute /etc/init.d/

Then start the nova-compute service:

# start nova-compute

You can now optionally migrate the instances back to their original compute node.
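When a node hosts many instances, the one-by-one migration is easy to script. A sketch, assuming shared storage and the example host names from this section; the awk expression parses the tabular nova list output and should be checked against your client version:

#!/bin/bash
# Sketch: live-migrate every instance off SRC to DST, one at a time.
SRC=c01.example.com
DST=c02.example.com
nova list --host "$SRC" --all-tenants | \
    awk -F'|' 'NR > 3 && NF > 2 {gsub(/ /, "", $2); print $2}' | \
while read uuid; do
    echo "Migrating $uuid ..."
    nova live-migration "$uuid" "$DST"
done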
- -
After a Compute Node Reboots

When you reboot a compute node, first verify that it booted successfully. This includes ensuring that the nova-compute service is running:

# ps aux | grep nova-compute
# status nova-compute

Also ensure that it has successfully connected to the AMQP server:

# grep AMQP /var/log/nova/nova-compute.log
2013-02-26 09:51:31 12427 INFO nova.openstack.common.rpc.common [-] Connected to AMQP server on 199.116.232.36:5672

After the compute node is successfully running, you must deal with the instances that are hosted on that compute node, because none of them are running. Depending on your SLA with your users or customers, you might have to start each instance and ensure that they start correctly.
- -
Instances

You can create a list of instances that are hosted on the compute node by performing the following command:

# nova list --host c01.example.com --all-tenants

After you have the list, you can use the nova command to start each instance:

# nova reboot <uuid>

Note: Any time an instance shuts down unexpectedly, it might have problems on boot. For example, the instance might require an fsck on the root partition. If this happens, the user can use the dashboard VNC console to fix this.

If an instance does not boot, meaning virsh list never shows the instance as even attempting to boot, do the following on the compute node:

# tail -f /var/log/nova/nova-compute.log

Try executing the nova reboot command again. You should see an error message about why the instance was not able to boot.

In most cases, the error is the result of something in libvirt's XML file (/etc/libvirt/qemu/instance-xxxxxxxx.xml) that no longer exists. You can enforce re-creation of the XML file as well as rebooting the instance by running the following command:

# nova reboot --hard <uuid>
- -
Inspecting and Recovering Data from Failed Instances

In some scenarios, instances are running but are inaccessible through SSH and do not respond to any command. The VNC console could be displaying a boot failure or kernel panic error messages. This could be an indication of file system corruption on the VM itself. If you need to recover files or inspect the content of the instance, qemu-nbd can be used to mount the disk.

Warning: If you access or view the user's content and data, get approval first!

To access the instance's disk (/var/lib/nova/instances/instance-xxxxxx/disk), use the following steps:

1. Suspend the instance using the virsh command.
2. Connect the qemu-nbd device to the disk.
3. Mount the qemu-nbd device.
4. Unmount the device after inspecting.
5. Disconnect the qemu-nbd device.
6. Resume the instance.

If you do not follow steps 4 through 6, OpenStack Compute cannot manage the instance any longer. It fails to respond to any command issued by OpenStack Compute, and it is marked as shut down.

Once you mount the disk file, you should be able to access it and treat it as a collection of normal directories with files and a directory structure. However, we do not recommend that you edit or touch any files, because this could change the access control lists (ACLs) that are used to determine which accounts can perform what operations on files and directories. Changing ACLs can make the instance unbootable if it is not already.

1. Suspend the instance using the virsh command, taking note of the internal ID:

# virsh list
Id Name State
----------------------------------
1 instance-00000981 running
2 instance-000009f5 running
30 instance-0000274a running

# virsh suspend 30
Domain 30 suspended

2. Connect the qemu-nbd device to the disk:

# cd /var/lib/nova/instances/instance-0000274a
# ls -lh
total 33M
-rw-rw---- 1 libvirt-qemu kvm 6.3K Oct 15 11:31 console.log
-rw-r--r-- 1 libvirt-qemu kvm 33M Oct 15 22:06 disk
-rw-r--r-- 1 libvirt-qemu kvm 384K Oct 15 22:06 disk.local
-rw-rw-r-- 1 nova nova 1.7K Oct 15 11:30 libvirt.xml
# qemu-nbd -c /dev/nbd0 `pwd`/disk

3. Mount the qemu-nbd device.

The qemu-nbd device tries to export the instance disk's different partitions as separate devices. For example, if vda is the disk and vda1 is the root partition, qemu-nbd exports the device as /dev/nbd0 and /dev/nbd0p1, respectively:

# mount /dev/nbd0p1 /mnt/

You can now access the contents of /mnt, which correspond to the first partition of the instance's disk.

To examine the secondary or ephemeral disk, use an alternate mount point if you want both primary and secondary drives mounted at the same time:

# umount /mnt
# qemu-nbd -c /dev/nbd1 `pwd`/disk.local
# mount /dev/nbd1 /mnt/
# ls -lh /mnt/
total 76K
lrwxrwxrwx. 1 root root 7 Oct 15 00:44 bin -> usr/bin
dr-xr-xr-x. 4 root root 4.0K Oct 15 01:07 boot
drwxr-xr-x. 2 root root 4.0K Oct 15 00:42 dev
drwxr-xr-x. 70 root root 4.0K Oct 15 11:31 etc
drwxr-xr-x. 3 root root 4.0K Oct 15 01:07 home
lrwxrwxrwx. 1 root root 7 Oct 15 00:44 lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Oct 15 00:44 lib64 -> usr/lib64
drwx------. 2 root root 16K Oct 15 00:42 lost+found
drwxr-xr-x. 2 root root 4.0K Feb 3 2012 media
drwxr-xr-x. 2 root root 4.0K Feb 3 2012 mnt
drwxr-xr-x. 2 root root 4.0K Feb 3 2012 opt
drwxr-xr-x. 2 root root 4.0K Oct 15 00:42 proc
dr-xr-x---. 3 root root 4.0K Oct 15 21:56 root
drwxr-xr-x. 14 root root 4.0K Oct 15 01:07 run
lrwxrwxrwx. 1 root root 8 Oct 15 00:44 sbin -> usr/sbin
drwxr-xr-x. 2 root root 4.0K Feb 3 2012 srv
drwxr-xr-x. 2 root root 4.0K Oct 15 00:42 sys
drwxrwxrwt. 9 root root 4.0K Oct 15 16:29 tmp
drwxr-xr-x. 13 root root 4.0K Oct 15 00:44 usr
drwxr-xr-x. 17 root root 4.0K Oct 15 00:44 var

4. Once you have completed the inspection, unmount the mount point and release the qemu-nbd device:

# umount /mnt
# qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected

5. Resume the instance using virsh:

# virsh list
Id Name State
----------------------------------
1 instance-00000981 running
2 instance-000009f5 running
30 instance-0000274a paused

# virsh resume 30
Domain 30 resumed
- -
Volumes

If the affected instances also had attached volumes, first generate a list of instance and volume UUIDs:

mysql> select nova.instances.uuid as instance_uuid,
cinder.volumes.id as volume_uuid, cinder.volumes.status,
cinder.volumes.attach_status, cinder.volumes.mountpoint,
cinder.volumes.display_name from cinder.volumes
inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
where nova.instances.host = 'c01.example.com';

You should see a result similar to the following:

+--------------+------------+-------+--------------+-----------+--------------+
|instance_uuid |volume_uuid |status |attach_status |mountpoint | display_name |
+--------------+------------+-------+--------------+-----------+--------------+
|9b969a05      |1f0fbf36    |in-use |attached      |/dev/vdc   | test         |
+--------------+------------+-------+--------------+-----------+--------------+
1 row in set (0.00 sec)

Next, manually detach and reattach the volumes, where X is the proper mount point:

# nova volume-detach <instance_uuid> <volume_uuid>
# nova volume-attach <instance_uuid> <volume_uuid> /dev/vdX

Be sure that the instance has successfully booted and is at a login screen before doing the above.
- -
Total Compute Node Failure

Compute nodes can fail the same way a cloud controller can fail. A motherboard failure or some other type of hardware failure can cause an entire compute node to go offline. When this happens, all instances running on that compute node will not be available. Just as with a cloud controller failure, if your infrastructure monitoring does not detect a failed compute node, your users will notify you because of their lost instances.

If a compute node fails and won't be fixed for a few hours (or at all), you can relaunch all instances that are hosted on the failed node if you use shared storage for /var/lib/nova/instances.

To do this, generate a list of instance UUIDs that are hosted on the failed node by running the following query on the nova database:

mysql> select uuid from instances where host = \
    'c01.example.com' and deleted = 0;

Next, update the nova database to indicate that all instances that used to be hosted on c01.example.com are now hosted on c02.example.com:

mysql> update instances set host = 'c02.example.com' where host = \
    'c01.example.com' and deleted = 0;

If you're using the Networking service ML2 plug-in, update the Networking service database to indicate that all ports that used to be hosted on c01.example.com are now hosted on c02.example.com:

mysql> update ml2_port_bindings set host = 'c02.example.com' where host = \
    'c01.example.com';

mysql> update ml2_port_binding_levels set host = 'c02.example.com' where host = \
    'c01.example.com';

After that, use the nova command to reboot all instances that were on c01.example.com while regenerating their XML files at the same time:

# nova reboot --hard <uuid>

Finally, reattach volumes using the same method described in the section Volumes.
- -
/var/lib/nova/instances

It's worth mentioning this directory in the context of failed compute nodes. This directory contains the libvirt KVM file-based disk images for the instances that are hosted on that compute node. If you are not running your cloud in a shared storage environment, this directory is unique across all compute nodes.

/var/lib/nova/instances contains two types of directories.

The first is the _base directory. This contains all the cached base images from glance for each unique image that has been launched on that compute node. Files ending in _20 (or a different number) are the ephemeral base images.

The other directories are titled instance-xxxxxxxx. These directories correspond to instances running on that compute node. The files inside are related to one of the files in the _base directory. They're essentially differential-based files containing only the changes made from the original _base directory.

All files and directories in /var/lib/nova/instances are uniquely named. The files in _base are named for the glance image that they are based on, and the directory names instance-xxxxxxxx are named for that particular instance. For example, if you copy all data from /var/lib/nova/instances on one compute node to another, you do not overwrite any files or cause any damage to images that have the same unique name, because they are essentially the same file.

Although this method is not documented or supported, you can use it when your compute node is permanently offline but you have instances locally stored on it.
-
- -
- - - Storage Node Failures and Maintenance - - Because of the high redundancy of Object Storage, dealing with - object storage node issues is a lot easier than dealing with compute node - issues. - -
Rebooting a Storage Node

If a storage node requires a reboot, simply reboot it. Requests for data hosted on that node are redirected to other copies while the server is rebooting.
- -
Shutting Down a Storage Node

If you need to shut down a storage node for an extended period of time (one or more days), consider removing the node from the storage ring. For example:

# swift-ring-builder account.builder remove <ip address of storage node>
# swift-ring-builder container.builder remove <ip address of storage node>
# swift-ring-builder object.builder remove <ip address of storage node>
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance

Next, redistribute the ring files to the other nodes:

# for i in s01.example.com s02.example.com s03.example.com
> do
> scp *.ring.gz $i:/etc/swift
> done

These actions effectively take the storage node out of the storage cluster.

When the node is able to rejoin the cluster, just add it back to the ring. The exact syntax you use to add a node to your swift cluster with swift-ring-builder heavily depends on the options you used when you originally created your cluster. Refer back to those commands.
- -
Replacing a Swift Disk

If a hard drive fails in an Object Storage node, replacing it is relatively easy. This assumes that your Object Storage environment is configured correctly, so that the data stored on the failed drive is also replicated to other drives in the Object Storage environment.

This example assumes that /dev/sdb has failed.

First, unmount the disk:

# umount /dev/sdb

Next, physically remove the disk from the server and replace it with a working disk.

Ensure that the operating system has recognized the new disk:

# dmesg | tail

You should see a message about /dev/sdb.

Because it is recommended not to use partitions on a swift disk, simply format the whole disk:

# mkfs.xfs /dev/sdb

Finally, mount the disk:

# mount -a

Swift should notice the new disk and that no data exists on it. It then begins replicating data to the disk from the other existing replicas.
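For mount -a to pick up the replacement disk, /etc/fstab needs an entry for it. A sketch; the mount point follows the common swift layout and the mount options are typical for swift XFS disks, but both are assumptions to adapt to your environment:

# /etc/fstab entry (sketch) so that "mount -a" mounts the new disk
/dev/sdb  /srv/node/sdb  xfs  noatime,nodiratime,logbufs=8  0  0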
-
- -
Handling a Complete Failure

A common way of dealing with the recovery from a full system failure, such as a power outage of a data center, is to assign each service a priority and restore them in order. The following table shows an example.
Example service restoration priority list

Priority | Services
---------+-------------------------------------------------------
1        | Internal network connectivity
2        | Backing storage services
3        | Public network connectivity for user virtual machines
4        | nova-compute, nova-network, cinder hosts
5        | User virtual machines
10       | Message queue and database services
15       | Keystone services
20       | cinder-scheduler
21       | Image Catalog and Delivery services
22       | nova-scheduler services
98       | cinder-api
99       | nova-api services
100      | Dashboard node
- - Use this example priority list to ensure that user-affected services - are restored as soon as possible, but not before a stable environment is - in place. Of course, despite being listed as a single-line item, each step - requires significant work. For example, just after starting the database, - you should check its integrity, or, after starting the nova services, you - should verify that the hypervisor matches the database and fix any mismatches. -
- -
Configuration Management

Maintaining an OpenStack cloud requires that you manage multiple physical servers, and this number might grow over time. Because managing nodes manually is error prone, we strongly recommend that you use a configuration-management tool. These tools automate the process of ensuring that all your nodes are configured properly and encourage you to maintain your configuration information (such as packages and configuration options) in a version-controlled repository.

Several configuration-management tools are available, and this guide does not recommend a specific one. The two most popular ones in the OpenStack community are Puppet, with available OpenStack Puppet modules, and Chef, with available OpenStack Chef recipes. Other newer configuration tools include Juju, Ansible, and Salt; more mature configuration-management tools include CFEngine and Bcfg2.
- -
Working with Hardware

As for your initial deployment, you should ensure that all hardware is appropriately burned in before adding it to production. Run software that uses the hardware to its limits, maxing out RAM, CPU, disk, and network. Many burn-in tools double as benchmark software, so you also get a good idea of the performance of your system.
Adding a Compute Node

If you find that you have reached or are reaching the capacity limit of your computing resources, you should plan to add additional compute nodes. Adding more nodes is quite easy. The process for adding compute nodes is the same as when the initial compute nodes were deployed to your cloud: use an automated deployment system to bootstrap the bare-metal server with the operating system, and then have a configuration-management system install and configure OpenStack Compute. Once the Compute service has been installed and configured in the same way as the other compute nodes, it automatically attaches itself to the cloud. The cloud controller notices the new node(s) and begins scheduling instances to launch there.

If your OpenStack Block Storage nodes are separate from your compute nodes, the same procedure still applies, because the same queuing and polling system is used in both services.

We recommend that you use the same hardware for new compute and block storage nodes. At the very least, ensure that the CPUs are similar in the compute nodes so that live migration is not broken.
- -
Adding an Object Storage Node

Adding a new object storage node is different from adding compute or block storage nodes. You still want to initially configure the server by using your automated deployment and configuration-management systems. After that is done, you need to add the local disks of the object storage node into the object storage ring. The exact command to do this is the same command that was used to add the initial disks to the ring. Simply rerun this command on the object storage proxy server for all disks on the new object storage node. Once this has been done, rebalance the ring and copy the resulting ring files to the other storage nodes.

Note: If your new object storage node has a different number of disks than the original nodes have, the command to add the new node is different from the original commands. These parameters vary from environment to environment.
- -
- - - Replacing Components - - Failures of hardware are common in large-scale deployments such as - an infrastructure cloud. Consider your processes and balance time saving - against availability. For example, an Object Storage cluster can easily - live with dead disks in it for some period of time if it has sufficient - capacity. Or, if your compute installation is not full, you could - consider live migrating instances off a host with a RAM failure until - you have time to deal with the problem. -
-
- -
Databases

Almost all OpenStack components have an underlying database to store persistent information. Usually this database is MySQL. Normal MySQL administration is applicable to these databases. OpenStack does not configure the databases out of the ordinary. Basic administration includes performance tweaking, high availability, backup, recovery, and repairing. For more information, see a standard MySQL administration guide.

You can perform a couple of tricks with the database to either retrieve information more quickly or fix a data inconsistency error, for example, an instance that was terminated but whose status was not updated in the database. These tricks are discussed throughout this book.
Database Connectivity

Review the component's configuration file to see how each OpenStack component accesses its corresponding database. Look for either sql_connection or simply connection. The following command uses grep to display the SQL connection string for nova, glance, cinder, and keystone:

# grep -hE "connection ?=" /etc/nova/nova.conf /etc/glance/glance-*.conf \
/etc/cinder/cinder.conf /etc/keystone/keystone.conf
sql_connection = mysql+pymysql://nova:nova@cloud.alberta.sandbox.cybera.ca/nova
sql_connection = mysql+pymysql://glance:password@cloud.example.com/glance
sql_connection = mysql+pymysql://glance:password@cloud.example.com/glance
sql_connection = mysql+pymysql://cinder:password@cloud.example.com/cinder
connection = mysql+pymysql://keystone_admin:password@cloud.example.com/keystone

The connection strings take this format:

mysql+pymysql://<username>:<password>@<hostname>/<database name>
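A quick way to confirm that a connection string actually works is to split it apart and issue a trivial query. A sketch for nova; the parsing is deliberately simplistic and assumes the mysql+pymysql://user:password@host/db format shown above (passwords containing @ or / will break it):

#!/bin/bash
# Sketch: verify the nova database credentials from nova.conf.
conn=$(grep -hE "^(sql_)?connection ?=" /etc/nova/nova.conf | head -1 | sed 's/^[^=]*= *//')
creds=${conn#*://}                    # user:password@host/db
user=${creds%%:*}
pass=${creds#*:}; pass=${pass%%@*}
host=${creds#*@}; host=${host%%/*}
db=${creds##*/}
if mysql -h "$host" -u "$user" -p"$pass" "$db" -e "SELECT 1;" > /dev/null; then
    echo "OK: nova database reachable"
else
    echo "CRITICAL: cannot connect to the nova database"
fi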
- -
- - - Performance and Optimizing - - As your cloud grows, MySQL is utilized more and more. If you - suspect that MySQL might be becoming a bottleneck, you should start - researching MySQL optimization. The MySQL manual has an entire section - dedicated to this topic: Optimization - Overview. -
-
- -
HDWMY

Here's a quick list of various to-do items for each hour, day, week, month, and year. These tasks are neither required nor definitive, but they are helpful ideas:
- - - Hourly - - - - Check your monitoring system for alerts and act on - them. - - - - Check your ticket queue for new tickets. - - -
- -
- - - Daily - - - - Check for instances in a failed or weird state and investigate - why. - - - - Check for security patches and apply them as needed. - - -
- -
- - - Weekly - - - - Check cloud usage: - - User quotas - - - - Disk space - - - - Image usage - - - - Large instances - - - - Network usage (bandwidth and IP usage) - - - - - - Verify your alert mechanisms are still working. - - -
- -
- - - Monthly - - - - Check usage and trends over the past month. - - - - Check for user accounts that should be removed. - - - - Check for operator accounts that should be removed. - - -
- -
- - - Quarterly - - - - Review usage and trends over the past quarter. - - - - Prepare any quarterly reports on usage and statistics. - - - - Review and plan any necessary cloud additions. - - - - Review and plan any major OpenStack upgrades. - - -
- -
- - - Semiannually - - - - Upgrade OpenStack. - - - - Clean up after an OpenStack upgrade (any unused or new - services to be aware of?). - - -
-
- -
Determining Which Component Is Broken

OpenStack's components interact with each other strongly. For example, uploading an image requires interaction from nova-api, glance-api, glance-registry, keystone, and potentially swift-proxy. As a result, it is sometimes difficult to determine exactly where problems lie. Assisting with that is the purpose of this section.
Tailing Logs

The first place to look is the log file related to the command you are trying to run. For example, if nova list is failing, try tailing a nova log file and running the command again:

Terminal 1:

# tail -f /var/log/nova/nova-api.log

Terminal 2:

# nova list

Look for any errors or traces in the log file. For more information, see the logging and monitoring chapter.

If the error indicates that the problem is with another component, switch to tailing that component's log file. For example, if nova cannot access glance, look at the glance-api log:

Terminal 1:

# tail -f /var/log/glance/api.log

Terminal 2:

# nova list

Wash, rinse, and repeat until you find the core cause of the problem.
- -
Running Daemons on the CLI

Unfortunately, sometimes the error is not apparent from the log files. In this case, switch tactics and use a different command; maybe run the service directly on the command line. For example, if the glance-api service refuses to start and stay running, try launching the daemon from the command line:

# sudo -u glance -H glance-api

This might print the error and the cause of the problem. The -H flag is required when running the daemons with sudo because some daemons write files relative to the user's home directory, and this write may fail if -H is left off.

Example of Complexity

One morning, a compute node failed to run any instances. The log files were a bit vague, claiming that a certain instance was unable to be started. This ended up being a red herring because the instance was simply the first instance in alphabetical order, so it was the first instance that nova-compute would touch.

Further troubleshooting showed that libvirt was not running at all. This made more sense. If libvirt wasn't running, then no instance could be virtualized through KVM. Upon trying to start libvirt, it would silently die immediately. The libvirt logs did not explain why.

Next, the libvirtd daemon was run on the command line. Finally a helpful error message: it could not connect to d-bus. As ridiculous as it sounds, libvirt, and thus nova-compute, relies on d-bus, and somehow d-bus had crashed. Simply starting d-bus set the entire chain back on track, and soon everything was back up and running.
-
- - - -
- - - What to do when things are running slowly - - - When you are getting slow responses from various services, it can be - hard to know where to start looking. The first thing to check is the - extent of the slowness: is it specific to a single service, or varied - among different services? If your problem is isolated to a specific - service, it can temporarily be fixed by restarting the service, but that - is often only a fix for the symptom and not the actual problem. - - - - This is a collection of ideas from experienced operators on common - things to look at that may be the cause of slowness. It is not, however, - designed to be an exhaustive list. - - -
OpenStack Identity service

If OpenStack Identity is responding slowly, it could be due to the token table getting large. This can be fixed by running the keystone-manage token_flush command.

Additionally, for Identity-related issues, try the log-tailing tips described earlier in this chapter.
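Rather than flushing by hand, most deployments run the flush from cron. A sketch of an /etc/cron.d entry; the schedule and log path are assumptions:

# /etc/cron.d/keystone-token-flush (sketch): expire old tokens nightly
0 4 * * * keystone /usr/bin/keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1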
- -
- - OpenStack Image service - - OpenStack Image service can be slowed down by things related to the - Identity service, but the Image service itself can be slowed down if - connectivity to the back-end storage in use is slow or otherwise - problematic. For example, your back-end NFS server might have gone - down. - -
- -
- - OpenStack Block Storage service - - OpenStack Block Storage service is similar to the Image service, so - start by checking Identity-related services, and the back-end storage. - Additionally, both the Block Storage and Image services rely on AMQP - and SQL functionality, so consider these when debugging. - -
- -
OpenStack Compute service

Services related to OpenStack Compute are normally fairly fast and rely on a couple of back-end services: Identity for authentication and authorization, and AMQP for interoperability. Any slowness related to these services is normally related to one of the two. Also, as with all other services, SQL is used extensively.
- -
- - OpenStack Networking service - - Slowness in the OpenStack Networking service can be caused by services - that it relies upon, but it can also be related to either physical or - virtual networking. For example: network namespaces that do not exist - or are not tied to interfaces correctly; DHCP daemons that have hung - or are not running; a cable being physically disconnected; a switch - not being configured correctly. When debugging Networking service - problems, begin by verifying all physical networking functionality - (switch configuration, physical cabling, etc.). After the physical - networking is verified, check to be sure all of the Networking - services are running (neutron-server, neutron-dhcp-agent, etc.), then - check on AMQP and SQL back ends. - -
- -
- - AMQP broker - - Regardless of which AMQP broker you use, such as RabbitMQ, there are - common issues which not only slow down operations, but can also cause - real problems. Sometimes messages queued for services stay on the - queues and are not consumed. This can be due to dead or stagnant - services and can be commonly cleared up by either restarting the - AMQP-related services or the OpenStack service in question. - -
- -
- - SQL back end - - Whether you use SQLite or an RDBMS (such as MySQL), SQL - interoperability is essential to a functioning OpenStack environment. - A large or fragmented SQLite file can cause slowness when using files - as a back end. A locked or long-running query can cause delays for - most RDBMS services. In this case, do not kill the query immediately, - but look into it to see if it is a problem with something that is - hung, or something that is just taking a long time to run and needs to - finish on its own. The administration of an RDBMS is outside the scope - of this document, but it should be noted that a properly functioning - RDBMS is essential to most OpenStack services. - -
- -
- - - -
Uninstalling

While we'd always recommend using your automated deployment system to reinstall systems from scratch, sometimes you do need to remove OpenStack from a system the hard way. Here's how:

- Remove all packages.
- Remove remaining files.
- Remove databases.

These steps depend on your underlying distribution, but in general you should be looking for "purge" commands in your package manager, like aptitude purge ~c $package. Following this, you can look for orphaned files in the directories referenced throughout this guide. To uninstall the database properly, refer to the manual appropriate for the product in use.
-
diff --git a/doc/openstack-ops/ch_ops_network_troubleshooting.xml b/doc/openstack-ops/ch_ops_network_troubleshooting.xml deleted file mode 100644 index 74da5998..00000000 --- a/doc/openstack-ops/ch_ops_network_troubleshooting.xml +++ /dev/null @@ -1,1297 +0,0 @@

Network Troubleshooting

Network troubleshooting can unfortunately be a very difficult and confusing procedure. A network issue can cause a problem at several points in the cloud. Using a logical troubleshooting procedure can help mitigate the confusion and more quickly isolate where exactly the network issue is. This chapter aims to give you the information you need to identify any issues for either nova-network or OpenStack Networking (neutron) with Linux Bridge or Open vSwitch.
Using "ip a" to Check Interface States

On compute nodes and nodes running nova-network, use the following command to see information about interfaces, including information about IPs, VLANs, and whether your interfaces are up:

# ip a

If you're encountering any sort of networking difficulty, one good initial sanity check is to make sure that your interfaces are up. For example:

$ ip a | grep state
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
   qlen 1000
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
   master br100 state UP qlen 1000
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
5: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP

You can safely ignore the state of virbr0, which is a default bridge created by libvirt and not used by OpenStack.
- -
Visualizing nova-network Traffic in the Cloud

If you are logged in to an instance and ping an external host, for example, Google, the ping packet takes the route shown in the following figure.
- Traffic route for ping packet - - - - - - -
-
- The instance generates a packet and places it on the virtual
- Network Interface Card (NIC) inside the instance, such as
- eth0.
-
- The packet transfers to the virtual NIC of the compute host,
- such as vnet1. You can find out which vnet NIC is
- being used by looking at the
- /etc/libvirt/qemu/instance-xxxxxxxx.xml
- file.
-
- From the vnet NIC, the packet transfers to a bridge on the
- compute node, such as br100.
-
- If you run FlatDHCPManager, one bridge is on the compute node.
- If you run VlanManager, one bridge exists for each VLAN.
-
- To see which bridge the packet will use, run the command:
- $ brctl show
-
- Look for the vnet NIC. You can also reference
- nova.conf and look for the
- flat_interface_bridge option.
-
- The packet transfers to the main NIC of the compute node. You
- can also see this NIC in the brctl output, or you
- can find it by referencing the flat_interface
- option in nova.conf.
-
- After the packet is on this NIC, it transfers to the compute
- node's default gateway. At this point, the packet is most likely out
- of your control. The diagram depicts an external gateway.
- However, in the default configuration with multi-host, the compute
- host is the gateway.
-
- Reverse the direction to see the path of a ping reply. From this
- path, you can see that a single packet travels across four different NICs.
- If a problem occurs with any of these NICs, a network issue occurs.
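- As a convenience, you can also ask libvirt directly which vnet device
- an instance is using, rather than reading the XML file. This is a
- sketch; the instance name and the values shown are illustrative:
-
- # virsh domiflist instance-00000001
- Interface  Type    Source  Model   MAC
- -------------------------------------------------
- vnet1      bridge  br100   virtio  fa:16:3e:56:0b:6f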
- -
- Visualizing OpenStack Networking Service Traffic in the Cloud
-
- OpenStack Networking has many more degrees of
- freedom than nova-network does because of its pluggable
- back end. It can be configured with open source or vendor-proprietary
- plug-ins that control software-defined networking (SDN) hardware or
- plug-ins that use Linux native facilities on your hosts, such as Open
- vSwitch or Linux Bridge.
- troubleshooting
- OpenStack traffic
-
- The networking chapter of the OpenStack Administrator Guide
- shows a variety of networking scenarios and their connection paths. The
- purpose of this section is to give you the tools to troubleshoot the
- various components involved, however they are plumbed together in your
- environment.
-
- For this example, we will use the Open vSwitch (OVS) back end. Other
- back-end plug-ins will have very different flow paths. OVS is the most
- commonly deployed network driver, according to the October 2015 OpenStack
- User Survey, with 41 percent more sites using it than the Linux Bridge
- driver. We'll describe each step in turn, with the figure below for
- reference.
-
- The instance generates a packet and places it on the virtual NIC
- inside the instance, such as eth0.
-
- The packet transfers to a Test Access Point (TAP) device on the
- compute host, such as tap690466bc-92. You can find out which TAP device
- is being used by looking at the
- /etc/libvirt/qemu/instance-xxxxxxxx.xml
- file.
-
- The TAP device name is constructed using the first 11 characters
- of the port ID (10 hex digits plus an included '-'), so another means
- of finding the device name is to use the neutron
- command. This returns a pipe-delimited list, the first item of which
- is the port ID. For example, to get the port ID associated with IP
- address 10.0.0.10, do this:
-
- # neutron port-list | grep 10.0.0.10 | cut -d \| -f 2
- ff387e54-9e54-442b-94a3-aa4481764f1d
-
- Taking the first 11 characters, we can construct a device name
- of tapff387e54-9e from this output.
-
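- Putting the pieces together, a small one-liner can construct the
- probable TAP device name straight from the port list (reusing the IP
- address from the example above; a convenience sketch, not an official
- tool):
-
- # neutron port-list | awk '/10.0.0.10/ {print "tap" substr($2, 1, 11)}'
- tapff387e54-9e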
- Neutron network paths - - - - - - -
- The TAP device is connected to the integration bridge,
- br-int. This bridge connects all the instance TAP devices
- and any other bridges on the system. In this example, we have
- int-br-eth1 and patch-tun.
- int-br-eth1 is one half of a veth pair connecting to the
- bridge br-eth1, which handles VLAN networks trunked over
- the physical Ethernet device eth1. patch-tun
- is an Open vSwitch internal port that connects to the
- br-tun bridge for GRE networks.
-
- The TAP devices and veth devices are normal Linux network
- devices and may be inspected with the usual tools, such as
- ip and tcpdump. Open vSwitch
- internal devices, such as patch-tun, are only visible
- within the Open vSwitch environment. If you try to run
- tcpdump -i patch-tun, it will raise an error,
- saying that the device does not exist.
-
- It is possible to watch packets on internal interfaces, but it
- does take a little bit of networking gymnastics. First you need to
- create a dummy network device that normal Linux tools can see. Then
- you need to add it to the bridge containing the internal interface you
- want to snoop on. Finally, you need to tell Open vSwitch to mirror all
- traffic to or from the internal port onto this dummy port. After all
- this, you can then run tcpdump on the dummy
- interface and see the traffic on the internal port.
-
- To capture packets from the patch-tun internal
- interface on the integration bridge, br-int:
-
- Create and bring up a dummy interface,
- snooper0:
-
- # ip link add name snooper0 type dummy
- # ip link set dev snooper0 up
-
- Add device snooper0 to bridge
- br-int:
-
- # ovs-vsctl add-port br-int snooper0
-
- Create a mirror of patch-tun to
- snooper0 (this returns the UUID of the mirror port):
-
- # ovs-vsctl -- set Bridge br-int mirrors=@m -- --id=@snooper0 \
-get Port snooper0 -- --id=@patch-tun get Port patch-tun \
--- --id=@m create Mirror name=mymirror select-dst-port=@patch-tun \
-select-src-port=@patch-tun output-port=@snooper0 select_all=1
-
- Profit. You can now see traffic on patch-tun by
- running tcpdump -i snooper0.
-
- Clean up by clearing all mirrors on br-int and
- deleting the dummy interface:
-
- # ovs-vsctl clear Bridge br-int mirrors
- # ovs-vsctl del-port br-int snooper0
- # ip link delete dev snooper0
-
- On the integration bridge, networks are distinguished using
- internal VLANs regardless of how the networking service defines them.
- This allows instances on the same host to communicate directly without
- transiting the rest of the virtual, or physical, network. These
- internal VLAN IDs are based on the order in which they are created on
- the node and may vary between nodes. These IDs are in no way related
- to the segmentation IDs used in the network definition and on the
- physical wire.
-
- VLAN tags are translated between the external tag defined in the
- network settings and internal tags in several places. On the
- br-int, incoming packets from the
- int-br-eth1 are translated from external tags to internal
- tags. Other translations also happen on the other bridges and will be
- discussed in those sections.
-
- To discover which internal VLAN tag is in use for a given
- external VLAN by using the ovs-ofctl
- command:
-
- Find the external VLAN tag of the network you're interested
- in. 
This is the provider:segmentation_id as returned - by the networking service: - - # neutron net-show --fields provider:segmentation_id <network name> -+---------------------------+--------------------------------------+ -| Field | Value | -+---------------------------+--------------------------------------+ -| provider:network_type | vlan | -| provider:segmentation_id | 2113 | -+---------------------------+--------------------------------------+ - - - - - Grep for the provider:segmentation_id, 2113 in - this case, in the output of ovs-ofctl dump-flows - br-int: - - # ovs-ofctl dump-flows br-int|grep vlan=2113 -cookie=0x0, duration=173615.481s, table=0, n_packets=7676140, -n_bytes=444818637, idle_age=0, hard_age=65534, priority=3, -in_port=1,dl_vlan=2113 actions=mod_vlan_vid:7,NORMAL - - - Here you can see packets received on port ID 1 with the VLAN - tag 2113 are modified to have the internal VLAN tag 7. Digging a - little deeper, you can confirm that port 1 is in fact - int-br-eth1: - - # ovs-ofctl show br-int -OFPT_FEATURES_REPLY (xid=0x2): dpid:000022bc45e1914b -n_tables:254, n_buffers:256 -capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS -ARP_MATCH_IP -actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC -SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC -SET_TP_DST ENQUEUE - 1(int-br-eth1): addr:c2:72:74:7f:86:08 - config: 0 - state: 0 - current: 10GB-FD COPPER - speed: 10000 Mbps now, 0 Mbps max - 2(patch-tun): addr:fa:24:73:75:ad:cd - config: 0 - state: 0 - speed: 0 Mbps now, 0 Mbps max - 3(tap9be586e6-79): addr:fe:16:3e:e6:98:56 - config: 0 - state: 0 - current: 10MB-FD COPPER - speed: 10 Mbps now, 0 Mbps max - LOCAL(br-int): addr:22:bc:45:e1:91:4b - config: 0 - state: 0 - speed: 0 Mbps now, 0 Mbps max -OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0 - - - - - - - The next step depends on whether the virtual network is - configured to use 802.1q VLAN tags or GRE: - - - - VLAN-based networks exit the integration bridge via veth - interface int-br-eth1 and arrive on the bridge - br-eth1 on the other member of the veth pair - phy-br-eth1. Packets on this interface arrive with - internal VLAN tags and are translated to external tags in the - reverse of the process described above: - - # ovs-ofctl dump-flows br-eth1|grep 2113 -cookie=0x0, duration=184168.225s, table=0, n_packets=0, n_bytes=0, -idle_age=65534, hard_age=65534, priority=4,in_port=1,dl_vlan=7 -actions=mod_vlan_vid:2113,NORMAL - - Packets, now tagged with the external VLAN tag, then exit - onto the physical network via eth1. The Layer2 switch - this interface is connected to must be configured to accept - traffic with the VLAN ID used. The next hop for this packet must - also be on the same layer-2 network. - - - - GRE-based networks are passed with patch-tun to - the tunnel bridge br-tun on interface - patch-int. This bridge also contains one port for - each GRE tunnel peer, so one for each compute node and network - node in your network. The ports are named sequentially from - gre-1 onward. - - Matching gre-<n> interfaces to tunnel - endpoints is possible by looking at the Open vSwitch state: - - # ovs-vsctl show |grep -A 3 -e Port\ \"gre- - Port "gre-1" - Interface "gre-1" - type: gre - options: {in_key=flow, local_ip="10.10.128.21", - out_key=flow, remote_ip="10.10.128.16"} - - - In this case, gre-1 is a tunnel from IP - 10.10.128.21, which should match a local interface on this node, - to IP 10.10.128.16 on the remote side. 
- - These tunnels use the regular routing tables on the host to - route the resulting GRE packet, so there is no requirement that - GRE endpoints are all on the same layer-2 network, unlike VLAN - encapsulation. - - All interfaces on the br-tun are internal to - Open vSwitch. To monitor traffic on them, you need to set up a - mirror port as described above for patch-tun in the - br-int bridge. - - All translation of GRE tunnels to and from internal VLANs - happens on this bridge. - - - - - To discover which internal VLAN tag is in use for a GRE - tunnel by using the <literal>ovs-ofctl</literal> command: - - - Find the provider:segmentation_id of the - network you're interested in. This is the same field used for the - VLAN ID in VLAN-based networks: - - # neutron net-show --fields provider:segmentation_id <network name> -+--------------------------+-------+ -| Field | Value | -+--------------------------+-------+ -| provider:network_type | gre | -| provider:segmentation_id | 3 | -+--------------------------+-------+ - - - - - Grep for 0x<provider:segmentation_id>, - 0x3 in this case, in the output of ovs-ofctl dump-flows - br-tun: - - # ovs-ofctl dump-flows br-tun|grep 0x3 -cookie=0x0, duration=380575.724s, table=2, n_packets=1800, -n_bytes=286104, priority=1,tun_id=0x3 -actions=mod_vlan_vid:1,resubmit(,10) - cookie=0x0, duration=715.529s, table=20, n_packets=5, -n_bytes=830, hard_timeout=300,priority=1, -vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:a6:48:24 -actions=load:0->NXM_OF_VLAN_TCI[], -load:0x3->NXM_NX_TUN_ID[],output:53 - cookie=0x0, duration=193729.242s, table=21, n_packets=58761, -n_bytes=2618498, dl_vlan=1 actions=strip_vlan,set_tunnel:0x3, -output:4,output:58,output:56,output:11,output:12,output:47, -output:13,output:48,output:49,output:44,output:43,output:45, -output:46,output:30,output:31,output:29,output:28,output:26, -output:27,output:24,output:25,output:32,output:19,output:21, -output:59,output:60,output:57,output:6,output:5,output:20, -output:18,output:17,output:16,output:15,output:14,output:7, -output:9,output:8,output:53,output:10,output:3,output:2, -output:38,output:37,output:39,output:40,output:34,output:23, -output:36,output:35,output:22,output:42,output:41,output:54, -output:52,output:51,output:50,output:55,output:33 - - - Here, you see three flows related to this GRE tunnel. The - first is the translation from inbound packets with this tunnel ID - to internal VLAN ID 1. The second shows a unicast flow to output - port 53 for packets destined for MAC address fa:16:3e:a6:48:24. - The third shows the translation from the internal VLAN - representation to the GRE tunnel ID flooded to all output ports. - For further details of the flow descriptions, see the man page for - ovs-ofctl. As in the previous VLAN example, - numeric port IDs can be matched with their named representations - by examining the output of ovs-ofctl show - br-tun. - - - - - - The packet is then received on the network node. Note that any - traffic to the l3-agent or dhcp-agent will be visible only within - their network namespace. Watching any interfaces outside those - namespaces, even those that carry the network traffic, will only show - broadcast packets like Address Resolution Protocols (ARPs), but - unicast traffic to the router or DHCP address will not be seen. See - Dealing with Network Namespaces for detail on how to run - commands within these namespaces. 
- - Alternatively, it is possible to configure VLAN-based networks to - use external routers rather than the l3-agent shown here, so long as - the external router is on the same VLAN: - - - - VLAN-based networks are received as tagged packets on a - physical network interface, eth1 in this example. - Just as on the compute node, this interface is a member of the - br-eth1 bridge. - - - - GRE-based networks will be passed to the tunnel bridge - br-tun, which behaves just like the GRE interfaces - on the compute node. - - - - - - Next, the packets from either input go through the integration - bridge, again just as on the compute node. - - - - The packet then makes it to the l3-agent. This is actually - another TAP device within the router's network namespace. Router - namespaces are named in the form - qrouter-<router-uuid>. Running ip - a within the namespace will show the TAP device name, - qr-e6256f7d-31 in this example: - - # ip netns exec qrouter-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5 ip a|grep state -10: qr-e6256f7d-31: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue - state UNKNOWN -11: qg-35916e1f-36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 - qdisc pfifo_fast state UNKNOWN qlen 500 -28: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN - - - - - The qg-<n> interface in the l3-agent router - namespace sends the packet on to its next hop through device - eth2 on the external bridge br-ex. This - bridge is constructed similarly to br-eth1 and may be - inspected in the same way. - - - - This external bridge also includes a physical network interface, - eth2 in this example, which finally lands the packet on - the external network destined for an external router or - destination. - - - - DHCP agents running on OpenStack networks run in namespaces - similar to the l3-agents. DHCP namespaces are named - qdhcp-<uuid> and have a TAP device on the - integration bridge. Debugging of DHCP issues usually involves working - inside this network namespace. - - -
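- For example, to watch ICMP traffic arriving at the router's external
- gateway port shown above, run tcpdump inside the router namespace (the
- namespace UUID and interface name are taken from the example output;
- yours will differ):
-
- # ip netns exec qrouter-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5 \
-   tcpdump -i qg-35916e1f-36 -n icmp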
- -
- Finding a Failure in the Path - - Use ping to quickly find where a failure exists in the network path. - In an instance, first see whether you can ping an external host, such as - google.com. If you can, then there shouldn't be a network problem at - all. - - If you can't, try pinging the IP address of the compute node where - the instance is hosted. If you can ping this IP, then the problem is - somewhere between the compute node and that compute node's gateway. - - If you can't ping the IP address of the compute node, the problem is - between the instance and the compute node. This includes the bridge - connecting the compute node's main NIC with the vnet NIC of the - instance. - - One last test is to launch a second instance and see whether the two - instances can ping each other. If they can, the issue might be related to - the firewall on the compute node. - path failures - - troubleshooting - - detecting path failures - -
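- In practice, the bisection looks something like this from inside the
- instance (the compute node address is illustrative and matches the
- tcpdump example in the next section):
-
- $ ping -c 2 google.com
- $ ping -c 2 10.0.0.42
-
- If the first ping fails but the second succeeds, concentrate on the
- segment between the compute node and its gateway; if both fail, start
- with the bridge and the instance's virtual NIC.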
- -
- tcpdump
-
- One great, although very in-depth, way of troubleshooting network
- issues is to use tcpdump. We recommend using
- tcpdump at several points along the network path to
- correlate where a problem might be. If you prefer working with a GUI,
- either live or by using a tcpdump capture, do also
- check out Wireshark.
- tcpdump
-
- For example, run the following command:
-
- tcpdump -i any -n -v \
- 'icmp[icmptype] = icmp-echoreply or icmp[icmptype] = icmp-echo'
-
- Run this command in each of the following locations:
-
- An external server outside of the cloud
-
- A compute node
-
- An instance running on that compute node
-
- In this example, these locations have the following IP
- addresses:
-
- Instance
-   10.0.2.24
-   203.0.113.30
- Compute Node
-   10.0.0.42
-   203.0.113.34
- External Server
-   1.2.3.4
-
- Next, open a new shell to the instance and then ping the external
- host where tcpdump is running. If the network path to
- the external server and back is fully functional, you see something like
- the following:
-
- On the external server:
-
- 12:51:42.020227 IP (tos 0x0, ttl 61, id 0, offset 0, flags [DF],
-proto ICMP (1), length 84)
- 203.0.113.30 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64
-12:51:42.020255 IP (tos 0x0, ttl 64, id 8137, offset 0, flags [none],
-proto ICMP (1), length 84)
- 1.2.3.4 > 203.0.113.30: ICMP echo reply, id 24895, seq 1,
- length 64
-
- On the compute node:
-
- 12:51:42.019519 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF],
-proto ICMP (1), length 84)
- 10.0.2.24 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64
-12:51:42.019519 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF],
-proto ICMP (1), length 84)
- 10.0.2.24 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64
-12:51:42.019545 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF],
-proto ICMP (1), length 84)
- 203.0.113.30 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64
-12:51:42.019780 IP (tos 0x0, ttl 62, id 8137, offset 0, flags [none],
-proto ICMP (1), length 84)
- 1.2.3.4 > 203.0.113.30: ICMP echo reply, id 24895, seq 1, length 64
-12:51:42.019801 IP (tos 0x0, ttl 61, id 8137, offset 0, flags [none],
-proto ICMP (1), length 84)
- 1.2.3.4 > 10.0.2.24: ICMP echo reply, id 24895, seq 1, length 64
-12:51:42.019807 IP (tos 0x0, ttl 61, id 8137, offset 0, flags [none],
-proto ICMP (1), length 84)
- 1.2.3.4 > 10.0.2.24: ICMP echo reply, id 24895, seq 1, length 64
-
- On the instance:
-
- 12:51:42.020974 IP (tos 0x0, ttl 61, id 8137, offset 0, flags [none],
-proto ICMP (1), length 84)
- 1.2.3.4 > 10.0.2.24: ICMP echo reply, id 24895, seq 1, length 64
-
- Here, the external server received the ping request and sent a ping
- reply. On the compute node, you can see that both the ping and ping reply
- successfully passed through. You might also see duplicate packets on the
- compute node, as seen above, because tcpdump captured
- the packet on both the bridge and outgoing interface.
- -
- iptables
-
- Through nova-network
- or neutron, OpenStack Compute automatically manages
- iptables, including forwarding packets to and from instances on a compute
- node, forwarding floating IP traffic, and managing security group rules.
- In addition to managing the rules, OpenStack inserts comments (if
- supported) to indicate the purpose of each rule.
- iptables
- troubleshooting
- iptables
-
- The following comments are added to the rule set as
- appropriate:
-
- Perform source NAT on outgoing traffic.
-
- Default drop rule for unmatched traffic.
-
- Direct traffic from the VM interface to the security group
- chain.
-
- Jump to the VM-specific chain.
-
- Direct incoming traffic from VM to the security group
- chain.
-
- Allow traffic from defined IP/MAC pairs.
-
- Drop traffic without an IP/MAC allow rule.
-
- Allow DHCP client traffic.
-
- Prevent DHCP spoofing by VM.
-
- Send unmatched traffic to the fallback chain.
-
- Drop packets that are not associated with a state.
-
- Direct packets associated with a known session to the
- RETURN chain.
-
- Allow IPv6 ICMP traffic to allow RA packets.
-
- Run the following command to view the current iptables
- configuration:
-
- # iptables-save
-
- If you modify the configuration, it reverts the next time you
- restart nova-network or
- neutron-server. You must use OpenStack to
- manage iptables.
-
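- For example, to see every rule that references a particular instance's
- fixed IP address (the address is illustrative, borrowed from the
- tcpdump example earlier), filter the saved rule set:
-
- # iptables-save | grep 10.0.2.24
-
- This is often the quickest way to confirm whether the NAT and security
- group rules for an instance were actually created.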
- -
- Network Configuration in the Database for nova-network
-
- With nova-network, the nova database contains a few
- tables with networking information:
- databases
- nova-network troubleshooting
- troubleshooting
- nova-network database
-
- fixed_ips
- Contains each possible IP address for the subnet(s) added to
- Compute. This table is related to the instances
- table by way of the fixed_ips.instance_uuid
- column.
-
- floating_ips
- Contains each floating IP address that was added to Compute.
- This table is related to the fixed_ips table by
- way of the floating_ips.fixed_ip_id
- column.
-
- instances
- Not entirely network specific, but it contains information
- about the instance that is utilizing the fixed_ip
- and optional floating_ip.
-
- From these tables, you can see that a floating IP is technically
- never directly related to an instance; it must always go through a fixed
- IP.
-
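- To see the whole chain from floating IP to instance in one query, the
- tables can be joined on the columns described above (a sketch; exact
- column names can vary between releases):
-
- mysql> select f.address, fi.address, i.hostname
-     -> from floating_ips f
-     -> join fixed_ips fi on f.fixed_ip_id = fi.id
-     -> join instances i on fi.instance_uuid = i.uuid;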
- Manually Disassociating a Floating IP
-
- Sometimes an instance is terminated but the floating IP was not
- correctly disassociated from that instance. Because the database is in
- an inconsistent state, the usual tools to disassociate the IP no longer
- work. To fix this, you must manually update the database.
- IP addresses
- floating
- floating IP address
-
- First, find the UUID of the instance in question:
-
- mysql> select uuid from instances where hostname = 'hostname';
-
- Next, find the fixed IP entry for that UUID:
-
- mysql> select * from fixed_ips where instance_uuid = '<uuid>';
-
- You can now get the related floating IP entry:
-
- mysql> select * from floating_ips where fixed_ip_id = '<fixed_ip_id>';
-
- And finally, you can disassociate the floating IP:
-
- mysql> update floating_ips set fixed_ip_id = NULL, host = NULL where
- fixed_ip_id = '<fixed_ip_id>';
-
- You can optionally also deallocate the IP from the user's
- pool. Note that the previous update set fixed_ip_id
- to NULL, so at this point you must match the row by another column,
- such as its address:
-
- mysql> update floating_ips set project_id = NULL where
- address = '<floating_ip_address>';
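- After the updates, it is worth confirming the result before moving on
- (again matching on the address column):
-
- mysql> select * from floating_ips where address = '<floating_ip_address>';
-
- The fixed_ip_id, host, and project_id columns of the row should now all
- be NULL.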
-
- -
- Debugging DHCP Issues with nova-network - - One common networking problem is that an instance boots successfully - but is not reachable because it failed to obtain an IP address from - dnsmasq, which is the DHCP server that is launched by the - nova-network service. - DHCP (Dynamic Host Configuration Protocol) - - debugging - - troubleshooting - - nova-network DHCP - - - The simplest way to identify that this is the problem with your - instance is to look at the console output of your instance. If DHCP - failed, you can retrieve the console log by doing: - - $ nova console-log <instance name or uuid> - - If your instance failed to obtain an IP through DHCP, some messages - should appear in the console. For example, for the Cirros image, you see - output that looks like the following: - - udhcpc (v1.17.2) started -Sending discover... -Sending discover... -Sending discover... -No lease, forking to background -starting DHCP forEthernet interface eth0 [ [1;32mOK[0;39m ] -cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id -wget: can't connect to remote host (169.254.169.254): Network is -unreachable - - After you establish that the instance booted properly, the task is - to figure out where the failure is. - - A DHCP problem might be caused by a misbehaving dnsmasq process. - First, debug by checking logs and then restart the dnsmasq processes only - for that project (tenant). In VLAN mode, there is a dnsmasq process for - each tenant. Once you have restarted targeted dnsmasq processes, the - simplest way to rule out dnsmasq causes is to kill all of the dnsmasq - processes on the machine and restart nova-network. As a - last resort, do this as root: - - # killall dnsmasq -# restart nova-network - - - Use openstack-nova-network on - RHEL/CentOS/Fedora but nova-network on - Ubuntu/Debian. - - - Several minutes after nova-network is restarted, - you should see new dnsmasq processes running: - - - - # ps aux | grep dnsmasq - - nobody 3735 0.0 0.0 27540 1044 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order - --bind-interfaces --conf-file= - --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid - --listen-address=192.168.100.1 --except-interface=lo - --dhcp-range=set:'novanetwork',192.168.100.2,static,120s - --dhcp-lease-max=256 - --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf - --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro -root 3736 0.0 0.0 27512 444 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order - --bind-interfaces --conf-file= - --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid - --listen-address=192.168.100.1 --except-interface=lo - --dhcp-range=set:'novanetwork',192.168.100.2,static,120s - --dhcp-lease-max=256 - --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf - --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro - - If your instances are still not able to obtain IP addresses, the - next thing to check is whether dnsmasq is seeing the DHCP requests from - the instance. On the machine that is running the dnsmasq process, which is - the compute host if running in multi-host mode, look at - /var/log/syslog to see the dnsmasq output. 
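- A quick way to follow just the dnsmasq lines while you reproduce the
- problem (assuming the Ubuntu default syslog location used above):
-
- # tail -f /var/log/syslog | grep dnsmasq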
If dnsmasq - is seeing the request properly and handing out an IP, the output looks - like this: - - Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPDISCOVER(br100) fa:16:3e:56:0b:6f -Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPOFFER(br100) 192.168.100.3 - fa:16:3e:56:0b:6f -Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPREQUEST(br100) 192.168.100.3 - fa:16:3e:56:0b:6f -Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPACK(br100) 192.168.100.3 -fa:16:3e:56:0b:6f test - - If you do not see the DHCPDISCOVER, a problem - exists with the packet getting from the instance to the machine running - dnsmasq. If you see all of the preceding output and your instances are - still not able to obtain IP addresses, then the packet is able to get from - the instance to the host running dnsmasq, but it is not able to make the - return trip. - - You might also see a message such as this: - - Feb 27 22:01:36 mynode dnsmasq-dhcp[25435]: DHCPDISCOVER(br100) - fa:16:3e:78:44:84 no address available - - This may be a dnsmasq and/or nova-network related - issue. (For the preceding example, the problem happened to be that dnsmasq - did not have any more IP addresses to give away because there were no more - fixed IPs available in the OpenStack Compute database.) - - If there's a suspicious-looking dnsmasq log message, take a look at - the command-line arguments to the dnsmasq processes to see if they look - correct: - - $ ps aux | grep dnsmasq - - The output looks something like the following: - - 108 1695 0.0 0.0 25972 1000 ? S Feb26 0:00 /usr/sbin/dnsmasq --u libvirt-dnsmasq ---strict-order --bind-interfaces - --pid-file=/var/run/libvirt/network/default.pid --conf-file= - --except-interface lo --listen-address 192.168.122.1 - --dhcp-range 192.168.122.2,192.168.122.254 - --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases - --dhcp-lease-max=253 --dhcp-no-override -nobody 2438 0.0 0.0 27540 1096 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order ---bind-interfaces --conf-file= - --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid - --listen-address=192.168.100.1 - --except-interface=lo - --dhcp-range=set:'novanetwork',192.168.100.2,static,120s - --dhcp-lease-max=256 - --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf - --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro - root 2439 0.0 0.0 27512 472 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order ---bind-interfaces --conf-file= - --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid - --listen-address=192.168.100.1 - --except-interface=lo - --dhcp-range=set:'novanetwork',192.168.100.2,static,120s - --dhcp-lease-max=256 - --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf - --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro - - The output shows three different dnsmasq processes. The dnsmasq - process that has the DHCP subnet range of 192.168.122.0 belongs to libvirt - and can be ignored. The other two dnsmasq processes belong to - nova-network. The two processes are actually - related—one is simply the parent process of the other. The arguments of - the dnsmasq processes should correspond to the details you configured - nova-network with. - - If the problem does not seem to be related to dnsmasq itself, at - this point use tcpdump on the interfaces to determine where - the packets are getting lost. - - DHCP traffic uses UDP. The client sends from port 68 to port 67 on - the server. Try to boot a new instance and then systematically listen on - the NICs until you identify the one that isn't seeing the traffic. 
To use
- tcpdump to listen to ports 67 and 68 on br100, you would
- do:
-
- # tcpdump -i br100 -n port 67 or port 68
-
- You should also run sanity checks on the interfaces, using commands
- such as ip a and brctl show, to ensure that the
- interfaces are actually up and configured the way you think they
- are.
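- For example, to confirm that the bridge is up and that the expected
- interfaces are attached to it:
-
- # ip a show br100
- # brctl show br100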
- -
- Debugging DNS Issues - - If you are able to use SSH to log into an instance, but it takes a - very long time (on the order of a minute) to get a prompt, then you might - have a DNS issue. The reason a DNS issue can cause this problem is that - the SSH server does a reverse DNS lookup on the IP address that you are - connecting from. If DNS lookup isn't working on your instances, then you - must wait for the DNS reverse lookup timeout to occur for the SSH login - process to complete. - DNS (Domain Name Server, Service or System) - - debugging - - troubleshooting - - DNS issues - - - When debugging DNS issues, start by making sure that the host where - the dnsmasq process for that instance runs is able to correctly resolve. - If the host cannot resolve, then the instances won't be able to - either. - - A quick way to check whether DNS is working is to resolve a hostname - inside your instance by using the host command. If DNS is - working, you should see: - - $ host openstack.org -openstack.org has address 174.143.194.225 -openstack.org mail is handled by 10 mx1.emailsrvr.com. -openstack.org mail is handled by 20 mx2.emailsrvr.com. - - If you're running the Cirros image, it doesn't have the "host" - program installed, in which case you can use ping to try to access a - machine by hostname to see whether it resolves. If DNS is working, the - first line of ping would be: - - $ ping openstack.org -PING openstack.org (174.143.194.225): 56 data bytes - - If the instance fails to resolve the hostname, you have a DNS - problem. For example: - - $ ping openstack.org -ping: bad address 'openstack.org' - - In an OpenStack cloud, the dnsmasq process acts as the DNS server - for the instances in addition to acting as the DHCP server. A misbehaving - dnsmasq process may be the source of DNS-related issues inside the - instance. As mentioned in the previous section, the simplest way to rule - out a misbehaving dnsmasq process is to kill all the dnsmasq processes on - the machine and restart nova-network. However, be aware - that this command affects everyone running instances on this node, - including tenants that have not seen the issue. As a last resort, as - root: - - # killall dnsmasq -# restart nova-network - - After the dnsmasq processes start again, check whether DNS is - working. - - If restarting the dnsmasq process doesn't fix the issue, you might - need to use tcpdump to look at the packets to trace where the - failure is. The DNS server listens on UDP port 53. You should see the DNS - request on the bridge (such as, br100) of your compute node. Let's say you - start listening with tcpdump on the compute node: - - # tcpdump -i br100 -n -v udp port 53 -tcpdump: listening on br100, link-type EN10MB (Ethernet), capture size 65535 -bytes - - Then, if you use SSH to log into your instance and try ping - openstack.org, you should see something like: - - 16:36:18.807518 IP (tos 0x0, ttl 64, id 56057, offset 0, flags [DF], -proto UDP (17), length 59) - 192.168.100.4.54244 > 192.168.100.1.53: 2+ A? openstack.org. (31) -16:36:18.808285 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], -proto UDP (17), length 75) - 192.168.100.1.53 > 192.168.100.4.54244: 2 1/0/0 openstack.org. A - 174.143.194.225 (47) -
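- You can also query the dnsmasq server directly from inside the
- instance, which separates resolver misconfiguration from problems with
- the upstream DNS servers (192.168.100.1 is the dnsmasq listen address
- from the earlier examples):
-
- $ nslookup openstack.org 192.168.100.1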
- -
- Troubleshooting Open vSwitch
-
- Open vSwitch, as used in the previous OpenStack Networking
- examples, is a full-featured multilayer virtual switch licensed under the
- open source Apache 2.0 license. Full documentation can be found at the project's website. In
- practice, given the preceding configuration, the most common issues are
- being sure that the required bridges (br-int,
- br-tun, and br-ex) exist and have the proper
- ports connected to them.
- Open vSwitch
- troubleshooting
- troubleshooting
- Open vSwitch
-
- The Open vSwitch driver should and usually does manage this
- automatically, but it is useful to know how to do this by hand with the
- ovs-vsctl command. This command has many more
- subcommands than we will use here; see the man page or use
- ovs-vsctl --help for the full listing.
-
- To list the bridges on a system, use ovs-vsctl
- list-br. This example shows a compute node that has an internal
- bridge and a tunnel bridge. VLAN networks are trunked through the
- eth1 network interface:
-
- # ovs-vsctl list-br
-br-int
-br-tun
-eth1-br
-
- Working from the physical interface inwards, we can see the chain of
- ports and bridges. First, the bridge eth1-br, which contains
- the physical network interface eth1 and the virtual
- interface phy-eth1-br:
-
- # ovs-vsctl list-ports eth1-br
-eth1
-phy-eth1-br
-
- Next, the internal bridge, br-int, contains
- int-eth1-br, which pairs with phy-eth1-br to
- connect to the physical network shown in the previous bridge;
- patch-tun, which is used to connect to the GRE tunnel bridge;
- and the TAP devices that connect to the instances currently running on the
- system:
-
- # ovs-vsctl list-ports br-int
-int-eth1-br
-patch-tun
-tap2d782834-d1
-tap690466bc-92
-tap8a864970-2d
-
- The tunnel bridge, br-tun, contains the
- patch-int interface and gre-<N> interfaces
- for each peer it connects to via GRE, one for each compute and network
- node in your cluster:
-
- # ovs-vsctl list-ports br-tun
-patch-int
-gre-1
-.
-.
-.
-gre-<N>
-
- If any of these links is missing or incorrect, it suggests a
- configuration error. Bridges can be added with ovs-vsctl
- add-br, and ports can be added to bridges with
- ovs-vsctl add-port. While running these by hand can be
- useful for debugging, it is imperative that manual changes that you intend
- to keep be reflected back into your configuration files.
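- As a hedged example of such a manual repair, re-creating a missing
- integration bridge and its patch port pair might look like the
- following. These are standard ovs-vsctl invocations, but the correct
- bridge, port, and peer names for your site come from your plug-in
- configuration:
-
- # ovs-vsctl add-br br-int
- # ovs-vsctl add-port br-int patch-tun \
-   -- set Interface patch-tun type=patch options:peer=patch-int
- # ovs-vsctl add-port br-tun patch-int \
-   -- set Interface patch-int type=patch options:peer=patch-tun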
- -
- Dealing with Network Namespaces
-
- Linux network namespaces are a kernel feature the networking service
- uses to support multiple isolated layer-2 networks with overlapping IP
- address ranges. The support may be disabled, but it is on by default. If
- it is enabled in your environment, your network nodes will run their
- dhcp-agents and l3-agents in isolated namespaces. Network interfaces and
- traffic on those interfaces will not be visible in the default
- namespace.
- network namespaces, troubleshooting
- namespaces, troubleshooting
- troubleshooting
- network namespaces
-
- To see whether you are using namespaces, run ip
- netns:
-
- # ip netns
-qdhcp-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5
-qdhcp-a4d00c60-f005-400e-a24c-1bf8b8308f98
-qdhcp-fe178706-9942-4600-9224-b2ae7c61db71
-qdhcp-0a1d0a27-cffa-4de3-92c5-9d3fd3f2e74d
-qrouter-8a4ce760-ab55-4f2f-8ec5-a2e858ce0d39
-
- L3-agent router namespaces are named
- qrouter-<router_uuid>,
- and dhcp-agent namespaces are named
- qdhcp-<net_uuid>.
- This output shows a network node with four networks running dhcp-agents,
- one of which is also running an l3-agent router. It's important to know
- which network you need to be working in. A list of existing networks and
- their UUIDs can be obtained by running neutron
- net-list with administrative credentials.
-
- Once you've determined which namespace you need to work in, you can
- use any of the debugging tools mentioned earlier by prefixing the command
- with ip netns exec <namespace>. For example, to
- see what network interfaces exist in the first qdhcp namespace returned
- above, do this:
-
- # ip netns exec qdhcp-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5 ip a
-10: tape6256f7d-31: <BROADCAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
- link/ether fa:16:3e:aa:f7:a1 brd ff:ff:ff:ff:ff:ff
- inet 10.0.1.100/24 brd 10.0.1.255 scope global tape6256f7d-31
- inet 169.254.169.254/16 brd 169.254.255.255 scope global tape6256f7d-31
- inet6 fe80::f816:3eff:feaa:f7a1/64 scope link
- valid_lft forever preferred_lft forever
-28: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
- link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
- inet 127.0.0.1/8 scope host lo
- inet6 ::1/128 scope host
- valid_lft forever preferred_lft forever
-
- From this you see that the DHCP server on that network is using the
- tape6256f7d-31 device and has an IP address of 10.0.1.100. Seeing the
- address 169.254.169.254, you can also see that the dhcp-agent is running a
- metadata-proxy service. Any of the commands mentioned previously in this
- chapter can be run in the same way. It is also possible to run a shell,
- such as bash, and have an interactive session within
- the namespace. In the latter case, exiting the shell returns you to the
- top-level default namespace.
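- For example, to get an interactive shell inside the router namespace
- listed above, so that subsequent commands run there without the
- ip netns exec prefix:
-
- # ip netns exec qrouter-8a4ce760-ab55-4f2f-8ec5-a2e858ce0d39 bash
-
- Type exit (or press Ctrl-D) to return to the top-level default
- namespace.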
- -
- Summary - - The authors have spent too much time looking at packet dumps in - order to distill this information for you. We trust that, following the - methods outlined in this chapter, you will have an easier time! Aside from - working with the tools and steps above, don't forget that sometimes an - extra pair of eyes goes a long way to assist. -
-
diff --git a/doc/openstack-ops/ch_ops_projects_users.xml b/doc/openstack-ops/ch_ops_projects_users.xml deleted file mode 100644 index 9e54086b..00000000 --- a/doc/openstack-ops/ch_ops_projects_users.xml +++ /dev/null @@ -1,1106 +0,0 @@ - - - - - Managing Projects and Users - - An OpenStack cloud does not have much value without users. This - chapter covers topics that relate to managing users, projects, and quotas. - This chapter describes users and projects as described by version 2 of the - OpenStack Identity API. - - - While version 3 of the Identity API is available, the client tools - do not yet implement those calls, and most OpenStack clouds are still - implementing Identity API v2.0. - Identity - - Identity service API - - - -
- Projects or Tenants? - - In OpenStack user interfaces and documentation, a group of users is - referred to as a project or - tenant. These terms are interchangeable. - user management - - terminology for - - tenant - - definition of - - projects - - definition of - - - The initial implementation of OpenStack Compute - had its own authentication system and used the term - project. When authentication moved into the OpenStack - Identity (keystone) project, it used the term - tenant to refer to a group of users. Because of this - legacy, some of the OpenStack tools refer to projects and some refer to - tenants. - - - This guide uses the term project, unless an - example shows interaction with a tool that uses the term - tenant. - -
- -
- Managing Projects - - Users must be associated with at least one project, though they may - belong to many. Therefore, you should add at least one project before - adding users. - user management - - adding projects - - -
- Adding Projects
-
- To create a project through the OpenStack dashboard:
-
- Log in as an administrative user.
-
- Select the Identity tab in the left
- navigation bar.
-
- Under the Identity tab, click
- Projects.
-
- Click the Create Project button.
-
- You are prompted for a project name and an optional, but
- recommended, description. Select the checkbox at the bottom of the form
- to enable this project. By default, it is enabled, as shown in the
- figure below.
-
- Dashboard's Create Project form - - - - - - -
-
- It is also possible to add project members and adjust the project
- quotas. We'll discuss those actions later, but in practice, it can be
- quite convenient to deal with all these operations at one time.
-
- To add a project through the command line, you must use the
- OpenStack command line client.
-
- # openstack project create demo
-
- This command creates a project named "demo." Optionally, you can
- add a description string by appending --description
- tenant-description, which can be very
- useful. You can also create a project in a disabled state by appending
- --disable to the command. By default, projects are
- created in an enabled state.
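- For example, to create a project with a description that starts out
- disabled (the names are illustrative):
-
- # openstack project create --description "Temporary test project" \
-   --disable test-project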
-
- -
- Quotas - - To prevent system capacities from being exhausted without - notification, you can set up quotas. Quotas are operational limits. For - example, the number of gigabytes allowed per tenant can be controlled to - ensure that a single tenant cannot consume all of the disk space. Quotas - are currently enforced at the tenant (or project) level, rather than the - user level. - quotas - - user management - - quotas - - - - Because without sensible quotas a single tenant could use up all - the available resources, default quotas are shipped with OpenStack. You - should pay attention to which quota settings make sense for your - hardware capabilities. - - - Using the command-line interface, you can manage quotas for the - OpenStack Compute service and the Block Storage service. - - Typically, default values are changed because a tenant requires more - than the OpenStack default of 10 volumes per tenant, or more than the - OpenStack default of 1 TB of disk space on a compute node. - - - To view all tenants, run: $ openstack project list - -+---------------------------------+----------+ -| ID | Name | -+---------------------------------+----------+ -| a981642d22c94e159a4a6540f70f9f8 | admin | -| 934b662357674c7b9f5e4ec6ded4d0e | tenant01 | -| 7bc1dbfd7d284ec4a856ea1eb82dca8 | tenant02 | -| 9c554aaef7804ba49e1b21cbd97d218 | services | -+---------------------------------+----------+ - - - -
- Set Image Quotas - - You can restrict a project's image storage by total - number of bytes. Currently, this quota is applied cloud-wide, so if you - were to set an Image quota limit of 5 GB, then all projects in your - cloud will be able to store only 5 GB of images and snapshots. - Image service - - quota setting - - - To enable this feature, edit the - /etc/glance/glance-api.conf file, and under the - [DEFAULT] section, add: - - user_storage_quota = <bytes> - - For example, to restrict a project's image storage to 5 GB, do - this: - - user_storage_quota = 5368709120 - - - There is a configuration option in - glance-api.conf that limits the number of members - allowed per image, called image_member_quota, set to 128 - by default. That setting is a different quota from the storage - quota. - image quotas - - -
- -
- Set Compute Service Quotas
-
- As an administrative user, you can update the Compute service
- quotas for an existing tenant, as well as update the quota defaults for
- a new tenant.
- Compute
- Compute service
- See the table of Compute quota descriptions below.
-
Compute quota descriptions

Quota | Description | Property name
Fixed IPs | Number of fixed IP addresses allowed per tenant. This number must be equal to or greater than the number of allowed instances. | fixed-ips
Floating IPs | Number of floating IP addresses allowed per tenant. | floating-ips
Injected file content bytes | Number of content bytes allowed per injected file. | injected-file-content-bytes
Injected file path bytes | Number of bytes allowed per injected file path. | injected-file-path-bytes
Injected files | Number of injected files allowed per tenant. | injected-files
Instances | Number of instances allowed per tenant. | instances
Key pairs | Number of key pairs allowed per user. | key-pairs
Metadata items | Number of metadata items allowed per instance. | metadata-items
RAM | Megabytes of instance RAM allowed per tenant. | ram
Security group rules | Number of rules per security group. | security-group-rules
Security groups | Number of security groups per tenant. | security-groups
VCPUs | Number of instance cores allowed per tenant. | cores
- -
- View and update compute quotas for a tenant (project) - - As an administrative user, you can use the nova - quota-* commands, which are provided by the - python-novaclient package, to view and update - tenant quotas. - - - To view and update default quota values - - - List all default quotas for all tenants, as follows: - - $ nova quota-defaults - - For example: - - $ nova quota-defaults -+-----------------------------+-------+ -| Property | Value | -+-----------------------------+-------+ -| metadata_items | 128 | -| injected_file_content_bytes | 10240 | -| ram | 51200 | -| floating_ips | 10 | -| key_pairs | 100 | -| instances | 10 | -| security_group_rules | 20 | -| injected_files | 5 | -| cores | 20 | -| fixed_ips | -1 | -| injected_file_path_bytes | 255 | -| security_groups | 10 | -+-----------------------------+-------+ - - - - Update a default value for a new tenant, as follows: - - $ nova quota-class-update default key value - - For example: - - $ nova quota-class-update default --instances 15 - - - - - - - To view quota values for a tenant (project) - - - Place the tenant ID in a variable: - - $ tenant=$(openstack project list | awk '/tenantName/ {print $2}') - - - - List the currently set quota values for a tenant, as - follows: - - $ nova quota-show --tenant $tenant - - For example: - - $ nova quota-show --tenant $tenant -+-----------------------------+-------+ -| Property | Value | -+-----------------------------+-------+ -| metadata_items | 128 | -| injected_file_content_bytes | 10240 | -| ram | 51200 | -| floating_ips | 12 | -| key_pairs | 100 | -| instances | 10 | -| security_group_rules | 20 | -| injected_files | 5 | -| cores | 20 | -| fixed_ips | -1 | -| injected_file_path_bytes | 255 | -| security_groups | 10 | -+-----------------------------+-------+ - - - - - To update quota values for a tenant (project) - - - Obtain the tenant ID, as follows: - - $ tenant=$(openstack project list | awk '/tenantName/ {print $2}') - - - - Update a particular quota value, as follows: - - # nova quota-update --quotaName quotaValue tenantID - - For example: - - # nova quota-update --floating-ips 20 $tenant -# nova quota-show --tenant $tenant -+-----------------------------+-------+ -| Property | Value | -+-----------------------------+-------+ -| metadata_items | 128 | -| injected_file_content_bytes | 10240 | -| ram | 51200 | -| floating_ips | 20 | -| key_pairs | 100 | -| instances | 10 | -| security_group_rules | 20 | -| injected_files | 5 | -| cores | 20 | -| fixed_ips | -1 | -| injected_file_path_bytes | 255 | -| security_groups | 10 | -+-----------------------------+-------+ - - - To view a list of options for the - quota-update command, run: - - $ nova help quota-update - - - -
-
- -
- Set Object Storage Quotas
-
- There are currently two categories of quotas for Object
- Storage:
- account quotas
- containers
- quota setting
- Object Storage
- quota setting
-
- Container quotas
- Limit the total size (in bytes) or number of objects that
- can be stored in a single container.
-
- Account quotas
- Limit the total size (in bytes) that a user has available in
- the Object Storage service.
-
- To take advantage of either container quotas or account quotas,
- your Object Storage proxy server must have container_quotas
- or account_quotas (or both) added to the
- [pipeline:main] pipeline. Each quota type also
- requires its own section in the proxy-server.conf
- file:
-
- [pipeline:main]
-pipeline = catch_errors [...] slo dlo account_quotas proxy-server
-
-[filter:account_quotas]
-use = egg:swift#account_quotas
-
-[filter:container_quotas]
-use = egg:swift#container_quotas
-
- To view and update Object Storage quotas, use the
- swift command provided by the
- python-swiftclient package. Any user included in the
- project can view the quotas placed on their project. To update Object
- Storage quotas on a project, you must have the role of ResellerAdmin in
- the project that the quota is being applied to.
-
- To view account quotas placed on a project:
-
- $ swift stat
-
- Account: AUTH_b36ed2d326034beba0a9dd1fb19b70f9
-Containers: 0
- Objects: 0
- Bytes: 0
-Meta Quota-Bytes: 214748364800
-X-Timestamp: 1351050521.29419
-Content-Type: text/plain; charset=utf-8
-Accept-Ranges: bytes
-
- To apply or update account quotas on a project:
-
- $ swift post -m quota-bytes:<bytes>
-
- For example, to place a 5 GB quota on an account:
-
- $ swift post -m quota-bytes:5368709120
-
- To verify the quota, run the swift stat command
- again:
-
- $ swift stat
-
- Account: AUTH_b36ed2d326034beba0a9dd1fb19b70f9
-Containers: 0
- Objects: 0
- Bytes: 0
-Meta Quota-Bytes: 5368709120
-X-Timestamp: 1351541410.38328
-Content-Type: text/plain; charset=utf-8
-Accept-Ranges: bytes
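- Container quotas are set the same way, with the container name as an
- additional argument (the container name here is illustrative, and this
- assumes the container_quotas middleware is in the pipeline as shown
- above). For example, to cap a single container at 5 GB and 1,000
- objects:
-
- $ swift post -m quota-bytes:5368709120 mycontainer
- $ swift post -m quota-count:1000 mycontainer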
- -
- Set Block Storage Quotas
-
- As an administrative user, you can update the Block Storage
- service quotas for a tenant, as well as update the quota defaults for a
- new tenant. See the table of Block Storage quota descriptions below.
- Block Storage
-
Block Storage quota descriptions

Property name | Description
gigabytes | Number of volume gigabytes allowed per tenant.
snapshots | Number of Block Storage snapshots allowed per tenant.
volumes | Number of Block Storage volumes allowed per tenant.
- - - -
- View and update Block Storage quotas for a tenant - (project) - - As an administrative user, you can use the cinder - quota-* commands, which are provided by the - python-cinderclient package, to view and update - tenant quotas. - - - To view and update default Block Storage quota values - - - List all default quotas for all tenants, as follows: - - $ cinder quota-defaults - - For example: - - $ cinder quota-defaults -+-----------+-------+ -| Property | Value | -+-----------+-------+ -| gigabytes | 1000 | -| snapshots | 10 | -| volumes | 10 | -+-----------+-------+ - - - - To update a default value for a new tenant, update the - property in the /etc/cinder/cinder.conf - file. - - - - - To view Block Storage quotas for a tenant (project) - - - View quotas for the tenant, as follows: - - # cinder quota-show tenantName - - For example: - - # cinder quota-show tenant01 -+-----------+-------+ -| Property | Value | -+-----------+-------+ -| gigabytes | 1000 | -| snapshots | 10 | -| volumes | 10 | -+-----------+-------+ - - - - - To update Block Storage quotas for a tenant (project) - - - Place the tenant ID in a variable: - - $ tenant=$(openstack project list | awk '/tenantName/ {print $2}') - - - - Update a particular quota value, as follows: - - # cinder quota-update --quotaName NewValue tenantID - - For example: - - # cinder quota-update --volumes 15 $tenant -# cinder quota-show tenant01 -+-----------+-------+ -| Property | Value | -+-----------+-------+ -| gigabytes | 1000 | -| snapshots | 10 | -| volumes | 15 | -+-----------+-------+ - - -
-
-
- -
- User Management - - The command-line tools for managing users are inconvenient to use - directly. They require issuing multiple commands to complete a single - task, and they use UUIDs rather than symbolic names for many items. In - practice, humans typically do not use these tools directly. Fortunately, - the OpenStack dashboard provides a reasonable interface to this. In - addition, many sites write custom tools for local needs to enforce local - policies and provide levels of self-service to users that aren't currently - available with packaged tools. - user management - - creating new users - -
- -
- Creating New Users - - To create a user, you need the following information: - - - - Username - - - - Email address - - - - Password - - - - Primary project - - - - Role - - - - Enabled - - - - Username and email address are self-explanatory, though your site - may have local conventions you should observe. - The primary project is simply the first project the user is associated - with and must exist prior to creating the user. Role is almost always - going to be "member." Out of the box, OpenStack comes with two roles - defined: - - - - - - member - - - A typical user - - - - - admin - - - An administrative super user, which has full permissions - across all projects and should be used with great care - - - - - It is possible to define other roles, but doing so is - uncommon. - - Once you've gathered this information, creating the user in the - dashboard is just another web form similar to what we've seen before and - can be found by clicking the Users link in the Identity navigation bar and - then clicking the Create User button at the top right. - - Modifying users is also done from this Users page. If you have a - large number of users, this page can get quite crowded. The Filter search - box at the top of the page can be used to limit the users listing. A form - very similar to the user creation dialog can be pulled up by selecting - Edit from the actions dropdown menu at the end of the line for the user - you are modifying. -
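- On the command line, the same information maps onto a single
- openstack user create invocation (the values are
- illustrative):
-
- $ openstack user create --project demo --password-prompt \
-   --email alice@example.com alice
-
- Note that the role is assigned in a separate step, with
- openstack role add, as described in the next section.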
- -
- Associating Users with Projects
-
- Many sites run with users being associated with only one project.
- This is a more conservative and simpler choice both for administration and
- for users. Administratively, if a user reports a problem with an instance
- or quota, it is obvious which project this relates to. Users needn't worry
- about what project they are acting in if they are only in one project.
- However, note that, by default, any user can affect the resources of any
- other user within their project. It is also possible to associate users
- with multiple projects if that makes sense for your
- organization.
- Project Members tab
- user management
- associating users with projects
-
- Associating existing users with an additional project or removing
- them from an older project is done from the Projects page of the dashboard
- by selecting Modify Users from the Actions column, as shown in the figure
- below.
-
- From this view, you can do a number of useful things, as well as a
- few dangerous ones.
-
- The first column of this form, named All Users, includes a list of
- all the users in your cloud who are not already associated with this
- project. The second column shows all the users who are. These lists can be
- quite long, but they can be limited by typing a substring of the username
- you are looking for in the filter field at the top of the column.
-
- From here, click the + icon to add users to the
- project. Click the - to remove them.
-
- Edit Project Members tab
-
-
- The dangerous possibility comes with the ability to change member
- roles. This is the dropdown list below the username in the
- Project Members list. In virtually all cases, this
- value should be set to Member. This example purposefully shows an
- administrative user where this value is admin.
-
- The admin role is global, not per project, so granting a user the admin
- role in any project gives the user administrative rights across the
- whole cloud.
-
- Typical use is to only create administrative users in a single
- project, by convention the admin project, which is created by default
- during cloud setup. If your administrative users also use the cloud to
- launch and manage instances, it is strongly recommended that you use
- separate user accounts for administrative access and normal operations and
- that they be in distinct projects.
- accounts
-
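- The command-line equivalent of the dashboard workflow above is a
- single role assignment (user, project, and role name are illustrative;
- on Identity v2 deployments the member role is often named
- _member_):
-
- $ openstack role add --project demo --user alice Member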
Customizing Authorization

The default authorization settings allow only administrative users
to create resources on behalf of a different project. OpenStack handles
two kinds of authorization policies:

Operation based
  Policies specify access criteria for specific operations,
  possibly with fine-grained control over specific attributes.

Resource based
  Whether access to a specific resource is granted or not
  according to the permissions configured for the resource
  (currently available only for the network resource). The actual
  authorization policies enforced in an OpenStack service vary from
  deployment to deployment.

The policy engine reads entries from the policy.json
file. The actual location of this file might vary from distribution to
distribution: for nova, it is typically in
/etc/nova/policy.json. You can update entries while the
system is running, and you do not have to restart services. Currently,
the only way to update such policies is to edit the policy file.

The OpenStack service's policy engine matches a policy directly. A
rule indicates evaluation of the elements of such policies. For
instance, in a compute:create: [["rule:admin_or_owner"]]
statement, the policy is compute:create, and the rule is
admin_or_owner.

Policies are triggered by an OpenStack policy engine whenever one
of them matches an OpenStack API operation or a specific attribute being
used in a given operation. For instance, the engine tests the
compute:create policy every time a user sends a POST
/v2/{tenant_id}/servers request to the OpenStack Compute API
server. Policies can also be related to specific API
extensions. For instance, if a user needs an extension like
compute_extension:rescue, the attributes defined by the
provider extensions trigger the rule test for that operation.

An authorization policy can be composed of one or more rules. If
multiple rules are specified, the policy evaluates successfully if any of
the rules evaluates successfully; if an API operation matches multiple
policies, then all the policies must evaluate successfully. Also,
authorization rules are recursive: once a rule is matched, it can be
resolved to another rule, until a terminal rule is reached. The following
rule types are defined:

Role-based rules
  Evaluate successfully if the user submitting the request has
  the specified role. For instance, "role:admin" is
  successful if the user submitting the request is an administrator.

Field-based rules
  Evaluate successfully if a field of the resource specified
  in the current request matches a specific value. For instance,
  "field:networks:shared=True" is successful if the
  shared attribute of the network resource is set to true.

Generic rules
  Compare an attribute in the resource with an attribute
  extracted from the user's security credentials and evaluate
  successfully if the comparison is successful. For instance,
  "tenant_id:%(tenant_id)s" is successful if the tenant
  identifier in the resource is equal to the tenant identifier of
  the user submitting the request.
Here are snippets of the default nova policy.json file:

{
"context_is_admin": [["role:admin"]],
"admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],
"default": [["rule:admin_or_owner"]],
"compute:create": [ ],
"compute:create:attach_network": [ ],
"compute:create:attach_volume": [ ],
"compute:get_all": [ ],
"admin_api": [["is_admin:True"]],
"compute_extension:accounts": [["rule:admin_api"]],
"compute_extension:admin_actions": [["rule:admin_api"]],
"compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
"compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
...
"compute_extension:admin_actions:migrate": [["rule:admin_api"]],
"compute_extension:aggregates": [["rule:admin_api"]],
"compute_extension:certificates": [ ],
...
"compute_extension:flavorextraspecs": [ ],
"compute_extension:flavormanage": [["rule:admin_api"]],
}

Note three entries in particular:

The admin_or_owner rule evaluates successfully if the current user
is an administrator or the owner of the resource specified in the
request (that is, the tenant identifier is equal).

The default policy is always evaluated if an API
operation does not match any of the policies in policy.json.

The compute_extension:flavormanage policy restricts the ability to
manipulate flavors to administrators using the Admin API only.

In some cases, some operations should be restricted to
administrators only. Therefore, as a further example, let us consider
how this sample policy file could be modified in a scenario where we
enable users to create their own flavors:

"compute_extension:flavormanage": [ ],
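Conversely, to restrict an operation that the default file leaves open to
owners, you could tighten its rule. For example, making the pause action
admin-only would look like this (a sketch; edit the corresponding entry in
policy.json):

"compute_extension:admin_actions:pause": [["rule:admin_api"]],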
- -
Users Who Disrupt Other Users

Users on your cloud can disrupt other users, sometimes
intentionally and maliciously and other times by accident. Understanding
the situation allows you to make a better decision on how to handle the
disruption.

For example, a group of users may have instances that are using a
large amount of compute resources for very compute-intensive tasks. This
drives the load up on compute nodes and affects other users. In
this situation, review your user use cases. You may find that high
compute scenarios are common, and should then plan for proper
segregation in your cloud, such as host aggregates or regions.

Another example is a user consuming a very large amount of
bandwidth. Again, the key is to understand what the user is doing.
If she naturally needs a high amount of bandwidth, you might have to
limit her transmission rate so as not to affect other users, or move her to
an area with more bandwidth available. On the other hand, maybe her
instance has been hacked and is part of a botnet launching DDoS attacks.
Resolution of this issue is the same as if any other server on your
network has been hacked. Contact the user and give her time to respond.
If she doesn't respond, shut down the instance.

A final example is a user hammering cloud resources
repeatedly. Contact the user and learn what he is trying to do. Maybe he
doesn't understand that what he's doing is inappropriate, or maybe there
is an issue with the resource he is trying to access that is causing his
requests to queue or lag.
-
- -
Summary

One key element of systems administration that is often overlooked
is that end users are the reason systems administrators exist. Don't go
the BOFH route and terminate every user who causes an alert to go off.
Work with users to understand what they're trying to accomplish and see
how your environment can better assist them in achieving their goals. Meet
your users' needs by organizing your users into projects, applying
policies, managing quotas, and working with them.
-
diff --git a/doc/openstack-ops/ch_ops_resources.xml b/doc/openstack-ops/ch_ops_resources.xml deleted file mode 100644 index a484d9b6..00000000 --- a/doc/openstack-ops/ch_ops_resources.xml +++ /dev/null @@ -1,108 +0,0 @@ - - - - Resources -
- OpenStack - - - Installation - Guide for openSUSE 13.2 and SUSE Linux Enterprise - Server 12 - - - Installation - Guide for Red Hat Enterprise Linux 7, CentOS 7, and - Fedora 22 - - - Installation - Guide for Ubuntu 14.04 (LTS) Server - - - OpenStack - Administrator Guide - - - OpenStack - Cloud Computing Cookbook (Packt Publishing) - - -
-
- Cloud (General) - - - “The NIST Definition - of Cloud Computing” - - -
-
- Python - - - Dive Into - Python (Apress) - - -
-
- Networking - - - TCP/IP - Illustrated, Volume 1: The Protocols, 2/E - (Pearson) - - - The TCP/IP - Guide (No Starch Press) - - - “A - tcpdump Tutorial and Primer” - - -
-
- Systems Administration - - - UNIX and - Linux Systems Administration Handbook (Prentice - Hall) - - -
-
- Virtualization - - - The Book - of Xen (No Starch Press) - - -
-
- Configuration Management - - - Puppet Labs - Documentation - - - Pro - Puppet (Apress) - - -
-
diff --git a/doc/openstack-ops/ch_ops_upgrades.xml b/doc/openstack-ops/ch_ops_upgrades.xml deleted file mode 100644 index 651332e7..00000000 --- a/doc/openstack-ops/ch_ops_upgrades.xml +++ /dev/null @@ -1,678 +0,0 @@ - - - Upgrades - - With the exception of Object Storage, upgrading from one - version of OpenStack to another can take a great deal of effort. - This chapter provides some guidance on the operational aspects - that you should consider for performing an upgrade for a basic - architecture. - -
- Pre-upgrade considerations - -
Upgrade planning

Thoroughly review the
release notes to learn about new, updated, and deprecated features.
Find incompatibilities between versions.

Consider the impact of an upgrade to users. The upgrade process
interrupts management of your environment, including the dashboard.
If you properly prepare for the upgrade, existing instances, networking,
and storage should continue to operate. However, instances might experience
intermittent network interruptions.

Consider the approach to upgrading your environment. You can perform
an upgrade with operational instances, but this is a dangerous approach.
You might consider using live migration to temporarily relocate instances
to other compute nodes while performing upgrades. However, you must
ensure database consistency throughout the process; otherwise your
environment might become unstable. Also, don't forget to provide
sufficient notice to your users, including giving them plenty of
time to perform their own backups.

Consider adopting structure and options from the service
configuration files and merging them with existing configuration
files. The
OpenStack Configuration Reference
contains new, updated, and deprecated options for most services.

Like all major system upgrades, your upgrade could fail for
one or more reasons. You should prepare for this situation by
having the ability to roll back your environment to the previous
release, including databases, configuration files, and packages.
We provide an example process for rolling back your environment in
the "Rolling back a failed upgrade" section below.

Develop an upgrade procedure and assess it thoroughly by
using a test environment similar to your production environment.
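If you take the live-migration approach mentioned above, a minimal sketch
with the nova client looks like the following; the host names and the
instance UUID are placeholders:

# nova list --host compute-01 --all-tenants
# nova live-migration <instance-uuid> compute-02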
-
Pre-upgrade testing environment

The most important step is the pre-upgrade testing. If you
are upgrading immediately after release of a new version,
undiscovered bugs might hinder your progress. Some deployers
prefer to wait until the first point release is announced.
However, if you have a significant deployment, you might follow
the development and testing of the release to ensure that bugs
for your use cases are fixed.

Each OpenStack cloud is different even if you have a near-identical
architecture as described in this guide. As a result, you must still
test upgrades between versions in your environment using an
approximate clone of your environment.

However, that is not to say that it needs to be the same
size or use identical hardware as the production environment.
It is important to consider the hardware and scale of the cloud that
you are upgrading. The following tips can help you minimize the cost:

Use your own cloud
  The simplest place to start testing the next version
  of OpenStack is by setting up a new environment inside
  your own cloud. This might seem odd, especially the double
  virtualization used in running compute nodes. But it is a
  sure way to very quickly test your configuration.

Use a public cloud
  Consider using a public cloud to test the scalability
  limits of your cloud controller configuration. Most public
  clouds bill by the hour, which means it can be inexpensive
  to perform even a test with many nodes.

Make another storage endpoint on the same system
  If you use an external storage plug-in or shared file
  system with your cloud, you can test whether it works by
  creating a second share or endpoint. This allows you to
  test the system before entrusting the new version with your storage.

Watch the network
  Even at smaller-scale testing, look for excess network
  packets to determine whether something is going horribly
  wrong in inter-component communication.

To set up the test environment, you can use one of several methods:

Do a full manual install by using the OpenStack Installation
Guide for your platform. Review the
final configuration files and installed packages.

Create a clone of your automated configuration
infrastructure with changed package repository URLs.
Alter the configuration until it works.

Either approach is valid. Use the approach that matches your experience.

An upgrade pre-testing system is excellent for getting the
configuration to work. However, it is important to note that the
historical use of the system and differences in user interaction
can affect the success of upgrades.

If possible, we highly recommend that you dump your
production database tables and test the upgrade in your development
environment using this data. Several MySQL bugs have been uncovered
during database migrations because of slight table differences between
a fresh installation and tables that migrated from one version to
another. Such problems are magnified on large real datasets, which you
do not want to encounter during a production outage.

Artificial scale testing can go only so far. After your
cloud is upgraded, you must pay careful attention to the
performance aspects of your cloud.
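As a rough sketch of the production-data dump recommended above, assuming
MySQL; the database names are illustrative, and the final command runs on
the test controller:

# mysqldump -u root -p --databases nova glance keystone > prod-tables.sql
# scp prod-tables.sql test-controller:
# mysql -u root -p < prod-tables.sql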
- - - -
Upgrade Levels

Upgrade levels are a feature added to OpenStack Compute in the
Grizzly release to provide version locking on the RPC
(Message Queue) communications between the various Compute services.

This functionality is an important piece of the puzzle when
it comes to live upgrades and is conceptually similar to the
existing API versioning that allows OpenStack services of
different versions to communicate without issue.

Without upgrade levels, an X+1 version Compute service can
receive and understand X version RPC messages, but it can only
send out X+1 version RPC messages. For example, if a
nova-conductor process has been upgraded to X+1 version, then the
conductor service will be able to understand messages from X version
nova-compute processes, but those compute services will not be able to
understand messages sent by the conductor service.

During an upgrade, operators can add configuration options to
nova.conf which lock the version of RPC
messages and allow live upgrading of the services without
interruption caused by version mismatch. The configuration
options allow the specification of RPC version numbers if desired,
but release name aliases are also supported. For example:

[upgrade_levels]
compute=X+1
conductor=X+1
scheduler=X+1

will keep the RPC version locked across the specified services
to the RPC version used in X+1. As all instances of a particular
service are upgraded to the newer version, the corresponding line
can be removed from nova.conf.

Using this functionality, ideally one would lock the RPC version
to the OpenStack version being upgraded from on
nova-compute nodes, to ensure that, for example, X+1 version
nova-compute processes will continue to work with X version
nova-conductor processes while the upgrade completes. Once the upgrade of
nova-compute processes is complete, the operator can move on to upgrading
nova-conductor and remove the version locking for
nova-compute in nova.conf.
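For instance, a deployment moving from Kilo to Liberty might pin all three
services with the release name alias. This is a sketch that assumes Kilo is
the version you are upgrading from:

[upgrade_levels]
compute = kilo
conductor = kilo
scheduler = kilo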
-
-
- Upgrade process - - This section describes the process to upgrade a basic - OpenStack deployment based on the basic two-node architecture in the - - OpenStack Installation Guide. - All nodes must run a supported distribution of Linux with a recent kernel - and the current release packages. -
- Prerequisites - - - Perform some cleaning of the environment prior to - starting the upgrade process to ensure a consistent state. - For example, instances not fully purged from the system - after deletion might cause indeterminate behavior. - - - For environments using the OpenStack Networking - service (neutron), verify the release version of the database. For example: - # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron - - -
-
Perform a backup

Save the configuration files on all nodes. For example:

# for i in keystone glance nova neutron openstack-dashboard cinder heat ceilometer; \
  do mkdir $i-kilo; \
  done
# for i in keystone glance nova neutron openstack-dashboard cinder heat ceilometer; \
  do cp -r /etc/$i/* $i-kilo/; \
  done

You can modify this example script on each node to
handle different services.

Make a full database backup of your production data. As of
Kilo, database downgrades are not supported, and the only method
available to get back to a prior database version will be to restore
from backup.

# mysqldump -u root -p --opt --add-drop-database --all-databases > kilo-db-backup.sql

Consider updating your SQL server configuration as
described in the OpenStack Installation Guide.
-
- Manage repositories - - On all nodes: - - Remove the repository for the previous release packages. - - - Add the repository for the new release packages. - - - Update the repository database. - - -
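On Ubuntu with the Ubuntu Cloud Archive, for example, these three steps
might look like the following sketch; the release names are illustrative:

# add-apt-repository --remove cloud-archive:kilo
# add-apt-repository cloud-archive:liberty
# apt-get update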
-
- Upgrade packages on each node - Depending on your specific configuration, upgrading all - packages might restart or break services supplemental to your - OpenStack environment. For example, if you use the TGT iSCSI - framework for Block Storage volumes and the upgrade includes - new packages for it, the package manager might restart the - TGT iSCSI services and impact connectivity to volumes. - If the package manager prompts you to update configuration - files, reject the changes. The package manager appends a - suffix to newer versions of configuration files. Consider - reviewing and adopting content from these files. - - You may need to explicitly install the ipset - package if your distribution does not install it as a - dependency. -
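On Ubuntu, for example, upgrading the packages on a node is typically a
matter of the following (a sketch; other distributions use their own
package managers):

# apt-get update
# apt-get dist-upgrade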
-
Update services

To update a service on each node, you generally modify one or more
configuration files, stop the service, synchronize the
database schema, and start the service. Some services require
different steps. We recommend verifying operation of each
service before proceeding to the next service.

The order in which you should upgrade services, and any changes from the
general upgrade process, is described below (a worked example of the
general pattern follows this list):

Controller node

1. OpenStack Identity - Clear any expired tokens before
   synchronizing the database.
2. OpenStack Image service
3. OpenStack Compute, including networking components.
4. OpenStack Networking
5. OpenStack Block Storage
6. OpenStack dashboard - In typical environments, updating the
   dashboard only requires restarting the Apache HTTP service.
7. OpenStack Orchestration
8. OpenStack Telemetry - In typical environments, updating the
   Telemetry service only requires restarting the service.

Compute nodes

1. OpenStack Compute - Edit the configuration file and restart the
   service.
2. OpenStack Networking - Edit the configuration file and restart
   the service.

Storage nodes

1. OpenStack Block Storage - Updating the Block Storage service
   only requires restarting the service.

Network nodes

1. OpenStack Networking - Edit the configuration file and restart
   the service.
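As a worked example of the general pattern, updating the Identity service
on the controller might look like the following sketch. The keystone-manage
subcommands are standard, but the service names shown here are Ubuntu-style
and vary by distribution:

# service keystone stop
# su -s /bin/sh -c "keystone-manage token_flush" keystone
# su -s /bin/sh -c "keystone-manage db_sync" keystone
# service keystone start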
-
Final steps

On all distributions, you must perform some final tasks to
complete the upgrade process:

Decrease DHCP timeouts by modifying
/etc/nova/nova.conf on the compute nodes
back to the original value for your environment.

Update all .ini files to match
passwords and pipelines as required for the OpenStack release in your
environment.

After migration, users see different results from
nova image-list and glance image-list. To ensure users see the same
images in the list commands, edit the /etc/glance/policy.json
and /etc/nova/policy.json files to contain
"context_is_admin": "role:admin", which limits
access to private images for projects.

Verify proper operation of your environment. Then, notify your users
that their cloud is operating normally again.
-
- -
Rolling back a failed upgrade

Upgrades involve complex operations and can fail. Before
attempting any upgrade, you should make a full database backup
of your production data. As of Kilo, database downgrades are
not supported, and the only method available to get back to a
prior database version will be to restore from backup.

This section provides guidance for rolling back to a previous
release of OpenStack. All distributions follow a similar procedure.

A common scenario is to take down production management services
in preparation for an upgrade, complete part of the upgrade process,
and discover one or more problems not encountered during testing.
As a consequence, you must roll back your environment to the original
"known good" state. This procedure assumes that you did not make any state
changes after attempting the upgrade process: no new instances, networks,
storage volumes, and so on. Any of these new resources will be in a frozen
state after the databases are restored from backup.

Within this scope, you must complete these steps to
successfully roll back your environment:

1. Roll back configuration files.

2. Restore databases from backup.

3. Roll back packages.

You should verify that you
have the requisite backups to restore. Rolling back upgrades is
a tricky process because distributions tend to put much more
effort into testing upgrades than downgrades. Broken downgrades
take significantly more effort to troubleshoot and resolve than
broken upgrades. Only you can weigh the risks of trying to push
a failed upgrade forward versus rolling it back. Generally,
consider rolling back as the very last option.

The following steps described for Ubuntu have worked on at
least one production environment, but they might not work for
all environments.

To perform the rollback:

1. Stop all OpenStack services.

2. Copy the contents of the configuration backup directories that you
   created during the upgrade process back to the
   /etc/<service> directories.

3. Restore databases from the
   RELEASE_NAME-db-backup.sql backup file
   that you created with the mysqldump
   command during the upgrade process:

   # mysql -u root -p < RELEASE_NAME-db-backup.sql

4. Downgrade OpenStack packages.

   Downgrading packages is by far the most complicated
   step; it is highly dependent on the distribution and the
   overall administration of the system.

   a. Determine which OpenStack packages are installed on
      your system. Use the dpkg --get-selections command. Filter for
      OpenStack packages, filter again to omit packages
      explicitly marked in the deinstall state,
      and save the final output to a file.
For example, the - following command covers a controller node with - keystone, glance, nova, neutron, and cinder: - - # dpkg --get-selections | grep -e keystone -e glance -e nova -e neutron \ --e cinder | grep -v deinstall | tee openstack-selections -cinder-api install -cinder-common install -cinder-scheduler install -cinder-volume install -glance install -glance-api install -glance-common install -glance-registry install -neutron-common install -neutron-dhcp-agent install -neutron-l3-agent install -neutron-lbaas-agent install -neutron-metadata-agent install -neutron-plugin-openvswitch install -neutron-plugin-openvswitch-agent install -neutron-server install -nova-api install -nova-cert install -nova-common install -nova-conductor install -nova-consoleauth install -nova-novncproxy install -nova-objectstore install -nova-scheduler install -python-cinder install -python-cinderclient install -python-glance install -python-glanceclient install -python-keystone install -python-keystoneclient install -python-neutron install -python-neutronclient install -python-nova install -python-novaclient install - - - - Depending on the type of server, the contents and - order of your package list might vary from this - example. - - - - - You can determine the package versions available for - reversion by using the apt-cache - policy command. If you removed the Grizzly - repositories, you must first reinstall them and run - apt-get update: - - - - # apt-cache policy nova-common -nova-common: - Installed: 1:2013.2-0ubuntu1~cloud0 - Candidate: 1:2013.2-0ubuntu1~cloud0 - Version table: - *** 1:2013.2-0ubuntu1~cloud0 0 - 500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ - precise-updates/havana/main amd64 Packages - 100 /var/lib/dpkg/status - 1:2013.1.4-0ubuntu1~cloud0 0 - 500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ - precise-updates/grizzly/main amd64 Packages - 2012.1.3+stable-20130423-e52e6912-0ubuntu1.2 0 - 500 http://us.archive.ubuntu.com/ubuntu/ - precise-updates/main amd64 Packages - 500 http://security.ubuntu.com/ubuntu/ - precise-security/main amd64 Packages - 2012.1-0ubuntu2 0 - 500 http://us.archive.ubuntu.com/ubuntu/ - precise/main amd64 Packages - - This tells us the currently installed version of the - package, newest candidate version, and all versions - along with the repository that contains each version. - Look for the appropriate Grizzly - version—1:2013.1.4-0ubuntu1~cloud0 in - this case. The process of manually picking through this - list of packages is rather tedious and prone to errors. 
- You should consider using the following script to help - with this process: - - - - # for i in `cut -f 1 openstack-selections | sed 's/neutron/quantum/;'`; - do echo -n $i ;apt-cache policy $i | grep -B 1 grizzly | - grep -v Packages | awk '{print "="$1}';done | tr '\n' ' ' | - tee openstack-grizzly-versions -cinder-api=1:2013.1.4-0ubuntu1~cloud0 -cinder-common=1:2013.1.4-0ubuntu1~cloud0 -cinder-scheduler=1:2013.1.4-0ubuntu1~cloud0 -cinder-volume=1:2013.1.4-0ubuntu1~cloud0 -glance=1:2013.1.4-0ubuntu1~cloud0 -glance-api=1:2013.1.4-0ubuntu1~cloud0 -glance-common=1:2013.1.4-0ubuntu1~cloud0 -glance-registry=1:2013.1.4-0ubuntu1~cloud0 -quantum-common=1:2013.1.4-0ubuntu1~cloud0 -quantum-dhcp-agent=1:2013.1.4-0ubuntu1~cloud0 -quantum-l3-agent=1:2013.1.4-0ubuntu1~cloud0 -quantum-lbaas-agent=1:2013.1.4-0ubuntu1~cloud0 -quantum-metadata-agent=1:2013.1.4-0ubuntu1~cloud0 -quantum-plugin-openvswitch=1:2013.1.4-0ubuntu1~cloud0 -quantum-plugin-openvswitch-agent=1:2013.1.4-0ubuntu1~cloud0 -quantum-server=1:2013.1.4-0ubuntu1~cloud0 -nova-api=1:2013.1.4-0ubuntu1~cloud0 -nova-cert=1:2013.1.4-0ubuntu1~cloud0 -nova-common=1:2013.1.4-0ubuntu1~cloud0 -nova-conductor=1:2013.1.4-0ubuntu1~cloud0 -nova-consoleauth=1:2013.1.4-0ubuntu1~cloud0 -nova-novncproxy=1:2013.1.4-0ubuntu1~cloud0 -nova-objectstore=1:2013.1.4-0ubuntu1~cloud0 -nova-scheduler=1:2013.1.4-0ubuntu1~cloud0 -python-cinder=1:2013.1.4-0ubuntu1~cloud0 -python-cinderclient=1:1.0.3-0ubuntu1~cloud0 -python-glance=1:2013.1.4-0ubuntu1~cloud0 -python-glanceclient=1:0.9.0-0ubuntu1.2~cloud0 -python-quantum=1:2013.1.4-0ubuntu1~cloud0 -python-quantumclient=1:2.2.0-0ubuntu1~cloud0 -python-nova=1:2013.1.4-0ubuntu1~cloud0 -python-novaclient=1:2.13.0-0ubuntu1~cloud0 - - - - If you decide to continue this step manually, - don't forget to change neutron to - quantum where applicable. - - - - - Use the apt-get install command - to install specific versions of each package by - specifying - <package-name>=<version>. The - script in the previous step conveniently created a list - of package=version pairs for you: - - # apt-get install `cat openstack-grizzly-versions` - - This step completes the rollback procedure. You - should remove the upgrade release repository and run - apt-get update to prevent - accidental upgrades until you solve whatever issue - caused you to roll back your environment. - - - - -
-
diff --git a/doc/openstack-ops/ch_ops_upstream.xml b/doc/openstack-ops/ch_ops_upstream.xml deleted file mode 100644 index 90018412..00000000 --- a/doc/openstack-ops/ch_ops_upstream.xml +++ /dev/null @@ -1,543 +0,0 @@ - - - - - Upstream OpenStack - - OpenStack is founded on a thriving community that is a source of help - and welcomes your contributions. This chapter details some of the ways you - can interact with the others involved. - -
Getting Help

There are several avenues available for seeking assistance. The
quickest way is to help the community help you. Search the Q&A sites,
mailing list archives, and bug lists for issues similar to yours. If you
can't find anything, follow the directions for reporting bugs or use one
of the channels for support, which are listed below.

Your first port of call should be the official OpenStack
documentation, found on http://docs.openstack.org. You can get questions
answered on https://ask.openstack.org.

Mailing lists are
also a great place to get help. The wiki page has more information about
the various lists. As an operator, the main lists you should be aware of
are:

General list
  openstack@lists.openstack.org. The scope
  of this list is the current state of OpenStack. This is a very
  high-traffic mailing list, with many, many emails per day.

Operators list
  openstack-operators@lists.openstack.org.
  This list is intended for discussion among existing OpenStack cloud
  operators, such as yourself. Currently, this list is relatively low
  traffic, on the order of one email a day.

Development list
  openstack-dev@lists.openstack.org. The
  scope of this list is the future state of OpenStack. This is a
  high-traffic mailing list, with multiple emails per day.

We recommend that you subscribe to the general list and the operator
list, although you must set up filters to manage the volume for the
general list. You'll also find links to the mailing list archives on the
mailing list wiki page, where you can search through the discussions.

Multiple IRC channels are available for general questions and developer
discussions. The general discussion channel is #openstack on
irc.freenode.net.
- -
Reporting Bugs

As an operator, you are in a very good position to report unexpected
behavior with your cloud. Since OpenStack is flexible, you may be the only
individual to report a particular issue. Every issue is important to fix,
so it is essential to learn how to easily submit a bug report.

All OpenStack projects use Launchpad for bug
tracking. You'll need to create an account on Launchpad before you can
submit a bug report.

Once you have a Launchpad account, reporting a bug is as simple as
identifying the project or projects that are causing the issue. Sometimes
this is more difficult than expected, but those working on the bug triage
are happy to help relocate issues if they are not in the right place
initially:

Report a bug in nova.
Report a bug in python-novaclient.
Report a bug in swift.
Report a bug in python-swiftclient.
Report a bug in glance.
Report a bug in python-glanceclient.
Report a bug in keystone.
Report a bug in python-keystoneclient.
Report a bug in neutron.
Report a bug in python-neutronclient.
Report a bug in cinder.
Report a bug in python-cinderclient.
Report a bug in manila.
Report a bug in python-manilaclient.
Report a bug in python-openstackclient.
Report a bug in horizon.
Report a bug with the documentation.
Report a bug with the API documentation.

To write a good bug report, the following process is essential.
First, search for the bug to make sure there is no bug already filed for
the same issue. If you find one, be sure to click on "This bug affects X
people. Does this bug affect you?" If you can't find the issue, then enter
the details of your report. It should at least include:

The release, or milestone, or commit ID corresponding to the
software that you are running

The operating system and version where you've identified the bug

Steps to reproduce the bug, including what went wrong

Description of the expected results instead of what you saw

Portions of your log files so that you include only relevant excerpts

When you do this, the bug is created with:

Status: New

In the bug comments, you can contribute instructions on how to fix a
given bug, and set it to Triaged. Or you can directly
fix it: assign the bug to yourself, set it to In
progress, branch the code, implement the fix, and propose your
change for merging. But let's not get ahead of ourselves; there are bug
triaging tasks as well.
- Confirming and Prioritizing - - This stage is about checking that a bug is real and assessing its - impact. Some of these steps require bug supervisor rights (usually - limited to core teams). If the bug lacks information to properly - reproduce or assess the importance of the bug, the bug is set to: - - - - Status: Incomplete - - - - Once you have reproduced the issue (or are 100 percent confident - that this is indeed a valid bug) and have permissions to do so, - set: - - - - Status: Confirmed - - - - Core developers also prioritize the bug, based on its - impact: - - - - Importance: <Bug impact> - - - - The bug impacts are categorized as follows: - - - - - - Critical if the bug prevents a key - feature from working properly (regression) for all users (or without - a simple workaround) or results in data loss - - - - High if the bug prevents a key feature - from working properly for some users (or with a workaround) - - - - Medium if the bug prevents a secondary - feature from working properly - - - - Low if the bug is mostly cosmetic - - - - Wishlist if the bug is not really a bug - but rather a welcome change in behavior - - - - If the bug contains the solution, or a patch, set the bug status - to Triaged. -
- -
- Bug Fixing - - At this stage, a developer works on a fix. During that time, to - avoid duplicating the work, the developer should set: - - - - Status: In Progress - - - - Assignee: <yourself> - - - - When the fix is ready, the developer proposes a change and gets - the change reviewed. -
- -
- After the Change Is Accepted - - After the change is reviewed, accepted, and lands in master, it - automatically moves to: - - - - Status: Fix Committed - - - - When the fix makes it into a milestone or release branch, it - automatically moves to: - - - - Milestone: Milestone the bug was fixed in - - - - Status: Fix Released - - -
-
- -
Join the OpenStack Community

Since you've made it this far in the book, you should consider
becoming an official individual member of the community and join the
OpenStack Foundation. The OpenStack Foundation is an independent body
providing shared resources to help achieve the OpenStack mission by
protecting, empowering, and promoting OpenStack software and the community
around it, including users, developers, and the entire ecosystem. We all
share the responsibility to make this community the best it can possibly
be, and signing up to be a member is the first step to participating. Like
the software, individual membership within the OpenStack Foundation is
free and accessible to anyone.
- -
How to Contribute to the Documentation

OpenStack documentation efforts encompass operator and administrator
docs, API docs, and user docs.

The genesis of this book was an in-person event, but now that the
book is in your hands, we want you to contribute to it. OpenStack
documentation follows the coding principles of iterative work, with bug
logging, investigating, and fixing.

Just like the code, the documentation is updated constantly using
the Gerrit review system, with source stored in git.openstack.org in the
openstack-manuals repository and the api-site repository.

To review the documentation before it's published, go to the
OpenStack Gerrit server at review.openstack.org and search for
project:openstack/openstack-manuals or project:openstack/api-site.

See the How To Contribute
page on the wiki for more information on the steps you need to take
to submit your first documentation review or change.
- -
Security Information

As a community, we take security very seriously and follow a
specific process for reporting potential issues. We vigilantly pursue
fixes and regularly eliminate exposures. You can report security issues
you discover through this specific process. The OpenStack Vulnerability
Management Team is a very small group of experts in vulnerability
management drawn from the OpenStack community. The team's job is
facilitating the reporting of vulnerabilities, coordinating security
fixes, and handling progressive disclosure of the vulnerability
information. Specifically, the team is responsible for the following
functions:

Vulnerability management
  All vulnerabilities discovered by community members (or users)
  can be reported to the team.

Vulnerability tracking
  The team will curate a set of vulnerability-related issues in
  the issue tracker. Some of these issues are private to the team and
  the affected product leads, but once remediation is in place, all
  vulnerabilities are public.

Responsible disclosure
  As part of our commitment to work with the security community,
  the team ensures that proper credit is given to security researchers
  who responsibly report issues in OpenStack.

We provide two ways to report issues to the OpenStack Vulnerability
Management Team, depending on how sensitive the issue is:

Open a bug in Launchpad and mark it as a "security bug." This
makes the bug private and accessible to only the Vulnerability
Management Team.

If the issue is extremely sensitive, send an encrypted email to
one of the team's members. Find their GPG keys at OpenStack Security.

You can find the full list of security-oriented teams you can join
at Security Teams. The
vulnerability management process is fully documented at Vulnerability
Management.
- -
Finding Additional Information

In addition to this book, there are many other sources of
information about OpenStack. The OpenStack website is a good
starting point, with OpenStack Docs and OpenStack API Docs providing
technical documentation about OpenStack. The OpenStack wiki contains a lot
of general information that cuts across the OpenStack projects, including
a list of recommended tools. Finally, there are a number of blogs
aggregated at Planet OpenStack.
-
diff --git a/doc/openstack-ops/ch_ops_user_facing.xml b/doc/openstack-ops/ch_ops_user_facing.xml deleted file mode 100644 index 9661485a..00000000 --- a/doc/openstack-ops/ch_ops_user_facing.xml +++ /dev/null @@ -1,2739 +0,0 @@ - - - User-Facing Operations - - This guide is for OpenStack operators and does not seek to be an - exhaustive reference for users, but as an operator, you should have a basic - understanding of how to use the cloud facilities. This chapter looks at - OpenStack from a basic user perspective, which helps you understand your - users' needs and determine, when you get a trouble ticket, whether it is a - user issue or a service issue. The main concepts covered are images, - flavors, security groups, block storage, shared file system storage, and instances. - -
Images

OpenStack images can often be thought of as "virtual machine
templates." Images can also be standard installation media such as ISO
images. Essentially, they contain bootable file systems that are used to
launch instances.
Adding Images

Several pre-made images exist and can easily be imported into the
Image service. A common image to add is the CirrOS image, which is very
small and used for testing purposes. To add this image, simply do:

$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ glance image-create --name='cirros image' --is-public=true \
  --container-format=bare --disk-format=qcow2 < cirros-0.3.4-x86_64-disk.img

The glance image-create command provides a large set
of options for working with your image. For example, the
min-disk option is useful for images that require root
disks of a certain size (for example, large Windows images). To view
these options, do:

$ glance help image-create

The location option is important to note. It does not
copy the entire image into the Image service, but references an original
location where the image can be found. Upon launching an instance of
that image, the Image service accesses the image from the location
specified.

The copy-from option copies the image from the
location specified into the /var/lib/glance/images
directory. The same thing is done when using the STDIN redirection with
<, as shown in the example.

Run the following command to view the properties of existing images:

$ glance image-show <image-uuid>
- - -
Adding Signed Images

To provide a chain of trust from an end user to the Image
service, and the Image service to Compute, an end user can import
signed images into the Image service that can be verified in Compute.
Appropriate Image service properties need to be set to enable signature
verification. Currently, signature verification is provided in Compute
only, but an accompanying feature in the Image service is targeted for
Mitaka.

Prior to the steps below, an asymmetric keypair and certificate
must be generated. In this example, these are called private_key.pem and
new_cert.crt, respectively, and both reside in the current directory. Also
note that the image in this example is cirros-0.3.4-x86_64-disk.img, but
any image can be used.

The following are the steps needed to create the signature used for
signed images:

1. Retrieve the image for upload:

   $ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

2. Use the private key to create a signature of the image.

   The following implicit values are being used to create the
   signature in this example:

   Signature hash method = SHA-256
   Signature key type = RSA-PSS

   The following options are currently supported:

   Signature hash methods: SHA-224, SHA-256, SHA-384, and SHA-512
   Signature key types: DSA, ECC_SECT571K1, ECC_SECT409K1, ECC_SECT571R1,
   ECC_SECT409R1, ECC_SECP521R1, ECC_SECP384R1, and RSA-PSS

   Generate the signature of the image and convert it to a base64
   representation:

   $ openssl dgst -sha256 -sign private_key.pem -sigopt rsa_padding_mode:pss \
     -out image-file.signature cirros-0.3.4-x86_64-disk.img
   $ base64 image-file.signature > signature_64
   $ cat signature_64
   'c4br5f3FYQV6Nu20cRUSnx75R/VcW3diQdsUN2nhPw+UcQRDoGx92hwMgRxzFYeUyydRTWCcUS2ZLudPR9X7rM
   THFInA54Zj1TwEIbJTkHwlqbWBMU4+k5IUIjXxHO6RuH3Z5f/SlSt7ajsNVXaIclWqIw5YvEkgXTIEuDPE+C4='

3. Create a context:

   $ python
   >>> from keystoneclient.v3 import client
   >>> keystone_client = client.Client(username='demo',
   ...                                 user_domain_name='Default',
   ...                                 password='password',
   ...                                 project_name='demo',
   ...                                 auth_url='http://localhost:5000/v3')
   >>> from oslo_context import context
   >>> context = context.RequestContext(auth_token=keystone_client.auth_token,
   ...                                  tenant=keystone_client.project_id)

4. Encode the certificate in DER format:

   >>> from cryptography import x509 as cryptography_x509
   >>> from cryptography.hazmat import backends
   >>> from cryptography.hazmat.primitives import serialization
   >>> with open("new_cert.crt", "rb") as cert_file:
   ...     cert = cryptography_x509.load_pem_x509_certificate(
   ...         cert_file.read(),
   ...         backend=backends.default_backend()
   ...     )
   >>> certificate_der = cert.public_bytes(encoding=serialization.Encoding.DER)

5. Upload the certificate in DER format to Castellan:

   >>> from castellan.common.objects import x_509
   >>> from castellan import key_manager
   >>> castellan_cert = x_509.X509(certificate_der)
   >>> key_API = key_manager.API()
   >>> cert_uuid = key_API.store(context, castellan_cert)
   >>> cert_uuid
   u'62a33f41-f061-44ba-9a69-4fc247d3bfce'

6. Upload the image to the Image service, with signature metadata.

   The following signature properties are used:

   img_signature uses the signature called signature_64
   img_signature_certificate_uuid uses the value from cert_uuid in
   step 5 above
   img_signature_hash_method matches 'SHA-256' in step 2 above
   img_signature_key_type matches 'RSA-PSS' in step 2 above

   $ source openrc demo
   $ export OS_IMAGE_API_VERSION=2
   $ glance image-create \
     --property name=cirrosSignedImage_goodSignature \
     --property is-public=true \
     --container-format bare \
     --disk-format qcow2 \
     --property img_signature='c4br5f3FYQV6Nu20cRUSnx75R/VcW3diQdsUN2nhPw+UcQRDoGx92hwM
   gRxzFYeUyydRTWCcUS2ZLudPR9X7rMTHFInA54Zj1TwEIbJTkHwlqbWBMU4+k5IUIjXxHO6RuH3Z5f/
   SlSt7ajsNVXaIclWqIw5YvEkgXTIEuDPE+C4=' \
     --property img_signature_certificate_uuid='62a33f41-f061-44ba-9a69-4fc247d3bfce' \
     --property img_signature_hash_method='SHA-256' \
     --property img_signature_key_type='RSA-PSS' \
     < ~/cirros-0.3.4-x86_64-disk.img

   Signature verification will occur when Compute boots the signed image.

   As of the Mitaka release, Compute supports instance signature
   validation. This is enabled by setting the verify_glance_signatures
   flag in nova.conf to True. When enabled, Compute automatically
   validates the signed image prior to launching the instance.
- -
Sharing Images Between Projects

In a multi-tenant cloud environment, users sometimes want to share
their personal images or snapshots with other projects. This can be done
on the command line with the glance tool by the owner of the image.

To share an image or snapshot with another project, do the following:

1. Obtain the UUID of the image:

   $ glance image-list

2. Obtain the UUID of the project with which you want to share
   your image. Unfortunately, non-admin users are unable to use the
   keystone command to do this. The easiest solution
   is to obtain the UUID either from an administrator of the cloud or
   from a user located in the project.

3. Once you have both pieces of information, run the glance command:

   $ glance member-create <image-uuid> <project-uuid>

   For example:

   $ glance member-create 733d1c44-a2ea-414b-aca7-69decf20d810 \
     771ed149ef7e4b2b88665cc1c98f77ca

   Project 771ed149ef7e4b2b88665cc1c98f77ca will now have access
   to image 733d1c44-a2ea-414b-aca7-69decf20d810.
- -
Deleting Images

To delete an image, just execute:

$ glance image-delete <image-uuid>

Deleting an image does not affect instances or snapshots that
were based on the image.
- -
Other CLI Options

A full set of options can be found using:

$ glance help

or the OpenStack Command-Line Interface Reference.
- -
The Image service and the Database

The only thing that the Image service does not store in a database
is the image itself. The Image service database has two main tables:

  images
  image_properties

Working directly with the database and SQL queries can provide you
with custom lists and reports of images. Technically, you can update
properties about images through the database, although this is not
generally recommended.
- -
Example Image service Database Queries

One interesting example is modifying the table of images and the
owner of that image. This can be easily done if you simply display the
unique ID of the owner. This example goes one step further and displays the
readable name of the owner:

mysql> select glance.images.id,
glance.images.name, keystone.tenant.name, is_public from
glance.images inner join keystone.tenant on
glance.images.owner=keystone.tenant.id;

Another example is displaying all properties for a certain image:

mysql> select name, value from
image_properties where image_id = <image_id>;
-
- -
Flavors

Virtual hardware templates are called "flavors" in OpenStack,
defining sizes for RAM, disk, number of cores, and so on. The default
install provides five flavors.

These are configurable by admin users (the rights may also be
delegated to other users by redefining the access controls for
compute_extension:flavormanage in
/etc/nova/policy.json on the nova-api server).
To get the list of available flavors on your system, run:

$ nova flavor-list
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID  | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1   | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2   | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3   | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4   | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5   | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

The nova flavor-create command allows authorized users
to create new flavors; an example follows the table below. Additional
flavor manipulation commands can be shown with the command:

$ nova help | grep flavor

Flavors define a number of parameters, resulting in the user having
a choice of what type of virtual machine to run—just like they would have
if they were purchasing a physical server. The table below lists the
elements that can be set. Note in particular extra_specs, which can be
used to define free-form characteristics, giving a lot of flexibility
beyond just the size of RAM, CPU, and Disk.
Flavor parameters:

ID: Unique ID (integer or UUID) for the flavor.

Name: A descriptive name, such as xx.size_name, is conventional
but not required, though some third-party tools may rely on it.

Memory_MB: Virtual machine memory in megabytes.

Disk: Virtual root disk size in gigabytes. This is an ephemeral
disk the base image is copied into. You don't use it when you boot
from a persistent volume. The "0" size is a special case that uses
the native base image size as the size of the ephemeral root volume.

Ephemeral: Specifies the size of a secondary ephemeral data disk.
This is an empty, unformatted disk and exists only for the life of
the instance.

Swap: Optional swap space allocation for the instance.

VCPUs: Number of virtual CPUs presented to the instance.

RXTX_Factor: Optional property that allows created servers to have a
different bandwidth cap from that defined in the network they are
attached to. This factor is multiplied by the rxtx_base property of
the network. Default value is 1.0 (that is, the same as the attached
network).

Is_Public: Boolean value that indicates whether the flavor is
available to all users or private. Private flavors do not get the
current tenant assigned to them. Defaults to True.

extra_specs: Additional optional restrictions on which compute nodes
the flavor can run on. This is implemented as key-value pairs that
must match against the corresponding key-value pairs on compute
nodes. Can be used to implement things like special resources (such
as flavors that can run only on compute nodes with GPU hardware).
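For example, an authorized user can create a flavor and attach an extra
spec with the nova client. The flavor name, ID, and key below are
illustrative; the positional arguments are name, ID, RAM in MB, disk in
GB, and VCPUs:

$ nova flavor-create m1.custom 10 8192 80 4
$ nova flavor-key m1.custom set special_hardware=gpu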
- Private Flavors - - A user might need a custom flavor that is uniquely tuned for a - project she is working on. For example, the user might require 128 GB of - memory. If you create a new flavor as described above, the user would - have access to the custom flavor, but so would all other tenants in your - cloud. Sometimes this sharing isn't desirable. In this scenario, - allowing all users to have access to a flavor with 128 GB of memory - might cause your cloud to reach full capacity very quickly. To prevent - this, you can restrict access to the custom flavor using the - nova command: - - $ nova flavor-access-add <flavor-id> <project-id> - - To view a flavor's access list, do the following: - - $ nova flavor-access-list <flavor-id> - - - Best Practices - - Once access to a flavor has been restricted, no other projects - besides the ones granted explicit access will be able to see the - flavor. This includes the admin project. Make sure to add the admin - project in addition to the original project. - - It's also helpful to allocate a specific numeric range for - custom and private flavors. On UNIX-based systems, nonsystem accounts - usually have a UID starting at 500. A similar approach can be taken - with custom flavors. This helps you easily identify which flavors are - custom, private, and public for the entire cloud. - -
- - - How Do I Modify an Existing Flavor? - - The OpenStack dashboard simulates the ability to modify a flavor - by deleting an existing flavor and creating a new one with the same - name. - -
- -
Security Groups

A common new-user issue with OpenStack is failing to set an
appropriate security group when launching an instance. As a result, the
user is unable to contact the instance on the network.

Security groups are sets of IP filter rules that are applied to an
instance's networking. They are project specific, and project members can
edit the default rules for their group and add new rule sets. All
projects have a "default" security group, which is applied to instances
that have no other security group defined. Unless changed, this security
group denies all incoming traffic.
General Security Groups Configuration

The nova.conf option
allow_same_net_traffic (which defaults to
true) globally controls whether the rules apply to
hosts that share a network. When set to true, hosts
on the same subnet are not filtered and are allowed to pass all types of
traffic between them. On a flat network, this allows all instances from
all projects unfiltered communication. With VLAN networking, this allows
access between instances within the same project. If
allow_same_net_traffic is set to false,
security groups are enforced for all connections. In this case, it is
possible for projects to simulate allow_same_net_traffic by
configuring their default security group to allow all traffic from their
subnet.

As noted in the previous chapter, the number of rules per
security group is controlled by the
quota_security_group_rules quota, and the number of allowed
security groups per project is controlled by the
quota_security_groups quota.
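For example, to enforce security groups even between instances on the same
subnet, set the following in nova.conf and restart the affected Compute
services (a sketch):

allow_same_net_traffic = False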
- -
- End-User Configuration of Security Groups - - Security groups for the current project can be found on the - OpenStack dashboard under Access & Security. To - see details of an existing group, select the edit - action for that security group. Obviously, modifying existing groups can - be done from this edit interface. There is a - Create Security Group button on the main - Access & Security page for creating new groups. - We discuss the terms used in these fields when we explain the - command-line equivalents. - -
- Setting with nova command
-
- From the command line, you can get a list of security groups for
- the project you're acting in using the nova
- command:
-
-
-
- $ nova secgroup-list
-+---------+-------------+
-| Name | Description |
-+---------+-------------+
-| default | default |
-| open | all ports |
-+---------+-------------+
-
- To view the details of the "open" security group:
-
- $ nova secgroup-list-rules open
-+-------------+-----------+---------+-----------+--------------+
-| IP Protocol | From Port | To Port | IP Range | Source Group |
-+-------------+-----------+---------+-----------+--------------+
-| icmp | -1 | 255 | 0.0.0.0/0 | |
-| tcp | 1 | 65535 | 0.0.0.0/0 | |
-| udp | 1 | 65535 | 0.0.0.0/0 | |
-+-------------+-----------+---------+-----------+--------------+
-
- These rules are all "allow" type rules, as the default is deny.
- The first column is the IP protocol (one of icmp, tcp, or udp), and the
- second and third columns specify the affected port range. The fourth
- column specifies the IP range in CIDR format. This example shows the
- full port range for all protocols allowed from all IPs.
-
- When adding a new security group, you should pick a descriptive
- but brief name. This name shows up in brief descriptions of the
- instances that use it, where the longer description field often does
- not. Seeing that an instance is using security group http
- is much easier to understand than bobs_group or
- secgrp1.
-
- As an example, let's create a security group that allows web
- traffic anywhere on the Internet. We'll call this group
- global_http, which is clear and reasonably concise,
- encapsulating what is allowed and from where. From the command line,
- do:
-
- $ nova secgroup-create \
- global_http "allow web traffic from the Internet"
-+-------------+-------------------------------------+
-| Name | Description |
-+-------------+-------------------------------------+
-| global_http | allow web traffic from the Internet |
-+-------------+-------------------------------------+
-
- This creates the empty security group. To make it do what we want,
- we need to add some rules:
-
- $ nova secgroup-add-rule <secgroup> <ip-proto> <from-port> <to-port> <cidr>
-$ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
-+-------------+-----------+---------+-----------+--------------+
-| IP Protocol | From Port | To Port | IP Range | Source Group |
-+-------------+-----------+---------+-----------+--------------+
-| tcp | 80 | 80 | 0.0.0.0/0 | |
-+-------------+-----------+---------+-----------+--------------+
-
- Note that the arguments are positional, and the
- from-port and to-port arguments
- specify the allowed local port range connections. These arguments do
- not indicate the source and destination ports of the connection. More
- complex rule sets can be built up through multiple invocations of
- nova secgroup-add-rule. 
For example, if you want to
- pass both http and https traffic, do this:
-
- $ nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0
-+-------------+-----------+---------+-----------+--------------+
-| IP Protocol | From Port | To Port | IP Range | Source Group |
-+-------------+-----------+---------+-----------+--------------+
-| tcp | 443 | 443 | 0.0.0.0/0 | |
-+-------------+-----------+---------+-----------+--------------+
-
- Despite only outputting the newly added rule, this operation is
- additive:
-
- $ nova secgroup-list-rules global_http
-+-------------+-----------+---------+-----------+--------------+
-| IP Protocol | From Port | To Port | IP Range | Source Group |
-+-------------+-----------+---------+-----------+--------------+
-| tcp | 80 | 80 | 0.0.0.0/0 | |
-| tcp | 443 | 443 | 0.0.0.0/0 | |
-+-------------+-----------+---------+-----------+--------------+
-
- The inverse operation is called
- secgroup-delete-rule, using the same format. Whole
- security groups can be removed with
- secgroup-delete.
-
- To create security group rules for a cluster of instances, you
- want to use SourceGroups.
-
- SourceGroups are a special dynamic way of defining the CIDR of
- allowed sources. The user specifies a SourceGroup (security group
- name), and then all of the user's other instances using the specified
- SourceGroup are selected dynamically. This dynamic selection
- alleviates the need for individual rules to allow each new member of
- the cluster.
-
- The code is structured like this: nova
- secgroup-add-group-rule <secgroup> <source-group>
- <ip-proto> <from-port> <to-port>. An example
- usage is shown here:
-
- $ nova secgroup-add-group-rule cluster global_http tcp 22 22
-
- The "cluster" rule allows SSH access from any other instance that
- uses the global_http group.
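-
- As a concrete instance of the inverse operation mentioned above,
- removing the plain-http rule from global_http takes
- the same positional arguments that created it:
-
- $ nova secgroup-delete-rule global_http tcp 80 80 0.0.0.0/0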
-
- Setting with neutron command
-
- If your environment is using Neutron, you can configure security
- group settings using the neutron command.
- Get a list of security groups for the project you are acting in by
- using the following command:
-
- $ neutron security-group-list
-+--------------------------------------+---------+-------------+
-| id | name | description |
-+--------------------------------------+---------+-------------+
-| 6777138a-deb7-4f10-8236-6400e7aff5b0 | default | default |
-| 750acb39-d69b-4ea0-a62d-b56101166b01 | open | all ports |
-+--------------------------------------+---------+-------------+
-
- To view the details of the "open" security group:
-
- $ neutron security-group-show open
-+----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| Field | Value |
-+----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| description | all ports |
-| id | 750acb39-d69b-4ea0-a62d-b56101166b01 |
-| name | open |
-| security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "607ec981611a4839b7b06f6dfa81317d", "port_range_max": null, "security_group_id": "750acb39-d69b-4ea0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv4", "id": "361a1b62-95dd-46e1-8639-c3b2000aab60"} |
-| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "udp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 65535, "security_group_id": "750acb39-d69b-4ea0-a62d-b56101166b01", "port_range_min": 1, "ethertype": "IPv4", "id": "496ba8b7-d96e-4655-920f-068a3d4ddc36"} |
-| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "icmp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "750acb39-d69b-4ea0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv4", "id": "50642a56-3c4e-4b31-9293-0a636759a156"} |
-| | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "607ec981611a4839b7b06f6dfa81317d", "port_range_max": null, "security_group_id": "750acb39-d69b-4ea0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv6", "id": "f46f35eb-8581-4ca1-bbc9-cf8d0614d067"} |
-| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 65535, "security_group_id": "750acb39-d69b-4ea0-a62d-b56101166b01", "port_range_min": 1, "ethertype": "IPv4", "id": "fb6f2d5e-8290-4ed8-a23b-c6870813c921"} |
-| tenant_id | 607ec981611a4839b7b06f6dfa81317d | 
-+----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - These rules are all "allow" type rules, as the default is deny. - This example shows the full port range for all protocols allowed from all IPs. - This section describes the most common security-group-rule parameters: - - - - direction - - - The direction in which the security group rule is applied. - Valid values are ingress or egress. - - - - - remote_ip_prefix - - - This attribute value matches the specified IP prefix as the - source IP address of the IP packet. - - - - - protocol - - - The protocol that is matched by the security group rule. - Valid values are null, tcp, - udp, icmp, - and icmpv6. - - - - - port_range_min - - - The minimum port number in the range that is matched - by the security group rule. If the protocol is TCP or UDP, - this value must be less than or equal to the - port_range_max attribute value. If the - protocol is ICMP or ICMPv6, this value must be an - ICMP or ICMPv6 type, respectively. - - - - - port_range_max - - - The maximum port number in the range that is matched - by the security group rule. - The port_range_min attribute constrains - the port_range_max attribute. If the - protocol is ICMP or ICMPv6, this value must be an ICMP or - ICMPv6 type, respectively. - - - - - ethertype - - - Must be IPv4 or IPv6, - and addresses represented in CIDR must match the ingress or egress rules. - - - - - When adding a new security group, you should pick a descriptive - but brief name. This name shows up in brief descriptions of the - instances that use it where the longer description field often does not. - Seeing that an instance is using security group http - is much easier to understand than bobs_group or - secgrp1. - - This example creates a security group that allows web - traffic anywhere on the Internet. We'll call this group - global_http, which is clear and reasonably concise, - encapsulating what is allowed and from where. 
From the command line,
- do:
-
- $ neutron security-group-create \
- global_http --description "allow web traffic from the Internet"
-Created a new security_group:
-+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| Field | Value |
-+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| description | allow web traffic from the Internet |
-| id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 |
-| name | global_http |
-| security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv4", "id": "b2e56b3a-890b-48d3-9380-8a9f6f8b1b36"} |
-| | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv6", "id": "153d84ba-651d-45fd-9015-58807749efc5"} |
-| tenant_id | 341f49145ec7445192dc3c2abc33500d |
-+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-
- Immediately after creation, the security group contains only allow
- egress rules. To make it do what we want, we need to add some
- rules:
-
- $ neutron security-group-rule-create [-h]
- [-f {html,json,json,shell,table,value,yaml,yaml}]
- [-c COLUMN] [--max-width <integer>]
- [--noindent] [--prefix PREFIX]
- [--request-format {json,xml}]
- [--tenant-id TENANT_ID]
- [--direction {ingress,egress}]
- [--ethertype ETHERTYPE]
- [--protocol PROTOCOL]
- [--port-range-min PORT_RANGE_MIN]
- [--port-range-max PORT_RANGE_MAX]
- [--remote-ip-prefix REMOTE_IP_PREFIX]
- [--remote-group-id REMOTE_GROUP]
- SECURITY_GROUP
-$ neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0 global_http
-Created a new security_group_rule:
-+-------------------+--------------------------------------+
-| Field | Value |
-+-------------------+--------------------------------------+
-| direction | ingress |
-| ethertype | IPv4 |
-| id | 88ec4762-239e-492b-8583-e480e9734622 |
-| port_range_max | 80 |
-| port_range_min | 80 |
-| protocol | tcp |
-| remote_group_id | |
-| remote_ip_prefix | 0.0.0.0/0 |
-| security_group_id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 |
-| tenant_id | 341f49145ec7445192dc3c2abc33500d |
-+-------------------+--------------------------------------+
-
- More complex rule sets can be built up through multiple invocations of
- neutron security-group-rule-create. 
For example, if you want to - pass both http and https traffic, do this: - - $ neutron security-group-rule-create --direction ingress --ethertype ipv4 --protocol tcp --port-range-min 443 --port-range-max 443 --remote-ip-prefix 0.0.0.0/0 global_http -Created a new security_group_rule: -+-------------------+--------------------------------------+ -| Field | Value | -+-------------------+--------------------------------------+ -| direction | ingress | -| ethertype | IPv4 | -| id | c50315e5-29f3-408e-ae15-50fdc03fb9af | -| port_range_max | 443 | -| port_range_min | 443 | -| protocol | tcp | -| remote_group_id | | -| remote_ip_prefix | 0.0.0.0/0 | -| security_group_id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 | -| tenant_id | 341f49145ec7445192dc3c2abc33500d | -+-------------------+--------------------------------------+ - - Despite only outputting the newly added rule, this operation is - additive: - - $ neutron security-group-show global_http -+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| Field | Value | -+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ -| description | allow web traffic from the Internet | -| id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 | -| name | global_http | -| security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv6", "id": "153d84ba-651d-45fd-9015-58807749efc5"} | -| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 80, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": 80, "ethertype": "IPv4", "id": "88ec4762-239e-492b-8583-e480e9734622"} | -| | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv4", "id": "b2e56b3a-890b-48d3-9380-8a9f6f8b1b36"} | -| | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 443, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": 443, "ethertype": "IPv4", "id": "c50315e5-29f3-408e-ae15-50fdc03fb9af"} | -| tenant_id | 341f49145ec7445192dc3c2abc33500d | -+----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - - The inverse 
operation is called
- security-group-rule-delete, specifying the ID of the
- rule to delete. Whole security groups can be removed with
- security-group-delete.
-
- To create security group rules for a cluster of instances,
- use RemoteGroups.
-
- RemoteGroups are a dynamic way of defining the CIDR of
- allowed sources. The user specifies a RemoteGroup (security group
- name), and then all of the user's other instances using the specified
- RemoteGroup are selected dynamically. This dynamic selection
- alleviates the need for individual rules to allow each new member of
- the cluster.
-
- The code is similar to the above example of
- security-group-rule-create. To use RemoteGroup, specify
- --remote-group-id instead of
- --remote-ip-prefix. For example:
- $ neutron security-group-rule-create --direction ingress \
- --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 --remote-group-id global_http cluster
-
- The "cluster" rule allows SSH access from any other instance that
- uses the global_http group.
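-
- For example, to delete the https rule created above, pass the rule ID
- reported by neutron security-group-show:
-
- $ neutron security-group-rule-delete c50315e5-29f3-408e-ae15-50fdc03fb9af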
-
-
- -
-
-
- Block Storage
-
- OpenStack volumes are persistent block-storage devices that may be
- attached to and detached from instances, but they can be attached to
- only one instance at a time. Similar to an external hard drive, they do
- not provide shared storage in the way a network file system or object
- store does. It is left to the operating system in the instance to put a
- file system on the block device and mount it, or not.
-
- block storage
-
-
- storage
- block storage
-
-
- user training
- block storage
-
-
-
- As with other removable disk technology, it is important that the
- operating system is not trying to make use of the disk before removing it.
- On Linux instances, this typically involves unmounting any file systems
- mounted from the volume. The OpenStack volume service cannot tell whether
- it is safe to remove volumes from an instance, so it does what it is told.
- If a user tells the volume service to detach a volume from an instance
- while it is being written to, you can expect some level of file system
- corruption as well as faults from whatever process within the instance was
- using the device.
-
- There is nothing OpenStack-specific in being aware of the steps
- needed to access block devices from within the instance operating system,
- potentially formatting them for first use and being cautious when removing
- them. What is specific is how to create new volumes and attach and detach
- them from instances. These operations can all be done from the
- Volumes page of the dashboard or by using the
- cinder command-line client.
-
- To add new volumes, you need only a name and a volume size in
- gigabytes. Either put these into the Create Volume
- web form or use the command line:
-
- $ cinder create --display-name test-volume 10
-
- This creates a 10 GB volume named test-volume. To
- list existing volumes and the instances they are connected to, if
- any:
-
- $ cinder list
-+------------+---------+--------------------+------+-------------+-------------+
-| ID | Status | Display Name | Size | Volume Type | Attached to |
-+------------+---------+--------------------+------+-------------+-------------+
-| 0821...19f | active | test-volume | 10 | None | |
-+------------+---------+--------------------+------+-------------+-------------+
-
- OpenStack Block Storage also allows creating snapshots of
- volumes. Remember that this is a block-level snapshot that is crash
- consistent, so it is best if the volume is not connected to an instance
- when the snapshot is taken and second best if the volume is not in use on
- the instance it is attached to. If the volume is under heavy use, the
- snapshot may have an inconsistent file system. In fact, by default, the
- volume service does not take a snapshot of a volume that is attached to an
- instance, though it can be forced to. To take a volume snapshot, either
- select Create Snapshot from the actions column next
- to the volume name on the dashboard Volumes page, or
- run this from the command line:
-
- usage: cinder snapshot-create [--force <True|False>]
-[--display-name <display-name>]
-[--display-description <display-description>]
-<volume-id>
-Add a new snapshot.
-Positional arguments: <volume-id> ID of the volume to snapshot
-Optional arguments: --force <True|False> Optional flag to indicate whether to
- snapshot a volume even if it's
- attached to an instance.
- (Default=False)
---display-name <display-name> Optional snapshot name. 
- (Default=None) ---display-description <display-description> -Optional snapshot description. (Default=None) - For more information about updating Block Storage volumes (for example, resizing or - transferring), see the OpenStack End User Guide. -
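-
- Putting the snapshot-create usage above into practice, the following
- sketch snapshots a volume even while it is attached; the display name
- is arbitrary:
-
- $ cinder snapshot-create --force True --display-name test-snapshot <volume-id>
-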
- Block Storage Creation Failures
-
- If a user tries to create a volume and the volume immediately goes
- into an error state, the best way to troubleshoot is to grep the cinder
- log files for the volume's UUID. First try the log files on the cloud
- controller, and then try the log files on the storage node where an
- attempt was made to create the volume:
-
- # grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log
-
- - -
-
-
- Shared File Systems Service
-
- Similar to Block Storage, the Shared File Systems service provides
- persistent storage, called a share, that can be used in multi-tenant
- environments. Users create and mount a share as a remote file system
- on any machine that allows mounting shares and has network access to
- the share exporter. This share can then be used for storing, sharing,
- and exchanging files. The default configuration of the Shared File
- Systems service depends on the back-end driver the admin chooses when
- starting the Shared File Systems service.
- For more information about existing back-end drivers, see the section
- "Share Backends"
- of the Shared File Systems service Developer Guide. For example, if a
- back end based on OpenStack Block Storage is used, the Shared File
- Systems service takes care of everything, including VMs, networking,
- keypairs, and security groups. Other configurations require more
- detailed knowledge of share functionality to set up and tune specific
- parameters and modes of share operation.
-
-
-
- Shares are remote mountable file systems, so users can mount a share
- to multiple hosts, and have it accessed from multiple hosts by multiple
- users at a time. With the Shared File Systems service, you can perform
- a large number of operations with shares:
-
-
- Create, update, delete, and force-delete shares
-
-
- Change access rules for shares, reset share state
-
-
- Specify quotas for existing users or tenants
-
-
- Create share networks
-
-
- Define new share types
-
-
- Perform operations with share snapshots: create, change name,
- create a share from a snapshot, delete
-
-
- Operate with consistency groups
-
-
- Use security services
-
-
- For more information on share management, see the section
-
- “Share management” of the chapter “Shared File Systems” in the
- OpenStack Administrator Guide.
- As for security services, remember that different drivers support
- different authentication methods, while the generic driver does not
- support security services at all (see the section
-
- “Security services” of the chapter “Shared File Systems” in the
- OpenStack Administrator Guide).
-
-
-
- You can create a share in a network, list shares, and
- show information for, update, and delete a specified share. You can
- also create snapshots of shares (see the section
-
- “Share snapshots” of the chapter “Shared File Systems” in the
- OpenStack Administrator Guide).
-
-
-
- There are default and specific share types that allow you to filter or
- choose back ends before you create a share. The function and behavior
- of share types are similar to Block Storage volume types (see the
- section
-
- “Share types” of the chapter “Shared File Systems” in the OpenStack
- Administrator Guide).
-
-
-
- To help users keep and restore their data, the Shared File Systems
- service provides a mechanism to create and operate on snapshots (see
- the section
-
- “Share snapshots” of the chapter “Shared File Systems” in the
- OpenStack Administrator Guide).
-
-
-
- A security service stores configuration information for clients for
- authentication and authorization. Inside Manila, a share network can be
- associated with up to three security services (for detailed
- information, see the section
-
- “Security services” of the chapter “Shared File Systems” in the
- OpenStack Administrator Guide):
-
-
- LDAP
-
-
- Kerberos
-
-
- Microsoft Active Directory
-
-
-
-
-
- The Shared File Systems service differs from the principles
- implemented in Block Storage. 
It can work in
- two modes:
-
-
- Without interaction with share networks, in the so-called
- "no share servers" mode.
-
-
- Interacting with share networks.
-
-
- The Networking service is used by the Shared File Systems service to
- operate directly with share servers. To switch on interaction with the
- Networking service, create a share specifying a share network.
- To use the "share servers" mode even outside of OpenStack, a network
- plugin called StandaloneNetworkPlugin is used. In this case, provide
- network information in the configuration: IP range, network type, and
- segmentation ID.
- You can also add security services to a share network (see the section
-
- “Networking” of the chapter “Shared File Systems” in the OpenStack
- Administrator Guide).
-
-
-
- The main idea of consistency groups is to enable you to create
- snapshots at the exact same point in time from multiple file system
- shares. Those snapshots can then be used for restoring all shares that
- were associated with the consistency group (see the section
-
- “Consistency groups” of the chapter “Shared File Systems” in the
- OpenStack Administrator Guide).
-
-
-
- Shared File System storage allows administrators to set limits and
- quotas for specific tenants and users. Limits are the resource
- limitations that are allowed for each tenant or user. Limits consist
- of:
-
-
- Rate limits
-
-
- Absolute limits
-
-
- Rate limits control the frequency at which users can issue specific API
- requests. Rate limits are configured by administrators in a config
- file. The administrator can also specify quotas, also known as maximum
- values of absolute limits, per tenant, whereas users can see only the
- amount of resources they have consumed.
- An administrator can specify rate limits or quotas for the following
- resources:
-
-
- Max amount of space available for all shares
- Max number of shares
- Max number of shared networks
- Max number of share snapshots
- Max total amount of all snapshots
- Type and number of API calls that can be made in a
- specific time interval
-
-
- Users can see their rate limits and absolute limits by running the
- manila rate-limits and manila absolute-limits commands,
- respectively.
- For more details on limits and quotas, see the subsection
-
- "Quotas and limits" of the "Share management" section of the OpenStack
- Administrator Guide.
-
-
-
- This section lists several of the most important use cases that
- demonstrate the main functions and abilities of the Shared File
- Systems service:
-
-
- Create share
-
-
- Operating with a share
-
-
- Manage access to shares
-
-
- Create snapshots
-
-
- Create a share network
-
-
- Manage a share network
-
-
-
-
-
- The Shared File Systems service cannot warn you beforehand whether
- it is safe to write a specific large amount of data onto a certain
- share or to remove a consistency group that has a number of shares
- assigned to it. In such potentially erroneous situations, if a mistake
- happens, you can expect an error message, or shares or consistency
- groups may even fall into an incorrect status. You can also expect some
- level of system corruption if a user tries to unmount an unmanaged
- share while a process is using it for data transfer.
-
-
-
- Create Share
-
-
- In this section, we examine the process of creating a simple share.
- It consists of several steps:
-
-
- Check if there is an appropriate share type defined in the
- Shared File Systems service
-
-
-
- If such a share type does not exist, an admin should create
- it using the manila type-create command before other
- users are able to use it
-
-
- Using a share network is optional. However, if you need one,
- check whether an appropriate network is defined in the Shared File
- Systems service by using the manila share-network-list
- command. For information on creating a share network, see
- later in this chapter.
-
-
-
- Create a public share using manila create
-
-
- Make sure that the share has been created successfully and is
- ready to use (check the share status and see the share export
- location)
-
-
- The same procedure is described below, step by step and in more
- detail.
-
-
-
-
- Before you start, make sure that the Shared File Systems service is
- installed on your OpenStack cluster and is ready to use.
-
-
-
- By default, there are no share types defined in the Shared File
- Systems service, so check whether the required one has already been
- created:
- $ manila type-list
-+------+--------+-----------+-----------+----------------------------------+----------------------+
-| ID | Name | Visibility| is_default| required_extra_specs | optional_extra_specs |
-+------+--------+-----------+-----------+----------------------------------+----------------------+
-| c0...| default| public | YES | driver_handles_share_servers:True| snapshot_support:True|
-+------+--------+-----------+-----------+----------------------------------+----------------------+
-
-
- If the share types list is empty or does not contain a type you
- need, create the required share type using this command:
- $ manila type-create netapp1 False --is_public True
- This command will create a public share type with the following
- parameters: name = netapp1, spec_driver_handles_share_servers = False
-
-
- You can now create a public share with the
- my_share_net network, the default share type, the NFS shared file
- systems protocol, and a size of 1 GB:
- $ manila create nfs 1 --name "Share1" --description "My first share" --share-type default --share-network my_share_net --metadata aim=testing --public
-+-----------------------------+--------------------------------------+
-| Property | Value |
-+-----------------------------+--------------------------------------+
-| status | None |
-| share_type_name | default |
-| description | My first share |
-| availability_zone | None |
-| share_network_id | None |
-| export_locations | [] |
-| share_server_id | None |
-| host | None |
-| snapshot_id | None |
-| is_public | True |
-| task_state | None |
-| snapshot_support | True |
-| id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
-| size | 1 |
-| name | Share1 |
-| share_type | c0086582-30a6-4060-b096-a42ec9d66b86 |
-| created_at | 2015-09-24T12:19:06.925951 |
-| export_location | None |
-| share_proto | NFS |
-| consistency_group_id | None |
-| source_cgsnapshot_member_id | None |
-| project_id | 20787a7ba11946adad976463b57d8a2f |
-| metadata | {u'aim': u'testing'} |
-+-----------------------------+--------------------------------------+
-
-
-
- To confirm that creation has been successful, see the share in the
- share list:
- $ manila list
-+----+-------+-----+------------+-----------+-------------------------------+----------------------+
-| ID | Name | Size| Share Proto| Share Type| Export location | Host | 
-+----+-------+-----+------------+-----------+-------------------------------+----------------------+
-| a..| Share1| 1 | NFS | c0086... | 10.254.0.3:/shares/share-2d5..| manila@generic1#GEN..|
-+----+-------+-----+------------+-----------+-------------------------------+----------------------+
-
-
-
- Check the share status and see the share export location. After
- creation, the share status should become available:
- $ manila show Share1
-+-----------------------------+-------------------------------------------+
-| Property | Value |
-+-----------------------------+-------------------------------------------+
-| status | available |
-| share_type_name | default |
-| description | My first share |
-| availability_zone | nova |
-| share_network_id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
-| export_locations | 10.254.0.3:/shares/share-2d5e2c0a-1f84... |
-| share_server_id | 41b7829d-7f6b-4c96-aea5-d106c2959961 |
-| host | manila@generic1#GENERIC1 |
-| snapshot_id | None |
-| is_public | True |
-| task_state | None |
-| snapshot_support | True |
-| id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
-| size | 1 |
-| name | Share1 |
-| share_type | c0086582-30a6-4060-b096-a42ec9d66b86 |
-| created_at | 2015-09-24T12:19:06.000000 |
-| share_proto | NFS |
-| consistency_group_id | None |
-| source_cgsnapshot_member_id | None |
-| project_id | 20787a7ba11946adad976463b57d8a2f |
-| metadata | {u'aim': u'testing'} |
-+-----------------------------+-------------------------------------------+
-
- The is_public value defines the level of visibility for the
- share: whether other tenants can see the share or not. By default,
- the share is private. Now you can mount the created share like a remote
- file system and use it for your purposes.
-
-
- See the subsection
-
- “Share Management” of the “Shared File Systems” section of the
- Administration Guide for details on share management
- operations.
-
-
-
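-
- As a sketch, once an access rule is in place (see Manage Access To
- Shares below), mounting Share1 on a Linux client is a standard NFS
- mount against the export location reported by manila show:
-
- $ sudo mount -t nfs <export-location> /mnt/
-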
- - -
- Manage Access To Shares
-
-
- Now that you have a share, you may want to control access to it for
- other users. To do so, you have to perform a number of steps and
- operations. Before you begin managing access to the share, pay
- attention to the following important parameters.
- To grant or deny access to a share, specify one of these supported
- share access levels:
-
-
-
- rw: read and write (RW) access. This is the default
- value.
-
-
-
- ro: read-only (RO) access.
-
-
-
- Additionally, you should also specify one of these supported
- authentication methods:
-
-
-
- ip: authenticates an instance through its IP address.
- A valid format is XX.XX.XX.XX or XX.XX.XX.XX/XX.
- For example, 0.0.0.0/0.
-
-
-
- cert: authenticates an instance through a TLS
- certificate. Specify the TLS identity as the IDENTKEY. A valid
- value is any string up to 64 characters long in the common name
- (CN) of the certificate.
-
-
-
- user: authenticates by a specified user or group
- name. A valid value is an alphanumeric string that can contain
- some special characters and is from 4 to 32 characters long.
-
-
-
-
- Do not mount a share without an access rule! This can lead to
- an exception.
-
-
-
-
- Allow access to the share with the ip access type and the IP address
- 10.254.0.4:
- $ manila access-allow Share1 ip 10.254.0.4 --access-level rw
-+--------------+--------------------------------------+
-| Property | Value |
-+--------------+--------------------------------------+
-| share_id | 7bcd888b-681b-4836-ac9c-c3add4e62537 |
-| access_type | ip |
-| access_to | 10.254.0.4 |
-| access_level | rw |
-| state | new |
-| id | de715226-da00-4cfc-b1ab-c11f3393745e |
-+--------------+--------------------------------------+
-
-
-
- Mount the share:
- $ sudo mount -v -t nfs 10.254.0.5:/shares/share-5789ddcf-35c9-4b64-a28a-7f6a4a574b6a /mnt/
- Then check whether the share mounted successfully and complies with
- the specified access rules:
- $ manila access-list Share1
-+--------------------------------------+-------------+------------+--------------+--------+
-| id | access type | access to | access level | state |
-+--------------------------------------+-------------+------------+--------------+--------+
-| 4f391c6b-fb4f-47f5-8b4b-88c5ec9d568a | user | demo | rw | error |
-| de715226-da00-4cfc-b1ab-c11f3393745e | ip | 10.254.0.4 | rw | active |
-+--------------------------------------+-------------+------------+--------------+--------+
-
-
-
-
- Different share features are supported by different share drivers.
- These examples use the generic driver (with Cinder as a back end),
- which does not support the user and
- cert authentication methods.
-
-
-
-
-
- For details of the features supported by different drivers, see the
- section
-
- “Manila share features support mapping” of the Manila Developer
- Guide.
-
-
-
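-
- The inverse operation, revoking access, takes the rule ID shown by
- manila access-list; for example, to revoke the IP rule granted
- above:
-
- $ manila access-deny Share1 de715226-da00-4cfc-b1ab-c11f3393745e
-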
- - - -
- Manage Shares
-
-
- There are several other useful operations you can perform while
- working with shares.
-
-
- Update Share
-
-
- To change the name of a share, update its description, or change its
- level of visibility for other tenants, use this command:
- $ manila update Share1 --description "My first share. Updated" --is-public False
-
- Check the attributes of the updated Share1:
- $ manila show Share1
-+-----------------------------+--------------------------------------------+
-| Property | Value |
-+-----------------------------+--------------------------------------------+
-| status | available |
-| share_type_name | default |
-| description | My first share. Updated |
-| availability_zone | nova |
-| share_network_id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
-| export_locations | 10.254.0.3:/shares/share-2d5e2c0a-1f84-... |
-| share_server_id | 41b7829d-7f6b-4c96-aea5-d106c2959961 |
-| host | manila@generic1#GENERIC1 |
-| snapshot_id | None |
-| is_public | False |
-| task_state | None |
-| snapshot_support | True |
-| id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
-| size | 1 |
-| name | Share1 |
-| share_type | c0086582-30a6-4060-b096-a42ec9d66b86 |
-| created_at | 2015-09-24T12:19:06.000000 |
-| share_proto | NFS |
-| consistency_group_id | None |
-| source_cgsnapshot_member_id | None |
-| project_id | 20787a7ba11946adad976463b57d8a2f |
-| metadata | {u'aim': u'testing'} |
-+-----------------------------+--------------------------------------------+
-
- - -
- Reset Share State
-
-
- Sometimes a share may hang in an erroneous or a transitional state.
- Unprivileged users do not have the appropriate access rights to
- correct this situation. However, with cloud administrator's
- permissions, you can reset the share's state by using the
- $ manila reset-state [--state <state>] <share_name>
- command, where state indicates which state to assign the share to.
- Options include the available, error, creating, deleting, and
- error_deleting states.
-
-
-
- After running
- $ manila reset-state Share2 --state deleting
- check the share's status:
- $ manila show Share2
-+-----------------------------+-------------------------------------------+
-| Property | Value |
-+-----------------------------+-------------------------------------------+
-| status | deleting |
-| share_type_name | default |
-| description | share from a snapshot. |
-| availability_zone | nova |
-| share_network_id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
-| export_locations | [] |
-| share_server_id | 41b7829d-7f6b-4c96-aea5-d106c2959961 |
-| host | manila@generic1#GENERIC1 |
-| snapshot_id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd |
-| is_public | False |
-| task_state | None |
-| snapshot_support | True |
-| id | b6b0617c-ea51-4450-848e-e7cff69238c7 |
-| size | 1 |
-| name | Share2 |
-| share_type | c0086582-30a6-4060-b096-a42ec9d66b86 |
-| created_at | 2015-09-25T06:25:50.000000 |
-| export_location | 10.254.0.3:/shares/share-1dc2a471-3d47-...|
-| share_proto | NFS |
-| consistency_group_id | None |
-| source_cgsnapshot_member_id | None |
-| project_id | 20787a7ba11946adad976463b57d8a2f |
-| metadata | {u'source': u'snapshot'} |
-+-----------------------------+-------------------------------------------+
-
- - - - -
- Delete Share
-
-
- If you do not need a share any more, you can delete it using the
- manila delete share_name_or_ID command:
- $ manila delete Share2
-
-
-
- If you specified a consistency group while creating the share,
- you should provide the --consistency-group parameter to delete
- the share:
-
-
-
- $ manila delete ba52454e-2ea3-47fa-a683-3176a01295e6 --consistency-group ffee08d9-c86c-45e5-861e-175c731daca2
-
-
-
- Sometimes a share hangs in one of the transitional states (that is,
- creating, deleting, managing, unmanaging, extending, or shrinking).
- In that case, to delete it, you need the
- manila force-delete share_name_or_ID command and
- administrative permissions to run it:
- $ manila force-delete b6b0617c-ea51-4450-848e-e7cff69238c7
-
-
-
-
- For more details and additional information about other cases,
- features, API commands, and so on, see the subsection
-
- “Share Management” of the “Shared File Systems”
- section of the Administration Guide.
-
-
-
- -
- - - -
- Create Snapshots
-
-
- The Shared File Systems service provides a snapshot mechanism to
- help users restore their own data. To create a snapshot, use the
- manila snapshot-create command:
- $ manila snapshot-create Share1 --name Snapshot1 --description "Snapshot of Share1"
-+-------------+--------------------------------------+
-| Property | Value |
-+-------------+--------------------------------------+
-| status | creating |
-| share_id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
-| name | Snapshot1 |
-| created_at | 2015-09-25T05:27:38.862040 |
-| share_proto | NFS |
-| id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd |
-| size | 1 |
-| share_size | 1 |
-| description | Snapshot of Share1 |
-+-------------+--------------------------------------+
-
-
-
- To make sure that the snapshot is available, run:
- $ manila snapshot-show Snapshot1
-+-------------+--------------------------------------+
-| Property | Value |
-+-------------+--------------------------------------+
-| status | available |
-| share_id | aca648eb-8c03-4394-a5cc-755066b7eb66 |
-| name | Snapshot1 |
-| created_at | 2015-09-25T05:27:38.000000 |
-| share_proto | NFS |
-| id | 962e8126-35c3-47bb-8c00-f0ee37f42ddd |
-| size | 1 |
-| share_size | 1 |
-| description | Snapshot of Share1 |
-+-------------+--------------------------------------+
-
- Then, if needed, update the name and description of the created
- snapshot:
- $ manila snapshot-rename Snapshot1 Snapshot_1 --description "Snapshot of Share1. Updated."
-
-
-
- For more details and additional information on snapshots, see the
- subsection
-
- “Share Snapshots” of the “Shared File Systems” section of the
- Administration Guide.
-
-
-
-
- - - -
- Create a Share Network
-
-
- To control a share network, the Shared File Systems service requires
- interaction with the Networking service to manage share servers on its
- own. If the selected driver runs in a mode that requires this kind of
- interaction, you need to specify a share network when a share is
- created. For information on share creation, see
- earlier in this chapter.
- First, check the list of existing share networks:
- $ manila share-network-list
-+--------------------------------------+--------------+
-| id | name |
-+--------------------------------------+--------------+
-+--------------------------------------+--------------+
-
-
-
- If the share network list is empty or does not contain the required
- network, create, for example, a share network with a private network
- and subnet.
- $ manila share-network-create --neutron-net-id 5ed5a854-21dc-4ed3-870a-117b7064eb21 --neutron-subnet-id 74dcfb5a-b4d7-4855-86f5-a669729428dc --name my_share_net --description "My first share network"
-+-------------------+--------------------------------------+
-| Property | Value |
-+-------------------+--------------------------------------+
-| name | my_share_net |
-| segmentation_id | None |
-| created_at | 2015-09-24T12:06:32.602174 |
-| neutron_subnet_id | 74dcfb5a-b4d7-4855-86f5-a669729428dc |
-| updated_at | None |
-| network_type | None |
-| neutron_net_id | 5ed5a854-21dc-4ed3-870a-117b7064eb21 |
-| ip_version | None |
-| nova_net_id | None |
-| cidr | None |
-| project_id | 20787a7ba11946adad976463b57d8a2f |
-| id | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
-| description | My first share network |
-+-------------------+--------------------------------------+
-
- The segmentation_id, cidr, ip_version,
- and network_type share network attributes are
- automatically set to the values determined by the network provider.
-
-
-
- Then confirm that the network was created by requesting the network
- list once again:
- $ manila share-network-list
-+--------------------------------------+--------------+
-| id | name |
-+--------------------------------------+--------------+
-| 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | my_share_net |
-+--------------------------------------+--------------+
-
-
-
- Finally, to create a share that uses this share network, return to the
- Create Share use case described earlier in this chapter.
-
-
- See the subsection
- “Share Networks” of the “Shared File Systems” section of the
- Administration Guide for more details.
-
-
-
-
- - - -
- Manage a Share Network
-
-
- There is a pair of useful commands that help manipulate share networks.
- To start, check the network list:
- $ manila share-network-list
-+--------------------------------------+--------------+
-| id | name |
-+--------------------------------------+--------------+
-| 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | my_share_net |
-+--------------------------------------+--------------+
- If you configured the back end with driver_handles_share_servers = True
- (with the share servers) and have already performed some operations in
- the Shared File Systems service, you will see a manila_service_network
- in the neutron list of networks. This network was created by the share
- driver for internal use.
- $ neutron net-list
-+--------------+------------------------+------------------------------------+
-| id | name | subnets |
-+--------------+------------------------+------------------------------------+
-| 3b5a629a-e...| manila_service_network | 4f366100-50... 10.254.0.0/28 |
-| bee7411d-d...| public | 884a6564-01... 2001:db8::/64 |
-| | | e6da81fa-55... 172.24.4.0/24 |
-| 5ed5a854-2...| private | 74dcfb5a-bd... 10.0.0.0/24 |
-| | | cc297be2-51... fd7d:177d:a48b::/64 |
-+--------------+------------------------+------------------------------------+
-
-
-
- You can also see detailed information about the share network,
- including the network_type and segmentation_id fields:
- $ neutron net-show manila_service_network
-+---------------------------+--------------------------------------+
-| Field | Value |
-+---------------------------+--------------------------------------+
-| admin_state_up | True |
-| id | 3b5a629a-e7a1-46a3-afb2-ab666fb884bc |
-| mtu | 0 |
-| name | manila_service_network |
-| port_security_enabled | True |
-| provider:network_type | vxlan |
-| provider:physical_network | |
-| provider:segmentation_id | 1068 |
-| router:external | False |
-| shared | False |
-| status | ACTIVE |
-| subnets | 4f366100-5108-4fa2-b5b1-989a121c1403 |
-| tenant_id | 24c6491074e942309a908c674606f598 |
-+---------------------------+--------------------------------------+
- You can also add security services to a share network and remove them
- from it.
-
-
-
-
- For details, see the subsection
-
- "Security Services" of the “Shared File Systems” section of the
- Administration Guide.
-
-
-
- -
- - - - -
- - - Instances - - Instances are the running virtual machines within an OpenStack - cloud. This section deals with how to work with them and their underlying - images, their network properties, and how they are represented in the - database. - user training - - instances - - -
- Starting Instances
-
- To launch an instance, you need to select an image, a flavor, and
- a name. The name need not be unique, but your life will be simpler if
- it is, because many tools will use the name in place of the UUID as
- long as the name is unique. You can start an instance from the
- dashboard from the Launch Instance button on the
- Instances page or by selecting the
- Launch Instance action next to an image or snapshot
- on the Images page.
- instances
-
- starting
-
-
- On the command line, do this:
-
- $ nova boot --flavor <flavor> --image <image> <name>
-
- There are a number of optional items that can be specified. You
- should read the rest of this section before trying to start an instance,
- but this is the base command that later details are layered upon.
-
- To delete instances from the dashboard, select the
- Delete instance action next to the instance on
- the Instances page.
-
-
- In releases prior to Mitaka, select the equivalent Terminate
- instance action.
-
-
- From the command line, do this:
-
- $ nova delete <instance-uuid>
-
- It is important to note that powering off an instance does not
- terminate it in the OpenStack sense.
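-
- For example, an instance that was powered off with nova stop still
- exists, still appears in nova list, and can be powered back on; only
- nova delete removes it:
-
- $ nova stop <instance-uuid>
- $ nova start <instance-uuid>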
- -
- Instance Boot Failures
-
- If an instance fails to start and immediately moves to an error
- state, there are a few different ways to track down what has gone wrong.
- Some of these can be done with normal user access, while others require
- access to your log server or compute nodes.
- instances
-
- boot failures
-
-
- The simplest reasons for instances to fail to launch are quota
- violations or the scheduler being unable to find a suitable compute node
- on which to run the instance. In these cases, the error is apparent when
- you run a nova show on the faulted instance:
- config drive
-
-
- $ nova show test-instance
-
-
-+------------------------+-----------------------------------------------------\
-| Property | Value /
-+------------------------+-----------------------------------------------------\
-| OS-DCF:diskConfig | MANUAL /
-| OS-EXT-STS:power_state | 0 \
-| OS-EXT-STS:task_state | None /
-| OS-EXT-STS:vm_state | error \
-| accessIPv4 | /
-| accessIPv6 | \
-| config_drive | /
-| created | 2013-03-01T19:28:24Z \
-| fault | {u'message': u'NoValidHost', u'code': 500, u'created/
-| flavor | xxl.super (11) \
-| hostId | /
-| id | 940f3b2f-bd74-45ad-bee7-eb0a7318aa84 \
-| image | quantal-test (65b4f432-7375-42b6-a9b8-7f654a1e676e) /
-| key_name | None \
-| metadata | {} /
-| name | test-instance \
-| security_groups | [{u'name': u'default'}] /
-| status | ERROR \
-| tenant_id | 98333a1a28e746fa8c629c83a818ad57 /
-| updated | 2013-03-01T19:28:26Z \
-| user_id | a1ef823458d24a68955fec6f3d390019 /
-+------------------------+-----------------------------------------------------\
-
-
- In this case, looking at the fault message
- shows NoValidHost, indicating that the scheduler was
- unable to match the instance requirements.
-
- If nova show does not sufficiently explain the
- failure, searching for the instance UUID in the
- nova-compute.log on the compute node it was scheduled on or
- the nova-scheduler.log on your scheduler hosts is a good
- place to start looking for lower-level problems.
-
- Using nova show as an admin user will show the
- compute node the instance was scheduled on as hostId. If
- the instance failed during scheduling, this field is blank.
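-
- For example, using the UUID from the nova show output above (log
- locations vary by distribution):
-
- # grep 940f3b2f-bd74-45ad-bee7-eb0a7318aa84 /var/log/nova/nova-scheduler.log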
- -
- Using Instance-Specific Data - - There are two main types of instance-specific data: metadata and - user data. - metadata - - instance metadata - - instances - - instance-specific data - - -
- Instance metadata - - For Compute, instance metadata is a collection of key-value - pairs associated with an instance. Compute reads and writes to these - key-value pairs any time during the instance lifetime, from inside and - outside the instance, when the end user uses the Compute API to do so. - However, you cannot query the instance-associated key-value pairs with - the metadata service that is compatible with the Amazon EC2 metadata - service. - - For an example of instance metadata, users can generate and - register SSH keys using the nova command: - - $ nova keypair-add mykey > mykey.pem - - This creates a key named mykey, which you - can associate with instances. The file mykey.pem - is the private key, which should be saved to a secure location because - it allows root access to instances the mykey - key is associated with. - - Use this command to register an existing key with - OpenStack: - - $ nova keypair-add --pub-key mykey.pub mykey - - - You must have the matching private key to access instances - associated with this key. - - - To associate a key with an instance on boot, add - --key_name mykey to your command line. For - example: - - $ nova boot --image ubuntu-cloudimage --flavor 2 --key_name mykey myimage - - When booting a server, you can also add arbitrary metadata so - that you can more easily identify it among other running instances. - Use the --meta option with a key-value pair, where you - can make up the string for both the key and the value. For example, - you could add a description and also the creator of the server: - - $ nova boot --image=test-image --flavor=1 \ - --meta description='Small test image' smallimage - - When viewing the server information, you can see the metadata - included on the metadata - line: - - - - $ nova show smallimage -+------------------------+-----------------------------------------+ -| Property | Value | -+------------------------+-----------------------------------------+ -| OS-DCF:diskConfig | MANUAL | -| OS-EXT-STS:power_state | 1 | -| OS-EXT-STS:task_state | None | -| OS-EXT-STS:vm_state | active | -| accessIPv4 | | -| accessIPv6 | | -| config_drive | | -| created | 2012-05-16T20:48:23Z | -| flavor | m1.small | -| hostId | de0...487 | -| id | 8ec...f915 | -| image | natty-image | -| key_name | | -| metadata | {u'description': u'Small test image'} | -| name | smallimage | -| private network | 172.16.101.11 | -| progress | 0 | -| public network | 10.4.113.11 | -| status | ACTIVE | -| tenant_id | e83...482 | -| updated | 2012-05-16T20:48:35Z | -| user_id | de3...0a9 | -+------------------------+-----------------------------------------+ -
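-
- While the EC2-compatible metadata service does not expose these
- key-value pairs, the OpenStack-format metadata endpoint inside a
- running instance does; a quick way to inspect them, assuming the
- metadata service is reachable at the usual link-local address:
-
- $ curl http://169.254.169.254/openstack/latest/meta_data.json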
- -
- Instance user data
-
- The user-data key is a special key in the metadata
- service that holds a file that cloud-aware applications within the
- guest instance can access. For example, cloud-init is an open
- source package, originally from Ubuntu but available in most
- distributions, that handles early initialization of a cloud instance
- and makes use of this user data.
- user data
-
-
- This user data can be put in a file on your local system and
- then passed in at instance creation with the flag --user-data
- <user-data-file>. For example:
-
- $ nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file mydatainstance
-
- To understand the difference between user data and metadata,
- realize that user data is created before an instance is started. User
- data is accessible from within the instance when it is running. User
- data can be used to store configuration, a script, or anything the
- tenant wants.
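-
- For instance, mydata.file could contain a cloud-config document like
- the following sketch, which installs and starts a web server on first
- boot; the package and service names assume an Ubuntu guest:
-
- #cloud-config
- packages:
-   - apache2
- runcmd:
-   - [ systemctl, enable, --now, apache2 ]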
- -
- File injection - - Arbitrary local files can also be placed into the instance file - system at creation time by using the --file - <dst-path=src-path> option. You may store up to five - files. - file injection - - - For example, let's say you have a special - authorized_keys file named - special_authorized_keysfile that for some reason you want to put on - the instance instead of using the regular SSH key injection. In this - case, you can use the following command: - - $ nova boot --image ubuntu-cloudimage --flavor 1 \ - --file /root/.ssh/authorized_keys=special_authorized_keysfile authkeyinstance -
-
-
- -
- Associating Security Groups - - Security groups, as discussed earlier, are typically required to - allow network traffic to an instance, unless the default security group - for a project has been modified to be more permissive. - security groups - - user training - - security groups - - - Adding security groups is typically done on instance boot. When - launching from the dashboard, you do this on the Access & - Security tab of the Launch Instance - dialog. When launching from the command line, append - --security-groups with a comma-separated list of security - groups. - - It is also possible to add and remove security groups when an - instance is running. Currently this is only available through the - command-line tools. Here is an example: - - $ nova add-secgroup <server> <securitygroup> - - $ nova remove-secgroup <server> <securitygroup> -
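-
- For example, to launch an instance with both the default group and
- the global_http group created earlier in this chapter:
-
- $ nova boot --flavor <flavor> --image <image> \
-   --security-groups default,global_http <name>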
- -
- Floating IPs
-
- Where floating IPs are configured in a deployment, each project has
- a limited number of floating IPs, controlled by a quota. These must
- be allocated to the project from the central pool prior to use,
- usually by the administrator of the project. To allocate a
- floating IP to a project, use the Allocate IP To
- Project button on the Floating IPs tab
- of the Access & Security page of the dashboard.
- The command line can also be used:
-
- $ nova floating-ip-create
-
- Once allocated, a floating IP can be assigned to running instances
- from the dashboard, either by selecting Associate Floating
- IP from the actions drop-down next to the IP on the
- Floating IPs tab of the
- Access & Security page, or by making this
- selection next to the instance you want to associate it with on the
- Instances page. The inverse action,
- Dissociate Floating IP, is available from the
- Floating IPs tab of the
- Access & Security page and from the
- Instances page.
-
- To associate or disassociate a floating IP with a server from the
- command line, use the following commands:
-
- $ nova add-floating-ip <server> <address>
-
- $ nova remove-floating-ip <server> <address>
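-
- To see which pools floating IPs can be allocated from, and which
- addresses the project currently holds, the following commands can
- be used (output varies by deployment):
-
- $ nova floating-ip-pool-list
- $ nova floating-ip-list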
- -
- Attaching Block Storage
-
- You can attach block storage to instances from the dashboard on the
- Volumes page. Click the Manage
- Attachments action next to the volume you want to
- attach.
-
- To perform this action from the command line, run the following
- command:
-
- $ nova volume-attach <server> <volume> <device>
-
- You can also specify block device mapping at instance boot time
- through the nova command-line client with this option set:
-
- --block-device-mapping <dev-name=mapping>
-
- The block device mapping format is
- <dev-name>=<id>:<type>:<size(GB)>:<delete-on-terminate>,
- where:
-
- dev-name
-   The device name where the volume is attached in the system, at
-   /dev/dev_name
-
- id
-   The ID of the volume to boot from, as shown in the output of
-   nova volume-list
-
- type
-   Either snap, which means that the volume
-   was created from a snapshot, or anything other than
-   snap (a blank string is valid). In the following
-   example, the volume was not created from a snapshot, so the
-   field is left blank.
-
- size (GB)
-   The size of the volume in gigabytes. It is safe to leave this
-   blank and have the Compute Service infer the size.
-
- delete-on-terminate
-   A boolean to indicate whether the volume should be deleted
-   when the instance is terminated. True can be specified as
-   True or 1. False can be
-   specified as False or 0.
-
- The following command boots a new instance and attaches a volume
- at the same time. The volume with ID 13 is attached as
- /dev/vdc. It is not a snapshot, does not specify a size, and
- will not be deleted when the instance is terminated:
-
- $ nova boot --image 4042220e-4f5e-4398-9054-39fbd75a5dd7 \
-   --flavor 2 --key-name mykey --block-device-mapping vdc=13:::0 \
-   boot-with-vol-test
-
- If you have previously prepared block storage with a bootable file
- system image, it is even possible to boot from persistent block storage.
- The following command boots an image from the specified volume. It is
- similar to the previous command, but the image is omitted and the volume
- is now attached as /dev/vda:
-
- $ nova boot --flavor 2 --key-name mykey \
-   --block-device-mapping vda=13:::0 boot-from-vol-test
-
- Read more detailed instructions for launching an instance from a
- bootable volume in the OpenStack End User
- Guide.
-
- To boot normally from an image and attach block storage, map to a
- device other than vda. You can find instructions for launching an
- instance and attaching a volume to the instance, and for copying the
- image to the attached volume, in the OpenStack End User
- Guide.
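-
- Putting this together, a typical attach workflow from the command
- line is to find the volume ID first and then attach it. A sketch,
- with an illustrative server name and volume ID:
-
- $ nova volume-list
- $ nova volume-attach myinstance 13 /dev/vdc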
- -
-
-
- Taking Snapshots
-
- The OpenStack snapshot mechanism allows you to create new images
- from running instances. This is very convenient for upgrading base
- images or for taking a published image and customizing it for local
- use. To snapshot a running instance to an image using the CLI, do
- this:
-
- $ nova image-create <instance name or uuid> <name of new image>
-
- The dashboard interface for snapshots can be confusing because the
- snapshots and images are displayed on the Images
- page. However, an instance snapshot is an image. The
- only difference between an image that you upload directly to the
- Image Service and an image that you create by snapshot is that an
- image created by snapshot has additional properties in the glance
- database. These properties are found in the
- image_properties table and include:
-
-   Name             Value
-   ----             -----
-   image_type       snapshot
-   instance_uuid    <uuid of instance that was snapshotted>
-   base_image_ref   <uuid of original image of instance that was snapshotted>
-   image_location   snapshot
-
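-
- These properties can also be inspected from the command line. As a
- quick sketch, assuming a snapshot image named mysnapshot exists
- (the name is illustrative), the snapshot-specific properties appear
- in the output of:
-
- $ nova image-show mysnapshot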
- Live Snapshots
-
- Live snapshotting is a feature that allows users to snapshot
- running virtual machines without pausing them. These snapshots are
- simply disk-only snapshots. Snapshotting an instance can now be
- performed with no downtime (assuming QEMU 1.3+ and libvirt 1.0+ are
- used).
-
-
- Disable live snapshotting
- If you use libvirt version 1.2.2,
- you may experience intermittent problems with live snapshot creation.
-
- To disable libvirt live snapshotting until the problem is
- resolved, add the following setting to nova.conf:
- [workarounds]
- disable_libvirt_livesnapshot = True
-
-
- Ensuring Snapshots of Linux Guests Are Consistent
-
- The following section is from Sébastien Han's “OpenStack: Perform
- Consistent Snapshots” blog entry.
-
- A snapshot captures the state of the file system, but not the
- state of the memory. Therefore, to ensure your snapshot contains the
- data that you want, before taking your snapshot you need to ensure
- that:
-
-
- Running programs have written their contents to disk
-
-
- The file system does not have any "dirty" buffers: where
- programs have issued the command to write to disk, but the
- operating system has not yet performed the write
-
-
- To ensure that important services have written their contents to
- disk (such as databases), we recommend that you read the documentation
- for those applications to determine what commands to issue to have
- them sync their contents to disk. If you are unsure how to do this,
- the safest approach is to simply stop these running services
- normally.
-
- To deal with the "dirty" buffer issue, we recommend using the
- sync command before snapshotting:
-
- # sync
-
- Running sync writes dirty buffers (buffered blocks
- that have been modified but not yet written to the disk block) to
- disk.
-
- Just running sync is not enough to ensure that the
- file system is consistent. We recommend that you use the
- fsfreeze tool, which halts new access to the file system
- and creates a stable image on disk that is suitable for snapshotting.
- The fsfreeze tool supports several file systems,
- including ext3, ext4, and XFS. If your virtual machine instance is
- running on Ubuntu, install the util-linux package to get
- fsfreeze:
-
- # apt-get install util-linux
-
-
- In the very common case where the underlying snapshot is
- done via LVM, the file system freeze is automatically handled by
- LVM.
-
-
- If your operating system doesn't have a version of
- fsfreeze available, you can use
- xfs_freeze instead, which is available on Ubuntu in
- the xfsprogs package. Despite the "xfs" in the name, xfs_freeze also
- works on ext3 and ext4 if you are using a Linux kernel version 2.6.29
- or greater, since it works at the virtual file system (VFS) level
- starting at 2.6.29. The xfs_freeze version supports the same
- command-line arguments as fsfreeze.
-
- Consider the example where you want to take a snapshot of a
- persistent block storage volume, detected by the guest operating
- system as /dev/vdb and mounted on
- /mnt. The fsfreeze command accepts two
- arguments:
-
- -f
-   Freeze the system
-
- -u
-   Thaw (unfreeze) the system
-
- To freeze the volume in preparation for snapshotting, you would
- do the following, as root, inside the instance:
-
- # fsfreeze -f /mnt
-
- You must mount the file system before you
- run the fsfreeze command.
-
- When the fsfreeze -f command is issued, all
- ongoing transactions in the file system are allowed to complete, new
- write system calls are halted, and other calls that modify the file
- system are halted. Most importantly, all dirty data, metadata, and log
- information are written to disk.
-
- Once the volume has been frozen, do not attempt to read from or
- write to the volume, as these operations hang. The operating system
- stops every I/O operation, and any I/O attempts are delayed until the
- file system has been unfrozen.
-
- Once you have issued the fsfreeze command, it
- is safe to perform the snapshot. For example, if your instance was
- named mon-instance and you wanted to snapshot it to
- an image named mon-snapshot, you could now run the
- following:
-
- $ nova image-create mon-instance mon-snapshot
-
- When the snapshot is done, you can thaw the file system with the
- following command, as root, inside of the instance:
-
- # fsfreeze -u /mnt
-
- If you want to back up the root file system, you can't simply
- run the preceding command because it will freeze the prompt. Instead,
- run the following one-liner, as root, inside the instance:
-
- # fsfreeze -f / && read x; fsfreeze -u /
-
- After running this command, it is common practice to call
- nova image-create from your workstation and, once the snapshot is
- done, to press Enter in your instance shell to unfreeze the file
- system. You could automate this, but at a minimum it lets you
- synchronize properly.
-
-
- Ensuring Snapshots of Windows Guests Are Consistent
-
- Obtaining consistent snapshots of Windows VMs is conceptually
- similar to obtaining consistent snapshots of Linux VMs, although it
- requires additional utilities to coordinate with a Windows-only
- subsystem designed to facilitate consistent backups.
-
- Windows XP and later releases include a Volume Shadow Copy
- Service (VSS), which provides a framework so that compliant
- applications can be consistently backed up on a live file system. To
- use this framework, a VSS requestor is run that signals to the VSS
- service that a consistent backup is needed. The VSS service notifies
- compliant applications (called VSS writers) to quiesce their data
- activity. The VSS service then tells the copy provider to create
- a snapshot. Once the snapshot has been made, the VSS service
- unfreezes VSS writers and normal I/O activity resumes.
-
- QEMU provides a guest agent that can be run in guests running
- on KVM hypervisors. This guest agent, on Windows VMs, coordinates with
- the Windows VSS service to facilitate a workflow that ensures
- consistent snapshots. This feature requires at least QEMU 1.7. The
- relevant guest agent commands are:
-
- guest-file-flush
-   Write out "dirty" buffers to disk, similar to the
-   Linux sync operation.
-
- guest-fsfreeze-freeze
-   Suspend I/O to the disks, similar to the Linux
-   fsfreeze -f operation.
-
- guest-fsfreeze-thaw
-   Resume I/O to the disks, similar to the Linux
-   fsfreeze -u operation.
-
- To obtain snapshots of a Windows VM, these commands can be
- scripted in sequence: flush the file systems, freeze the file
- systems, snapshot the file systems, then unfreeze the file systems.
- As with scripting similar workflows against Linux VMs, care must be
- taken when writing such a script to ensure that error handling is
- thorough and that file systems will not be left in a frozen state.
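-
- On a KVM compute node, these guest agent commands can be issued
- through libvirt. A minimal sketch, assuming the instance's libvirt
- domain is named instance-00000001 (illustrative) and the guest
- agent is running inside it; take the snapshot between the freeze
- and the thaw:
-
- # virsh qemu-agent-command instance-00000001 \
-   '{"execute": "guest-fsfreeze-freeze"}'
- # virsh qemu-agent-command instance-00000001 \
-   '{"execute": "guest-fsfreeze-thaw"}'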
-
- -
-
-
- Instances in the Database
-
- While instance information is stored in a number of database
- tables, the table you most likely need to look at in relation to
- user instances is the instances table.
-
- The instances table carries most of the information related to
- both running and deleted instances. It has a bewildering array of
- fields; for an exhaustive list, look at the database. These are the
- most useful fields for operators looking to form queries:
-
-
- The deleted field is set to
- 1 if the instance has been deleted and
- NULL if it has not been deleted. This field is
- important for excluding deleted instances from your queries.
-
-
- The uuid field is the UUID of the instance
- and is used throughout other tables in the database as a foreign key.
- This ID is also reported in logs, the dashboard, and command-line
- tools to uniquely identify an instance.
-
-
- A collection of foreign keys is available to find relations to
- the instance. The most useful of these, user_id and
- project_id, are the UUIDs of the user who launched
- the instance and the project it was launched in.
-
-
- The host field tells which compute node is
- hosting the instance.
-
-
- The hostname field holds the name of the
- instance when it is launched. The display_name is initially the same
- as hostname but can be reset using the nova rename command.
-
-
- A number of time-related fields are useful for tracking when
- state changes happened on an instance:
-
- created_at
-
- updated_at
-
- deleted_at
-
- scheduled_at
-
- launched_at
-
- terminated_at
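-
- As an illustrative sketch, and following the deleted convention
- described above, an operator could count the non-deleted instances
- per project directly in the nova database:
-
- mysql> SELECT project_id, COUNT(*) FROM instances
-     ->   WHERE deleted IS NULL GROUP BY project_id;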
- -
- Good Luck!
-
- This section was intended as a brief introduction to some of the
- most useful of the many OpenStack commands. For an exhaustive list,
- please refer to the
- OpenStack Administrator Guide. We hope
- your users remain happy and recognize your hard work! (For more hard
- work, turn the page to the next chapter, where we discuss the
- system-facing operations: maintenance, failures, and debugging.)
-
diff --git a/doc/openstack-ops/cover.png b/doc/openstack-ops/cover.png deleted file mode 100644 index d1b9c532..00000000 Binary files a/doc/openstack-ops/cover.png and /dev/null differ diff --git a/doc/openstack-ops/figures/Check_mark_23x20_02.png b/doc/openstack-ops/figures/Check_mark_23x20_02.png deleted file mode 100644 index e6e5d5a7..00000000 Binary files a/doc/openstack-ops/figures/Check_mark_23x20_02.png and /dev/null differ diff --git a/doc/openstack-ops/figures/Check_mark_23x20_02.svg b/doc/openstack-ops/figures/Check_mark_23x20_02.svg deleted file mode 100644 index 3051a2f9..00000000 --- a/doc/openstack-ops/figures/Check_mark_23x20_02.svg +++ /dev/null @@ -1,60 +0,0 @@ - - - - - - - - - image/svg+xml - - - - - - - - diff --git a/doc/openstack-ops/figures/network_packet_ping.svg b/doc/openstack-ops/figures/network_packet_ping.svg deleted file mode 100644 index f5dda8e2..00000000 --- a/doc/openstack-ops/figures/network_packet_ping.svg +++ /dev/null @@ -1,3 +0,0 @@ - - -2013-03-02 18:48ZCanvas 1Layer 1Compute Node nbr100Internetinstanceeth0eth0vnet1L2 Switchgateway12345 diff --git a/doc/openstack-ops/figures/neutron_packet_ping.svg b/doc/openstack-ops/figures/neutron_packet_ping.svg deleted file mode 100644 index 898794ff..00000000 --- a/doc/openstack-ops/figures/neutron_packet_ping.svg +++ /dev/null @@ -1,1734 +0,0 @@ - - - - - 2013-03-02 18:48Z - - - - image/svg+xml - - - - - - - - - - - - - - - - - - - - - - - - IP Link - Layer2 VLANTrunk - - Neutron Network Paths - - VLAN and GRE networks - - GRE networks - - VLAN networks - - - - - - Compute Node n - - - - br-int - - - - instance - - - - - eth0 - - - - - - tap - - - - - - - 1 - - - - - - - - - 2 - - - - - - - - - - 3 - - - - - br-tun - - - - - - 4b - - - - - - - 4a - - - - - patch-tun - - - - int-br-eth1 - - - - - - - eth1 - - - - - phy-br-eth1 - - - - - eth0 - - - - br-eth1 - - - - patch-int - - - - gre0 - - - - gre<N> - - - - - - - - - - - - - Network Node - - - - - dhcp-agent - - - - - - - 10 - - - - - - - 5b - - - - - - 5a - - - - - - - 8 - - - - - - br-eth1 - - - br-tun - - - - - eth1 - - - - - phy-br-eth1 - - - - patch-int - - - - gre<N> - - - - gre0 - - - - - eth2 - - - - - - - - qg-<n> - - - - - - eth0 - - - - - - - 9 - - - - - - - - - 6 - - - - br-int - - - - - - tap - - - - - - qr-<n> - - - - - phy-br-eth1 - - - - patch-tun - - - - br-ex - - - - netns qrouter-uuid - netns qdhcp-uuid - - - - - - - - - - - l3-agent - - - - - - - 7 - - - - - - - - Internet - - - - - - diff --git a/doc/openstack-ops/figures/os-ref-arch.svg b/doc/openstack-ops/figures/os-ref-arch.svg deleted file mode 100644 index 7fea7f19..00000000 --- a/doc/openstack-ops/figures/os-ref-arch.svg +++ /dev/null @@ -1,3 +0,0 @@ - - -2013-02-26 23:27ZCanvas 1Layer 1Compute Node 2nova-computenova-api-metadatanova-vncconsolenova-networketh1eth0Compute Node 1HypervisorAPI for metadatanoVNCnova-networkInternetCloud Controller NodeDatabaseMessage QueueAPI servicesSchedulerIdentityImageBlock StorageDashboardConsole accesseth0eth1Management Network 192.168.1.0/24Public Network 203.0.113.0/24Flat Network 10.1.0.0/16eth1eth0Block Storage NodeSCSI target (tgt)eth1Ephemeral Storage NodeNFSeth1cinder-volume diff --git a/doc/openstack-ops/figures/os_physical_network.svg b/doc/openstack-ops/figures/os_physical_network.svg deleted file mode 100644 index d4d83fcb..00000000 --- a/doc/openstack-ops/figures/os_physical_network.svg +++ /dev/null @@ -1,3 +0,0 @@ - - -2013-02-27 18:33ZCanvas 1Layer 1Compute Node neth0eth1Management Network192.168.1.0/24Flat 
Network10.1.0.0/16Public Network203.0.113.0/24br100instance ninstance 2instance 1 diff --git a/doc/openstack-ops/figures/osog_0001.png b/doc/openstack-ops/figures/osog_0001.png deleted file mode 100644 index 3d7c556a..00000000 Binary files a/doc/openstack-ops/figures/osog_0001.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_00in01.png b/doc/openstack-ops/figures/osog_00in01.png deleted file mode 100644 index 1a7c150c..00000000 Binary files a/doc/openstack-ops/figures/osog_00in01.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0101.png b/doc/openstack-ops/figures/osog_0101.png deleted file mode 100644 index 083d916a..00000000 Binary files a/doc/openstack-ops/figures/osog_0101.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0102.png b/doc/openstack-ops/figures/osog_0102.png deleted file mode 100644 index 33ac264b..00000000 Binary files a/doc/openstack-ops/figures/osog_0102.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0103.png b/doc/openstack-ops/figures/osog_0103.png deleted file mode 100644 index b5a2de15..00000000 Binary files a/doc/openstack-ops/figures/osog_0103.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0104.png b/doc/openstack-ops/figures/osog_0104.png deleted file mode 100644 index 3405b5f5..00000000 Binary files a/doc/openstack-ops/figures/osog_0104.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0105.png b/doc/openstack-ops/figures/osog_0105.png deleted file mode 100644 index a1971f47..00000000 Binary files a/doc/openstack-ops/figures/osog_0105.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0106.png b/doc/openstack-ops/figures/osog_0106.png deleted file mode 100644 index 57fc63af..00000000 Binary files a/doc/openstack-ops/figures/osog_0106.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_01in01.png b/doc/openstack-ops/figures/osog_01in01.png deleted file mode 100644 index 8656f98b..00000000 Binary files a/doc/openstack-ops/figures/osog_01in01.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_01in02.png b/doc/openstack-ops/figures/osog_01in02.png deleted file mode 100644 index 1565401e..00000000 Binary files a/doc/openstack-ops/figures/osog_01in02.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0201.png b/doc/openstack-ops/figures/osog_0201.png deleted file mode 100644 index 794c327e..00000000 Binary files a/doc/openstack-ops/figures/osog_0201.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0901.png b/doc/openstack-ops/figures/osog_0901.png deleted file mode 100644 index 8a11b8d8..00000000 Binary files a/doc/openstack-ops/figures/osog_0901.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_0902.png b/doc/openstack-ops/figures/osog_0902.png deleted file mode 100644 index 90a9db57..00000000 Binary files a/doc/openstack-ops/figures/osog_0902.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_1201.png b/doc/openstack-ops/figures/osog_1201.png deleted file mode 100644 index d0e3a3fd..00000000 Binary files a/doc/openstack-ops/figures/osog_1201.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_1202.png b/doc/openstack-ops/figures/osog_1202.png deleted file mode 100644 index ce1e475e..00000000 Binary files a/doc/openstack-ops/figures/osog_1202.png and /dev/null differ diff --git a/doc/openstack-ops/figures/osog_ac01.png b/doc/openstack-ops/figures/osog_ac01.png deleted file mode 100644 index 
6caddef4..00000000 Binary files a/doc/openstack-ops/figures/osog_ac01.png and /dev/null differ diff --git a/doc/openstack-ops/figures/releasecyclegrizzlydiagram.png b/doc/openstack-ops/figures/releasecyclegrizzlydiagram.png deleted file mode 100644 index 26ae2250..00000000 Binary files a/doc/openstack-ops/figures/releasecyclegrizzlydiagram.png and /dev/null differ diff --git a/doc/openstack-ops/locale/ja.po b/doc/openstack-ops/locale/ja.po deleted file mode 100644 index ddef4076..00000000 --- a/doc/openstack-ops/locale/ja.po +++ /dev/null @@ -1,20708 +0,0 @@ -# Translators: -# Akihiro Motoki , 2013 -# Akira Yoshiyama , 2013 -# Andreas Jaeger , 2014-2015 -# Ying Chun Guo , 2013 -# doki701 , 2013 -# yfukuda , 2014 -# Masanori Itoh , 2013 -# Masanori Itoh , 2013 -# Masayuki Igawa , 2013 -# Masayuki Igawa , 2013 -# myamamot , 2014 -# *はたらくpokotan* <>, 2013 -# Tomoaki Nakajima <>, 2013 -# Yuki Shira , 2013 -# Shogo Sato , 2014 -# tsutomu.takekawa , 2013 -# Masanori Itoh , 2013 -# Toru Makabe , 2013 -# doki701 , 2013 -# Tom Fifield , 2014 -# Tomoyuki KATO , 2012-2015 -# Toru Makabe , 2013 -# tsutomu.takekawa , 2013 -# Ying Chun Guo , 2013 -# ykatabam , 2014 -# Yuki Shira , 2013 -# -# -# Akihiro Motoki , 2015. #zanata -# KATO Tomoyuki , 2015. #zanata -# OpenStack Infra , 2015. #zanata -# Shinichi Take , 2015. #zanata -# Akihiro Motoki , 2016. #zanata -# KATO Tomoyuki , 2016. #zanata -msgid "" -msgstr "" -"Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2016-03-18 11:44+0000\n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" -"PO-Revision-Date: 2016-03-18 10:30+0000\n" -"Last-Translator: KATO Tomoyuki \n" -"Language: ja\n" -"Plural-Forms: nplurals=1; plural=0;\n" -"X-Generator: Zanata 3.7.3\n" -"Language-Team: Japanese\n" - -msgid "\"Hey Alvaro, can you run a VLAN on top of a VLAN?\"" -msgstr "「Alvaro、VLAN 上に VLAN って作れるのかい?」" - -msgid "\"If you did, you'd add an extra 4 bytes to the packet\"" -msgstr "「もしやったら、パケットに余計に4バイト追加になるよ」" - -msgid "\"The Issue\"" -msgstr "「あの問題」" - -msgid "<delete-on-terminate>" -msgstr "<delete-on-terminate>" - -msgid "<dev-name>=<id>:<type>:<size(GB)>:" -msgstr "<dev-name>=<id>:<type>:<size(GB)>:" - -msgid "<instance id>/console.log" -msgstr "<instance id>/console.log" - -msgid "<uuid of instance that was snapshotted>" -msgstr "<スナップショットされたインスタンスの UUID>" - -msgid "<uuid of original image of instance that was snapshotted>" -msgstr "<スナップショットされたインスタンスの元イメージの UUID>" - -msgid "" -"(Optional) If you've logged in to your instance as the root user, you must " -"create a \"stack\" user; otherwise you'll run into permission issues. 
If " -"you've logged in as a user other than root, you can skip these steps:" -msgstr "" -"(オプション) インスタンスに root としてログインした場合、「stack」ユーザーを" -"作成する必要があります。そうしなければ、パーミッションの問題に遭遇するでしょ" -"う。root 以外のユーザーとしてログインしている場合、これらの手順をスキップでき" -"ます。" - -msgid "* schedule_run_instance" -msgstr "* schedule_run_instance" - -msgid "* select_destinations" -msgstr "* select_destinations" - -msgid "-f" -msgstr "-f" - -msgid "-u" -msgstr "-u" - -msgid "/var/lib/nova/instances" -msgstr "/var/lib/nova/instances" - -msgid "/var/lib/nova/instances/instance-" -msgstr "/var/lib/nova/instances/instance-" - -msgid "/var/log/apache2/" -msgstr "/var/log/apache2/" - -msgid "/var/log/cinder" -msgstr "/var/log/cinder" - -msgid "/var/log/cinder/cinder-volume.log" -msgstr "/var/log/cinder/cinder-volume.log" - -msgid "/var/log/glance" -msgstr "/var/log/glance" - -msgid "/var/log/keystone" -msgstr "/var/log/keystone" - -msgid "/var/log/libvirt/libvirtd.log" -msgstr "/var/log/libvirt/libvirtd.log" - -msgid "/var/log/neutron" -msgstr "/var/log/neutron" - -msgid "/var/log/nova" -msgstr "/var/log/nova" - -msgid "/var/log/rsyslog/c01.example.com/nova.log" -msgstr "/var/log/rsyslog/c01.example.com/nova.log" - -msgid "/var/log/rsyslog/c02.example.com/nova.log" -msgstr "/var/log/rsyslog/c02.example.com/nova.log" - -msgid "/var/log/rsyslog/nova.log" -msgstr "/var/log/rsyslog/nova.log" - -msgid "/var/log/syslog" -msgstr "/var/log/syslog" - -msgid "0 GB" -msgstr "0 GB" - -msgid "1" -msgstr "1" - -msgid "1 GB" -msgstr "1 GB" - -msgid "1 TB disk" -msgstr "1TBディスク" - -msgid "10" -msgstr "10" - -msgid "10 GB" -msgstr "10 GB" - -msgid "10 GB first disk, 30 GB second disk" -msgstr "1 番目のディスク 10 GB、2 番目のディスク 30 GB" - -msgid "100" -msgstr "100" - -msgid "10s of TBs of dataset storage" -msgstr "数十TBのデータセットストレージ" - -msgid "15" -msgstr "15" - -msgid "16 GB" -msgstr "16 GB" - -msgid "160 GB" -msgstr "160 GB" - -msgid "2" -msgstr "2" - -msgid "2 GB" -msgstr "2 GB" - -msgid "20" -msgstr "20" - -msgid "20 GB" -msgstr "20 GB" - -msgid "200 physical cores." 
-msgstr "物理コア 200 個" - -msgid "2010.1" -msgstr "2010.1" - -msgid "2011.1" -msgstr "2011.1" - -msgid "2011.2" -msgstr "2011.2" - -msgid "2011.3" -msgstr "2011.3" - -msgid "2011.3.1" -msgstr "2011.3.1" - -msgid "2012.1" -msgstr "2012.1" - -msgid "2012.1.1" -msgstr "2012.1.1" - -msgid "2012.1.2" -msgstr "2012.1.2" - -msgid "2012.1.3" -msgstr "2012.1.3" - -msgid "2012.2" -msgstr "2012.2" - -msgid "2012.2.1" -msgstr "2012.2.1" - -msgid "2012.2.2" -msgstr "2012.2.2" - -msgid "2012.2.3" -msgstr "2012.2.3" - -msgid "2012.2.4" -msgstr "2012.2.4" - -msgid "2013.1" -msgstr "2013.1" - -msgid "2013.1.1" -msgstr "2013.1.1" - -msgid "2013.1.2" -msgstr "2013.1.2" - -msgid "2013.1.3" -msgstr "2013.1.3" - -msgid "2013.1.4" -msgstr "2013.1.4" - -msgid "2013.1.5" -msgstr "2013.1.5" - -msgid "2013.2" -msgstr "2013.2" - -msgid "2013.2.1" -msgstr "2013.2.1" - -msgid "2013.2.2" -msgstr "2013.2.2" - -msgid "2013.2.3" -msgstr "2013.2.3" - -msgid "2013.2.4" -msgstr "2013.2.4" - -msgid "2014" -msgstr "2014" - -msgid "2014.1" -msgstr "2014.1" - -msgid "2014.1.1" -msgstr "2014.1.1" - -msgid "2014.1.2" -msgstr "2014.1.2" - -msgid "2014.1.3" -msgstr "2014.1.3" - -msgid "2014.2" -msgstr "2014.2" - -msgid "2015.1" -msgstr "2015.1" - -msgid "2015.2" -msgstr "2015.2" - -msgid "21" -msgstr "21" - -msgid "22" -msgstr "22" - -msgid "3" -msgstr "3" - -msgid "4" -msgstr "4" - -msgid "4 GB" -msgstr "4 GB" - -msgid "40 GB" -msgstr "40 GB" - -msgid "5" -msgstr "5" - -msgid "512 MB" -msgstr "512 MB" - -msgid "8" -msgstr "8" - -msgid "8 GB" -msgstr "8 GB" - -msgid "80 GB" -msgstr "80 GB" - -msgid "98" -msgstr "98" - -msgid "99" -msgstr "99" - -msgid "" -"/etc/cinder and /var/log/cinder follow the same " -"rules as other components.Block " -"Storagestorageblock storage" -msgstr "" -"/etc/cinder/var/log/cinder は他のコンポーネント" -"の場合と同じルールに従います。Block " -"Storagestorageblock storage" - -msgid "" -"/etc/glance and /var/log/glance follow the same " -"rules as their nova counterparts.Image servicebackup/recovery of" -msgstr "" -"/etc/glance/var/log/glanceはnovaの場合と同じルー" -"ルに従います。Image servicebackup/recovery of" - -msgid "" -"/etc/keystone and /var/log/keystone follow the " -"same rules as other components.Identitybackup/recovery" -msgstr "" -"/etc/keystone/var/log/keystone は他のコンポーネ" -"ントの場合と同じルールに従います。Identitybackup/recovery" - -msgid "" -"/etc/swift is very important to have backed up. This directory " -"contains the swift configuration files as well as the ring files and ring " -"builder files, which if lost, render the data on your " -"cluster inaccessible. A best practice is to copy the builder files to all " -"storage nodes along with the ring files. Multiple backup copies are spread " -"throughout your storage cluster.builder filesringsring buildersObject Storagebackup/recovery of" -msgstr "" -"/etc/swiftは非常に重要ですのでバックアップが必要です。このディレ" -"クトリには、swiftの設定ファイル以外に、RingファイルやRingビルダー" -"ファイルが置かれています。これらのファイルを消失した場合はクラス" -"ター上のデータにアクセスできなくなります。ベストプラクティスとしては、ビル" -"ダーファイルを全てのストレージノードにringファイルと共に置くことです。この方" -"法でストレージクラスター上にバックアップコピーが分散されて保存されます。" -"builder filesringsring buildersObject Storagebackup/recovery of" - -msgid "/var/lib/cinder should also be backed up." -msgstr "/var/lib/cinderもまたバックアップされるべきです。" - -msgid "" -"/var/lib/glance should also be backed up. Take special notice " -"of /var/lib/glance/images. If you are using a file-based back " -"end of glance, /var/lib/glance/images is where the images are " -"stored and care should be taken." 
-msgstr "" -"/var/lib/glanceもバックアップすべきです。 /var/lib/glance/" -"imagesには特段の注意が必要です。もし、ファイルベースのバックエンドを利" -"用しており、このディレクトリがイメージの保管ディレクトリならば特にです。" - -msgid "" -"/var/lib/keystone, although it should not contain any data " -"being used, can also be backed up just in case." -msgstr "" -"/var/lib/keystoneは、使用されるデータは含まれていないはずです" -"が、念のためバックアップします。" - -msgid "/var/lib/nova/instances contains two types of directories." -msgstr "" -"/var/lib/nova/instances には 2 種類のディレクトリがあります。" - -msgid "" -"/var/lib/nova is another important directory to back up. The " -"exception to this is the /var/lib/nova/instances subdirectory " -"on compute nodes. This subdirectory contains the KVM images of running " -"instances. You would want to back up this directory only if you need to " -"maintain backup copies of all instances. Under most circumstances, you do " -"not need to do this, but this can vary from cloud to cloud and your service " -"levels. Also be aware that making a backup of a live KVM instance can cause " -"that instance to not boot properly if it is ever restored from a backup." -msgstr "" -"/var/lib/nova がバックアップする他の重要なディレクトリです。これ" -"の例外がコンピュートノードにある /var/lib/nova/instances サブ" -"ディレクトリです。このサブディレクトリには実行中のインスタンスの KVM イメージ" -"が置かれます。このディレクトリをバックアップしたいと思うのは、すべてのインス" -"タンスのバックアップコピーを保持する必要がある場合だけでしょう。多くの場合に" -"おいて、これを実行する必要がありません。ただし、クラウドごとに異なり、サービ" -"スレベルによっても異なる可能性があります。稼働中の KVM インスタンスのバック" -"アップは、バックアップから復元したときでも、正しく起動しない可能性があること" -"に気をつけてください。" - -msgid "" -"/var/log/nova does not need to be backed up if you have all " -"logs going to a central area. It is highly recommended to use a central " -"logging server or back up the log directory." -msgstr "" -"/var/log/nova については、全てのログをリモートで集中管理している" -"のであれば、バックアップの必要はありません。ログ集約システムの導入か、ログ" -"ディレクトリのバックアップを強く推奨します" - -msgid "ro: read-only (RO) access." -msgstr "ro: 読み取り専用アクセス。" - -msgid "rw: read and write (RW) access. This is the default value." -msgstr "rw: 読み書きアクセス。デフォルト。" - -msgid "" -"Critical if the bug prevents a key feature from working " -"properly (regression) for all users (or without a simple workaround) or " -"results in data loss" -msgstr "" -"Critical このバグにより、目玉となる機能が全てのユーザで" -"正常に動作しない場合 (または簡単なワークアラウンドでは動作しない場合)、データ" -"ロスにつながるような場合" - -msgid "" -"High if the bug prevents a key feature from working " -"properly for some users (or with a workaround)" -msgstr "" -"High このバグにより、目玉となる機能が一部のユーザで正常" -"に動作しない場合 (または簡単なワークアラウンドで動作する場合)" - -msgid "" -"KVM as a hypervisor complements " -"the choice of Ubuntu—being a matched pair in terms of support, and also " -"because of the significant degree of attention it garners from the OpenStack " -"development community (including the authors, who mostly use KVM). It is " -"also feature complete, free from licensing charges and restrictions." -"kernel-based VM (KVM) hypervisorhypervisorsKVM" -msgstr "" -"KVMハイパーバイザー として " -"Ubuntu の選択を補完します。これらはサポート面で対応する一対であり、OpenStack " -"開発コミュニティ (主に KVM を使用する作成者) から集まる注目度が高いのも理由で" -"す。また、機能が完全で、ライセンスの料金や制限がありません。kernel-based VM (KVM) ハイパーバイザーハイパーバイザーKVM" - -msgid "" -"Live Migration is supported by way of shared storage, " -"with NFS as the distributed file system." -msgstr "" -"ライブマイグレーション は、共有ストレージを使用すること" -"によってサポートされます。分散ファイルシステムには NFS " -"を使用します。" - -msgid "Low if the bug is mostly cosmetic" -msgstr "Low このバグが多くの場合で軽微な場合" - -msgid "" -"Medium if the bug prevents a secondary feature from " -"working properly" -msgstr "" -"Medium このバグにより、ある程度重要な機能が正常に動作し" -"ない場合" - -msgid "" -"MySQL follows a similar trend. 
Despite its recent " -"change of ownership, this database is the most tested for use with OpenStack " -"and is heavily documented. We deviate from the default database, " -"SQLite, because SQLite is not an appropriate database " -"for production usage." -msgstr "" -"MySQL も同様の傾向に沿っています。最近所有権が移転したに" -"も関わらず、このデータベースは OpenStack での使用では最も検証されており、十分" -"に文書化されています。SQLite は本番環境での使用には適し" -"てないため、デフォルトのデータベースでは対象外とします。" - -msgid "" -"Wishlist if the bug is not really a bug but rather a " -"welcome change in behavior" -msgstr "" -"Wishlist 実際にはバグではないが、そのように動作を変更し" -"た方よいものの場合" - -msgid "" -"openstack-dev@lists.openstack.org. The scope of this " -"list is the future state of OpenStack. This is a high-traffic mailing list, " -"with multiple emails per day." -msgstr "" -"openstack-dev@lists.openstack.org。このリストは、" -"OpenStack の今後についての話題を扱います。流量が多く、1 日に複数のメールが流" -"れます。" - -msgid "" -"openstack-operators@lists.openstack.org. This list is " -"intended for discussion among existing OpenStack cloud operators, such as " -"yourself. Currently, this list is relatively low traffic, on the order of " -"one email a day." -msgstr "" -"openstack-operators@lists.openstack.org.。このメーリング" -"リストは、あなたのように、実際に OpenStack クラウドを運用している人々での議論" -"を目的としています。現在のところ、メールの流量は比較的少なく、1 日に 1 通くら" -"いです。" - -msgid "" -"openstack@lists.openstack.org. The scope of this list " -"is the current state of OpenStack. This is a very high-traffic mailing list, " -"with many, many emails per day." -msgstr "" -"openstack@lists.openstack.org。このリストは、OpenStack " -"の現行リリースに関する話題を取り扱います。流量が非常に多く、1 日にとてもたく" -"さんのメールが流れます。" - -msgid "cinder.conf:" -msgstr "cinder.conf:" - -msgid "" -"glance-api.conf and glance-registry.conf:" -msgstr "" -"glance-api.confglance-registry.conf:" - -msgid "keystone.conf:" -msgstr "keystone.conf:" - -msgid "nova.conf:" -msgstr "nova.conf:" - -msgid "" -"Availability zones and " -"host aggregates, which merely divide a single Compute deployment." -msgstr "" -"アベイラビリティゾーン " -"およびホストアグリゲート。コンピュートのデプロイメントの分割のみを行います。" - -msgid "" -"Block storage: You don't have to offer users block " -"storage if their use case only needs ephemeral storage on compute nodes, for " -"example." -msgstr "" -"ブロックストレージ: ユーザーのユースケースでコンピュー" -"トノード上などの一時的なストレージのみが必要とされる場合は、ブロックストレー" -"ジを提供する必要はありません。" - -msgid "" -"Dashboard: You probably want to offer a dashboard, " -"but your users may be more interested in API access only." -msgstr "" -"ダッシュボード: ダッシュボードの提供を考慮されているか" -"もしれませんが、ユーザーは API アクセスのみの方に対する関心の方が高い可能性が" -"あります。" - -msgid "" -"Floating IP address: Floating IP addresses are public " -"IP addresses that you allocate from a predefined pool to assign to virtual " -"machines at launch. Floating IP address ensure that the public IP address is " -"available whenever an instance is booted. Not every organization can offer " -"thousands of public floating IP addresses for thousands of instances, so " -"this feature is considered optional." -msgstr "" -"Floating IP アドレス: Floating IP アドレスとは、仮想マ" -"シンの起動時に事前定義されたプールから確保されるパブリック IP アドレスです。" -"Floating IP アドレスにより、インスタンスの起動時には常にパブリック IP アドレ" -"スが利用できます。すべての組織が何千ものインスタンスに何千ものパブリック " -"Floating IP アドレスを提供できるとは限らないので、この機能はオプションと考え" -"られます。" - -msgid "" -"Live migration: If you need to move running virtual " -"machine instances from one host to another with little or no service " -"interruption, you would enable live migration, but it is considered optional." 
-msgstr "" -"ライブマイグレーション: サービスをほとんどまたは全く停" -"止せずに実行中の仮想マシンをホスト間で移動する必要がある場合には、ライブマイ" -"グレーションを有効化することになりますが、この機能はオプションと考えられま" -"す。" - -msgid "" -"Object storage: You may choose to store machine " -"images on a file system rather than in object storage if you do not have the " -"extra hardware for the required replication and redundancy that OpenStack " -"Object Storage offers." -msgstr "" -"Object Storage: OpenStack Object Storage が提供するレ" -"プリケーションと冗長化に必要な追加のハードウェアがない場合には、マシンイメー" -"ジをファイルシステムに保存するように選択することが可能です。" - -msgid "Edit Project Members tab" -msgstr "プロジェクトメンバーの編集 タブ" - -msgid "" -"Multi-host is a high-availability " -"option for the network configuration, where the nova-network service is run on every compute node instead of running on only a " -"single node." -msgstr "" -"マルチホスト とは、ネットワーク設定の" -"高可用性オプションです。このオプションでは、nova-network " -"サービスが単一のノードだけではなく、各コンピュートノードで実行されます。" - -msgid "" -"instances losing IP address while running, due to No DHCPOFFER (http://www.gossamer-threads.com/lists/openstack/dev/14696)" -msgstr "" -"DHCPOFFERが送信されない事による、起動中のインスタンスのIPアドレス" -"の消失 (http://www.gossamer-threads.com/lists/openstack/dev/14696)" - -msgid "" -"Problem with Heavy Network IO and Dnsmasq (http://" -"www.gossamer-threads.com/lists/openstack/operators/18197)" -msgstr "" -"高負荷ネットワークIOとdnsmasqの問題 (http://www." -"gossamer-threads.com/lists/openstack/operators/18197)" - -msgid "" -"Multiple IRC " -"channels are available for general questions and developer " -"discussions. The general discussion channel is #openstack on irc." -"freenode.net." -msgstr "" -"一般的な質問用や開発者の議論用など、 複数の IRC チャネルがあります。一般の議論用の" -"チャネルは irc.freenode.net の #openstackです。" - -msgid "" -"Mailing " -"lists are also a great place to get help. The wiki page has more " -"information about the various lists. As an operator, the main lists you " -"should be aware of are:" -msgstr "" -"メーリング" -"リスト もアドバイスをもらうのに素晴らしい場所です。 Wiki ページにメー" -"リングリストの詳しい情報が載っています。運用者として確認しておくべき主要な" -"メーリングリストは以下です。" - -msgid "" -"Ubuntu " -"Cloud Archive or RDO*" -msgstr "" -"Ubuntu " -"Cloud Archive、RDO*" - -msgid "" -"Screen is a useful program for viewing many related " -"services at once. For more information, see the GNU screen quick reference." -msgstr "" -"Screen は、多くの関連するサービスを同時に見るための便利な" -"プログラムです。GNU screen quick reference を参照してください。" - -msgid "nova-api services" -msgstr "nova-api サービス" - -msgid "" -"nova-compute, nova-network, cinder " -"hosts" -msgstr "" -"nova-computenova-network、cinder ホス" -"ト" - -msgid "nova-manage service list" -msgstr "nova-manage サービス一覧" - -msgid "nova-scheduler services" -msgstr "nova-scheduler サービス" - -msgid "" -"Logging for horizon is configured in " -"/etc/openstack_dashboard/local_settings.py. Because horizon is a Django web " -"application, it follows the Django Logging " -"framework conventions." -msgstr "" -"horizon のロギング設定は /etc/openstack_dashboard/local_settings." 
-"py で行います。horizon は Django web アプリケーションですので、" -"Django Logging framework conventions に" -"従います。" - -msgid " (Apress)" -msgstr " (Apress)" - -msgid " (No Starch Press)" -msgstr " (No Starch Press)" - -msgid " (Packt Publishing)" -msgstr " (Packt Publishing)" - -msgid " (Pearson)" -msgstr " (Pearson)" - -msgid " (Prentice Hall)" -msgstr " (Prentice Hall)" - -msgid ", where:" -msgstr "、それぞれ、" - -msgid "" -" contains common " -"considerations to review when sizing hardware for the cloud controller " -"design.cloud controllershardware sizing considerationsActive Directorydashboard" -msgstr "" -" クラウドコントローラー設計の" -"ハードウェアサイジングにおける一般的な考慮事項クラウドコントローラーハードウェアサイジング" -"に関する考慮事項Active Directoryダッシュボード" - -msgid "" -" describes the networking " -"deployment options for both legacy nova-network options " -"and an equivalent neutron configuration.provisioning/deploymentnetwork deployment " -"options" -msgstr "" -" レガシーな nova-" -"networkおオプションとそれと同等なneutron構成オプションについて述べ" -"ます。プロビジョニング/デプロイメント" -"ネットワークでプロ忌めんとオプション" - -msgid "" -" explains the different storage " -"concepts provided by OpenStack.block " -"devicestorageoverview of concepts" -msgstr "" -" は、OpenStack で提供されているさまざま" -"なストレージのコンセプトについて説明しています。ブロックデバイスストレージoコンセプトの概要" - -msgid "" -" provides a comparison view of each " -"segregation method currently provided by OpenStack Compute.endpointsAPI endpoint" -msgstr "" -" では、OpenStack Compute が現在提供し" -"ている各分割メソッドの比較ビューを提供しています。エンドポイントAPI エンドポイント" - -msgid "" -" and include example configuration and considerations for both third-" -"party and OpenStackOpenStack " -"Networking (neutron)third-party component " -"configuration components:
Third-party component configuration
ComponentTuningAvailabilityScalability
MySQLbinlog-format = rowMaster/master " -"replication. However, both nodes are not used at the same time. Replication " -"keeps all nodes as close to being up to date as possible (although the " -"asynchronous nature of the replication means a fully consistent state is not " -"possible). Connections to the database only happen through a Pacemaker " -"virtual IP, ensuring that most problems that occur with master-master " -"replication can be avoided.Not heavily considered. Once load on the " -"MySQL server increases enough that scalability needs to be considered, " -"multiple masters or a master/slave setup can be used.
Qpidmax-connections=1000worker-threads=20connection-backlog=10, sasl security enabled with " -"SASL-BASIC authenticationQpid is added as a resource to the " -"Pacemaker software that runs on Controller nodes where Qpid is situated. " -"This ensures only one Qpid instance is running at one time, and the node " -"with the Pacemaker virtual IP will always be the node running Qpid.Not heavily considered. However, Qpid can be changed to run on all " -"controller nodes for scalability and availability purposes, and removed from " -"Pacemaker.
HAProxymaxconn 3000HAProxy is a software layer-7 load balancer used to front door all " -"clustered OpenStack API components and do SSL termination. HAProxy can be " -"added as a resource to the Pacemaker software that runs on the Controller " -"nodes where HAProxy is situated. This ensures that only one HAProxy instance " -"is running at one time, and the node with the Pacemaker virtual IP will " -"always be the node running HAProxy.Not considered. HAProxy has " -"small enough performance overheads that a single instance should scale " -"enough for this level of workload. If extra scalability is needed, " -"keepalived or other Layer-4 load balancing can be " -"introduced to be placed in front of multiple copies of HAProxy.
MemcachedMAXCONN=\"8192\" CACHESIZE=\"30457\"Memcached is a fast in-memory key-value cache software that " -"is used by OpenStack components for caching data and increasing performance. " -"Memcached runs on all controller nodes, ensuring that should one go down, " -"another instance of Memcached is available.Not considered. A single " -"instance of Memcached should be able to scale to the desired workloads. If " -"scalability is desired, HAProxy can be placed in front of Memcached (in raw " -"tcp mode) to utilize multiple Memcached instances for " -"scalability. However, this might cause cache consistency issues.
PacemakerConfigured to use corosync andcman as a " -"cluster communication stack/quorum manager, and as a two-node cluster.If more nodes need to be made cluster aware, " -"Pacemaker can scale to 64 nodes.
GlusterFSglusterfs performance profile \"virt\" enabled on " -"all volumes. Volumes are setup in two-node replication.Glusterfs is " -"a clustered file system that is run on the storage nodes to provide " -"persistent scalable data storage in the environment. Because all connections " -"to gluster use the gluster native mount points, the " -"gluster instances themselves provide availability and " -"failover functionality.The scalability of GlusterFS storage can be " -"achieved by adding in more storage volumes.
OpenStack component " -"configuration
ComponentNode typeTuningAvailabilityScalability
Dashboard (horizon)ControllerConfigured to use Memcached as a session store, neutron support is enabled, can_set_mount_point = FalseThe dashboard is run on all controller nodes, ensuring at least one " -"instance will be available in case of node failure. It also sits behind " -"HAProxy, which detects when the software fails and routes requests around " -"the failing instance.The dashboard is run on all controller nodes, " -"so scalability can be achieved with additional controller nodes. HAProxy " -"allows scalability for the dashboard as more nodes are added.
Identity (keystone)ControllerConfigured to use " -"Memcached for caching and PKI for tokens.Identity is run on all " -"controller nodes, ensuring at least one instance will be available in case " -"of node failure. Identity also sits behind HAProxy, which detects when the " -"software fails and routes requests around the failing instance.Identity is run on all controller nodes, so scalability can be " -"achieved with additional controller nodes. HAProxy allows scalability for " -"Identity as more nodes are added.
Image service (glance)Controller/var/lib/glance/images is a " -"GlusterFS native mount to a Gluster volume off the storage layer.The Image service is run on all controller nodes, ensuring at least " -"one instance will be available in case of node failure. It also sits behind " -"HAProxy, which detects when the software fails and routes requests around " -"the failing instance.The Image service is run on all controller " -"nodes, so scalability can be achieved with additional controller nodes. " -"HAProxy allows scalability for the Image service as more nodes are added.
Compute (nova)Controller, ComputeThe nova API, scheduler, objectstore, " -"cert, consoleauth, conductor, and vncproxy services are run on all " -"controller nodes, so scalability can be achieved with additional controller " -"nodes. HAProxy allows scalability for Compute as more nodes are added. The " -"scalability of services running on the compute nodes (compute, conductor) is " -"achieved linearly by adding in more compute nodes.
Block " -"Storage (cinder)ControllerConfigured to use Qpid, qpid_heartbeat = 10, configured to use a Gluster volume from the " -"storage layer as the back end for Block Storage, using the Gluster native " -"client.Block Storage API, scheduler, and volume services are run on " -"all controller nodes, ensuring at least one instance will be available in " -"case of node failure. Block Storage also sits behind HAProxy, which detects " -"if the software fails and routes requests around the failing instance.Block Storage API, scheduler and volume services are run on all " -"controller nodes, so scalability can be achieved with additional controller " -"nodes. HAProxy allows scalability for Block Storage as more nodes are added." -"
OpenStack Networking (neutron)Controller, " -"Compute, NetworkConfigured to use QPID, qpid_heartbeat = 10, kernel namespace support " -"enabled, tenant_network_type = vlan, " -"allow_overlapping_ips = true, " -"tenant_network_type = vlan, bridge_uplinks = br-" -"ex:em2, bridge_mappings = physnet1:br-exThe OpenStack Networking server service is run on all " -"controller nodes, so scalability can be achieved with additional controller " -"nodes. HAProxy allows scalability for OpenStack Networking as more nodes are " -"added. Scalability of services running on the network nodes is not currently " -"supported by OpenStack Networking, so they are not be considered. One copy " -"of the services should be sufficient to handle the workload. Scalability of " -"the ovs-agent running on compute nodes is achieved by " -"adding in more compute nodes as necessary.
" -msgstr "" -" にサードパーティーと OpenStackOpenStack Networking (neutron)third-party " -"component configuration のコンポーネント両方に関する" -"設定例と考慮事項があります。
サードパーティーコンポーネントの設定
" -"コンポーネントチューニング可用性拡張性
MySQLbinlog-format = rowマスター/マスターレプリケーション。しかしながら、両方のノードは同時に" -"使用されません。レプリケーションは、すべてのノードができる限り最新状態を維持" -"します (非同期のレプリケーションは、完全な状態が維持できないことを意味しま" -"す)。データベースへの接続は、Pacemaker の仮想 IP のみに行われます。これによ" -"り、マスター/マスターレプリケーションで発生するほとんどの問題を避けられます。" -"Not heavily considered. Once load on the MySQL server increases " -"enough that scalability needs to be considered, multiple masters or a master/" -"slave setup can be used.
Qpidmax-" -"connections=1000worker-threads=20connection-backlog=10, sasl security enabled with " -"SASL-BASIC authenticationQpid is added as a resource to the " -"Pacemaker software that runs on Controller nodes where Qpid is situated. " -"This ensures only one Qpid instance is running at one time, and the node " -"with the Pacemaker virtual IP will always be the node running Qpid.Not heavily considered. However, Qpid can be changed to run on all " -"controller nodes for scalability and availability purposes, and removed from " -"Pacemaker.
HAProxymaxconn 3000HAProxy is a software layer-7 load balancer used to front door all " -"clustered OpenStack API components and do SSL termination. HAProxy can be " -"added as a resource to the Pacemaker software that runs on the Controller " -"nodes where HAProxy is situated. This ensures that only one HAProxy instance " -"is running at one time, and the node with the Pacemaker virtual IP will " -"always be the node running HAProxy.Not considered. HAProxy has " -"small enough performance overheads that a single instance should scale " -"enough for this level of workload. If extra scalability is needed, " -"keepalived or other Layer-4 load balancing can be " -"introduced to be placed in front of multiple copies of HAProxy.
MemcachedMAXCONN=\"8192\" CACHESIZE=\"30457\"Memcached is a fast in-memory key-value cache software that " -"is used by OpenStack components for caching data and increasing performance. " -"Memcached runs on all controller nodes, ensuring that should one go down, " -"another instance of Memcached is available.Not considered. A single " -"instance of Memcached should be able to scale to the desired workloads. If " -"scalability is desired, HAProxy can be placed in front of Memcached (in raw " -"tcp mode) to utilize multiple Memcached instances for " -"scalability. However, this might cause cache consistency issues.
PacemakerConfigured to use corosync andcman as a " -"cluster communication stack/quorum manager, and as a two-node cluster.If more nodes need to be made cluster aware, " -"Pacemaker can scale to 64 nodes.
GlusterFSglusterfs performance profile \"virt\" enabled on " -"all volumes. Volumes are setup in two-node replication.Glusterfs is " -"a clustered file system that is run on the storage nodes to provide " -"persistent scalable data storage in the environment. Because all connections " -"to gluster use the gluster native mount points, the " -"gluster instances themselves provide availability and " -"failover functionality.The scalability of GlusterFS storage can be " -"achieved by adding in more storage volumes.
OpenStack component " -"configuration
ComponentNode typeTuningAvailabilityScalability
Dashboard (horizon)ControllerConfigured to use Memcached as a session store, neutron support is enabled, can_set_mount_point = FalseThe dashboard is run on all controller nodes, ensuring at least one " -"instance will be available in case of node failure. It also sits behind " -"HAProxy, which detects when the software fails and routes requests around " -"the failing instance.The dashboard is run on all controller nodes, " -"so scalability can be achieved with additional controller nodes. HAProxy " -"allows scalability for the dashboard as more nodes are added.
Identity (keystone)ControllerConfigured to use " -"Memcached for caching and PKI for tokens.Identity is run on all " -"controller nodes, ensuring at least one instance will be available in case " -"of node failure. Identity also sits behind HAProxy, which detects when the " -"software fails and routes requests around the failing instance.Identity is run on all controller nodes, so scalability can be " -"achieved with additional controller nodes. HAProxy allows scalability for " -"Identity as more nodes are added.
Image service (glance)Controller/var/lib/glance/images is a " -"GlusterFS native mount to a Gluster volume off the storage layer.The Image service is run on all controller nodes, ensuring at least " -"one instance will be available in case of node failure. It also sits behind " -"HAProxy, which detects when the software fails and routes requests around " -"the failing instance.The Image service is run on all controller " -"nodes, so scalability can be achieved with additional controller nodes. " -"HAProxy allows scalability for the Image service as more nodes are added.
Compute (nova)Controller, ComputeThe nova API, scheduler, objectstore, " -"cert, consoleauth, conductor, and vncproxy services are run on all " -"controller nodes, so scalability can be achieved with additional controller " -"nodes. HAProxy allows scalability for Compute as more nodes are added. The " -"scalability of services running on the compute nodes (compute, conductor) is " -"achieved linearly by adding in more compute nodes.
Block " -"Storage (cinder)ControllerConfigured to use Qpid, qpid_heartbeat = 10, configured to use a Gluster volume from the " -"storage layer as the back end for Block Storage, using the Gluster native " -"client.Block Storage API, scheduler, and volume services are run on " -"all controller nodes, ensuring at least one instance will be available in " -"case of node failure. Block Storage also sits behind HAProxy, which detects " -"if the software fails and routes requests around the failing instance.Block Storage API, scheduler and volume services are run on all " -"controller nodes, so scalability can be achieved with additional controller " -"nodes. HAProxy allows scalability for Block Storage as more nodes are added." -"
OpenStack Networking (neutron)Controller, " -"Compute, NetworkConfigured to use QPID, qpid_heartbeat = 10, kernel namespace support " -"enabled, tenant_network_type = vlan, " -"allow_overlapping_ips = true, " -"tenant_network_type = vlan, bridge_uplinks = br-" -"ex:em2, bridge_mappings = physnet1:br-exThe OpenStack Networking server service is run on all " -"controller nodes, so scalability can be achieved with additional controller " -"nodes. HAProxy allows scalability for OpenStack Networking as more nodes are " -"added. Scalability of services running on the network nodes is not currently " -"supported by OpenStack Networking, so they are not be considered. One copy " -"of the services should be sufficient to handle the workload. Scalability of " -"the ovs-agent running on compute nodes is achieved by " -"adding in more compute nodes as necessary.
" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/Check_mark_23x20_02.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/Check_mark_23x20_02.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0001.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0001.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_00in01.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_00in01.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0101.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0101.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0102.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0102.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0103.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0103.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0104.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0104.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. 
-msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0105.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0105.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0106.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0106.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_01in01.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_01in01.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_01in02.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_01in02.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0201.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0201.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0901.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0901.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0902.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_0902.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. 
-msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_1201.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_1201.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_1202.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_1202.png'; md5=THIS FILE DOESN'T EXIST" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -msgid "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_ac01.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" -"@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/" -"openstack-ops/figures/osog_ac01.png'; md5=THIS FILE DOESN'T EXIST" - -msgid "" -"A block device that can be partitioned, formatted, " -"and mounted (such as, /dev/vdc)" -msgstr "" -"パーティション分割、フォーマット、マウントが可能な ブロックデバイ" -"ス (/dev/vdc など)" - -msgid "" -"A management network (a separate network for use by " -"your cloud operators) typically consists of a separate switch and separate " -"NICs (network interface cards), and is a recommended option. This " -"segregation prevents system administration and the monitoring of system " -"access from being disrupted by traffic generated by guests.NICs (network interface cards)management networknetwork designmanagement network" -msgstr "" -" 管理用ネットワーク (クラウド管理者用の別のネットワー" -"ク) は一般的には別のスイッチ別のNIC(Network Interface Cards)で構成する事が推" -"奨されます。この構成ではゲストのトラフィックによって監視と管理のためのアクセ" -"スが妨げられることを防ぎます。NIC " -"(network interface cards)管理ネットワークネットワークデザイン管理ネットワーク" - -msgid "" -"A DHCP problem might be caused by a misbehaving dnsmasq process. First, " -"debug by checking logs and then restart the dnsmasq processes only for that " -"project (tenant). In VLAN mode, there is a dnsmasq process for each tenant. " -"Once you have restarted targeted dnsmasq processes, the simplest way to rule " -"out dnsmasq causes is to kill all of the dnsmasq processes on the machine " -"and restart nova-network. As a last resort, do this as " -"root:" -msgstr "" -"DHCPの問題はdnsmasqの不具合が原因となりがちです。まず、ログを確認し、その後該" -"当するプロジェクト(テナント)のdnsmasqプロセスを再起動してください。VLANモード" -"においては、dnsmasqプロセスはテナントごとに存在します。すでに該当のdnsmasqプ" -"ロセスを再起動しているのであれば、もっともシンプルな解決法は、マシン上の全て" -"のdnsmasqプロセスをkillし、nova-networkを再起動することで" -"す。最終手段として、rootで以下を実行してください。" - -msgid "" -"A NetApp appliance is used in each region for both block storage and " -"instance storage. There are future plans to move the instances off the " -"NetApp appliance and onto a distributed file system such as Ceph or GlusterFS." 
-msgstr "" -"各リージョンでは、ブロックストレージとインスタンスストレージの両方でNetApp ア" -"プライアンスが使用されています。これらのインスタンスを NetApp アプライアンス" -"から Ceph または GlusterFS といった分散ファイルシステ" -"ム上に移動する計画があります。" - -msgid "" -"A basic type of alert monitoring is to simply check and see whether a " -"required process is running.monitoringprocess monitoringprocess monitoringlogging/" -"monitoringprocess monitoring " -"For example, ensure that the nova-api service is running on the " -"cloud controller:" -msgstr "" -"基本的なアラーム監視は、単に要求されたプロセスが稼働しているかどうかを確認す" -"ることです。monitoringprocess monitoringprocess monitoringlogging/monitoringprocess monitoring 例えば、" -"nova-api サービスがクラウドコントローラーで稼働しているかどうか" -"を確認します。" - -msgid "" -"A boolean to indicate whether the volume should be deleted when the instance " -"is terminated. True can be specified as True or " -"1. False can be specified as False or " -"0." -msgstr "" -"インスタンスが終了したときに、ボリュームが削除されるかどうかを指示する論理値" -"です。真は True または 1 として指定でき" -"ます。偽は False または 0 として指定で" -"きます。" - -msgid "" -"A breakdown of current features under development, with their target " -"milestone" -msgstr "現在の開発中の機能、それらの目標マイルストーンの詳細" - -msgid "" -"A brief overview of how to send REST API requests to endpoints for OpenStack " -"services" -msgstr "" -"OpenStack サービスのエンドポイントに REST API リクエストをどのように送信する" -"かについての概要が説明されています" - -msgid "" -"A cloud controller's hardware can be the same as a compute node, though you " -"may want to further specify based on the size and type of cloud that you run." -"hardwaredesign " -"considerationsdesign considerationshardware " -"considerations" -msgstr "" -"クラウドの大きさやタイプによってハードウェアを指定したいかもしれませんが、ク" -"ラウドコントローラーのハードウェアはコンピュートノードと同じ物を利用する事が" -"できます。ハードウェア設計上の考慮事項設計上の考慮事項ハードウェアの考慮" -"点" - -msgid "" -"A cloud with multiple sites where you can schedule VMs \"anywhere\" or on a " -"particular site." -msgstr "" -"複数サイトで構成されるクラウドで、仮想マシンを「任意のサイト」または特定のサ" -"イトにスケジューリングしたい場合" - -msgid "" -"A cloud with multiple sites, where you schedule VMs to a particular site and " -"you want a shared infrastructure." -msgstr "" -"複数サイトで構成されるクラウドで、仮想マシンを特定のサイトに対してスケジュー" -"リングでき、かつ共有インフラを利用したい場合" - -msgid "" -"A collection of foreign keys are available to find relations to the " -"instance. The most useful of these—user_id and " -"project_id—are the UUIDs of the user who launched the " -"instance and the project it was launched in." -msgstr "" -"外部キーはインスタンスの関連を見つけるために利用可能です。これらの中で最も有" -"用なものは、user_id および project_id " -"です。これらは、インスタンスを起動したユーザー、およびそれが起動されたプロ" -"ジェクトの UUID です。" - -msgid "" -"A common new-user issue with OpenStack is failing to set an appropriate " -"security group when launching an instance. As a result, the user is unable " -"to contact the instance on the network.security groupsuser trainingsecurity groups" -msgstr "" -"OpenStack の新しいユーザーがよく経験する問題が、インスタンスを起動するときに" -"適切なセキュリティグループを設定できず、その結果、ネットワーク経由でインスタ" -"ンスにアクセスできないというものです。" -"セキュリティグループユーザートレーニングセキュリティグループ" - -msgid "" -"A common scenario is to take down production management services in " -"preparation for an upgrade, completed part of the upgrade process, and " -"discovered one or more problems not encountered during testing. As a " -"consequence, you must roll back your environment to the original \"known good" -"\" state. You also made sure that you did not make any state changes after " -"attempting the upgrade process; no new instances, networks, storage volumes, " -"and so on. Any of these new resources will be in a frozen state after the " -"databases are restored from backup." 
-msgstr "" -"一般的なシナリオは、アップグレードの準備で本番の管理サービスを分解して、アッ" -"プグレード手順の一部分を完了して、テスト中には遭遇しなかった 1 つ以上の問題に" -"遭遇することです。環境を元の「万全な」状態にロールバックする必要があります。" -"続けて、アップグレードプロセスを試行した後、新しいインスタンス、ネットワー" -"ク、ストレージボリュームなど、何も状態を変更していないことを確実にしてくださ" -"い。これらの新しいリソースはすべて、データベースがバックアップからリストアさ" -"れた後、フリーズ状態になります。" - -msgid "" -"A common use of host aggregates is to provide information for use with the " -"nova-scheduler. For example, you might use a host " -"aggregate to group a set of hosts that share specific flavors or images." -msgstr "" -"ホストアグリゲートの一般的な用途は nova-scheduler で使用す" -"る情報を提供することです。例えば、ホストアグリゲートを使って、特定のフレー" -"バーやイメージを共有するホストの集合を作成することができます。" - -msgid "" -"A common way of dealing with the recovery from a full system failure, such " -"as a power outage of a data center, is to assign each service a priority, " -"and restore in order. shows an " -"example.service restorationmaintenance/" -"debuggingcomplete failures" -msgstr "" -"データセンターの電源障害など、完全なシステム障害からリカバリーする一般的な方" -"法は、各サービスに優先度を付け、順番に復旧していくことです。 に例を示します。service restorationmaintenance/debuggingcomplete " -"failures" - -msgid "A compute node" -msgstr "コンピュートノード" - -msgid "" -"A critical part of a cloud's scalability is the amount of effort that it " -"takes to run your cloud. To minimize the operational cost of running your " -"cloud, set up and use an automated deployment and configuration " -"infrastructure with a configuration management system, such as Puppet or " -"Chef. Combined, these systems greatly reduce manual effort and the chance " -"for operator error.cloud computingminimizing costs of" -msgstr "" -"クラウドのスケーラビリティにおける重要な部分の一つは、クラウドを運用するのに" -"必要な労力にあります。クラウドの運用コストを最小化するために、Puppet や Chef " -"などの設定管理システムを使用して、自動化されたデプロイメントおよび設定インフ" -"ラストラクチャーを設定、使用してください。これらのシステムを統合すると、工数" -"やオペレーターのミスを大幅に減らすことができます。クラウドコンピューティングコストの最小化" - -msgid "" -"A descriptive name, such as xx.size_name, is conventional but not required, " -"though some third-party tools may rely on it." -msgstr "" -"慣習として xx.size_name などの内容を表す名前を使用しますが、必須ではありませ" -"ん。いくつかのサードパーティツールはその名称に依存しているかもしれません。" - -msgid "" -"A device name where the volume is attached in the system at /dev/" -"dev_name" -msgstr "" -"そのボリュームはシステムで /dev/dev_name に接続されます。" - -msgid "A different API endpoint for every region." -msgstr "リージョン毎に別々のAPIエンドポイントが必要" - -msgid "" -"A distributed, shared file system. As of Gluster version 3.3, you can use " -"Gluster to consolidate your object storage and file storage into one unified " -"file and object storage solution, which is called Gluster For OpenStack " -"(GFO). GFO uses a customized version of swift that enables Gluster to be " -"used as the back-end storage." -msgstr "" -"分散型の共有ファイルシステム。Gluster バージョン 3.3 以降、Gluster を使用し" -"て、オブジェクトストレージとファイルストレージを1 つの統合ファイルとオブジェ" -"クトストレージソリューションにまとめることができるようになりました。これは" -"Gluster For OpenStack (GFO) と呼ばれます。GFO は、swift のカスタマイズバー" -"ジョンを使用しており、Gluster がバックエンドストレージを使用できるようになっ" -"ています。" - -msgid "" -"A feature was introduced in Essex to periodically check and see if there " -"were any _base files not in use. If there were, OpenStack " -"Compute would delete them. This idea sounds innocent enough and has some " -"good qualities to it. But how did this feature end up turned on? It was " -"disabled by default in Essex. As it should be. It was decided to be turned on in " -"Folsom (https://bugs.launchpad.net/nova/+bug/1029674). 
I cannot " -"emphasize enough that:" -msgstr "" -"Essex で、_base 下の任意のファイルが使用されていないかどうか定期" -"的にチェックして確認する機能が導入された。もしあれば、OpenStack Compute はそ" -"のファイルを削除する。このアイデアは問題がないように見え、品質的にも良いよう" -"だった。しかし、この機能を有効にすると最終的にどうなるのか?Essex ではこの機" -"能がデフォルトで無効化されていた。そうあるべきであったからだ。これは、Folsom で有効に" -"なることが決定された (https://bugs.launchpad.net/nova/+bug/1029674)。" -"私はそうあるべきとは思わない。何故なら" - -msgid "A few nights later, it happened again." -msgstr "数日後、それは再び起こった。" - -msgid "A file system" -msgstr "ファイルシステム" - -msgid "" -"A final example is if a user is hammering cloud resources repeatedly. " -"Contact the user and learn what he is trying to do. Maybe he doesn't " -"understand that what he's doing is inappropriate, or maybe there is an issue " -"with the resource he is trying to access that is causing his requests to " -"queue or lag." -msgstr "" -"最後の例は、ユーザーがクラウドのリソースに繰り返し悪影響を与える場合です。" -"ユーザーと連絡をとり、何をしようとしているのか理解します。ユーザー自身が実行" -"しようとしていることを正しく理解していない可能性があります。または、アクセス" -"しようとしているリソースに問題があり、リクエストがキューに入ったり遅れが発生" -"している場合もあります。" - -msgid "" -"A full set of options can be found using:imagesCLI options for" -msgstr "" -"すべてのオプションは、次のように確認できます。imagesCLI options for" - -msgid "" -"A good document describing the Object Storage architecture is found within " -"the developer " -"documentation—read this first. Once you understand the architecture, " -"you should know what a proxy server does and how zones work. However, some " -"important points are often missed at first glance." -msgstr "" -"オブジェクトストレージのアーキテクチャーについて詳しく説明したドキュメントは " -"the developer " -"documentationにあります。これをまず参照してください。アーキテクチャー" -"を理解したら、プロキシサーバーの役割やゾーンの機能について理解できるはずで" -"す。しかし、少し見ただけでは、頻繁に重要なポイントを逃してしまうことがありま" -"す。" - -msgid "" -"A highly-available environment can be put into place if you require an " -"environment that can scale horizontally, or want your cloud to continue to " -"be operational in case of node failure. This example architecture has been " -"written based on the current default feature set of OpenStack Havana, with " -"an emphasis on high availability.RDO " -"(Red Hat Distributed OpenStack)OpenStack Networking (neutron)component overview" -msgstr "" -"高可用性環境は、水平スケールが可能な環境が必要な場合や、ノードに障害が発生し" -"た場合にもクラウドの稼働を継続させたい場合に設置することができます。以下の" -"アーキテクチャー例では、 OpenStack Havana の現在のデフォルト機能セットをベー" -"スとし、高可用性に重点を置いて記載しました。RDO (Red Hat Distributed OpenStack)OpenStack Networking " -"(neutron)コンポーネント概要" - -msgid "" -"A hypervisor provides software to manage virtual machine access to the " -"underlying hardware. The hypervisor creates, manages, and monitors virtual " -"machines.DockerHyper-VESXi hypervisorESX hypervisorVMware APIQuick EMUlator (QEMU)Linux containers " -"(LXC)kernel-" -"based VM (KVM) hypervisorXen APIXenServer hypervisorhypervisorschoosingcompute nodeshypervisor choice OpenStack Compute supports many hypervisors to " -"various degrees, including: " -msgstr "" -"ハイパーバイザーは、仮想マシンの基盤ハードウェアへのアクセスを管理するソフト" -"ウェアを提供します。ハイパーバイザーは仮想マシンを作成、管理、監視します。" -"DockerHyper-VESXi ハイパーバイザーESX ハイパーバイ" -"ザーVMware APIQuick EMUlator " -"(QEMU)Linux コン" -"テナ (LXC)kernel-based VM (KVM) ハイパーバイザーXen APIXenServer ハイパーバイザーハイパーバイザー選択コンピュートノードハイパーバイザーの選択 OpenStack Compute は多くのハイパーバイザーを様々な度合" -"いでサポートしています。" - -msgid "A list of all features, including those not yet under development" -msgstr "まだ開発中ではないものを含む、すべての機能の一覧" - -msgid "" -"A list of terms used in this book is included, which is a subset of the " -"larger OpenStack glossary available online." 
-msgstr "" -"この本で使われている用語の一覧。オンライン上にある OpenStack 用語集のサブセッ" -"トです。" - -msgid "" -"A long requested service, to provide the ability to manipulate DNS entries " -"associated with OpenStack resources has gathered a following. The designate " -"project was also released." -msgstr "" -"長く要望されていたサービスです。配下を収集した OpenStack リソースを関連付けら" -"れた DNS エントリーを操作する機能を提供します。designate プロジェクトもリリー" -"スされました。" - -msgid "" -"A major quality push has occurred across drivers and plug-ins in Block " -"Storage, Compute, and Networking. Particularly, developers of Compute and " -"Networking drivers that require proprietary or hardware products are now " -"required to provide an automated external testing system for use during the " -"development process." -msgstr "" -"主要な品質は、Block Storage、Compute、Networking におけるドライバーやプラグイ" -"ンをまたがり発生しています。とくに、プロプライエタリーやハードウェア製品を必" -"要とする Compute と Networking のドライバー開発者は、開発プロセス中に使用する" -"ために、自動化された外部テストシステムを提供する必要があります。" - -msgid "" -"A much-requested answer to big data problems, a dedicated team has been " -"making solid progress on a Hadoop-as-a-Service project." -msgstr "" -"ビッグデータの問題に対する最も要望された回答です。専門チームが Hadoop-as-a-" -"Service プロジェクトに安定した進捗を実現しました。" - -msgid "A new service, nova-cells." -msgstr "新規サービス、nova-cells" - -msgid "" -"A note about DAIR's architecture: /var/lib/nova/instances is a shared NFS mount. This means that all compute nodes have " -"access to it, which includes the _base directory. Another " -"centralized area is /var/log/rsyslog on the cloud " -"controller. This directory collects all OpenStack logs from all compute " -"nodes. I wondered if there were any entries for the file that is reporting: " -msgstr "" -"DAIR のアーキテクチャーは /var/lib/nova/instances が共" -"有 NFS マウントであることに注意したい。これは、全てのコンピュートノードがその" -"ディレクトリにアクセスし、その中に _base ディレクトリが含まれる" -"ことを意味していた。その他の集約化エリアはクラウドコントローラーの " -"/var/log/rsyslog だ。このディレクトリは全コンピュート" -"ノードの全ての OpenStack ログが収集されていた。私は、 が報告" -"したファイルに関するエントリがあるのだろうかと思った。 " - -msgid "" -"A number of time-related fields are useful for tracking when state changes " -"happened on an instance:" -msgstr "" -"多くの時刻関連のフィールドは、いつ状態の変化がインスタンスに起こったかを追跡" -"する際に役に立ちます:" - -msgid "" -"A quick Google search turned up this: DHCP lease errors in VLAN mode (https://lists.launchpad.net/openstack/msg11696.html) which further " -"supported our DHCP theory." -msgstr "" -"ちょっと Google 検索した結果、これを見つけた。VLAN モードでの DHCPリースエ" -"ラー (https://lists.launchpad.net/openstack/msg11696.html) 。この情報" -"はその後の我々の DHCP 方針の支えになった。" - -msgid "" -"A quick way to check whether DNS is working is to resolve a hostname inside " -"your instance by using the host command. If DNS is working, you " -"should see:" -msgstr "" -"DNSが正しくホスト名をインスタンス内から解決できているか確認する簡単な方法は、" -"hostコマンドです。もしDNSが正しく動いていれば、以下メッセージが" -"確認できます。" - -msgid "" -"A range of publicly routable, IPv4 network addresses to be used by OpenStack " -"Networking for floating IPs. You may be restricted in your access to IPv4 " -"addresses; a large range of IPv4 addresses is not necessary." -msgstr "" -"OpenStack Networking が Floating IP に使用する、パブリックにルーティング可能" -"な IPv4 ネットワークアドレスの範囲。IPv4 アドレスへのアクセスは制限される可能" -"性があります。IPv4 アドレスの範囲を大きくする必要はありません。" - -msgid "" -"A report came in: VMs were launching slowly, or not at all. Cue the standard " -"checksnothing on the Nagios, but there was a spike in network towards the " -"current master of our RabbitMQ cluster. Investigation started, but soon the " -"other parts of the queue cluster were leaking memory like a sieve. 
Then the " -"alert came inthe master Rabbit server went down and connections failed over " -"to the slave." -msgstr "" -"報告が入った。VM の起動が遅いか、全く起動しない。標準のチェック項目は?" -"nagios 上は問題なかったが、RabbitMQ クラスタの現用系に向かうネットワークのみ" -"高負荷を示していた。捜査を開始したが、すぐに RabbitMQ クラスタの別の部分がざ" -"るのようにメモリリークを起こしていることを発見した。また警報か?RabbitMQ サー" -"バーの現用系はダウンしようとしていた。接続は待機系にフェイルオーバーした。" - -msgid "" -"A scalable storage solution that replicates data across commodity storage " -"nodes. Ceph was originally developed by one of the founders of DreamHost and " -"is currently used in production there." -msgstr "" -"商用ストレージノード全体でデータを複製する拡張性の高いストレージソリューショ" -"ン。Ceph は、DreamHost の創設者により開発され、現在は実稼動環境で使用されてい" -"ます。" - -msgid "A service to provide queues of messages and notifications was released." -msgstr "メッセージと通知のキューを提供するサービスが提供されました。" - -msgid "A shell where you can get some work done" -msgstr "作業を行うためのシェル" - -msgid "" -"A similar approach can be taken with public IP addresses, taking note that " -"large, flat ranges are preferred for use with guest instance IPs. Take into " -"account that for some OpenStack networking options, a public IP address in " -"the range of a guest instance public IP address is assigned to the " -"nova-compute host." -msgstr "" -"パブリックIPアドレスの場合でも同様のアプローチが取れます。但し、ゲストインス" -"タンス用のIPとして使用する場合には、大きなフラットなアドレスレンジの方が好ま" -"れることに注意した方がよいでしょう。また、OpenStack のネットワーク方式によっ" -"ては、ゲストインスタンス用のパブリックIPアドレスレンジのうち一つが " -"nova-compute ホストに割り当てられることも考慮する必要があ" -"ります。" - -msgid "" -"A similar pattern can be followed in other projects that use the driver " -"architecture. Simply create a module and class that conform to the driver " -"interface and plug it in through configuration. Your code runs when that " -"feature is used and can call out to other services as necessary. No project " -"core code is touched. Look for a \"driver\" value in the project's " -".conf configuration files in /etc/<project>" -" to identify projects that use a driver architecture." -msgstr "" -"ドライバ・アーキテクチャーを使う他のプロジェクトで、類似のパターンに従うこと" -"ができます。単純に、そのドライバーインタフェースに従うモジュールとクラスを作" -"成し、環境定義によって組み込んでください。あなたのコードはその機能が使われた" -"時に実行され、必要に応じて他のサービスを呼び出します。プロジェクトのコアコー" -"ドは一切修正しません。ドライバーアーキテクチャーを使っているプロジェクトを確" -"認するには、/etc/<project> に格納されている、プロジェクト" -"の .conf 設定ファイルの中で driver 変数を探してくださ" -"い。" - -msgid "" -"A single API endpoint for compute, or you require a " -"second level of scheduling." -msgstr "" -"コンピュート資源に対する単一の API エンドポイント、も" -"しくは2段階スケジューリングが必要な場合" - -msgid "A single-site cloud with equipment fed by separate power supplies." -msgstr "分離された電源供給ラインを持つ設備で構成される、単一サイトのクラウド。" - -msgid "" -"A snapshot captures the state of the file system, but not the state of the " -"memory. Therefore, to ensure your snapshot contains the data that you want, " -"before your snapshot you need to ensure that:" -msgstr "" -"スナップショットは、ファイルシステムの状態をキャプチャーしますが、メモリーの" -"状態をキャプチャーしません。そのため、スナップショットに期待するデータが含ま" -"れることを確実にするために、次のことを確実にする必要があります。" - -msgid "" -"A tangible example of this is the nova-compute process. " -"In order to manage the image cache with libvirt, nova-compute has a periodic process that scans the contents of the image cache. " -"Part of this scan is calculating a checksum for each of the images and " -"making sure that checksum matches what nova-compute " -"expects it to be. However, images can be very large, and these checksums can " -"take a long time to generate. At one point, before it was reported as a bug " -"and fixed, nova-compute would block on this task and stop " -"responding to RPC requests. 
This was visible to users as failure of " -"operations such as spawning or deleting instances." -msgstr "" -"これの具体的な例が nova-compute プロセスです。libvirt でイ" -"メージキャッシュを管理するために、nova-compute はイメージ" -"キャッシュの内容をスキャンする周期的なプロセスを用意します。このスキャンの中" -"で、各イメージのチェックサムを計算し、チェックサムが nova-compute が期待する値と一致することを確認します。しかしながら、イメージは非常" -"に大きく、チェックサムを生成するのに長い時間がかかる場合があります。このこと" -"がバグとして報告され修正される前の時点では、nova-compute " -"はこのタスクで停止し RPC リクエストに対する応答を停止してしまっていました。こ" -"の振る舞いは、インスタンスの起動や削除などの操作の失敗としてユーザーに見えて" -"いました。" - -msgid "" -"A tool such as collectd can be used to store this information. While " -"collectd is out of the scope of this book, a good starting point would be to " -"use collectd to store the result as a COUNTER data type. More information " -"can be found in collectd's documentation." -msgstr "" -"collectdのようなツールはこのような情報を保管することに利用できます。collectd" -"はこの本のスコープから外れますが、collectdでCOUNTERデータ形として結果を保存す" -"るのがよい出発点になります。より詳しい情報は collectd のドキュメント を参" -"照してください。" - -msgid "A typical user" -msgstr "一般的なユーザー" - -msgid "" -"A user might need a custom flavor that is uniquely tuned for a project she " -"is working on. For example, the user might require 128 GB of memory. If you " -"create a new flavor as described above, the user would have access to the " -"custom flavor, but so would all other tenants in your cloud. Sometimes this " -"sharing isn't desirable. In this scenario, allowing all users to have access " -"to a flavor with 128 GB of memory might cause your cloud to reach full " -"capacity very quickly. To prevent this, you can restrict access to the " -"custom flavor using the nova command:" -msgstr "" -"ユーザーが、取り組んでいるプロジェクト向けに独自にチューニングした、カスタム" -"フレーバーを必要とするかもしれません。例えば、ユーザーが 128 GB メモリーを必" -"要とするかもしれません。前に記載したとおり、新しいフレーバーを作成する場合、" -"ユーザーがカスタムフレーバーにアクセスできるでしょう。しかし、クラウドにある" -"他のすべてのクラウドもアクセスできます。ときどき、この共有は好ましくありませ" -"ん。この場合、すべてのユーザーが 128 GB メモリーのフレーバーにアクセスでき、" -"クラウドのリソースが非常に高速に容量不足になる可能性があります。これを防ぐた" -"めに、nova コマンドを使用して、カスタムフレーバーへのアク" -"セスを制限できます。" - -msgid "A user recently tried launching a CentOS instance on that node" -msgstr "" -"最近、あるユーザがそのノード上で CentOS のインスタンスを起動しようとした。" - -msgid "API endpoints" -msgstr "APIエンドポイント" - -msgid "Absolute limits" -msgstr "絶対制限" - -msgid "Accessed through…" -msgstr "アクセス方法" - -msgid "Accessible from…" -msgstr "アクセス可能な場所" - -msgid "Account quotas" -msgstr "アカウントのクォータ" - -msgid "" -"Acknowledging that many small-scale deployments see running Object Storage " -"just for the storage of virtual machine images as too costly, we opted for " -"the file back end in the OpenStack Image service (Glance). If your cloud " -"will include Object Storage, you can easily add it as a back end." -msgstr "" -"多くの小規模なデプロイメントでは、 Object Storage を仮想マシンイメージのスト" -"レージのみに使用するとコストがかかり過ぎることが分かっているため、OpenStack " -"Image service (Glance) 内のファイルバックエンドを選択しました。設計対象のクラ" -"ウド環境に Object Storage が含まれる場合には、バックエンドとして容易に追加で" -"きます。" - -msgid "Acknowledgments" -msgstr "謝辞" - -msgid "Actions which delete things should not be enabled by default." -msgstr "何かを削除する操作はデフォルトで有効化されるべきではない。" - -msgid "Adam Hyde" -msgstr "Adam Hyde" - -msgid "" -"Adam Powell in Racker IT supplied us with bandwidth each day and second " -"monitors for those of us needing more screens." -msgstr "" -"Rackspace IT部門 の Adam Powell は、私たちに毎日のネットワーク帯域を提供して" -"くれました。また、より多くのスクリーンが必要となったため、セカンドモニタを提" -"供してくれました。" - -msgid "" -"Adam facilitated this book sprint. He also founded the book sprint " -"methodology and is the most experienced book-sprint facilitator around. See " -" for more information. 
Adam " -"founded FLOSS Manuals—a community of some 3,000 individuals developing Free " -"Manuals about Free Software. He is also the founder and project manager for " -"Booktype, an open source project for writing, editing, and publishing books " -"online and in print." -msgstr "" -"Adam は今回の Book Sprint をリードしました。 Book Sprint メソッドを創設者でも" -"あり、一番経験豊富な Book Sprint のファシリテーターです。詳しい情報は を見てください。 3000人もの参加者" -"がいるフリーソフトウェアのフリーなマニュアルを作成するコミュニティである " -"FLOSS Manuals の創設者です。また、Booktype の創設者でプロジェクトマネージャー" -"です。 Booktype はオンラインで本の執筆、編集、出版を行うオープンソースプロ" -"ジェクトです。" - -msgid "" -"Add additional OpenStack Block Storage hosts (see )." -msgstr "" -"追加で OpenStack Block Storage ホストを増やす (see 参照)。" - -msgid "Add additional cloud controllers (see )." -msgstr "" -"追加でクラウドコントローラーを増やす ( を参" -"照)。" - -msgid "Add additional persistent storage to a virtual machine" -msgstr "永続的なストレージを仮想マシン(VM)へ追加する" - -msgid "Add additional persistent storage to a virtual machine (VM)" -msgstr "永続的なストレージを仮想マシン(VM)へ追加する" - -msgid "" -"Add all raw disks to one large RAID array, either hardware or software " -"based. You can partition this large array with the boot, root, swap, and LVM " -"areas. This option is simple to implement and uses all partitions. However, " -"disk I/O might suffer." -msgstr "" -"すべてのローディスクを 1 つの大きな RAID 配列に追加します。ここでは、ソフト" -"ウェアベースでもハードウェアベースでも構いません。この大きなRAID 配列を " -"boot、root、swap、LVM 領域に分割します。この選択肢はシンプルですべてのパー" -"ティションを利用することができますが、I/O性能に悪影響がでる可能性があります。" - -msgid "" -"Add an OpenStack Storage service (see the Object Storage chapter in the " -"OpenStack Installation Guide for your distribution)." -msgstr "" -"OpenStack Storage Service を追加する (お使いのディストリビューション向けの " -"OpenStack インストールガイド で Object Storage の章を参" -"照してください)" - -msgid "Add device snooper0 to bridge br-int:" -msgstr "" -"snooper0 デバイスを br-int ブリッジに追加します。" - -msgid "Add metadata to the container to allow the IP:" -msgstr "メタデータをコンテナーに追加して、IP を許可します。" - -msgid "Add the repository for the new release packages." -msgstr "新リリースのパッケージのリポジトリーを追加します。" - -msgid "Adding Cloud Controller Nodes" -msgstr "クラウドコントローラーノードの追加" - -msgid "Adding Custom Logging Statements" -msgstr "カスタムログの追加" - -msgid "Adding Images" -msgstr "イメージの追加" - -msgid "Adding Projects" -msgstr "プロジェクトの追加" - -msgid "Adding a Compute Node" -msgstr "コンピュートノードの追加" - -msgid "" -"Adding a new object storage node is different from adding compute or block " -"storage nodes. You still want to initially configure the server by using " -"your automated deployment and configuration-management systems. After that " -"is done, you need to add the local disks of the object storage node into the " -"object storage ring. The exact command to do this is the same command that " -"was used to add the initial disks to the ring. Simply rerun this command on " -"the object storage proxy server for all disks on the new object storage " -"node. 
Once this has been done, rebalance the ring and copy the resulting " -"ring files to the other storage nodes.Object Storageadding nodes" -msgstr "" -"新しいオブジェクトストレージノードの追加は、コンピュートノードやブロックスト" -"レージノードの追加とは異なります。サーバーの設定は、これまで通り自動配備シス" -"テムと構成管理システムを使って行えます。完了した後、オブジェクトストレージ" -"ノードのローカルディスクをオブジェクトストレージリングに追加する必要がありま" -"す。これを実行するコマンドは、最初にディスクをリングに追加するのに使用したコ" -"マンドと全く同じです。オブジェクトストレージプロキシサーバーにおいて、このコ" -"マンドを、新しいオブジェクトストレージノードにあるすべてのディスクに対して、" -"再実行するだけです。これが終わったら、リングの再バランスを行い、更新されたリ" -"ングファイルを他のストレージノードにコピーします。Object Storageadding nodes" - -msgid "Adding an Object Storage Node" -msgstr "オブジェクトストレージノードの追加" - -msgid "" -"Adding more public-facing control services or guest instance IPs should " -"always be part of your plan." -msgstr "" -"パブリック側に置かれるコントローラーサービスやゲストインスタンスのIPの追加" -"は、必ずアドレス計画の一部として入れておくべきです。" - -msgid "" -"Adding security groups is typically done on instance boot. When launching " -"from the dashboard, you do this on the Access & Security tab of the Launch Instance dialog. When " -"launching from the command line, append --security-groups with " -"a comma-separated list of security groups." -msgstr "" -"セキュリティグループの追加は、一般的にインスタンスの起動時に実行されます。" -"ダッシュボードから起動するとき、これはインスタンスの起動" -"ダイアログのアクセスとセキュリティタブにあります。コマン" -"ドラインから起動する場合には、--security-groups にセキュリティグ" -"ループのコンマ区切り一覧を指定します。" - -msgid "" -"Adding to a RAID array (RAID stands for redundant array of independent " -"disks), based on the number of disks you have available, so that you can add " -"capacity as your cloud grows. Some options are described in more detail " -"below." -msgstr "" -"使用可能なディスクの数をもとに、RAID 配列 (RAID は Redundant Array of " -"Independent Disks の略) に追加します。 こうすることで、クラウドが大きくなった" -"場合も容量を追加できます。オプションは、以下で詳しく説明しています。" - -msgid "" -"Additional optional restrictions on which compute nodes the flavor can run " -"on. This is implemented as key-value pairs that must match against the " -"corresponding key-value pairs on compute nodes. Can be used to implement " -"things like special resources (such as flavors that can run only on compute " -"nodes with GPU hardware)." -msgstr "" -"フレーバーを実行できるコンピュートノードに関する追加の制限。これはオプション" -"です。これは、コンピュートノードにおいて対応するキーバリューペアとして実装さ" -"れ、コンピュートノードでの対応するキーバリューペアと一致するものでなければい" -"けません。(GPU ハードウェアを持つコンピュートノードのみにおいて実行するフレー" -"バーのように) 特別なリソースのようなものを実装するために使用できます。" - -msgid "" -"Additionally, this instance in question was responsible for a very, very " -"large backup job each night. While \"The Issue\" (as we were now calling it) " -"didn't happen exactly when the backup happened, it was close enough (a few " -"hours) that we couldn't ignore it." -msgstr "" -"加えて、問題のインスタンスは毎晩非常に長いバックアップジョブを担っていた。" -"「あの問題」(今では我々はこの障害をこう呼んでいる)はバックアップが行われて" -"いる最中には起こらなかったが、(数時間たっていて)「あの問題」が起こるまであ" -"と少しのところだった。" - -msgid "" -"Admin establishing encrypted volume type, then " -"user selecting encrypted volume" -msgstr "" -"管理者が 暗号化ボリューム種別を設定し、利用者が" -"暗号化ボリュームを選択" - -msgid "Administrative Command-Line Tools" -msgstr "管理系コマンドラインツール" - -msgid "" -"Administrator configuration of size settings, known as flavors" -msgstr "フレーバー として知られる管理者のサイズ設定" - -msgid "Advanced Configuration" -msgstr "高度な設定" - -msgid "After a Cloud Controller or Storage Proxy Reboots" -msgstr "クラウドコントローラーまたはストレージプロキシの再起動後" - -msgid "After a Compute Node Reboots" -msgstr "コンピュートノードの再起動後" - -msgid "" -"After a cloud controller reboots, ensure that all required services were " -"successfully started. 
The following commands use ps and " -"grep to determine if nova, glance, and keystone are currently " -"running:" -msgstr "" -"クラウドコントローラーの再起動後、すべての必要なサービスが正常に起動されたこ" -"とを確認します。以下のコマンドは、psgrep を使用" -"して、nova、glance、keystone が現在動作していることを確認しています。" - -msgid "After a few minutes of troubleshooting, I saw the following details:" -msgstr "数分間のトラブル調査の後、以下の詳細が判明した。" - -msgid "" -"After digging into the nova (OpenStack Compute) code, I noticed two areas in " -"api/ec2/cloud.py potentially impacting my system:" -msgstr "" -"nova (OpenStack Compute) のコードを深堀りすると、私のシステムに影響を与える可" -"能性がある 2 つの領域を api/ec2/cloud.py で見つけまし" -"た。" - -msgid "" -"After finding the instance ID we headed over to /var/lib/nova/" -"instances to find the console.log: " -"" -msgstr "" -"インスタンスIDの発見後、console.log を探すため " -"/var/lib/nova/instances にアクセスした。" - -msgid "" -"After learning about scalability in computing from particle physics " -"experiments, such as ATLAS at the Large Hadron Collider (LHC) at CERN, Tom " -"worked on OpenStack clouds in production to support the Australian public " -"research sector. Tom currently serves as an OpenStack community manager and " -"works on OpenStack documentation in his spare time." -msgstr "" -"CERN の Large Hadron Collider (LHC) で ATLAS のような素粒子物理学実験でコン" -"ピューティングのスケーラビリティの経験を積んだ後、現在はオーストラリアの公的" -"な研究部門を支援するプロダクションの OpenStack クラウドに携わっていました。現" -"在は OpenStack のコミュニティマネージャーを務めており、空いた時間で " -"OpenStack ドキュメントプロジェクトに参加しています。" - -msgid "" -"After migration, users see different results from and " -". To ensure users see the same images in the list commands, " -"edit the /etc/glance/policy.json and /etc/" -"nova/policy.json files to contain \"context_is_admin\": " -"\"role:admin\", which limits access to private images for projects." -msgstr "" -"移行後、ユーザーは から異なる結果を見る" -"ことになります。ユーザーが一覧コマンドにおいて同じイメージをきちんと見るため" -"に、/etc/glance/policy.json/etc/nova/" -"policy.json ファイルを編集して、\"context_is_admin\": " -"\"role:admin\" を含めます。これは、プロジェクトのプライベートイメージ" -"へのアクセスを制限します。" - -msgid "" -"After reproducing the problem several times, I came to the unfortunate " -"conclusion that this cloud did indeed have a problem. Even worse, my time " -"was up in Kelowna and I had to return back to Calgary." -msgstr "" -"何度か問題が再現した後、私はこのクラウドが実は問題を抱えているという不幸な結" -"論に至った。更に悪いことに、私がケロウナから出発する時間になっており、カルガ" -"リーに戻らなければならなかった。" - -msgid "" -"After restarting the instance, everything was back up and running. We " -"reviewed the logs and saw that at some point, network communication stopped " -"and then everything went idle. We chalked this up to a random occurrence." -msgstr "" -"インスタンスの再起動後、全ては元通りに動くようになった。我々はログを見直し、" -"問題の箇所(ネットワーク通信が止まり、全ては待機状態になった)を見た。我々は" -"ランダムな事象の原因はこのインスタンスだと判断した。" - -msgid "" -"After running check the share's status: " -msgstr " の実行後、共有の状態を確認します。" - -msgid "" -"After that, use the nova command to reboot all instances " -"that were on c01.example.com while regenerating their XML files at the same " -"time:" -msgstr "" -"その後、nova コマンドを使って、c01.example.com にあったす" -"べてのインスタンスを再起動します。起動する際にインスタンスの XML ファイルを再" -"生成します:" - -msgid "After the Change Is Accepted" -msgstr "変更が受理された後" - -msgid "" -"After the change is reviewed, accepted, and lands in master, it " -"automatically moves to:" -msgstr "" -"変更がレビューされ、受理されて、マスターブランチにマージされると、バグの状態" -"は自動的に以下のようになります。" - -msgid "" -"After the compute node is successfully running, you must deal with the " -"instances that are hosted on that compute node because none of them are " -"running. 
Depending on your SLA with your users or customers, you might have " -"to start each instance and ensure that they start correctly." -msgstr "" -"コンピュートノードが正常に実行された後、そのコンピュートノードでホストされて" -"いるインスタンスはどれも動作していないので、そのコンピュートノードにおいてホ" -"ストされているインスタンスを処理する必要があります。ユーザーや顧客に対する " -"SLA によっては、各インスタンスを開始し、正常に起動していることを確認する必要" -"がある場合もあるでしょう。" - -msgid "After the dnsmasq processes start again, check whether DNS is working." -msgstr "dnsmasq再起動後に、DNSが動いているか確認します。" - -msgid "" -"After the packet is on this NIC, it transfers to the compute node's default " -"gateway. The packet is now most likely out of your control at this point. " -"The diagram depicts an external gateway. However, in the default " -"configuration with multi-host, the compute host is the gateway." -msgstr "" -"パケットはこのNICに送られた後、コンピュートノードのデフォルトゲートウェイに転" -"送されます。パケットはこの時点で、おそらくあなたの管理範囲外でしょう。図には" -"外部ゲートウェイを描いていますが、マルチホストのデフォルト構成では、コン" -"ピュートホストがゲートウェイです。" - -msgid "" -"After this command it is common practice to call from your " -"workstation, and once done press enter in your instance shell to unfreeze " -"it. Obviously you could automate this, but at least it will let you properly " -"synchronize." -msgstr "" -"このコマンドの後、お使いの端末から を呼び出すことが一般的な" -"慣習です。実行した後、インスタンスの中で Enter キーを押して、フリーズ解除しま" -"す。もちろん、これを自動化できますが、少なくとも適切に同期できるようになるで" -"しょう。" - -msgid "" -"After you consider these factors, you can determine how many cloud " -"controller cores you require. A typical eight core, 8 GB of RAM server is " -"sufficient for up to a rack of compute nodes — given the above caveats." -msgstr "" -"これらの要素を検討した後、クラウドコントローラにどのくらいのコア数が必要なの" -"か決定することができます。上記で説明した留意事項の下、典型的には、ラック 1 本" -"分のコンピュートノードに対して8 コア、メモリ 8GB のサーバで充分です。" - -msgid "" -"After you establish that the instance booted properly, the task is to figure " -"out where the failure is." -msgstr "" -"インスタンスが正しく起動した後、この手順でどこが問題かを切り分けることができ" -"ます。" - -msgid "" -"After you have migrated all instances, ensure that the nova-compute service has stopped:" -msgstr "" -"すべてのインスタンスをマイグレーションした後、nova-compute サー" -"ビスが停止していることを確認します:" - -msgid "" -"After you have the list, you can use the nova command to start each instance:" -msgstr "" -"一覧を取得した後、各インスタンスを起動するために nova コマンドを使用できま" -"す。" - -msgid "" -"Again, it turns out that the image was a snapshot. The three other instances " -"that successfully started were standard cloud images. Was it a problem with " -"snapshots? That didn't make sense." -msgstr "" -"再度、イメージがスナップショットであることが判明した。無事に起動した他の3イ" -"ンスタンスは標準のクラウドイメージであった。これはスナップショットの問題か?" -"それは意味が無かった。" - -msgid "" -"Again, the right answer depends on your environment. You have to make your " -"decision based on the trade-offs between space utilization, simplicity, and " -"I/O performance." -msgstr "" -"ここでも、環境によって適したソリューションが変わります。スペース使用状況、シ" -"ンプルさ、I/O パフォーマンスの長所、短所をベースに意思決定していく必要があり" -"ます。" - -msgid "Ah-hah! So OpenStack was deleting it. But why?" -msgstr "あっはっは!じゃぁ、OpenStack が削除したのか。でも何故?" - -msgid "" -"All AMQP—Advanced Message Queue Protocol—messages for services are received " -"and sent according to the queue brokerAdvanced Message Queuing Protocol (AMQP)" -msgstr "" -"サービスのためのすべてのAMQP(Advavnced Message Queue Protocol)メッセージは" -"キューブローカーによって送受信されます。Advanced Message Queuing Protocol (AMQP)" - -msgid "" -"All OpenStack projects use Launchpad for bug tracking. You'll need to create an account on " -"Launchpad before you can submit a bug report." 
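-#. A minimal sketch of the DNS check described in the entries above, run
-#. from inside the instance; openstack.org is used purely as an
-#. illustrative hostname, and a successful lookup prints the resolved
-#. address:
-#.   $ host openstack.org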
-msgstr "" -"全ての OpenStack プロジェクトでは、バグ管理に Launchpad を使っています。バグ報告を行う前に、" -"Launchpad のアカウントを作る必要があります。" - -msgid "" -"All files and directories in /var/lib/nova/instances are " -"uniquely named. The files in _base are uniquely titled for the glance image " -"that they are based on, and the directory names instance-xxxxxxxx are uniquely titled for that particular instance. For example, if you " -"copy all data from /var/lib/nova/instances on one compute node " -"to another, you do not overwrite any files or cause any damage to images " -"that have the same unique name, because they are essentially the same file." -msgstr "" -"/var/lib/nova/instances にあるすべてのファイルとディレクトリは一" -"意に名前が付けられています。_base にあるファイルは元となった glance イメージ" -"に対応する一意に名前が付けられています。また、instance-xxxxxxxx " -"という名前が付けられたディレクトリは特定のインスタンスに対して一意にタイトル" -"が付けられています。たとえば、あるコンピュートノードにある /var/lib/" -"nova/instances のすべてのデータを他のノードにコピーしたとしても、ファ" -"イルを上書きすることはありませんし、また同じ一意な名前を持つイメージにダメー" -"ジを与えることもありません。同じ一意な名前を持つものは本質的に同じファイルだ" -"からです。" - -msgid "" -"All in all, just issue the \"reboot\" command. The operating system cleanly " -"shuts down services and then automatically reboots. If you want to be very " -"thorough, run your backup jobs just before you reboot.maintenance/debuggingrebooting " -"followingstoragestorage proxy maintenancerebootcloud controller or storage proxycloud controllersrebooting" -msgstr "" -"大体の場合、単に「再起動」コマンドを発行します。オペレーティングシステムが" -"サービスを正常にシャットダウンして、その後、自動的に再起動します。万全を期し" -"たい場合、再起動する前にバックアップジョブを実行します。maintenance/debuggingrebooting " -"followingstoragestorage proxy maintenancerebootcloud controller or storage proxycloud controllersrebooting" - -msgid "" -"All interfaces on the br-tun are internal to Open vSwitch. To " -"monitor traffic on them, you need to set up a mirror port as described above " -"for patch-tun in the br-int bridge." -msgstr "" -"br-tun にあるすべてのインターフェースは、Open vSwitch 内部のもの" -"です。それらの通信を監視する場合、br-int にある patch-" -"tun 向けに上で説明したようなミラーポートをセットアップする必要がありま" -"す。" - -msgid "All nodes" -msgstr "全ノード" - -msgid "All nova services" -msgstr "すべての Nova サービス" - -msgid "" -"All of the alert types mentioned earlier can also be used for trend " -"reporting. Some other trend examples include:trendingreport examples" -msgstr "" -"これまでに示した全てのアラートタイプは、トレンドレポートに利用可能です。その" -"他のトレンドの例は以下の通りです。trendingreport examples" - -msgid "" -"All of the code for OpenStack lives in /opt/stack. Go to the " -"swift directory in the shell screen and edit your middleware " -"module." -msgstr "" -"すべての OpenStack のコードは /opt/stack にあります。" -"shell セッションの screen の中で swift ディレクトリに移動し、あ" -"なたのミドルウェアモジュールを編集してください。" - -msgid "" -"All public access, whether direct, through a command-line client, or through " -"the web-based dashboard, uses the API service. Find the API reference at " -".API (application programming interface)design considerationsdesign considerationsAPI " -"support" -msgstr "" -"すべてのパブリックアクセス(直接アクセス、コマンドライン、ウェブベースダッ" -"シュボード)はすべてAPIサービスを使用します。APIリファレンスはです。API (application programming interface)設計" -"上の考慮事項設" -"計上の考慮事項API サポート" - -msgid "" -"All servers running OpenStack components should be able to access an " -"appropriate NTP server. You may decide to set up one locally or use the " -"public pools available from the Network Time Protocol project." -msgstr "" -"OpenStack コンポーネントが稼働しているすべてのサーバは適切なNTPサーバにアクセ" -"ス可能であるべきです。 " -"Network Time Protocol projectが提供しているパブリックサーバかローカル" -"にNTPサーバを構築するか決定する必要があるでしょう。" - -msgid "" -"All sites are based on Ubuntu 14.04, with KVM as the hypervisor. 
The " -"OpenStack version in use is typically the current stable version, with 5 to " -"10 percent back-ported code from trunk and modifications." -msgstr "" -"全サイトは Ubuntu 14.04 をベースにしており、ハイパーバイザとして KVM を使用し" -"ています。使用している OpenStack のバージョンは基本的に安定バージョンであり、" -"5~10%のコードが開発コードからバックポートされたか、修正されています。" - -msgid "" -"All translation of GRE tunnels to and from internal VLANs happens on this " -"bridge." -msgstr "このブリッジで GRE トンネルと内部 VLAN の相互変換が行われます。" - -msgid "" -"All vulnerabilities discovered by community members (or users) can be " -"reported to the team." -msgstr "" -"コミュニティメンバー (やユーザー) が発見した全ての脆弱性はこのチームに報告さ" -"れます。" - -msgid "Allow DHCP client traffic." -msgstr "DHCP クライアント通信の許可。" - -msgid "Allow IPv6 ICMP traffic to allow RA packets." -msgstr "RA パケットを許可するための IPv6 ICMP 通信の許可。" - -msgid "Allow traffic from defined IP/MAC pairs." -msgstr "定義済み IP/MAC ペアからの通信許可。" - -msgid "Allows you to fetch images from Amazon S3." -msgstr "Amazon S3からイメージを取得する事を許可します。" - -msgid "" -"Allows you to fetch images from a web server. You cannot write images by " -"using this mode." -msgstr "" -"イメージをウェブサーバから取得する事を許可します。このモードではイメージの書" -"き込みはできません。" - -msgid "Allows you to store images as objects." -msgstr "イメージをオブジェクトとして格納する事を許可します" - -msgid "" -"Almost all OpenStack components have an underlying database to store " -"persistent information. Usually this database is MySQL. Normal MySQL " -"administration is applicable to these databases. OpenStack does not " -"configure the databases out of the ordinary. Basic administration includes " -"performance tweaking, high availability, backup, recovery, and repairing. " -"For more information, see a standard MySQL administration guide.databasesmaintenance/" -"debuggingmaintenance/debuggingdatabases" -msgstr "" -"ほとんどすべての OpenStack コンポーネントは、永続的な情報を保存するために内部" -"でデータベースを使用しています。このデータベースは通常 MySQL です。通常の " -"MySQL の管理方法がこれらのデータベースに適用できます。OpenStack は特別な方法" -"でデータベースを設定しているわけではありません。基本的な管理として、パフォー" -"マンス調整、高可用性、バックアップ、リカバリーおよび修理などがあります。さら" -"なる情報は標準の MySQL 管理ガイドを参照してください。databasesmaintenance/debuggingmaintenance/" -"debuggingdatabases" - -msgid "" -"Also check that all services are functioning. The following set of commands " -"sources the openrc file, then runs some basic glance, nova, and " -"openstack commands. If the commands work as expected, you can be confident " -"that those services are in working condition:" -msgstr "" -"また、すべてのサービスが正しく機能していることを確認します。以下のコマンド群" -"は、openrc ファイルを読み込みます。そして、いくつかの基本的な " -"glanec、nova、openstack コマンドを実行します。コマンドが期待したとおりに動作" -"すれば、それらのサービスが動作状態にあると確認できます。" - -msgid "Also check that it is functioning:" -msgstr "また、正しく機能していることを確認します。" - -msgid "Also ensure that it has successfully connected to the AMQP server:" -msgstr "AMQP サーバーに正常に接続できることも確認します。" - -msgid "" -"Also, in practice, the nova-compute services on the " -"compute nodes do not always reconnect cleanly to rabbitmq hosted on the " -"controller when it comes back up after a long reboot; a restart on the nova " -"services on the compute nodes is required." -msgstr "" -"実際には、コンピュートノードの nova-compute サービスが、コ" -"ントローラー上で動作している rabbitmq に正しく再接続されない場合があります。" -"時間のかかるリブートから戻ってきた場合や、コンピュートノードのnova サービスを" -"再起動する必要がある場合です。" - -msgid "" -"Also, you need to decide whether you want to support object storage in your " -"cloud. The two common use cases for providing object storage in a compute " -"cloud are:" -msgstr "" -"クラウド内でオブジェクトストレージの利用を検討する必要があります。コンピュー" -"トクラウドで提供されるオブジェクトストレージの一般的な利用方法は以下の二つで" -"す。" - -msgid "Alter the configuration until it works." 
-msgstr "正常に動作するまで設定を変更する。" - -msgid "" -"Alternatively, if you want someone to help guide you through the decisions " -"about the underlying hardware or your applications, perhaps adding in a few " -"features or integrating components along the way, consider contacting one of " -"the system integrators with OpenStack experience, such as Mirantis or " -"Metacloud." -msgstr "" -"代わりに、ベースとするハードウェアやアプリケーション、いくつかの新機能の追" -"加、コンポーネントをくみ上げる方法を判断するために、誰かに支援してほしい場" -"合、Mirantis や Metacloud などの OpenStack の経験豊富なシステムインテグレー" -"ターに連絡することを検討してください。" - -msgid "" -"Alternatively, it is possible to configure VLAN-based networks to use " -"external routers rather than the l3-agent shown here, so long as the " -"external router is on the same VLAN:" -msgstr "" -"これとは別に、外部ルーターが同じ VLAN にあれば、ここの示されている L3 エー" -"ジェントの代わりに外部ルーターを使用するよう、VLAN ベースのネットワークを設定" -"できます。" - -msgid "" -"Although the title of this story is much more dramatic than the actual " -"event, I don't think, or hope, that I'll have the opportunity to use " -"\"Valentine's Day Massacre\" again in a title." -msgstr "" -"この物語のタイトルは実際の事件よりかなりドラマティックだが、私はタイトル中に" -"「バレンタインデーの大虐殺」を使用する機会が再びあるとは思わない(し望まな" -"い)。" - -msgid "" -"Although this method is not documented or supported, you can use it when " -"your compute node is permanently offline but you have instances locally " -"stored on it." -msgstr "" -"この方法はドキュメントに書かれておらず、サポートされていない方法ですが、コン" -"ピュートノードが完全にオフラインになってしまったが、インスタンスがローカルに" -"保存されているときに、この方法を使用できます。" - -msgid "" -"Among object, container, and " -"account servers" -msgstr "" -"object servercontainer serveraccount server 間" - -msgid "Among the log statements you'll see the lines:" -msgstr "ログの中に以下の行があるでしょう。" - -msgid "Amount of available physical storage" -msgstr "利用可能な物理ディスクの総量で決まる" - -msgid "" -"An IP address plan might be broken down into the following sections:" -"IP addressessections of" -msgstr "" -"IPアドレス計画としては次のような用途で分類されるでしょう:IPアドレス分類" - -msgid "" -"An OpenStack cloud does not have much value without users. This chapter " -"covers topics that relate to managing users, projects, and quotas. This " -"chapter describes users and projects as described by version 2 of the " -"OpenStack Identity API." -msgstr "" -"OpenStack クラウドは、ユーザーなしでは特に価値はありません。本章では、ユー" -"ザー、プロジェクト、クォータの管理に関するトピックを記載します。また、" -"OpenStack Identity API のバージョン 2 で説明されているように、ユーザーとプロ" -"ジェクトについても説明します。" - -msgid "" -"An OpenStack installation can potentially have many subnets (ranges of IP " -"addresses) and different types of services in each. An IP address plan can " -"assist with a shared understanding of network partition purposes and " -"scalability. Control services can have public and private IP addresses, and " -"as noted above, there are a couple of options for an instance's public " -"addresses.IP addressesaddress planningnetwork designIP address " -"planning" -msgstr "" -"OpenStackのインストールでは潜在的に多くのサブネット(IPアドレスの範囲) とそれ" -"ぞれに異なるタイプのサービスを持つ可能性があります。あるIPアドレスプランは、" -"ネットワーク分割の目的とスケーラビリティの共有された理解を手助けします。コン" -"トロールサービスはパブリックとプライベートIPアドレスを持つ事ができ、上記の通" -"り、インスタンスのパブリックアドレスとして2種類のオプションが存在します。" -"IPアドレスアドレ" -"ス計画ネット" -"ワーク設計IPアドレス計画" - -msgid "" -"An academic turned software-developer-slash-operator, Lorin worked as the " -"lead architect for Cloud Services at Nimbis Services, where he deploys " -"OpenStack for technical computing applications. He has been working with " -"OpenStack since the Cactus release. Previously, he worked on high-" -"performance computing extensions for OpenStack at University of Southern " -"California's Information Sciences Institute (USC-ISI)." 
-msgstr "" -"アカデミック出身のソフトウェア開発者・運用者である彼は、Nimbis Services でク" -"ラウドサービスの Lead Architect として働いていました。Nimbis Service では彼は" -"技術計算アプリケーション用の OpenStack を運用しています。 Cactus リリース以" -"来 OpenStack に携わっています。以前は、University of Southern California's " -"Information Sciences Institute (USC-ISI) で OpenStack の high-performance " -"computing 向けの拡張を行いました。" - -msgid "" -"An administrative super user, which has full permissions across all projects " -"and should be used with great care" -msgstr "" -"すべてのプロジェクトにわたり全権限を持つ管理ユーザー。非常に注意して使用する" -"必要があります。" - -msgid "" -"An advanced use of this general concept allows different flavor types to run " -"with different CPU and RAM allocation ratios so that high-intensity " -"computing loads and low-intensity development and testing systems can share " -"the same cloud without either starving the high-use systems or wasting " -"resources on low-utilization systems. This works by setting " -"metadata in your host aggregates and matching " -"extra_specs in your flavor types." -msgstr "" -"この一般的なコンセプトを高度なレベルで使用すると、集中度の高いコンピュート" -"ロードや負荷の低い開発やテストシステムが使用量の多いシステムのリソースが不足" -"したり、使用量の低いシステムでリソースを無駄にしたりしないで、同じクラウドを" -"共有できるように、異なるフレーバーの種別が、異なる CPU および RAM 割当の比率" -"で実行できるようになります。 これは、ホストアグリゲートに " -"metadata を設定して、フレーバー種別の" -"extra_specs と一致させると機能します。" - -msgid "" -"An alternative to enabling the RabbitMQ web management interface is to use " -"the rabbitmqctl " -"commands. For example, rabbitmqctl list_queues| grep cinder displays any messages left in the queue. If there are messages, " -"it's a possible sign that cinder services didn't connect properly to " -"rabbitmq and might have to be restarted." -msgstr "" -"RabbitMQ Web 管理インターフェイスを有効にするもう一つの方法としては、 " -"rabbitmqctl コマン" -"ドを利用します。例えば rabbitmqctl list_queues| grep cinder は、キューに残っているメッセージを表示します。メッセージが存在する場" -"合、CinderサービスがRabbitMQに正しく接続できてない可能性があり、再起動が必要" -"かもしれません。" - -msgid "" -"An asterisk (*) indicates when the example architecture deviates from the " -"settings of a default installation. We'll offer explanations for those " -"deviations next.objectsobject storagestorageobject storagemigrationlive migrationIP addressesfloatingfloating IP addressstorageblock storageblock storagedashboardlegacy networking " -"(nova)features supported by" -msgstr "" -"アスタリスク (*) は、アーキテクチャーの例がデフォルトインストールの設定から逸" -"脱している箇所を示しています。この逸脱については、次のセクションで説明しま" -"す。オブジェクトObject StorageストレージObject Storageマイグレーション" -"ライブマイグレー" -"ションIP アドレ" -"スfloatingfloating IP addressストレージブロックストレージ" -"ブロックスト" -"レージダッシュ" -"ボードレガシー" -"ネットワーク (nova)サポート対象機能" - -msgid "" -"An attempt was made to deprecate nova-network during the " -"Havana release, which was aborted due to the lack of equivalent " -"functionality (such as the FlatDHCP multi-host high-availability mode " -"mentioned in this guide), lack of a migration path between versions, " -"insufficient testing, and simplicity when used for the more straightforward " -"use cases nova-network traditionally supported. Though " -"significant effort has been made to address these concerns, nova-" -"network was not be deprecated in the Juno release. 
In addition, to " -"a limited degree, patches to nova-network have again " -"begin to be accepted, such as adding a per-network settings feature and SR-" -"IOV support in Juno.Junonova network deprecation" -msgstr "" -"Havana リリース中に nova-network を廃止しようという試みが" -"ありました。これは、このガイドで言及した FlatDHCP マルチホスト高可用性モード" -"などの同等機能の不足、バージョン間の移行パスの不足、不十分なテスト、伝統的に" -"サポートされる nova-network のより簡単なユースケースに使用" -"する場合のシンプルさ、などの理由により中断されました。甚大な努力によりこれら" -"の心配事を解決してきましたが、nova-network は Juno リリー" -"スにおいて廃止されませんでした。さらに、Juno においてネットワークごとの設定機" -"能や SR-IOV の追加などの限定された範囲で、nova-network へ" -"のパッチが再び受け入れられてきました。Junonova network deprecation" - -msgid "" -"An authorization policy can be composed by one or more rules. If more rules " -"are specified, evaluation policy is successful if any of the rules evaluates " -"successfully; if an API operation matches multiple policies, then all the " -"policies must evaluate successfully. Also, authorization rules are " -"recursive. Once a rule is matched, the rule(s) can be resolved to another " -"rule, until a terminal rule is reached. These are the rules defined:" -msgstr "" -"認可ポリシーは、一つまたは複数のルールにより構成できます。複数のルールを指定" -"すると、いずれかのルールが成功と評価されれば、評価エンジンが成功になります。" -"API 操作が複数のポリシーに一致すると、すべてのポリシーが成功と評価される必要" -"があります。認可ルールは再帰的にもできます。あるルールにマッチした場合、これ" -"以上展開できないルールに達するまで、そのルールは別のルールに展開されます。以" -"下のルールが定義できます。" - -msgid "" -"An automated deployment system installs and configures operating systems on " -"new servers, without intervention, after the absolute minimum amount of " -"manual work, including physical racking, MAC-to-IP assignment, and power " -"configuration. Typically, solutions rely on wrappers around PXE boot and " -"TFTP servers for the basic operating system install and then hand off to an " -"automated configuration management system.deploymentprovisioning/deploymentprovisioning/deploymentautomated deployment" -msgstr "" -"自動のデプロイメントシステムは、物理ラッキング、MAC から IP アドレスの割当、" -"電源設定など、必要最小限の手作業のみで、介入なしに新規サーバー上にオペレー" -"ティングシステムのインストールと設定を行います。ソリューションは通常、PXE " -"ブートや TFTP サーバー関連のラッパーに依存して基本のオペレーティングシステム" -"をインストールして、次に自動設定管理システムに委譲されます。デプロイメントプロビジョニング/デプロイ" -"メントプロビジョニン" -"グ/デプロイメント自動デプロイメント" - -msgid "An external server outside of the cloud" -msgstr "クラウド外部のサーバー" - -msgid "" -"An hour later I received the same alert, but for another compute node. Crap. " -"OK, now there's definitely a problem going on. Just like the original node, " -"I was able to log in by SSH. The bond0 NIC was DOWN but the 1gb NIC was " -"active." -msgstr "" -"1時間後、私は同じ警告を受信したが、別のコンピュートノードだった。拍手。OK、" -"問題は間違いなく現在進行中だ。元のノードと全く同様に、私は SSH でログインする" -"ことが出来た。bond0 NIC は DOWN だったが、1Gb NIC は有効だった。" - -msgid "" -"An initial idea was to just increase the lease time. If the instance only " -"renewed once every week, the chances of this problem happening would be " -"tremendously smaller than every minute. This didn't solve the problem, " -"though. It was just covering the problem up." -msgstr "" -"最初のアイデアは、単にリース時間を増やすことだった。もしインスタンスが毎週1" -"回だけIPアドレスを更新するのであれば、毎分更新する場合よりこの問題が起こる可" -"能性は極端に低くなるだろう。これはこの問題を解決しないが、問題を単に取り繕う" -"ことはできる。" - -msgid "An instance running on that compute node" -msgstr "コンピュートノード内のインスタンス" - -msgid "" -"An integral part of a configuration-management system is the item that it " -"controls. You should carefully consider all of the items that you want, or " -"do not want, to be automatically managed. For example, you may not want to " -"automatically format hard drives with user data." 
-msgstr "" -"設定管理システムの不可欠な部分は、このシステムが制御する項目です。自動管理を" -"する項目、しない項目をすべて慎重に検討していく必要があります。例えば、ユー" -"ザーデータが含まれるハードドライブは自動フォーマットは必要ありません。" - -msgid "" -"An integrated OpenStack project (code-named ceilometer) collects metering " -"and event data relating to OpenStack services. Data collected by the " -"Telemetry module could be used for billing. Depending on deployment " -"configuration, collected data may be accessible to users based on the " -"deployment configuration. The Telemetry service provides a REST API " -"documented at . You can read more about the module in the " -"OpenStack Cloud Administrator Guide or in the developer documentation.monitoringmetering and telemetrytelemetry/meteringmetering/telemetryceilometerlogging/" -"monitoringceilometer project" -msgstr "" -"OpenStack の統合プロジェクト (コード名 ceilometer) は、OpenStack のサービスに" -"関連するメータリングとイベントデータを収集します。Telemetry モジュールにより" -"収集されるデータは、課金のために使用できます。環境の設定によっては、ユーザー" -"が設定に基づいて収集したデータにアクセスできるかもしれません。Telemetry サー" -"ビスは、 にドキュメント化されている REST API を提供します。このモジュール" -"の詳細は、OpenStack Cloud Administrator Guide や developer " -"documentation にあります。monitoringmetering and telemetrytelemetry/" -"meteringmetering/telemetryceilometerlogging/monitoringceilometer " -"project" - -msgid "" -"An upgrade pre-testing system is excellent for getting the configuration to " -"work. However, it is important to note that the historical use of the system " -"and differences in user interaction can affect the success of upgrades." -msgstr "" -"アップグレード前テストシステムは、設定を動作させるために優れています。しかし" -"ながら、システムの歴史的な使用法やユーザー操作における違いにより、アップグ" -"レードの成否に影響することに注意することが重要です。" - -msgid "And finally, you can disassociate the floating IP:" -msgstr "最後に、floating IPを開放します。" - -msgid "" -"And the best part: the same user had just tried creating a CentOS instance. " -"What?" -msgstr "" -"そして、最も重要なこと。同じユーザが CentOS インスタンスを作成しようとしたば" -"かりだった。何だと?" - -msgid "Anne Gentle" -msgstr "Anne Gentle" - -msgid "" -"Anne is the documentation coordinator for OpenStack and also served as an " -"individual contributor to the Google Documentation Summit in 2011, working " -"with the Open Street Maps team. She has worked on book sprints in the past, " -"with FLOSS Manuals’ Adam Hyde facilitating. Anne lives in Austin, Texas." -msgstr "" -"Anne は OpenStack のドキュメントコーディネーターで、2011年の Google Doc " -"Summit では individual contributor (個人コントリビュータ) を努め Open " -"Street Maps チームとともに活動しました。Adam Hyde が進めていた FLOSS Manuals " -"の以前の doc sprint にも参加しています。テキサス州オースティンに住んでいま" -"す。" - -msgid "" -"Another common concept across various OpenStack projects is that of periodic " -"tasks. Periodic tasks are much like cron jobs on traditional Unix systems, " -"but they are run inside an OpenStack process. For example, when OpenStack " -"Compute (nova) needs to work out what images it can remove from its local " -"cache, it runs a periodic task to do this.periodic tasksconfiguration optionsperiodic task " -"implementation" -msgstr "" -"様々な OpenStack プロジェクトに共通する別の考え方として、周期的タスク " -"(periodic task) があります。周期的タスクは伝統的な Unix システムの cron ジョ" -"ブに似ていますが、OpenStack プロセスの内部で実行されます。例えば、OpenStack " -"Compute (nova) はローカルキャッシュからどのイメージを削除できるかを決める必要" -"がある際に、これを行うために周期的タスクを実行します。periodic tasksconfiguration optionsperiodic " -"task implementation" - -msgid "" -"Another example is a user consuming a very large amount of " -"bandwidthbandwidthrecognizing DDOS attacks. Again, " -"the key is to understand what the user is doing. 
If she naturally needs a " -"high amount of bandwidth, you might have to limit her transmission rate as " -"to not affect other users or move her to an area with more bandwidth " -"available. On the other hand, maybe her instance has been hacked and is part " -"of a botnet launching DDOS attacks. Resolution of this issue is the same as " -"though any other server on your network has been hacked. Contact the user " -"and give her time to respond. If she doesn't respond, shut down the instance." -msgstr "" -"別の例は、あるユーザーが非常に多くの帯域を消費することですbandwidthrecognizing DDOS " -"attacks。繰り返しですが、ユーザーが実行していることを" -"理解することが重要です。必ず多くの帯域を使用する必要があれば、他のユーザーに" -"影響を与えないように通信帯域を制限する、または、より多くの帯域を利用可能な別" -"の場所に移動させる必要があるかもしれません。一方、ユーザーのインスタンスが侵" -"入され、DDOS 攻撃を行っているボットネットの一部になっているかもしれません。こ" -"の問題の解決法は、ネットワークにある他のサーバーが侵入された場合と同じです。" -"ユーザーに連絡し、対応する時間を与えます。もし対応しなければ、そのインスタン" -"スを停止します。" - -msgid "Another example is displaying all properties for a certain image:" -msgstr "" -"もう一つの例は、特定のイメージに関するすべてのプロパティを表示することです。" - -msgid "" -"Another fact that's often forgotten is that when a new file is being " -"uploaded, the proxy server must write out as many streams as there are " -"replicas—giving a multiple of network traffic. For a three-replica cluster, " -"10 Gbps in means 30 Gbps out. Combining this with the previous high " -"bandwidth bandwidthprivate vs. public network recommendations demands of replication is what results in the recommendation that " -"your private network be of significantly higher bandwidth than your public " -"need be. Oh, and OpenStack Object Storage communicates internally with " -"unencrypted, unauthenticated rsync for performance—you do want the private " -"network to be private." -msgstr "" -"また、新規ファイルがアップロードされると、プロキシサーバーはレプリカの数だけ" -"書き込みが行われ、複数のネットワークトラフィックが発生するという点も頻繁に忘" -"れられています。 レプリカが3つのクラスターでは、受信トラフィックが 10 Gbps " -"とすると、送信トラフィックは 30 Gbps ということになります。これを以前のレプリ" -"カの帯域幅の需要がbandwidthプライベート vs. パブリックネットワークの推奨事項 高いことと合わせて考えると、パブリックネットワークよ" -"りもプライベートネットワークのほうがはるかに高い帯域幅が必要であることが分か" -"ります。OpenStack Object Storage はパフォーマンスを保つために内部では、暗号化" -"なし、認証なしの rsync と通信します (プライベートネットワークをプライベートに" -"保つため)。" - -msgid "" -"Any time an instance shuts down unexpectedly, it might have problems on " -"boot. For example, the instance might require an fsck on the " -"root partition. If this happens, the user can use the dashboard VNC console " -"to fix this." -msgstr "" -"予期せずシャットダウンしたときは、ブートに問題があるかもしれません。たとえ" -"ば、インスタンスがルートパーティションにおいて fsck を実行する必" -"要があるかもしれません。もしこうなっても、これを修復するためにダッシュボード " -"VNC コンソールを使用できます。" - -msgid "Anywhere" -msgstr "どこからでも" - -msgid "" -"Apache Qpid offers 100 percent compatibility with the Advanced Message " -"Queuing Protocol Standard, and its broker is available for both C++ and Java." -msgstr "" -"Apache Qpid は、Advanced Message Queuing Protocol の標準との 100% の互換性を" -"提供し、ブローカーは C++ と Java の両方で利用可能です。" - -msgid "Application Programming Interface (API)" -msgstr "Application Programming Interface (API)" - -msgid "Apr 11, 2013" -msgstr "2013年4月11日" - -msgid "Apr 15, 2011" -msgstr "2011年4月15日" - -msgid "Apr 17, 2014" -msgstr "2014年4月17日" - -msgid "Apr 3, 2014" -msgstr "2014年4月3日" - -msgid "Apr 30, 2015" -msgstr "2015年4月" - -msgid "Apr 4, 2013" -msgstr "2013年4月4日" - -msgid "Apr 5, 2012" -msgstr "2012年4月5日" - -msgid "" -"Arbitrary local files can also be placed into the instance file system at " -"creation time by using the --file <dst-path=src-path> " -"option. 
You may store up to five files.file injection" -msgstr "" -"--file <dst-path=src-path> 使用することにより、任意のロー" -"カルファイルを生成時にインスタンスのファイルシステムの中に置けます。5 ファイ" -"ルまで保存できます。file injection" - -msgid "Architecture" -msgstr "アーキテクチャー" - -msgid "Architecture Examples" -msgstr "アーキテクチャー例" - -msgid "" -"Armed with a patched qemu and a way to reproduce, we set out to see if we've " -"finally solved The Issue. After 48 hours straight of hammering the instance " -"with bandwidth, we were confident. The rest is history. You can search the " -"bug report for \"joe\" to find my comments and actual tests." -msgstr "" -"パッチを当てた qemu と再現方法を携えて、我々は「あの問題」を最終的に解決した" -"かを確認する作業に着手した。インスタンスにネットワーク負荷をかけてから丸48時" -"間後、我々は確信していた。その後のことは知っての通りだ。あなたは、joe へのバ" -"グ報告を検索し、私のコメントと実際のテストを見つけることができる。" - -msgid "" -"Armed with your IP address layout and numbers and knowledge about the " -"topologies and services you can use, it's now time to prepare the network " -"for your installation. Be sure to also check out the OpenStack Security Guide for tips on securing " -"your network. We wish you a good relationship with your networking team!" -msgstr "" -"IPアドレスレイアウトと利用できるトポロジーおよびサービスの知識を武器にネット" -"ワークを構成する準備をする時が来ました。また、ネットワークをセキュアに保つた" -"めに OpenStack セキュリティガイドを確認する事を忘れないでください。ネットワークチームと良い関" -"係をとなる事を願っています!" - -msgid "" -"Artificial scale testing can go only so far. After your cloud is upgraded, " -"you must pay careful attention to the performance aspects of your cloud." -msgstr "" -"人工的なスケールテストは、あくまである程度のものです。クラウドをアップグレー" -"ドした後、クラウドのパフォーマンス観点で十分に注意する必要があります。" - -msgid "" -"As a cloud administrative user, you can use the OpenStack dashboard to " -"create and manage projects, users, images, and flavors. Users are allowed to " -"create and manage images within specified projects and to share images, " -"depending on the Image service configuration. Typically, the policy " -"configuration allows admin users only to set quotas and create and manage " -"services. The dashboard provides an Admin tab with a " -"System Panel and an Identity tab. " -"These interfaces give you access to system information and usage as well as " -"to settings for configuring what end users can do. Refer to the OpenStack " -"Admin User Guide for detailed how-to information about using the " -"dashboard as an admin user.working " -"environmentdashboarddashboard" -msgstr "" -"クラウドの管理ユーザーとして OpenStack Dashboard を使用して、プロジェクト、" -"ユーザー、イメージ、フレーバーの作成および管理を行うことができます。ユーザー" -"は Image service の設定に応じて、指定されたプロジェクト内でイメージを作成/管" -"理したり、共有したりすることができます。通常、ポリシーの設定では、管理ユー" -"ザーのみがクォータの設定とサービスの作成/管理を行うことができます。ダッシュ" -"ボードには 管理 タブがあり、システムパネルユーザー管理タブ に分かれています。これらの" -"インターフェースにより、システム情報と使用状況のデータにアクセスすることがで" -"きるのに加えて、エンドユーザーが実行可能な操作を設定することもできます。管理" -"ユーザーとしてダッシュボードを使用する方法についての詳しい説明は OpenStack 管理ユーザーガイド を参照してください。作業環境ダッシュボードダッシュボード" - -msgid "" -"As a community, we take security very seriously and follow a specific " -"process for reporting potential issues. We vigilantly pursue fixes and " -"regularly eliminate exposures. You can report security issues you discover " -"through this specific process. The OpenStack Vulnerability Management Team " -"is a very small group of experts in vulnerability management drawn from the " -"OpenStack community. The team's job is facilitating the reporting of " -"vulnerabilities, coordinating security fixes and handling progressive " -"disclosure of the vulnerability information. 
Specifically, the team is " -"responsible for the following functions:vulnerability tracking/managementsecurity issuesreporting/fixing vulnerabilitiesOpenStack communitysecurity information" -msgstr "" -"我々は、コミュニティーとして、セキュリティーを非常に重要だと考えており、潜在" -"的な問題の報告は決められたプロセスに基いて処理されます。修正を用心深く追跡" -"し、定期的にセキュリティー上の問題点を取り除きます。あなたは発見したセキュリ" -"ティー上の問題をこの決められたプロセスを通して報告できます。OpenStack 脆弱性" -"管理チームは、OpenStackコミュニティーから選ばれた脆弱性管理の専門家で構成され" -"るごく少人数のグループです。このチームの仕事は、脆弱性の報告を手助けし、セ" -"キュリティー上の修正の調整を行い、脆弱性情報の公開を続けていくことです。特" -"に、このチームは以下の役割を担っています。vulnerability tracking/managementsecurity issuesreporting/fixing vulnerabilitiesOpenStack communitysecurity information" - -msgid "" -"As a last resort, our network admin (Alvaro) and myself sat down with four " -"terminal windows, a pencil, and a piece of paper. In one window, we ran " -"ping. In the second window, we ran on the cloud controller. " -"In the third, on the compute node. And the forth had " -" on the instance. For background, this cloud was a multi-" -"node, non-multi-host setup." -msgstr "" -"結局、我々のネットワーク管理者(Alvao)と私自身は4つのターミナルウィンドウ、" -"1本の鉛筆と紙切れを持って座った。1つのウインドウで我々は ping を実行した。" -"2つ目のウインドウではクラウドコントローラー上の 、3つ目では" -"コンピュートノード上の 、4つ目ではインスタンス上の " -" を実行した。前提として、このクラウドはマルチノード、非マルチ" -"ホスト構成である。" - -msgid "As a scalable, reliable data store for virtual machine images" -msgstr "スケーラブルで信頼性のある仮想マシンイメージデータストアとして利用する" - -msgid "" -"As a specific example, compare a cloud that supports a managed web-hosting " -"platform with one running integration tests for a development project that " -"creates one VM per code commit. In the former, the heavy work of creating a " -"VM happens only every few months, whereas the latter puts constant heavy " -"load on the cloud controller. You must consider your average VM lifetime, as " -"a larger number generally means less load on the cloud controller.cloud controllersscalability and" -msgstr "" -"特定の例としては、マネージド Web ホスティングプラットフォームをサポートするク" -"ラウドと、コードコミットごとに仮想マシンを1つ作成するような開発プロジェクト" -"の統合テストを実行するクラウドを比較してみましょう。前者では、VMを作成する負" -"荷の大きい処理は数か月に 一度しか発生しないのに対して、後者ではクラウドコント" -"ローラに常に負荷の大きい処理が発生します。一般論として、VMの平均寿命が長いと" -"いうことは、クラウドコントローラの負荷が軽いことを意味するため、平均的なVMの" -"寿命を検討する必要があります。クラウド" -"コントローラースケーラビリティ" - -msgid "" -"As always, refer to the if you are " -"unclear about any of the terminology mentioned in architecture examples." -msgstr "" -"これらのアーキテクチャー例で言及されている用語が明確に理解できない場合には、" -"通常通り を参照してください。" - -msgid "" -"As an OpenStack cloud is composed of so many different services, there are a " -"large number of log files. This chapter aims to assist you in locating and " -"working with them and describes other ways to track the status of your " -"deployment.debugginglogging/monitoring; maintenance/debugging" -msgstr "" -"OpenStackクラウドは、様々なサービスから構成されるため、多くのログファイルが存" -"在します。この章では、それぞれのログの場所と取り扱い、そしてシステムのさらな" -"る監視方法について説明します。debugginglogging/monitoring; maintenance/" -"debugging" - -msgid "" -"As an administrative user, you can update the Block Storage service quotas " -"for a tenant, as well as update the quota defaults for a new tenant. See " -".Block Storage" -msgstr "" -"管理ユーザーは、既存のテナントの Block Storage のクォータを更新できます。ま" -"た、新規テナントのクォータのデフォルト値を更新することもできます。 を参照してください。Block Storage" - -msgid "" -"As an administrative user, you can update the Compute service quotas for an " -"existing tenant, as well as update the quota defaults for a new tenant." -"ComputeCompute " -"service See ." 
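[Editor's example — a hedged sketch of the Compute quota workflow referenced above, using the python-novaclient commands of this guide's era; the tenant ID and the value 20 are placeholders.]

    # Show the current quotas for a tenant, raise its instance quota,
    # and inspect the defaults applied to new tenants:
    $ nova quota-show --tenant $TENANT_ID
    $ nova quota-update --instances 20 $TENANT_ID
    $ nova quota-defaults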
-msgstr "" -"管理ユーザーは、既存のテナントの Compute のクォータを更新できます。また、新規" -"テナントのクォータのデフォルト値を更新することもできます。ComputeCompute サービス を参照してくだ" -"さい。" - -msgid "" -"As an administrative user, you can use the cinder quota-* " -"commands, which are provided by the python-cinderclient " -"package, to view and update tenant quotas." -msgstr "" -"管理ユーザーは cinder quota-* コマンドを使って、テナント" -"のクォータを表示したり更新したりできます。コマンドは python-" -"cinderclient パッケージに含まれます。" - -msgid "" -"As an administrative user, you can use the nova quota-* " -"commands, which are provided by the python-novaclient " -"package, to view and update tenant quotas." -msgstr "" -"管理ユーザーは nova quota-* コマンドを使って、テナントの" -"クォータを表示したり更新したりできます。コマンドは python-" -"novaclient パッケージに含まれます。" - -msgid "" -"As an administrator, you have a few ways to discover what your OpenStack " -"cloud looks like simply by using the OpenStack tools available. This section " -"gives you an idea of how to get an overview of your cloud, its shape, size, " -"and current state.servicesobtaining overview ofserversobtaining overview " -"ofcloud " -"computingcloud overviewcommand-line toolsservers and services" -msgstr "" -"管理者は、利用可能な OpenStack ツールを使用して、OpenStack クラウドが全体像を" -"確認する方法がいくつかあります。本項では、クラウドの概要、形態、サイズ、現在" -"の状態についての情報を取得する方法について説明します。サービス概観の取得サーバークラ" -"ウドコンピューティングクラウドの全体像コマンドラインツールサーバーとサービス" - -msgid "" -"As an example, let's create a security group that allows web traffic " -"anywhere on the Internet. We'll call this group global_http, which is clear and reasonably concise, encapsulating what is " -"allowed and from where. From the command line, do:" -msgstr "" -"例のとおり、インターネットのどこからでも Web 通信を許可するセキュリティグルー" -"プを作成しましょう。このグループを global_http と呼ぶこと" -"にします。許可されるものと許可されるところを要約した、明白で簡潔な名前になっ" -"ています。コマンドラインから以下のようにします。" - -msgid "" -"As an example, recording nova-api usage can allow you to track " -"the need to scale your cloud controller. By keeping an eye on nova-" -"api requests, you can determine whether you need to spawn more " -"nova-api processes or go as far as introducing an " -"entirely new server to run nova-api. To get an approximate " -"count of the requests, look for standard INFO messages in /var/log/" -"nova/nova-api.log:" -msgstr "" -"例として、nova-apiの使用を記録することでクラウドコントローラーを" -"スケールする必要があるかを追跡できます。nova-apiのリクエスト数に" -"注目することにより、nova-apiプロセスを追加するか、もしく" -"は、nova-apiを実行するための新しいサーバーを導入することまで行な" -"うかを決定することができます。リクエストの概数を取得するには/var/log/" -"nova/nova-api.logのINFOメッセージを検索します。" - -msgid "" -"As an open source project, one of the unique aspects of OpenStack is that it " -"has many different levels at which you can begin to engage with it—you don't " -"have to do everything yourself." -msgstr "" -"OpenStack は、オープンソースプロジェクトとして、ユニークな点があります。その " -"1 つは、さまざまなレベルで OpenStack に携わりはじめることができる点です。すべ" -"てを自分自身で行う必要はありません。" - -msgid "" -"As an operator, you are in a very good position to report unexpected " -"behavior with your cloud. Since OpenStack is flexible, you may be the only " -"individual to report a particular issue. 
Every issue is important to fix, so " -"it is essential to learn how to easily submit a bug report.maintenance/debuggingreporting " -"bugsbugs, " -"reportingOpenStack communityreporting bugs" -msgstr "" -"運用者として、あなたは、あなたのクラウドでの予期しない動作を報告できる非常に" -"よい立場にいます。 OpenStack は柔軟性が高いので、ある特定の問題を報告するのは" -"あなた一人かもしれません。すべての問題は重要で修正すべきものです。そのために" -"は、簡単にバグ報告を登録する方法を知っておくことは欠かせません。maintenance/debuggingreporting bugsbugs, reportingOpenStack communityreporting " -"bugs" - -msgid "" -"As another example, if you choose to use single-host networking where the " -"cloud controller is the network gateway for all instances, then the cloud " -"controller must support the total amount of traffic that travels between " -"your cloud and the public Internet." -msgstr "" -"他の例としては、クラウドコントローラーがすべてのインスタンスのゲートウェイと" -"なるような単一ホスト・ネットワークモデルを使うことにした場合、クラウドコント" -"ローラーは外部インターネットとあなたのクラウドの間でやりとりされるすべてのト" -"ラフィックを支えられなければなりません。" - -msgid "" -"As another example, you could use pairs of servers for a collective cloud " -"controller—one active, one standby—for redundant nodes providing a given set " -"of related services, such as:" -msgstr "" -"他の例として、コントローラーをクラスタとして構成し1つはアクティブ、もう一つは" -"スタンバイとして冗長ノードが以下のような機能を提供できるように複数のサーバを" -"使用する事ができます。" - -msgid "" -"As discussed in previous chapters, there are several options for networking " -"in OpenStack Compute. We recommend FlatDHCP and to use " -"Multi-Host networking mode for high availability, " -"running one nova-network daemon per OpenStack compute host. " -"This provides a robust mechanism for ensuring network interruptions are " -"isolated to individual compute hosts, and allows for the direct use of " -"hardware network gateways." -msgstr "" -"前章で述べたように、OpenStack Compute のネットワークにはいくつかの選択肢があ" -"りますが、高可用性には FlatDHCPマルチホス" -"ト ネットワークモードを使用して、OpenStack Compute ホスト毎に " -"nova-network デーモンを 1 つ実行することを推奨します。これによ" -"り、ネットワーク障害が確実に各コンピュートホスト内に隔離される堅牢性の高いメ" -"カニズムが提供され、各ホストはハードウェアのネットワークゲートウェイと直接通" -"信することが可能となります。" - -msgid "" -"As for your initial deployment, you should ensure that all hardware is " -"appropriately burned in before adding it to production. Run software that " -"uses the hardware to its limits—maxing out RAM, CPU, disk, and network. Many " -"options are available, and normally double as benchmark software, so you " -"also get a good idea of the performance of your system.hardwaremaintenance/debuggingmaintenance/" -"debugginghardware" -msgstr "" -"初期導入時と同じように、本番環境に追加する前に、すべてのハードウェアについて" -"適切な通電テストを行うべきでしょう。ハードウェアを限界まで使用するソフトウェ" -"アを実行します。RAM、CPU、ディスク、ネットワークを限界まで使用します。多くの" -"オプションが利用可能であり、通常はベンチマークソフトウェアとの役割も果たしま" -"す。そのため、システムのパフォーマンスに関する良いアイディアを得ることもでき" -"ます。hardwaremaintenance/debuggingmaintenance/debugginghardware" - -msgid "" -"As mentioned, there's currently no way to cleanly migrate from nova-" -"network to neutron. We recommend that you keep a migration in mind " -"and what that process might involve for when a proper migration path is " -"released." -msgstr "" -"言及されたとおり、nova-network から neutron にきれいに移行" -"する方法は現在ありません。適切な移行パスがリリースされるまで、移行を心に留め" -"ておき、そのプロセスに関わることを推奨します。" - -msgid "" -"As noted in the previous chapter, the number of rules per security group is " -"controlled by the quota_security_group_rules, and the number of " -"allowed security groups per project is controlled by the " -"quota_security_groups quota." 
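[Editor's example — the two security-group quota options named above, as they would appear in nova.conf; the values shown are the usual defaults of this guide's era and are illustrative.]

    # /etc/nova/nova.conf, [DEFAULT] section
    quota_security_groups = 10
    quota_security_group_rules = 20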
-msgstr "" -"前の章で述べたとおり、セキュリティグループごとのルール数は " -"quota_security_group_rules により制御されます。また、プロジェク" -"トごとに許可されるセキュリティグループ数は quota_security_groups クォータにより制御されます。" - -msgid "" -"As part of our commitment to work with the security community, the team " -"ensures that proper credit is given to security researchers who responsibly " -"report issues in OpenStack." -msgstr "" -"セキュリティーコミュニティーと仕事をする責務の一つとして、セキュリティー情報" -"の公開が、確実に、そしてOpenStack のセキュリティー問題を責任をもって扱うセ" -"キュリティ研究者の適切なお墨付きを得て行われるようにします。" - -msgid "" -"As part of the procurement for a compute cluster, you must specify some " -"storage for the disk on which the instantiated instance runs. There are " -"three main approaches to providing this temporary-style storage, and it is " -"important to understand the implications of the choice.storageinstance storage " -"solutionsinstancesstorage solutionscompute nodesinstance storage solutions" -msgstr "" -"コンピュートクラスターの調達の一部として、インスタンス化されたインスタンスを" -"実行するディスクのストレージを指定する必要があります。一時ストレージ提供のア" -"プローチは主に 3 つあり、それぞれのアプローチの含意を理解することは重要です。" -"storageインストー" -"ラーストレージ・ソリューションインスタンスストレージ・ソリュー" -"ションコン" -"ピュートノードインスタンスストレージソリューション" - -msgid "" -"As shown, end users can interact through the dashboard, CLIs, and APIs. All " -"services authenticate through a common Identity service, and individual " -"services interact with each other through public APIs, except where " -"privileged administrator commands are necessary. shows the most common, but not the only logical architecture for " -"an OpenStack cloud." -msgstr "" -"下図に示したように、エンドユーザーはダッシュボード、CLI、および API を使用し" -"て対話することができます。サービスはすべて、共通の Identity service を介して" -"認証を行い、またサービス相互間の対話は、特権のある管理者がコマンドを実行する" -"必要がある場合を除いてパブリック API を使用して行われます。 には、OpenStack の最も一般的な論理アーキテクチャーを" -"示しています。ただし、これは唯一のアーキテクチャーではありません。" - -msgid "As soon as this setting was fixed, everything worked." -msgstr "全力でこの問題を修正した結果、全てが正常に動作するようになった。" - -msgid "As this would be the server's bonded NIC." -msgstr "これはサーバーの冗長化された(bonded)NIC であるべきだからだ。" - -msgid "" -"As with databases and message queues, having more than one API " -"server is a good thing. Traditional HTTP load-balancing " -"techniques can be used to achieve a highly available nova-api " -"service.API (application programming " -"interface)API server" -msgstr "" -"データベースやメッセージキューのように、1台以上のAPI サーバー を設置する事は良い案です。nova-api サービスを高可用" -"にするために、伝統的な HTTP 負荷分散技術を利用することができます。" - -msgid "" -"As with most architecture choices, the right answer depends on your " -"environment. If you are using existing hardware, you know the disk density " -"of your servers and can determine some decisions based on the options above. " -"If you are going through a procurement process, your user's requirements " -"also help you determine hardware purchases. Here are some examples from a " -"private cloud providing web developers custom environments at AT&T. This " -"example is from a specific deployment, so your existing hardware or " -"procurement opportunity may vary from this. AT&T uses three types of " -"hardware in its deployment:" -msgstr "" -"多くのアーキテクチャーの選択肢と同様に、環境により適切なソリューションは変" -"わって来ます。既存のハードウェアを使用する場合、サーバーのディスク密度を把握" -"し、上記のオプションをもとに意思決定していきます。調達プロセスを行っている場" -"合、ユーザー要件などもハードウェア購入決定の一助となります。ここでは AT&" -"T の Web 開発者にカスタムの環境を提供するプライベートクラウドの例をあげていま" -"す。この例は、特定のデプロイメントであるため、既存のハードウェアや調達機会は" -"これと異なる可能性があります。AT&T は、デプロイメントに 3 種類のハード" -"ウェアを使用しています。" - -msgid "" -"As with other architecture decisions, storage concepts within OpenStack " -"offer many options. This chapter lays out the choices for you." 
-msgstr "" -"他のアーキテクチャーの判断と同じように、OpenStack におけるストレージの概念" -"は、さまざまな選択肢があります。この章は選択肢を提示します。" - -msgid "" -"As with other removable disk technology, it is important that the operating " -"system is not trying to make use of the disk before removing it. On Linux " -"instances, this typically involves unmounting any file systems mounted from " -"the volume. The OpenStack volume service cannot tell whether it is safe to " -"remove volumes from an instance, so it does what it is told. If a user tells " -"the volume service to detach a volume from an instance while it is being " -"written to, you can expect some level of file system corruption as well as " -"faults from whatever process within the instance was using the device." -msgstr "" -"他のリムーバブルディスク技術と同じように、ディスクを取り外す前に、オペレー" -"ティングシステムがそのディスクを使用しないようにすることが重要です。Linux イ" -"ンスタンスにおいて、一般的にボリュームからマウントされているすべてのファイル" -"システムをアンマウントする必要があります。OpenStack Volume Service は、インス" -"タンスから安全にボリュームを取り外すことができるかはわかりません。そのため、" -"指示されたことを実行します。ボリュームに書き込み中にインスタンスからボリュー" -"ムの切断を、ユーザーが Volume Service に指示すると、何らかのレベルのファイル" -"システム破損が起きる可能性があります。それだけでなく、デバイスを使用していた" -"インスタンスの中のプロセスがエラーを起こす可能性もあります。" - -msgid "" -"As your cloud grows, MySQL is utilized more and more. If you suspect that " -"MySQL might be becoming a bottleneck, you should start researching MySQL " -"optimization. The MySQL manual has an entire section dedicated to this " -"topic: Optimization Overview." -msgstr "" -"クラウドが大きくなるにつれて、MySQL がさらに使用されてきます。MySQL がボトル" -"ネックになってきたことが疑われる場合、MySQL 最適化の調査から始めるとよいで" -"しょう。MySQL のマニュアルでは、Optimization Overview というセ" -"クションがあり、一つのセクション全部をあててこの話題を扱っています。" - -msgid "" -"As your focus turns to stable operations, we recommend that you do skim the " -"remainder of this book to get a sense of the content. Some of this content " -"is useful to read in advance so that you can put best practices into effect " -"to simplify your life in the long run. Other content is more useful as a " -"reference that you might turn to when an unexpected event occurs (such as a " -"power failure), or to troubleshoot a particular problem." -msgstr "" -"安定した運用に頭を切り替えるために、この文書の残りの部分をざっくりと読み、感" -"覚をつかんでおくことをお薦めします。これらの一部はあらかじめ読んでおくと、長" -"期運用に向けてのベストプラクティスを実践するのに役に立つことでしょう。その他" -"の部分は、 (電源障害のような) 予期しないイベントが発生した場合や特定の問題の" -"トラブルシューティングをする場合に、リファレンスとして役に立つでしょう。" - -msgid "" -"Aside from connection failures, RabbitMQ log files are generally not useful " -"for debugging OpenStack related issues. Instead, we recommend you use the " -"RabbitMQ web management interface.RabbitMQlogging/monitoringRabbitMQ web management " -"interface Enable it on your cloud controller:" -"cloud controllersenabling RabbitMQ" -msgstr "" -"接続エラーは別として、RabbitMQ のログファイルは一般的に OpenStack 関連の問題" -"をデバッグするために役立ちません。代わりに、RabbitMQ の Web 管理インター" -"フェースを使用することを推奨します。RabbitMQlogging/monitoringRabbitMQ web management " -"interface Enable it on your cloud controller:" -"cloud controllersenabling RabbitMQ" - -msgid "" -"Aside from the creation and termination of VMs, you must consider the impact " -"of users accessing the service—particularly on nova-api " -"and its associated database. Listing instances garners a great deal of " -"information and, given the frequency with which users run this operation, a " -"cloud with a large number of users can increase the load significantly. This " -"can occur even without their knowledge—leaving the OpenStack dashboard " -"instances tab open in the browser refreshes the list of VMs every 30 seconds." 
-msgstr "" -"仮想マシンの作成、停止以外に、特に nova-api や関連のデータ" -"ベースといったサービスにアクセスする際の影響について考慮する必要があります。" -"インスタンスを一覧表示することで膨大な量の情報を収集し、この操作の実行頻度を" -"前提として、ユーザー数の多いクラウドで負荷を大幅に増加させることができます。" -"これはユーザーには透過的に行われます。OpenStack のダッシュボードをブラウザで" -"開いた状態にすると、仮想マシンの一覧が 30 秒毎に更新されます。" - -msgid "" -"Aside from the direct-to-blueprint pathway, there is another very well-" -"regarded mechanism to influence the development roadmap: the user survey. " -"Found at , it allows " -"you to provide details of your deployments and needs, anonymously by " -"default. Each cycle, the user committee analyzes the results and produces a " -"report, including providing specific information to the technical committee " -"and project team leads." -msgstr "" -"開発ロードマップに影響を与えるために、直接ブループリントに関わる道以外に、非" -"常に高く評価された別の方法があります。ユーザー調査です。 にあります。基本的に匿名で、お使いの環" -"境の詳細、要望を送ることができます。各サイクルで、ユーザーコミッティーが結果" -"を分析して、報告書を作成します。具体的な情報を TC や PTL に提供することを含み" -"ます。" - -msgid "Aspects to Watch" -msgstr "ウォッチの観点" - -msgid "Assignee: <yourself>" -msgstr "Assignee: <あなた自身>" - -msgid "Associating Security Groups" -msgstr "セキュリティグループの割り当て" - -msgid "Associating Users with Projects" -msgstr "プロジェクトへのユーザーの割り当て" - -msgid "" -"Associating existing users with an additional project or removing them from " -"an older project is done from the Projects page of the dashboard by " -"selecting Modify Users from the Actions column, as shown in ." -msgstr "" -"既存のユーザーを追加のプロジェクトに割り当てる、または古いプロジェクトから削" -"除することは、 にあるとおり、ダッシュ" -"ボードのプロジェクトページから、アクション列のユーザーの変更を選択することに" -"より実行できます。" - -msgid "" -"At that time, our control services were hosted by another team and we didn't " -"have much debugging information to determine what was going on with the " -"master, and we could not reboot it. That team noted that it failed without " -"alert, but managed to reboot it. After an hour, the cluster had returned to " -"its normal state and we went home for the day." -msgstr "" -"この時、我々のコントロールサービスは別のチームによりホスティングされており、" -"我々には現用系サーバー上で何が起こっているのかを調査するための大したデバッグ" -"情報がなく、再起動もできなかった。このチームは警報なしで障害が起こったと連絡" -"してきたが、そのサーバーの再起動を管理していた。1時間後、クラスタは通常状態" -"に復帰し、我々はその日は帰宅した。" - -msgid "" -"At the data center, I was finishing up some tasks and remembered the lock-" -"up. I logged into the new instance and ran again. It " -"worked. Phew. I decided to run it one more time. It locked up." -msgstr "" -"データセンターで、私はいくつかの仕事を済ませると、ロックアップのことを思い出" -"した。私は新しいインスタンスにログインし、再度 を実行した。" -"コマンドは機能した。ふぅ。私はもう一度試してみることにした。今度はロックアッ" -"プした。" - -msgid "" -"At the end of 2012, Cybera (a nonprofit with a mandate to oversee the " -"development of cyberinfrastructure in Alberta, Canada) deployed an updated " -"OpenStack cloud for their DAIR project (http://www.canarie.ca/" -"en/dair-program/about). A few days into production, a compute node locks up. " -"Upon rebooting the node, I checked to see what instances were hosted on that " -"node so I could boot them on behalf of the customer. Luckily, only one " -"instance." -msgstr "" -"2012年の終わり、Cybera (カナダ アルバータ州にある、サイバーインフラのデプロ" -"イを監督する権限を持つ非営利団体)が、彼らの DAIR プロジェクト " -"(http://www.canarie.ca/en/dair-program/about) 用に新しい OpenStack クラウドを" -"デプロイした。サービスインから数日後、あるコンピュートノードがロックアップし" -"た。問題のノードの再起動にあたり、私は顧客の権限でインスタンスを起動するた" -"め、そのノード上で何のインスタンスがホスティングされていたかを確認した。幸運" -"にも、インスタンスは1つだけだった。" - -msgid "" -"At the end of August 2012, a post-secondary school in Alberta, Canada " -"migrated its infrastructure to an OpenStack cloud. As luck would have it, " -"within the first day or two of it running, one of their servers just " -"disappeared from the network. Blip. Gone." 
-msgstr "" -"2012年8月の終わり、カナダ アルバータ州のある大学はそのインフラを OpenStack ク" -"ラウドに移行した。幸か不幸か、サービスインから1~2日間に、彼らのサーバーの1台" -"がネットワークから消失した。ビッ。いなくなった。" - -msgid "" -"At the same time of finding the bug report, a co-worker was able to " -"successfully reproduce The Issue! How? He used to spew a " -"ton of bandwidth at an instance. Within 30 minutes, the instance just " -"disappeared from the network." -msgstr "" -"バグ報告を発見すると同時に、同僚が「あの問題」を再現することに成功した!どう" -"やって?彼は を使用して、インスタンス上で膨大なネットワーク" -"負荷をかけた。30分後、インスタンスはネットワークから姿を消した。" - -msgid "" -"At the time of writing, OpenStack has more than 3,000 configuration options. " -"You can see them documented at the OpenStack " -"configuration reference guide. This chapter cannot hope to document " -"all of these, but we do try to introduce the important concepts so that you " -"know where to go digging for more information." -msgstr "" -"執筆時点では、OpenStack は 3,000 以上の設定オプションがあります。OpenStack configuration reference guide にド" -"キュメント化されています。本章は、これらのすべてをドキュメント化できません" -"が、どの情報を掘り下げて調べるかを理解できるよう、重要な概念を紹介したいと考" -"えています。" - -msgid "" -"At the very base of any operating system are the hard drives on which the " -"operating system (OS) is installed.RAID (redundant array of independent disks)partitionsdisk partitioningdisk partitioning" -msgstr "" -"オペレーティングシステムの基盤は、オペレーティングシステムがインストールされ" -"るハードドライブです。RAID (Redundant " -"Array of Independent Disks)パーティションディスクパーティショニングディスクパーティ" -"ショニング" - -msgid "" -"At this stage, a developer works on a fix. During that time, to avoid " -"duplicating the work, the developer should set:" -msgstr "" -"この段階では、開発者が修正を行います。修正を行っている間、作業の重複を避ける" -"ため、バグの状態を以下のようにセットすべきです。" - -msgid "Attaching Block Storage" -msgstr "ブロックストレージの接続" - -msgid "" -"Attempt to list the objects in the middleware-test " -"container:" -msgstr "" -"middleware-test コンテナーにあるオブジェクトを一覧表示しよ" -"うとします。" - -msgid "Aug 10, 2012" -msgstr "2012年8月10日" - -msgid "Aug 8, 2013" -msgstr "2013年8月8日" - -msgid "Aug 8, 2014" -msgstr "2014年8月8日" - -msgid "Austin" -msgstr "Austin" - -msgid "Authentication and Authorization" -msgstr "認証と認可" - -msgid "Authentication and authorization for identity management" -msgstr "アイデンティティ管理のための認証と認可" - -msgid "Automated Configuration" -msgstr "自動環境設定" - -msgid "Automated Deployment" -msgstr "自動デプロイメント" - -msgid "Availability Zones and Host Aggregates" -msgstr "アベイラビリティゾーンとホストアグリゲート" - -msgid "Availability zone" -msgstr "アベイラビリティゾーン" - -msgid "Availability zones" -msgstr "アベイラビリティゾーン" - -msgid "" -"Availability zones are implemented through and configured in a similar way " -"to host aggregates." -msgstr "" -"アベイラビリティゾーンは、ホストアグリゲートを利用して実装されており、ホスト" -"アグリゲートと同様の方法で設定します。" - -msgid "Available user-level quotes" -msgstr "利用可能なユーザーレベルのクォータ" - -msgid "Available vCPUs" -msgstr "利用可能な vCPU 数" - -msgid "" -"Back in your DevStack instance on the shell screen, add some metadata to " -"your container to allow the request from the remote machine:" -msgstr "" -"シェル画面において DevStack 用インスタンスに戻り、リモートマシンからのリクエ" -"ストを許可するようなコンテナのメタデータを追加します。" - -msgid "Back matter:" -msgstr "後付:" - -msgid "Backing storage services" -msgstr "バックエンドのストレージサービス" - -msgid "Backup and Recovery" -msgstr "バックアップとリカバリー" - -msgid "" -"Backup and subsequent recovery is one of the first tasks system " -"administrators learn. However, each system has different items that need " -"attention. By taking care of your database, image service, and appropriate " -"file system locations, you can be assured that you can handle any event " -"requiring recovery." 
-msgstr "" -"バックアップ、その後のリカバリーは、最初に学習するシステム管理の 1 つです。し" -"かしながら、各システムは、それぞれ注意を必要とする項目が異なります。データ" -"ベース、Image service、適切なファイルシステムの場所に注意することにより、リカ" -"バリーを必要とするすべてのイベントを処理できることが保証されます。" - -msgid "Bare metal Deployment (ironic)" -msgstr "Bare metal Deployment (ironic)" - -msgid "Basic node deployment" -msgstr "基本ノードデプロイメント" - -msgid "" -"Be sure that the instance has successfully booted and is at a login screen " -"before doing the above." -msgstr "" -"上記を実行する前に、インスタンスが正常に起動し、ログイン画面になっていること" -"を確認します。" - -msgid "" -"Because OpenStack is highly configurable, with many different back ends and " -"network configuration options, it is difficult to write documentation that " -"covers all possible OpenStack deployments. Therefore, this guide defines " -"examples of architecture to simplify the task of documenting, as well as to " -"provide the scope for this guide. Both of the offered architecture examples " -"are currently running in production and serving users." -msgstr "" -"OpenStack は、多数の異なるバックエンドおよびネットワーク設定オプションを利用" -"して高度に設定することが可能です。可能な OpenStack デプロイメントをすべて網羅" -"するドキュメントを執筆するのは難しいため、本ガイドではアーキテクチャーの例を" -"定義することによって文書化の作業を簡素化すると共に、本書のスコープを規定しま" -"す。以下に紹介するアーキテクチャーの例はいずれも本番環境で現在実行中であり、" -"ユーザーにサービスを提供しています。" - -msgid "" -"Because OpenStack is so, well, open, this chapter is dedicated to helping " -"you navigate the community and find out where you can help and where you can " -"get help." -msgstr "" -"OpenStack は非常にオープンであるため、この章は、コミュニティーを案内する手助" -"けになり、支援したり支援されたりできる場所を見つけることに専念します。" - -msgid "" -"Because Pacemaker is cluster software, the software itself handles its own " -"availability, leveraging corosync and cman underneath." -msgstr "" -"Pacemaker は、クラスタリングソフトウェアであるため、基盤となる " -"corosync および cman を活用して、ソフト" -"ウェア自体が自らの可用性を処理します。" - -msgid "" -"Because it is recommended to not use partitions on a swift disk, simply " -"format the disk as a whole:" -msgstr "" -"Swift ディスクではパーティションを使用しないことが推奨されるので、単にディス" -"ク全体をフォーマットします。" - -msgid "" -"Because network troubleshooting is especially difficult with virtual " -"resources, this chapter is chock-full of helpful tips and tricks for tracing " -"network traffic, finding the root cause of networking failures, and " -"debugging related services, such as DHCP and DNS." -msgstr "" -"ネットワークのトラブルシューティングは、仮想リソースでとくに難しくなります。" -"この章は、ネットワーク通信の追跡、ネットワーク障害の根本原因の調査、DHCP や " -"DNS などの関連サービスのデバッグに関するヒントとコツがたくさん詰まっていま" -"す。" - -msgid "" -"Because of all the decisions the other chapters discuss, this chapter " -"describes the decisions made for this particular book and much of the " -"justification for the example architecture." -msgstr "" -"他の章で議論するすべての判断により、本章はこのドキュメントのために行われる判" -"断、アーキテクチャー例の正当性について説明します。" - -msgid "" -"Because of the high redundancy of Object Storage, dealing with object " -"storage node issues is a lot easier than dealing with compute node issues." -msgstr "" -"オブジェクトストレージの高い冗長性のため、オブジェクトストレージのノードに関" -"する問題を処理することは、コンピュートノードに関する問題を処理するよりも簡単" -"です。" - -msgid "" -"Because the cloud controller handles so many different services, it must be " -"able to handle the amount of traffic that hits it. 
For example, if you " -"choose to host the OpenStack Image service on the cloud controller, the " -"cloud controller should be able to support the transferring of the images at " -"an acceptable speed.cloud " -"controllersnetwork traffic andnetworksdesign considerationsdesign considerationsnetworks" -msgstr "" -"クラウドコントローラーは非常に多くのサービスを取り扱うため、それらすべてのト" -"ラフィックを処理できなければなりません。例えば、クラウドコントローラー上に " -"OpenStack Image service を乗せることにした場合、そのクラウドコントローラーは" -"許容可能な速度でイメージの転送できなければなりません。クラウドコントローラーネットワーク" -"トラフィック" -"ネットワーク設計上の考慮事項設計上の考慮事項ネットワーク" - -msgid "" -"Because without sensible quotas a single tenant could use up all the " -"available resources, default quotas are shipped with OpenStack. You should " -"pay attention to which quota settings make sense for your hardware " -"capabilities." -msgstr "" -"妥当なクォータがないと、単一のテナントが利用可能なリソースをすべて使用してし" -"まう可能性があるため、デフォルトのクォータが OpenStack には含まれています。お" -"使いのハードウェア機能には、どのクォータ設定が適切か注意してください。" - -msgid "" -"Because your cloud is most likely composed of many servers, you must check " -"logs on each of those servers to properly piece an event together. A better " -"solution is to send the logs of all servers to a central location so that " -"they can all be accessed from the same area.logging/monitoringcentral log management" -msgstr "" -"クラウドは多くのサーバーから構成されるため、各サーバー上にあるイベントログを" -"繋ぎあわせて、ログをチェックしなければなりません。よい方法は全てのサーバーの" -"ログを一ヶ所にまとめ、同じ場所で確認できるようにすることです。logging/monitoringcentral " -"log management" - -msgid "Best Practices" -msgstr "ベストプラクティス" - -msgid "" -"Betsy Hagemeier, a Fanatical Executive Assistant, took care of a room " -"reshuffle and helped us settle in for the week." -msgstr "" -"熱狂的なエグゼクティブアシスタントの Betsy Hagemeier は、部屋の改造の面倒を見" -"てくれて、1週間で解決する手助けをしてくれました。" - -msgid "Between the proxies and your users" -msgstr "proxy server と 利用者の間" - -msgid "Between those servers and the proxies" -msgstr "object/container/account server と proxy server の間" - -msgid "Bexar" -msgstr "Bexar" - -msgid "Block" -msgstr "ブロックストレージ" - -msgid "Block Storage" -msgstr "ブロックストレージ" - -msgid "Block Storage (cinder) back end" -msgstr "Block Storage (cinder) バックエンド" - -msgid "" -"Block Storage (cinder) is installed natively on external storage nodes and " -"uses the LVM/iSCSI plug-in. Most Block Storage plug-ins " -"are tied to particular vendor products and implementations limiting their " -"use to consumers of those hardware platforms, but LVM/iSCSI is robust and " -"stable on commodity hardware." -msgstr "" -"Block Storage (cinder) は、外部ストレージノードにネイティブでインストールさ" -"れ、LVM/iSCSI plug-in を使用します。大半の Block " -"Storage プラグインは、特定のベンダーの製品や実装と関連付けられており、使用は" -"それらのハードウェアプラットフォームのユーザーに制限されていますが、LVM/" -"iSCSI はコモディティハードウェア上で堅牢性および安定性があります。" - -msgid "Block Storage Creation Failures" -msgstr "ブロックストレージの作成エラー" - -msgid "Block Storage Improvements" -msgstr "Block Storage の改善" - -msgid "Block Storage back end" -msgstr "Block Storage バックエンド" - -msgid "" -"Block Storage is considered a stable project, with wide uptake and a long " -"track record of quality drivers. The team has discussed many areas of work " -"at the summits, including better error reporting, automated discovery, and " -"thin provisioning features." 
-msgstr "" -"Block Storage は、品質ドライバーの幅広い理解と長く取られている記録を持つ、安" -"定したプロジェクトと考えられています。このチームは、よりよいエラー報告、自動" -"探索、シンプロビジョニング機能など、さまざまな領域の作業をサミットで議論しま" -"した。" - -msgid "Block Storage nodes" -msgstr "Block Storage ノード" - -msgid "Block Storage quota descriptions" -msgstr "Block Storage のクォータの説明" - -msgid "Block storage" -msgstr "ブロックストレージ" - -msgid "" -"Block storage (sometimes referred to as volume storage) provides users with " -"access to block-storage devices. Users interact with block storage by " -"attaching volumes to their running VM instances.volume storageblock storagestorageblock storage" -msgstr "" -"Block storage (ボリュームストレージと呼ばれる場合もある) は、ユーザーがブロッ" -"クストレージデバイスにアクセスできるようにします。ユーザーは、実行中の仮想マ" -"シンインスタンスにボリュームを接続することで、ブロックストレージと対話しま" -"す。ボリュームストレージブロックストレージストレージブロックストレージ" - -msgid "" -"Boolean value that indicates whether the flavor is available to all users or " -"private. Private flavors do not get the current tenant assigned to them. " -"Defaults to True." -msgstr "" -"フレーバーがすべてのユーザーに利用可能であるか、プライベートであるかを示す論" -"理値。プライベートなフレーバーは、現在のテナントをそれらに割り当てません。デ" -"フォルトは True です。" - -msgid "Boot a test server:" -msgstr "テストサーバーを起動します。" - -msgid "" -"Boot an instance from the dashboard or the nova command-line interface (CLI) " -"with the following parameters:" -msgstr "" -"ダッシュボード、または nova のコマンドラインインタフェース(CLI)から、以下のパ" -"ラメータでインスタンスを起動してください。" - -msgid "" -"Both nova-network and neutron services provide similar " -"capabilities, such as VLAN between VMs. You also can provide multiple NICs " -"on VMs with either service. Further discussion follows." -msgstr "" -"nova-networkとneutronサービスはVM間でのVLANといった似通っ" -"た可用性を提供します。また、それぞれのサービスで複数のNICを付与したVMも提供す" -"る事ができます。" - -msgid "" -"Both Compute and Block Storage rely on schedulers to determine where to " -"place virtual machines or volumes. In Havana, the Compute scheduler " -"underwent significant improvement, while in Icehouse it was the scheduler in " -"Block Storage that received a boost. Further down the track, an effort " -"started this cycle that aims to create a holistic scheduler covering both " -"will come to fruition. Some of the work that was done in Kilo can be found " -"under the Gantt project.Kiloscheduler improvements" -msgstr "" -"Compute と Block Storage はどちらも、仮想マシンやボリュームを配置する場所を決" -"めるためにスケジューラーに頼っています。Havana では、Compute のスケジューラー" -"が大幅に改善されました。これは、Icehouse において支援を受けた Block Storage " -"におけるスケジューラーでした。さらに掘り下げて追跡すると、どちらも取り扱う全" -"体的なスケジューラーを作成することを目指した、このサイクルを始めた努力が実を" -"結ぶでしょう。Kilo において実行されたいくつかの作業は、Gantt projectにありま" -"す。Kiloscheduler improvements" - -msgid "" -"Both Ubuntu and Red Hat Enterprise Linux include mechanisms for configuring " -"the operating system, including preseed and kickstart, that you can use " -"after a network boot. Typically, these are used to bootstrap an automated " -"configuration system. Alternatively, you can use an image-based approach for " -"deploying the operating system, such as systemimager. You can use both " -"approaches with a virtualized infrastructure, such as when you run VMs to " -"separate your control services and physical infrastructure." 
-msgstr "" -"Ubuntu と Red Hat Enterprise Linux にはいずれも、ネットワークブート後に使用可" -"能なpreseed や kickstart といった、オペレーティングシステムを設定するための仕" -"組みがあります。これらは、典型的には自動環境設定システムのブートストラップに" -"使用されます。他の方法としては、systemimager のようなイメージベースのオペレー" -"ティングシステムのデプロイメント手法を使うこともできます。これらの手法はどち" -"らも、物理インフラストラクチャーと制御サービスを分離するために仮想マシンを実" -"行する場合など、仮想化基盤と合わせて使用できます。" - -msgid "Bug Fixing" -msgstr "バグ修正" - -msgid "Burn-in Testing" -msgstr "エージング試験" - -msgid "" -"But how can you tell whether images are being successfully uploaded to the " -"Image service? Maybe the disk that Image service is storing the images on is " -"full or the S3 back end is down. You could naturally check this by doing a " -"quick image upload:" -msgstr "" -"しかし、Image service にイメージが正しくアップロードされたことをどのように知" -"ればいいのでしょうか? もしかしたら、Image service が保管しているイメージの" -"ディスクが満杯、もしくは S3 のバックエンドがダウンしているかもしれません。簡" -"易的なイメージアップロードを行なうことでこれをチェックすることができます。" - -msgid "" -"By comparing a tenant's hard limit with their current resource usage, you " -"can see their usage percentage. For example, if this tenant is using 1 " -"floating IP out of 10, then they are using 10 percent of their floating IP " -"quota. Rather than doing the calculation manually, you can use SQL or the " -"scripting language of your choice and create a formatted report:" -msgstr "" -"テナントのハード制限と現在の使用量を比較することにより、それらの使用割合を確" -"認できます。例えば、このテナントが Floating IP を 10 個中 1 個使用している場" -"合、Floating IP クォータの 10% を使用していることになります。手動で計算するよ" -"り、SQL やお好きなスクリプト言語を使用して、定型化されたレポートを作成できま" -"す。" - -msgid "By default, Object Storage logs to syslog." -msgstr "デフォルトで Object Storage は syslog にログを出力します。" - -msgid "" -"By mistake, I configured OpenStack to attach all tenant VLANs to vlan20 " -"instead of bond0 thereby stacking one VLAN on top of another. This added an " -"extra 4 bytes to each packet and caused a packet of 1504 bytes to be sent " -"out which would cause problems when it arrived at an interface that only " -"accepted 1500." -msgstr "" -"ミスにより、私は全てのテナント VLAN を bond0 の代わりに vlan20 にアタッチする" -"よう OpenStack を設定した。これにより1つの VLAN が別の VLAN の上に積み重な" -"り、各パケットに余分に4バイトが追加され、送信されるパケットサイズが 1504 バ" -"イトになる原因となった。これがパケットサイズ 1500 のみ許容するインターフェー" -"スに到達した際、問題の原因となったのだった!" - -msgid "" -"By modifying your configuration setup, you can set up IPv6 when using " -"nova-network for networking, and a tested setup is " -"documented for FlatDHCP and a multi-host configuration. The key is to make " -"nova-network think a radvd command ran " -"successfully. The entire configuration is detailed in a Cybera blog post, " -"“An IPv6 enabled cloud”." -msgstr "" -"セットアップした設定を変更することにより、ネットワークに nova-" -"network を使用している場合に、IPv6 をセットアップできます。テストさ" -"れたセットアップ環境が FlatDHCP とマルチホストの設定向けにドキュメント化され" -"ています。重要な点は、radvd を正常に実行されたと、" -"nova-network が考えるようにすることです。設定全体の詳細" -"は、Cybera のブログ記事 “An IPv6 enabled cloud” に" -"あります。" - -msgid "" -"By running this command periodically and keeping a record of the result, you " -"can create a trending report over time that shows whether your nova-" -"api usage is increasing, decreasing, or keeping steady." -msgstr "" -"このコマンドを定期的に実行し結果を記録することで、トレンドレポートを作ること" -"ができます。これにより/var/log/nova/nova-api.logの使用量が増えて" -"いるのか、減っているのか、安定しているのか、を知ることができます。" - -msgid "" -"By taking this script and rolling it into an alert for your monitoring " -"system (such as Nagios), you now have an automated way of ensuring that " -"image uploads to the Image Catalog are working." 
-msgstr "" -"このスクリプトを(Nagiosのような)監視システムに組込むことで、イメージカタログ" -"のアップロードが動作していることを自動的に確認することができます。" - -msgid "CERN" -msgstr "CERN" - -msgid "" -"CONF.node_availability_zone has been renamed to CONF." -"default_availability_zone and is used only by the nova-api and nova-scheduler services." -msgstr "" -"CONF.node_availability_zone は、CONF.default_availability_zone に名前が変更さ" -"れ、nova-api および nova-scheduler サー" -"ビスのみで使用されます。" - -msgid "CONF.node_availability_zone still works but is deprecated." -msgstr "CONF.node_availability_zone は今も機能しますが、非推奨扱いです。" - -msgid "CPU allocation ratio: 16:1" -msgstr "CPU 割当比: 16:1" - -msgid "CPU overcommit ratio (virtual cores per physical core)" -msgstr "CPU オーバーコミット比率 (物理コアごとの仮想コア)" - -msgid "CPU: 1x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz" -msgstr "CPU: 1x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz" - -msgid "CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz" -msgstr "CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz" - -msgid "CPU: 2x Intel® Xeon® CPU E5-2650 0 @ 2.00 GHz" -msgstr "CPU: 2x Intel® Xeon® CPU E5-2650 0 @ 2.00 GHz" - -msgid "CSAIL homepage" -msgstr "CSAIL ホームページ" - -msgid "Cactus" -msgstr "Cactus" - -msgid "Can instances launch and be destroyed?" -msgstr "インスタンスの起動と削除が可能か?" - -msgid "Can objects be stored and deleted?" -msgstr "オブジェクトの保存と削除は可能か?" - -msgid "Can users be created?" -msgstr "ユーザの作成は可能か?" - -msgid "Can volumes be created and destroyed?" -msgstr "ボリュームの作成と削除は可能か?" - -msgid "Capacity Planning" -msgstr "キャパシティプランニング" - -msgid "Cells" -msgstr "セル" - -msgid "Cells and Regions" -msgstr "セルとリージョン" - -msgid "" -"Cells and regions, which segregate an entire cloud and result in running " -"separate Compute deployments." -msgstr "" -"セルおよびリージョン。クラウド全体を分離し、個別にコンピュートデプロイメント" -"を稼働します。" - -msgid "Centrally Managing Logs" -msgstr "ログの集中管理" - -msgid "Ceph" -msgstr "Ceph" - -msgid "" -"Ceph was designed to expose different types of storage interfaces to the end " -"user: it supports object storage, block storage, and file-system interfaces, " -"although the file-system interface is not yet considered production-ready. " -"Ceph supports the same API as swift for object storage and can be used as a " -"back end for cinder block storage as well as back-end storage for glance " -"images. Ceph supports \"thin provisioning,\" implemented using copy-on-write." -msgstr "" -"Ceph は、異なる種類のストレージインターフェースをエンドユーザーに公開するよう" -"に設計されました。Ceph は、オブジェクトストレージ、ブロックストレージ、ファイ" -"ルシステムインターフェースをサポートしていますが、ファイルシステムインター" -"フェースは、実稼動環境での使用にはまだ適していません。Ceph は、オブジェクトス" -"トレージでは swift と同じ API をサポートしており、cinder ブロックストレージの" -"バックエンド、glance イメージのバックエンドストレージとして使用することができ" -"ます。Ceph は、copy-on-write を使用して実装されたシンプロビジョニングをサポー" -"トします。" - -msgid "" -"Ceph's advantages are that it gives the administrator more fine-grained " -"control over data distribution and replication strategies, enables you to " -"consolidate your object and block storage, enables very fast provisioning of " -"boot-from-volume instances using thin provisioning, and supports a " -"distributed file-system interface, though this interface is not " -"yet recommended for use in production deployment by the Ceph project." 
-msgstr "" -"Ceph の利点は、管理者がより細かくデータの分散やレプリカのストラテジーを管理で" -"きるようになり、オブジェクトとブロックストレージを統合し、シンプロビジョニン" -"グでボリュームから起動するインスタンスを非常に早くプロビジョニングできるだけ" -"でなく、分散ファイルシステムのインターフェースもサポートしている点です。ただ" -"し、このインターフェースは、Ceph プロジェクトによる実稼動デプロイメントでの使" -"用には、 推奨されていません。" - -msgid "CephCeph" -msgstr "CephCeph" - -msgid "Change access rules for shares, reset share state" -msgstr "共有のアクセスルールの変更、共有状態のリセット" - -msgid "Change to the devstack repository: " -msgstr "devstack リポジトリーに移動します: " - -msgid "Change to the directory where Object Storage is installed:" -msgstr "Object Storage がインストールされるディレクトリーを変更します。" - -msgid "Check cloud usage: " -msgstr "クラウドの使用量を確認します: " - -msgid "Check for instances in a failed or weird state and investigate why." -msgstr "故障または異常になっているインスタンスを確認し、理由を調査します。" - -msgid "Check for operator accounts that should be removed." -msgstr "削除すべきオペレーターアカウントを確認します。" - -msgid "Check for security patches and apply them as needed." -msgstr "セキュリティパッチを確認し、必要に応じて適用します。" - -msgid "Check for user accounts that should be removed." -msgstr "削除すべきユーザーアカウントを確認します。" - -msgid "Check usage and trends over the past month." -msgstr "この 1 か月における使用量および傾向を確認します。" - -msgid "Check your monitoring system for alerts and act on them." -msgstr "監視システムのアラートを確認し、それらに対処します。" - -msgid "Check your ticket queue for new tickets." -msgstr "チケットキューの新しいチケットを確認します。" - -msgid "Choice of File System" -msgstr "ファイルシステムの選択" - -msgid "Choosing Storage Back Ends" -msgstr "ストレージバックエンドの選択" - -msgid "Choosing a CPU" -msgstr "CPU の選択" - -msgid "Choosing a Hypervisor" -msgstr "ハイパーバイザーの選択" - -msgid "" -"Clean up after an OpenStack upgrade (any unused or new services to be aware " -"of?)." -msgstr "" -"OpenStack のアップグレード後に後始末を行います (未使用または新しいサービスを" -"把握していますか?)。" - -msgid "" -"Clean up by clearing all mirrors on br-int and deleting the " -"dummy interface:" -msgstr "" -"br-int にあるすべてのミラーを解除して、ダミーインターフェースを" -"削除することにより、クリーンアップします。" - -msgid "Click the Create Project button." -msgstr "プロジェクトの作成 ボタンをクリックします。" - -msgid "Clone the devstack repository: " -msgstr "" -"devstack リポジトリーをクローンします: " - -msgid "Cloud (General)" -msgstr "Cloud (General)" - -msgid "Cloud Controller and Storage Proxy Failures and Maintenance" -msgstr "クラウドコントローラーとストレージプロキシの故障とメンテナンス" - -msgid "" -"Cloud computing is quite an advanced topic, and this book requires a lot of " -"background knowledge. However, if you are fairly new to cloud computing, we " -"recommend that you make use of the at " -"the back of the book, as well as the online documentation for OpenStack and " -"additional resources mentioned in this book in ." -msgstr "" -"クラウドコンピューティングは非常に高度な話題です。また、本書は多くの基礎知識" -"を必要とします。しかしながら、クラウドコンピューティングに慣れていない場合、" -"本書の最後にある、OpenStack のオンライ" -"ンドキュメント、にある本書で参照されて" -"いる参考資料を使うことを推奨します。" - -msgid "Cloud controller" -msgstr "クラウドコントローラー" - -msgid "Cloud controller hardware sizing considerations" -msgstr "クラウドコントローラーのハードウェアサイジングに関する考慮事項" - -msgid "Cloud controller receives the renewal request and sends a response." -msgstr "クラウドコントローラーは更新リクエストを受信し、レスポンスを返す。" - -msgid "Cloud controller receives the second request and sends a new response." -msgstr "" -"クラウドコントローラーは2度めのリクエストを受信し、新しいレスポンスを返す。" - -msgid "Column" -msgstr "項目" - -msgid "Command prompts" -msgstr "コマンドプロンプト" - -msgid "Command-Line Tools" -msgstr "コマンドラインツール" - -msgid "Command-line interface (CLI)" -msgstr "コマンドラインインターフェース (CLI)" - -msgid "" -"Commands prefixed with the # prompt should be executed by " -"the root user. These examples can also be executed using " -"the sudo command, if available." 
-msgstr "" -"# プロンプトから始まるコマンドは、root " -"ユーザーにより実行すべきです。これらの例は、sudo コマンド" -"が利用できる場合、それを使用することもできます。" - -msgid "" -"Commands prefixed with the $ prompt can be executed by " -"any user, including root." -msgstr "" -"$ プロンプトから始まるコマンドは、root " -"を含む、すべてのユーザーにより実行できます。" - -msgid "Commodity Storage Back-end Technologies" -msgstr "コモディティハードウェア上の ストレージ技術" - -msgid "" -"Compare an attribute in the resource with an attribute extracted from the " -"user's security credentials and evaluates successfully if the comparison is " -"successful. For instance, \"tenant_id:%(tenant_id)s\" is " -"successful if the tenant identifier in the resource is equal to the tenant " -"identifier of the user submitting the request." -msgstr "" -"リソースの属性をユーザーのセキュリティクレデンシャルから抽出した属性と比較" -"し、一致した場合に成功と評価されます。たとえば、リソースのテナント識別子がリ" -"クエストを出したユーザーのテナント識別子と一致すれば、\"tenant_id:" -"%(tenant_id)s\" が成功します。" - -msgid "Component" -msgstr "コンポーネント" - -msgid "Components" -msgstr "コンポーネント" - -msgid "Compute" -msgstr "コンピュート" - -msgid "Compute Node Failures and Maintenance" -msgstr "コンピュートノードの故障とメンテナンス" - -msgid "Compute Nodes" -msgstr "コンピュートノード" - -msgid "Compute and storage communications" -msgstr "コンピュートとストレージの通信" - -msgid "Compute node" -msgstr "コンピュートノード" - -msgid "Compute nodes" -msgstr "コンピュートノード" - -msgid "" -"Compute nodes are the workhorse of your cloud and the place where your " -"users' applications will run. They are likely to be affected by your " -"decisions on what to deploy and how you deploy it. Their requirements should " -"be reflected in the choices you make." -msgstr "" -"コンピュートノードはクラウドの主なアイテムで、ユーザーのアプリケーションが実" -"行される場所です。何をどのようにデプロイするかユーザーの意思決定により、コン" -"ピュートノードは影響を受けます。これらの要件は、意思決定していく際に反映させ" -"る必要があります。" - -msgid "" -"Compute nodes are where the computing resources are held, and in our example " -"architecture, they run the hypervisor (KVM), libvirt (the driver for the " -"hypervisor, which enables live migration from node to node), nova-" -"compute, nova-api-metadata (generally only used when " -"running in multi-host mode, it retrieves instance-specific metadata), " -"nova-vncproxy, and nova-network." -msgstr "" -"コンピュートノードには、コンピューティングリソースが保持されます。このアーキ" -"テクチャー例では、コンピュートノードで、ハイパーバイザー (KVM)、libvirt (ノー" -"ド間でのライブマイグレーションを可能にするハイパーバイザー用ドライバー)、" -"nova-computenova-api-metadata (通常はマルチホス" -"トモードの場合のみ使用され、インスタンス固有のメタデータを取得する)、nova-" -"vncproxy、nova-network を実行します。" - -msgid "" -"Compute nodes can fail the same way a cloud controller can fail. A " -"motherboard failure or some other type of hardware failure can cause an " -"entire compute node to go offline. When this happens, all instances running " -"on that compute node will not be available. Just like with a cloud " -"controller failure, if your infrastructure monitoring does not detect a " -"failed compute node, your users will notify you because of their lost " -"instances.compute nodesfailuresmaintenance/debuggingcompute node " -"total failures" -msgstr "" -"コンピュートノードは、クラウドコントローラーの障害と同じように故障します。マ" -"ザーボードや他の種類のハードウェア障害により、コンピュートノード全体がオフラ" -"インになる可能性があります。これが発生した場合、そのコンピュートノードで動作" -"中のインスタンスがすべて利用できなくなります。ちょうどクラウドコントローラー" -"が発生した場合のように、インフラ監視機能がコンピュートノードの障害を検知しな" -"くても、インスタンスが失われるので、ユーザーが気づくでしょう。compute nodesfailuresmaintenance/" -"debuggingcompute node total failures" - -msgid "" -"Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core and " -"approximately 40 GB of ephemeral storage per core." 
-msgstr "" -"コンピュートノードは 24~48コアがあり、1コアあたり 4GB 以上の RAM があり、1" -"コアあたり約 40GB 以上の一時ストレージがあります。" - -msgid "Compute nodes run the virtual machine instances in OpenStack. They:" -msgstr "" -"コンピュートノードは、OpenStack 内の仮想マシンインスタンスを実行します。コン" -"ピュートノードは、以下のような役割を果たします。" - -msgid "" -"Compute nodes typically need IP addresses accessible by external networks." -msgstr "" -"コンピュートノードは概して外部ネットワークからアクセス可能なIPアドレスが必要" -"です。" - -msgid "Compute quota descriptions" -msgstr "Compute のクォータの説明" - -msgid "Conclusion" -msgstr "まとめ" - -msgid "Conductor Services" -msgstr "コンダクターサービス" - -msgid "Conductor services" -msgstr "コンダクターサービス" - -msgid "Configuration Management" -msgstr "構成管理" - -msgid "Configuration changes to nova.conf." -msgstr "nova.conf の設定を変更" - -msgid "" -"Configure DHCP agents and routing agents. Network Address Translation (NAT) " -"performed outside of compute nodes, typically on one or more network nodes." -msgstr "" -"DHCPエージェントとルーティングエージェントを構成します。Network Address " -"Translation (NAT) によってコンピュートノードと外部を接続します。一般的に1つ以" -"上のネットワークノードで成り立ちます。" - -msgid "" -"Configure a single bridge as the integration bridge (br-int) and connect it " -"to a physical network interface with the Modular Layer 2 (ML2) plug-in, " -"which uses Open vSwitch by default." -msgstr "" -"インテグレーションブリッジ(br-int)として1つのブリッジを構成し、Open vSwitchを" -"デフォルトとして使うModuler Layer2(ML2)プラグインでbr-intと物理ネットワークイ" -"ンターフェースを接続します。" - -msgid "" -"Configure neutron with multiple DHCP and layer-3 agents. Network nodes are " -"not able to failover to each other, so the controller runs networking " -"services, such as DHCP. Compute nodes run the ML2 plug-in with support for " -"agents such as Open vSwitch or Linux Bridge." -msgstr "" -"複数のDHCPおよびレイヤ-3エージェントを持つようなneutronの構成では、ネットワー" -"クノードは互いにフェールオーバできませんのでコントローラはDHCPといったネット" -"ワークサービスを実行します。コンピュートノードはOpen vSwitchまたはLinux " -"BridgeといったエージェントをサポートするML2プラグインを実行します。" - -msgid "" -"Configured nova-consoleauth to use Memcached for session " -"management (so that it can have multiple copies and run in a load balancer)." -msgstr "" -"nova-consoleauth がセッション管理に Memcached を使用するよ" -"うに設定 (複数のコピーが存在可能で、ロードバランサーで実行できる)。" - -msgid "" -"Configured to use Qpid, qpid_heartbeat = 10, " -"configured to use Memcached for caching, configured to use libvirt, configured to " -"use neutron." -msgstr "" -"Qpid を使用するように設定、qpid_heartbeat = 10 " -" Memcached をキャッシュに使用するように設定、 libvirt を使用するように設定、 " -"neutron を使用するように設" -"定" - -msgid "Confirming and Prioritizing" -msgstr "バグの確認と優先付け" - -msgid "" -"Congratulations! By now, you should have a solid design for your cloud. We " -"now recommend that you turn to the OpenStack Installation Guides (), which " -"contains a step-by-step guide on how to manually install the OpenStack " -"packages and dependencies on your cloud." -msgstr "" -"おめでとう!これでクラウドのための設計が固まりました。次は、OpenStack インス" -"トールガイド() を見ることをお薦めします。ここには、あなたのクラウド" -"で OpenStack パッケージや依存パッケージを手動でインストールする方法が手順を" -"追って説明されています。" - -msgid "Connect the qemu-nbd device to the disk." -msgstr "qemu-nbd デバイスをディスクに接続します。" - -msgid "Connect the qemu-nbd device to the disk:" -msgstr "qemu-nbd デバイスをディスクに接続します。" - -msgid "Connectivity for maximum performance" -msgstr "パフォーマンスを最大化するための接続性" - -msgid "" -"Consider adopting structure and options from the service configuration files " -"and merging them with existing configuration files. The OpenStack Configuration Reference contains " -"new, updated, and deprecated options for most services." 
-msgstr "" -"このサービス設定ファイルから構造とオプションを適用して、既存の設定ファイルに" -"マージすることを検討してください。ほとんどのサービスは、OpenStack Configuration Reference に新しいオ" -"プション、更新されたオプション、非推奨になったオプションがあります。" - -msgid "" -"Consider creating other private networks for communication between internal " -"components of OpenStack, such as the message queue and OpenStack Compute. " -"Using a virtual local area network (VLAN) works well for these scenarios " -"because it provides a method for creating multiple virtual networks on a " -"physical network." -msgstr "" -"メッセージキューや OpenStack コンピュート といった OpenStack 内部のコンポーネ" -"ント間の通信用に別のプライベートネットワークの作成を検討して下さい。Virtual " -"Local Area Network(VLAN) は1つの物理ネットワークに複数の仮想ネットワークを作" -"成できるのでこのシナリオに非常に適しています。" - -msgid "" -"Consider the approach to upgrading your environment. You can perform an " -"upgrade with operational instances, but this is a dangerous approach. You " -"might consider using live migration to temporarily relocate instances to " -"other compute nodes while performing upgrades. However, you must ensure " -"database consistency throughout the process; otherwise your environment " -"might become unstable. Also, don't forget to provide sufficient notice to " -"your users, including giving them plenty of time to perform their own " -"backups." -msgstr "" -"お使いの環境をアップグレードする方法を検討します。運用中のインスタンスがある" -"状態でアップグレードを実行できます。しかし、これは非常に危険なアプローチで" -"す。アップグレードの実行中は、ライブマイグレーションを使用して、インスタンス" -"を別のコンピュートノードに一時的に再配置することを考慮すべきでしょう。しかし" -"ながら、プロセス全体を通して、データベースの整合性を担保する必要があります。" -"そうでなければ、お使いの環境が不安定になるでしょう。また、ユーザーに十分に注" -"意を促すことを忘れてはいけません。バックアップを実行するために時間の猶予を与" -"えることも必要です。" - -msgid "" -"Consider the example where you want to take a snapshot of a persistent block " -"storage volume, detected by the guest operating system as /dev/vdb and mounted on /mnt. The fsfreeze command " -"accepts two arguments:" -msgstr "" -"永続ブロックストレージのスナップショットを取得したい例を検討します。ゲストオ" -"ペレーティングシステムにより /dev/vdb として認識され、" -"/mnt にマウントされているとします。fsfreeze コマンドが 2 " -"つの引数を受け取ります:" - -msgid "Consider the following example:" -msgstr "次のような例を考えてみましょう。" - -msgid "" -"Consider the impact of an upgrade to users. The upgrade process interrupts " -"management of your environment including the dashboard. If you properly " -"prepare for the upgrade, existing instances, networking, and storage should " -"continue to operate. However, instances might experience intermittent " -"network interruptions." -msgstr "" -"アップグレードによるユーザーへの影響を考慮してください。アップグレードプロセ" -"スは、ダッシュボードを含む、環境の管理機能を中断します。このアップグレードを" -"正しく準備する場合、既存のインスタンス、ネットワーク、ストレージは通常通り動" -"作し続けるべきです。しかしながら、インスタンスがネットワークの中断を経験する" -"かもしれません。" - -msgid "" -"Consider the scenario where an entire server fails and 24 TB of data needs " -"to be transferred \"immediately\" to remain at three copies—this can put " -"significant load on the network." -msgstr "" -"サーバー全体が停止し、24 TB 分のデータを即時に 3 つのコピーに残るように移行す" -"るというシナリオを思い浮かべてください。これは、ネットワークに大きな負荷がか" -"かる可能性があります。" - -msgid "" -"Consider updating your SQL server configuration as described in the OpenStack " -"Installation Guide." -msgstr "" -"OpenStack インストールガイド に記載されているように、SQL サーバーの" -"設定の更新を考慮してください。" - -msgid "" -"Consider using a public cloud to test the scalability limits of your cloud " -"controller configuration. 
Most public clouds bill by the hour, which means " -"it can be inexpensive to perform even a test with many nodes." -msgstr "" -"お使いのクラウドコントローラーの設定に関するスケーラビリティーの限界をテスト" -"するために、パブリッククラウドを使用することを考慮します。多くのパブリックク" -"ラウドは時間単位で課金されます。つまり、多くのノードを用いてテストしても、そ" -"れほど費用がかかりません。" - -msgid "Consideration" -msgstr "考慮事項" - -msgid "Considered experimental." -msgstr "実験的とみなされています。" - -msgid "Console (boot up messages) for VM instances:" -msgstr "仮想マシンインスタンスのコンソール (起動メッセージ):" - -msgid "Constant width" -msgstr "等幅" - -msgid "Constant width italic" -msgstr "等幅・斜体" - -msgid "" -"Consult the vendor documentation to check for virtualization support. For " -"Intel, read “Does my processor support " -"Intel® Virtualization Technology?”. For AMD, read AMD Virtualization. Note that " -"your CPU may support virtualization but it may be disabled. Consult your " -"BIOS documentation for how to enable CPU features." -msgstr "" -"仮想化サポートがあるかどうかについては、ベンダーの文書を参照してください。" -"Intel については “Does my processor support " -"Intel® Virtualization Technology?” を確認してください。AMD については " -"AMD Virtualization を確認してください。CPU が仮想化をサポートしていても無" -"効になっている場合があることに注意してください。また、CPU 機能を有効にする方" -"法はお使いのシステムの BIOS の文書を参照してください。" - -msgid "Container quotas" -msgstr "コンテナーのクォータ" - -msgid "" -"Contains a reference listing of all configuration options for core and " -"integrated OpenStack services by release version" -msgstr "" -"リリースバージョン毎に、OpenStack のコアサービス、統合されたサービスのすべて" -"の設定オプションの一覧が載っています" - -msgid "" -"Contains each floating IP address that was added to Compute. This table is " -"related to the fixed_ips table by way of the " -"floating_ips.fixed_ip_id column." -msgstr "" -"Compute に登録された各 Floating IP アドレス。このテーブルは " -"floating_ips.fixed_ip_id 列で fixed_ips テーブルと関連付けられます。" - -msgid "" -"Contains each possible IP address for the subnet(s) added to Compute. This " -"table is related to the instances table by way of the " -"fixed_ips.instance_uuid column." -msgstr "" -"Compute に登録されたサブネットで利用可能な IP アドレス。このテーブルは " -"fixed_ips.instance_uuid 列で instances テーブルと関連付けられます。" - -msgid "" -"Contains how-to information for managing an OpenStack cloud as needed for " -"your use cases, such as storage, computing, or software-defined-networking" -msgstr "" -"あなたのユースケースに合わせて、ストレージ、コンピューティング、Software-" -"defined-networking など OpenStack クラウドを管理する方法が書かれています" - -msgid "" -"Continuing the diagnosis the next morning was kick started by another " -"identical failure. We quickly got the message queue running again, and tried " -"to work out why Rabbit was suffering from so much network traffic. Enabling " -"debug logging on nova-api quickly " -"brought understanding. A was scrolling by faster than we'd " -"ever seen before. CTRL+C on that and we could plainly see the contents of a " -"system log spewing failures over and over again - a system log from one of " -"our users' instances."
-msgstr "" -"翌朝の継続調査は別の同様の障害でいきなり始まった。我々は急いで RabbitMQ サー" -"バーを再起動し、何故 RabbitMQ がそのような過剰なネットワーク負荷に直面してい" -"るのかを調べようとした。nova-api " -"のデバッグログを出力することにより、理由はすぐに判明した。 は" -"我々が見たこともない速さでスクロールしていた。CTRL+C でコマンドを止め、障害を" -"吐き出していたシステムログの内容をはっきり目にすることが出来た。-我々のユー" -"ザの1人のインスタンスからのシステムログだった。" - -msgid "Control services public interfaces" -msgstr "コントロールサービス用パブリックインターフェース" - -msgid "Controller" -msgstr "コントローラー" - -msgid "Controller node" -msgstr "コントローラーノード" - -msgid "" -"Controller nodes are responsible for running the management software " -"services needed for the OpenStack environment to function. These nodes:" -msgstr "" -"コントローラーノードは、 OpenStack 環境が機能するために必要な管理ソフトウェア" -"サービスを実行します。これらのノードは、以下のような役割を果たします。" - -msgid "Conventions Used in This Book" -msgstr "このドキュメントに使用される規則" - -msgid "" -"Copy contents of configuration backup directories that you created during " -"the upgrade process back to /etc/<service> " -"directory." -msgstr "" -"アップグレード作業中に作成した、設定ディレクトリーのバックアップの中身を " -"/etc/<service> にコピーします。" - -msgid "" -"Copy the code in into " -"ip_whitelist.py. The following code is a middleware " -"example that restricts access to a container based on IP address as " -"explained at the beginning of the section. Middleware passes the request on " -"to another application. This example uses the swift \"swob\" library to wrap " -"Web Server Gateway Interface (WSGI) requests and responses into objects for " -"swift to interact with. When you're done, save and close the file." -msgstr "" -" にあるコードを ip_whitelist.py にコピーします。以下のコードは、このセクションの初めに説明されたよ" -"うに、IP アドレスに基づいてコンテナーへのアクセスを制限するミドルウェアの例で" -"す。ミドルウェアは、他のアプリケーションへのリクエストを通過させます。この例" -"は、swift \"swob\" ライブラリーを使用して、swift が通信するオブジェクトに関す" -"る Web Server Gateway Interface (WSGI) のリクエストとレスポンスをラップしま" -"す。これを実行したとき、ファイルを保存して閉じます。" - -msgid "Copyright details are filled in by the template." -msgstr "Copyright details are filled in by the template." - -msgid "Core developers also prioritize the bug, based on its impact:" -msgstr "にセットして下さい。" - -msgid "Create Share" -msgstr "共有の作成" - -msgid "Create Snapshots" -msgstr "スナップショットの作成" - -msgid "Create a Share Network" -msgstr "共有ネットワークの作成" - -msgid "" -"Create a clone of your automated configuration infrastructure with changed " -"package repository URLs." -msgstr "" -"変更したパッケージリポジトリー URL を用いて、自動化された設定インフラストラク" -"チャーのクローンを作成する。" - -msgid "Create a container called middleware-test:" -msgstr "" -"middleware-test という名前のコンテナーを作成します。" - -msgid "Create a share from a snapshot." -msgstr "スナップショットから共有を作成します。" - -msgid "Create a share network" -msgstr "共有ネットワークの作成" - -msgid "" -"Create a share on either a share server or standalone, depending on the " -"selected back-end mode, with or without using a share network." 
-msgstr "" -"選択したバックエンドのモードに応じて、共有ネットワークの使用有無を指定して、" -"共有サーバーまたはスタンドアロンで共有を作成します。" - -msgid "" -"Create a share specifying its size, shared file system protocol, visibility " -"level" -msgstr "容量、ファイル共有システムプロトコル、公開レベルを指定した共有の作成" - -msgid "Create an OpenStack Development Environment" -msgstr "OpenStack 開発環境の作成" - -msgid "Create and bring up a dummy interface, snooper0:" -msgstr "ダミーインターフェース snooper0 を作成して起動します。" - -msgid "Create context" -msgstr "コンテキストの作成" - -msgid "" -"Create mirror of patch-tun to snooper0 (returns " -"UUID of mirror port):" -msgstr "" -"patch-tun のミラーを snooper0 に作成します (ミラー" -"ポートの UUID を返します)。" - -msgid "Create share" -msgstr "共有の作成" - -msgid "Create share networks" -msgstr "共有ネットワークの作成" - -msgid "Create snapshots" -msgstr "スナップショットの作成" - -msgid "" -"Create the ip_scheduler.py Python source code file:" -msgstr "" -"ip_scheduler.py Python ソースコードファイルを作成しま" -"す。" - -msgid "Create the ip_whitelist.py Python source code file:" -msgstr "" -"ip_whitelist.py Python ソースコードファイルを作成します。" - -msgid "Create ways to automatically test these actions." -msgstr "それらのアクションに対して自動テストを作成する" - -msgid "Create, update, delete and force-delete shares" -msgstr "共有の作成、更新、削除、強制削除" - -msgid "Creating New Users" -msgstr "新規ユーザーの作成" - -msgid "Current stable release, security-supported" -msgstr "現在の安定版リリース、セキュリティアップデート対象" - -msgid "Customization" -msgstr "カスタマイズ" - -msgid "Customizing Authorization" -msgstr "権限のカスタマイズ" - -msgid "Customizing Object Storage (Swift) Middleware" -msgstr "Object Storage (Swift) ミドルウェアのカスタマイズ" - -msgid "Customizing the Dashboard (Horizon)" -msgstr "Dashboard (Horizon) のカスタマイズ" - -msgid "Customizing the OpenStack Compute (nova) Scheduler" -msgstr "OpenStack Compute (nova) スケジューラーのカスタマイズ" - -msgid "DAIR" -msgstr "DAIR" - -msgid "DAIR homepage" -msgstr "DAIR ホームページ" - -msgid "" -"DAIR is hosted at two different data centers across Canada: one in Alberta " -"and the other in Quebec. It consists of a cloud controller at each location, " -"although, one is designated the \"master\" controller that is in charge of " -"central authentication and quotas. This is done through custom scripts and " -"light modifications to OpenStack. DAIR is currently running Havana." -msgstr "" -"DAIR はカナダの2つの異なるデータセンタ(1つはアルバータ州、もう1つはケベック" -"州)でホスティングされています。各拠点にはそれぞれクラウドコントローラがあり" -"ますが、その1つが「マスター」コントローラーとして、認証とクォータ管理を集中" -"して行うよう設計されています。これは、特製スクリプトと OpenStack の軽微な改造" -"により実現されています。DAIR は現在、Havana で運営されています。" - -msgid "" -"DHCP agents running on OpenStack networks run in namespaces similar to the " -"l3-agents. DHCP namespaces are named qdhcp-<uuid> and " -"have a TAP device on the integration bridge. Debugging of DHCP issues " -"usually involves working inside this network namespace." -msgstr "" -"OpenStack ネットワークで動作している DHCP エージェントは、l3-agent と同じよう" -"な名前空間で動作します。DHCP 名前空間は、qdhcp-<uuid> とい" -"う名前を持ち、統合ブリッジに TAP デバイスを持ちます。DHCP の問題のデバッグ" -"は、通常この名前空間の中での動作に関連します。" - -msgid "DHCP traffic can be isolated within an individual host." -msgstr "DHCP トラフィックは個々のホスト内に閉じ込めることができる。" - -msgid "" -"DHCP traffic uses UDP. The client sends from port 68 to port 67 on the " -"server. Try to boot a new instance and then systematically listen on the " -"NICs until you identify the one that isn't seeing the traffic. 
To use " -"tcpdump to listen to ports 67 and 68 on br100, you would do:" -msgstr "" -"DHCPトラフィックはUDPを使います。クライアントは68番ポートからサーバーの67番" -"ポートへパケットを送信します。新しいインスタンスを起動し、トラフィックが見え" -"ていないNICを特定できるまで、NICを順番にリッスンしてください。tcpdumpで" -"br100上のポート67、68をリッスンするには、こうします。" - -msgid "DNS" -msgstr "DNS" - -msgid "DNS service (designate)" -msgstr "DNS service (designate)" - -msgid "Daemon" -msgstr "デーモン" - -msgid "Daily" -msgstr "日次" - -msgid "Dashboard" -msgstr "ダッシュボード" - -msgid "Dashboard node" -msgstr "ダッシュボードノード" - -msgid "Dashboard's Create Project form" -msgstr "Dashboard のプロジェクトの作成フォーム" - -msgid "Data processing service for OpenStack (sahara)" -msgstr "Data processing service for OpenStack (sahara)" - -msgid "Database" -msgstr "データベース" - -msgid "Database Backups" -msgstr "データベースのバックアップ" - -msgid "Database Connectivity" -msgstr "データベース接続性" - -msgid "Database and message queue server (such as MySQL, RabbitMQ)" -msgstr "データベースとメッセージキューサーバ(例:MySQL、RabbitMQ)" - -msgid "Database as a Service (trove)" -msgstr "Database as a Service (trove)" - -msgid "Databases" -msgstr "データベース" - -msgid "Date" -msgstr "リリース日" - -msgid "Dealing with Network Namespaces" -msgstr "ネットワーク名前空間への対応" - -msgid "Debugging DHCP Issues with nova-network" -msgstr "nova-network の DHCP 問題のデバッグ" - -msgid "Debugging DNS Issues" -msgstr "DNS の問題をデバッグする" - -msgid "Dec 13, 2012" -msgstr "2012年12月13日" - -msgid "Dec 16, 2013" -msgstr "2013年12月16日" - -msgid "" -"Decrease DHCP timeouts by modifying /etc/nova/nova.conf " -"on the compute nodes back to the original value for your environment." -msgstr "" -"コンピュートノードにおいて /etc/nova/nova.conf を変更す" -"ることにより、DHCP タイムアウトをお使いの環境の元の値に戻します。" - -msgid "" -"Dedicate entire disks to certain partitions. For example, you could allocate " -"disk one and two entirely to the boot, root, and swap partitions under a " -"RAID 1 mirror. Then, allocate disk three and four entirely to the LVM " -"partition, also under a RAID 1 mirror. Disk I/O should be better because I/O " -"is focused on dedicated tasks. However, the LVM partition is much smaller." -msgstr "" -"全ディスク領域を特定のパーティションに割り当てます。例えば、ディスク 1 と 2 " -"すべてを RAID 1 ミラーとして boot、root、swapパーティションに割り当てます。そ" -"して、ディスク 3 と 4 すべてを、同様に RAID 1 ミラーとしてLVMパーティションに" -"割り当てます。I/O は専用タスクにフォーカスするため、ディスクの I/O は良くなる" -"はずです。しかし、LVM パーティションははるかに小さくなります。" - -msgid "" -"Default CPU overcommit ratio (cpu_allocation_ratio in nova." -"conf) of 16:1." -msgstr "" -"デフォルトの CPU オーバーコミット比率 (cpu_allocation_ratio in " -"nova.conf) は 16:1 とします。" - -msgid "Default drop rule for unmatched traffic." -msgstr "一致しない通信のデフォルト破棄ルール。" - -msgid "Define new share types" -msgstr "新しい共有種別の定義" - -msgid "Delete Share" -msgstr "共有の削除" - -msgid "Deleted by user" -msgstr "ユーザーが削除するまで" - -msgid "Deleting Images" -msgstr "イメージの削除" - -msgid "" -"Deleting an image does not affect instances or snapshots that were based on " -"the image." -msgstr "" -"イメージを削除しても、そのイメージがベースになっているインスタンスやスナップ" -"ショットには影響がありません。" - -msgid "" -"Depending on design, heavy I/O usage from some instances can affect " -"unrelated instances." -msgstr "" -"設計次第では、一部のインスタンスの I/O が非常に多い場合に、無関係のインスタン" -"スに影響が出る場合があります。" - -msgid "" -"Depending on the type of server, the contents and order of your package list " -"might vary from this example." -msgstr "" -"サーバーの種類に応じて、パッケージ一覧の内容や順番がこの例と異なるかもしれませ" -"ん。" - -msgid "" -"Depending on your specific configuration, upgrading all packages might " -"restart or break services supplemental to your OpenStack environment. 
For " -"example, if you use the TGT iSCSI framework for Block Storage volumes and " -"the upgrade includes new packages for it, the package manager might restart " -"the TGT iSCSI services and impact connectivity to volumes." -msgstr "" -"お使いの設定によっては、すべてのパッケージを更新することにより、OpenStack 環" -"境の補助サービスを再起動または破壊するかもしれません。例えば、Block Storage " -"ボリューム向けに TGT iSCSI フレームワークを使用していて、それの新しいパッケー" -"ジがアップグレードに含まれる場合、パッケージマネージャーが TGT iSCSI サービス" -"を再起動して、ボリュームへの接続性に影響を与えるかもしれません。" - -msgid "Deployment" -msgstr "デプロイ" - -msgid "Deployment scenarios" -msgstr "構成シナリオ" - -msgid "Deprecated" -msgstr "非推奨" - -msgid "Deprecation of Nova Network" -msgstr "nova-network の非推奨" - -msgid "" -"Describes a manual installation process, as in, by hand, without automation, " -"for multiple distributions based on a packaging system:" -msgstr "" -"自動化せずに、手動で行う場合のインストール手順について説明しています。パッ" -"ケージングシステムがある複数のディストリビューション向けのインストールガイド" -"があります。" - -msgid "" -"Describes potential strategies for making your OpenStack services and " -"related controllers and data stores highly available" -msgstr "" -"OpenStack サービス、関連するコントローラやデータストアを高可用にするために取" -"りうる方策に説明しています" - -msgid "Description" -msgstr "説明" - -msgid "Description of the expected results instead of what you saw" -msgstr "期待される結果の説明 (出くわした現象ではなく)" - -msgid "" -"Design and create an architecture for your first nontrivial OpenStack cloud. " -"After you read this guide, you'll know which questions to ask and how to " -"organize your compute, networking, and storage resources and the associated " -"software packages." -msgstr "" -"初めての本格的な OpenStack クラウドのアーキテクチャーの設計と構築。この本を読" -"み終えると、コンピュート、ネットワーク、ストレージのリソースを選ぶにはどんな" -"質問を自分にすればよいのか、どのように組み上げればよいのかや、どんなソフト" -"ウェアパッケージが必要かが分かることでしょう。" - -msgid "" -"Designate a server as the central logging server. The best practice is to " -"choose a server that is solely dedicated to this purpose. Create a file " -"called /etc/rsyslog.d/server.conf with the following " -"contents:" -msgstr "" -"集中ログサーバーとして使用するサーバーを決めます。ログ専用のサーバーを利用す" -"るのが最も良いです。/etc/rsyslog.d/server.conf を次のよ" -"うに作成します。" - -msgid "" -"Designing an OpenStack cloud is a great achievement. It requires a robust " -"understanding of the requirements and needs of the cloud's users to " -"determine the best possible configuration to meet them. OpenStack provides a " -"great deal of flexibility to achieve your needs, and this part of the book " -"aims to shine light on many of the decisions you need to make during the " -"process." -msgstr "" -"OpenStack クラウドの設計は重要な作業です。クラウドのユーザーの要件とニーズを" -"確実に理解して、それらに対応するために可能な限り最良の構成を決定する必要があ" -"ります。OpenStack は、ユーザーのニーズを満たすための優れた柔軟性を提供しま" -"す。本セクションでは、このようなプロセスで行う必要のある数多くの決定事項を明" -"確に説明することを目的としています。" - -msgid "" -"Designing for Cloud Controllers and Cloud " -"Management" -msgstr "" -"クラウドコントローラーのデザインとクラウド管理" -"" - -msgid "" -"Despite only outputting the newly added rule, this operation is additive:" -msgstr "新しく追加されたルールのみが出力されますが、この操作は追加操作です:" - -msgid "Detailed Description" -msgstr "詳細な説明" - -msgid "Details" -msgstr "詳細" - -msgid "" -"Determine which OpenStack packages are installed on your system. Use the " -" command. Filter for OpenStack packages, filter again to " -"omit packages explicitly marked in the deinstall state, and " -"save the final output to a file. 
For example, the following command covers a " -"controller node with keystone, glance, nova, neutron, and cinder:" -msgstr "" -"お使いの環境にインストールされている OpenStack パッケージを判断します。" -" コマンドを使用します。OpenStack パッケージをフィルターしま" -"す。再びフィルターして、明示的に deinstall 状態になっているパッ" -"ケージを省略します。最終出力をファイルに保存します。例えば、以下のコマンド" -"は、keystone、glance、nova、neutron、cinder を持つコントローラーノードを取り" -"扱います。" - -msgid "Determining Which Component Is Broken" -msgstr "故障しているコンポーネントの特定" - -msgid "" -"Determining the scalability of your cloud and how to improve it is an " -"exercise with many variables to balance. No one solution meets everyone's " -"scalability goals. However, it is helpful to track a number of metrics. " -"Since you can define virtual hardware templates, called \"flavors\" in " -"OpenStack, you can start to make scaling decisions based on the flavors " -"you'll provide. These templates define sizes for memory in RAM, root disk " -"size, amount of ephemeral data disk space available, and number of cores for " -"starters.virtual machine (VM)hardwarevirtual hardwareflavorscalingmetrics for" -msgstr "" -"多くのアイテムでバランスを取りながら、クラウドのスケーラビリティや改善方法を" -"決定していきます。ソリューション 1 つだけでは、スケーラビリティの目的を達成す" -"ることはできません。しかし、複数のメトリクスをトラッキングすると役に立ちま" -"す。仮想ハードウェアのテンプレート (OpenStack ではフレーバーと呼ばれる) を定" -"義できるため、用意するフレーバーをもとにスケーリングの意思決定を行うことがで" -"きます。これらのテンプレートは、メモリーのサイズ (RAM)、root ディスクのサイ" -"ズ、一時データディスクの空き容量、スターターのコア数を定義します。仮想マシンハードウェア仮想ハードウェアフレーバースケーリングメトリクス" - -msgid "" -"Develop an upgrade procedure and assess it thoroughly by using a test " -"environment similar to your production environment." -msgstr "" -"アップグレード手順を作成し、本番環境と同じようなテスト環境を使用して、全体を" -"評価します。" - -msgid "Development list" -msgstr "開発者向けメーリングリスト" - -msgid "Diablo" -msgstr "Diablo" - -msgid "Diagnose Your Compute Nodes" -msgstr "コンピュートノードの診断" - -msgid "Diane Fleming" -msgstr "Diane Fleming" - -msgid "" -"Diane works on the OpenStack API documentation tirelessly. She helped out " -"wherever she could on this project." -msgstr "" -"Diane は OpenStack API ドキュメントプロジェクトで非常に熱心に活動しています。" -"このプロジェクトでは自分ができるところであれば、どこでも取り組んでくれまし" -"た。" - -msgid "Differences Between Various Drivers" -msgstr "ドライバーによる違い" - -msgid "Direct I/O access can increase performance." -msgstr "I/O アクセスが直接行われるので、性能向上が図れます。" - -msgid "Direct incoming traffic from VM to the security group chain." -msgstr "仮想マシンからセキュリティグループチェインへの直接受信。" - -msgid "Direct packets associated with a known session to the RETURN chain." -msgstr "既知のセッションに関連付けられたパケットの RETURN チェインへの転送。" - -msgid "Direct traffic from the VM interface to the security group chain." -msgstr "仮想マシンインスタンスからセキュリティグループチェインへの直接通信。" - -msgid "Disable live snapshotting" -msgstr "ライブスナップショットの無効化" - -msgid "Disappearing Images" -msgstr "イメージの消失" - -msgid "Disconnect the qemu-nbd device." -msgstr "qemu-nbd デバイスを切断します。" - -msgid "" -"Discrete regions with separate API endpoints and no coordination between " -"regions." -msgstr "" -"リージョンごとに別々のAPIエンドポイントが必要で、リージョン間で協調する必要が" -"ない場合" - -msgid "Disk" -msgstr "ディスク" - -msgid "Disk Partitioning and RAID" -msgstr "ディスクのパーティショニングと RAID" - -msgid "Disk Size: minimum 5 GB" -msgstr "ディスクサイズ: 最低 5 GB" - -msgid "Disk partitioning and disk array setup for scalability" -msgstr "" -"スケーラビリティ確保に向けたディスクのパーティショニングおよびディスク配列設" -"定" - -msgid "Disk space" -msgstr "ディスク領域" - -msgid "Disk space is cheap these days. Data recovery is not." 
-msgstr "今日、ディスクスペースは安価である。データの復元はそうではない。" - -msgid "Disk usage" -msgstr "ディスク使用量" - -msgid "Disk: two 300 GB 10000 RPM SAS Disks" -msgstr "ディスク: 300 GB 10000 RPM SAS ディスク x 2" - -msgid "Disk: two 500 GB 7200 RPM SAS Disks" -msgstr "ディスク: 500 GB 7200 RPM SAS ディスク x 2" - -msgid "" -"Disk: two 500 GB 7200 RPM SAS Disks and twenty-four 600 GB 10000 RPM SAS " -"Disks" -msgstr "" -"ディスク: 500 GB 7200 RPM SAS ディスク x 2 および 600 GB 10000 RPM SAS ディス" -"ク x 24" - -msgid "Disk: two 600 GB 10000 RPM SAS Disks" -msgstr "ディスク: 600 GB 10000 RPM SAS ディスク x 2" - -msgid "Distributed Virtual Router" -msgstr "分散仮想ルーター" - -msgid "Dive Into Python" -msgstr "Dive Into Python" - -msgid "Do I need to support live migration?" -msgstr "管理者がライブマイグレーションを必要とするか?" - -msgid "" -"Do a full manual install by using the OpenStack Installation " -"Guide for your platform. Review the final configuration " -"files and installed packages." -msgstr "" -"お使いのプラットフォーム用の OpenStack Installation Guide を使用して、完全な手動インストールを実行する。最終的な設定" -"ファイルとインストールされたパッケージをレビューします。" - -msgid "Do more spindles result in better I/O despite network access?" -msgstr "" -"ネットワークアクセスがあったとしても、ディスク数が多い方が良い I/O 性能が得ら" -"れるか?" - -msgid "Do my users need block storage?" -msgstr "ユーザがブロックストレージを必要とするか?" - -msgid "Do my users need object storage?" -msgstr "ユーザがオブジェクトストレージを必要とするか?" - -msgid "Docker" -msgstr "Docker" - -msgid "Does your authentication system also verify externally?" -msgstr "認証システムは外部に確認を行いますか?" - -msgid "Double VLAN" -msgstr "二重 VLAN" - -msgid "Down the Rabbit Hole" -msgstr "ウサギの穴に落ちて" - -msgid "Downgrade OpenStack packages." -msgstr "OpenStack パッケージをダウングレードします。" - -msgid "" -"Downgrading packages is by far the most complicated step; it is highly " -"dependent on the distribution and the overall administration of the system." -msgstr "" -"パッケージのダウングレードは、かなり最も複雑な手順です。ディストリビューショ" -"ン、システム管理全体に非常に依存します。" - -msgid "" -"Downtime, whether planned or unscheduled, is a certainty when running a " -"cloud. This chapter aims to provide useful information for dealing " -"proactively, or reactively, with these occurrences.maintenance/debuggingtroubleshooting" -msgstr "" -"停止時間(計画的なものと予定外のものの両方)はクラウドを運用するときに確実に" -"発生します。本章は、プロアクティブまたはリアクティブに、これらの出来事に対処" -"するために有用な情報を提供することを目的としています。maintenance/debuggingtroubleshooting" - -msgid "Driver Quality Improvements" -msgstr "ドライバー品質の改善" - -msgid "Drop packets that are not associated with a state." -msgstr "どの状態にも関連付けられていないパケットの破棄。" - -msgid "Drop traffic without an IP/MAC allow rule." -msgstr "IP/MAC 許可ルールにない通信の破棄。" - -msgid "" -"During an upgrade, operators can add configuration options to nova." -"conf which lock the version of RPC messages and allow live " -"upgrading of the services without interruption caused by version mismatch. " -"The configuration options allow the specification of RPC version numbers if " -"desired, but release name alias are also supported. For example:" -msgstr "" -"運用者は、アップグレード中、RPC バージョンをロックして、バージョン不一致によ" -"り引き起こされる中断なしでサービスのライブアップグレードできるよう、" -"nova.conf に設定オプションを追加できます。この設定オプ" -"ションは、使いたければ RPC バージョン番号を指定できます。リリース名のエイリア" -"スもサポートされます。例:" - -msgid "" -"EC2 compatibility credentials can be downloaded by selecting " -"Project, then Compute , then Access & Security, then " -"API Access to display the Download EC2 " -"Credentials button. Click the button to generate a ZIP file with " -"server x509 certificates and a shell script fragment. 
Create a new directory " -"in a secure location because these are live credentials containing all the " -"authentication information required to access your cloud identity, unlike " -"the default user-openrc. Extract the ZIP file here. You should " -"have cacert.pem, cert.pem, " -"ec2rc.sh, and pk.pem. The " -"ec2rc.sh is similar to this:access key" -msgstr "" -"EC2 互換のクレデンシャルをダウンロードするには、プロジェクトコンピュート アクセス" -"とセキュリティAPI アクセス の順に" -"選択し、EC2 認証情報のダウンロード ボタンを表示します。" -"このボタンをクリックすると、 サーバーの x509 証明書とシェルスクリプトフラグメ" -"ントが含まれた ZIP が生成されます。これらのファイルは、デフォルトの " -"user-openrc とは異なり、クラウドのアイデンティティへのアクセスに" -"必要なすべての認証情報を含む有効なクレデンシャルなので、セキュリティ保護され" -"た場所に新規ディレクトリを作成して、そこで ZIP ファイルを展開します。" -"cacert.pemcert.pem、" -"ec2rc.sh、および pk.pem が含まれて" -"いるはずです。ec2rc.sh には、以下と似たような内容が記述" -"されています:アクセスキー" - -msgid "" -"Each OpenStack cloud is different even if you have a near-identical " -"architecture as described in this guide. As a result, you must still test " -"upgrades between versions in your environment using an approximate clone of " -"your environment." -msgstr "" -"このガイドに記載されているような、理想的なアーキテクチャーに近いと思われる場" -"合でも、各 OpenStack クラウドはそれぞれ異なります。そのため、お使いの環境の適" -"切なクローンを使用して、お使いの環境のバージョン間でアップグレードをテストす" -"る必要があります。" - -msgid "Each cell has a full nova installation except nova-api." -msgstr "各セルには nova-api 以外の全 nova 設定が含まれています。" - -msgid "" -"Each method provides different functionality and can be best divided into " -"two groups:" -msgstr "" -"メソッド毎に異なる機能を提供しますが、このメソッドは 2 つのグループに分類する" -"と良いでしょう。" - -msgid "Each region has a full nova installation." -msgstr "各リージョンにフルセットのNovaサービスが必要" - -msgid "" -"Each site runs a different configuration, as a resource cells in an OpenStack Compute cells setup. Some sites span multiple " -"data centers, some use off compute node storage with a shared file system, " -"and some use on compute node storage with a non-shared file system. Each " -"site deploys the Image service with an Object Storage back end. A central " -"Identity, dashboard, and Compute API service are used. A login to the " -"dashboard triggers a SAML login with Shibboleth, which creates an " -"account in the Identity service with an SQL back end. " -"An Object Storage Global Cluster is used across several sites." -msgstr "" -"各サイトは(OpenStack Compute のセル設定におけるリソースセルとして)異なる設定で実行されています。数サイトは複数データセンター" -"に渡り、コンピュートノード外のストレージを共有ストレージで使用しているサイト" -"もあれば、コンピュートノード上のストレージを非共有型ファイルシステムで使用し" -"ているサイトもあります。各サイトは Object Storage バックエンドを持つ Image " -"service をデプロイしています。中央の Identity、dashboard、Compute API サービ" -"スが使用されています。dashboard へのログインが Shibboleth の SAML ログインの" -"トリガーになり、SQL バックエンドの Identity サービスのアカウントを作成します。Object Storage Global Cluster は、いくつかの拠点をま" -"たがり使用されます。" - -msgid "Each tenant is isolated to its own VLANs." -msgstr "それぞれのテナントは自身のVLANによって独立しています。" - -msgid "" -"Early indications are that it does do this well for a base set of scenarios, " -"such as using the ML2 plug-in with Open vSwitch, one flat external network " -"and VXLAN tenant networks. However, it does appear that there are problems " -"with the use of VLANs, IPv6, Floating IPs, high north-south traffic " -"scenarios and large numbers of compute nodes. It is expected these will " -"improve significantly with the next release, but bug reports on specific " -"issues are highly desirable." 
-msgstr "" -"初期の目安は、ML2 プラグインと Open vSwitch、1 つのフラットな外部ネットワーク" -"と VXLAN のテナントネットワークなど、基本的なシナリオに対してこれをうまく実行" -"することです。しかしながら、VLAN、IPv6、Floating IP、大量のノース・サウス通信" -"のシナリオ、大量のコンピュートノードなどで問題が発生しはじめます。これらは次" -"のリリースで大幅に改善されることが期待されていますが、特定の問題におけるバグ" -"報告が強く望まれています。" - -msgid "Easier Upgrades" -msgstr "より簡単なアップグレード" - -msgid "" -"Edit the local.conf configuration file that controls " -"what DevStack will deploy. Copy the example local.conf " -"file at the end of this section ():" -msgstr "" -"DevStack が配備するものを制御する local.conf 設定ファイ" -"ルを編集します。このセクションの最後にある local.conf " -"ファイル例をコピーします ()。" - -msgid "" -"Either snap, which means that the volume was created from " -"a snapshot, or anything other than snap (a blank string " -"is valid). In the preceding example, the volume was not created from a " -"snapshot, so we leave this field blank in our following example." -msgstr "" -"ボリュームがスナップショットから作成されたことを意味する snap、または snap 以外の何か (空文字列も有効) です。上" -"の例では、ボリュームがスナップショットから作成されていません。そのため、この" -"項目を以下の例において空白にしてあります。" - -msgid "" -"Either approach is valid. Use the approach that matches your experience." -msgstr "" -"どのアプローチも有効です。あなたの経験に合うアプローチを使用してください。" - -msgid "Email address" -msgstr "電子メールアドレス" - -msgid "" -"Emma Richards of Rackspace Guest Relations took excellent care of our lunch " -"orders and even set aside a pile of sticky notes that had fallen off the " -"walls." -msgstr "" -"Rackspace ゲストリレーションズの Emma Richards は、私たちのランチの注文を素晴" -"らしく面倒を見てくれて、更に壁から剥がれ落ちた付箋紙の山を脇においてくれまし" -"た。" - -msgid "" -"Enables users to submit API calls to OpenStack services through commands." -"Command-line interface (CLI)" -msgstr "" -"ユーザーはコマンドを使用して API コールを OpenStack サービスに送信できます。" -"コマンドラインインターフェース (CLI)" - -msgid "Enabling IPv6 Support" -msgstr "IPv6 サポートの有効化" - -msgid "Encode certificate in DER format" -msgstr "証明書を DER 形式でエンコードします" - -msgid "Encryption set by…" -msgstr "暗号化の設定 …" - -msgid "End-User Configuration of Security Groups" -msgstr "セキュリティグループのエンドユーザー設定" - -msgid "End-of-life" -msgstr "エンドオブライフ" - -msgid "Ensure that the operating system has recognized the new disk:" -msgstr "" -"オペレーティングシステムが新しいディスクを認識していることを確認します。" - -msgid "" -"Ensure that your messaging queue handles requests successfully and size " -"accordingly." -msgstr "" -"メッセージキューが正しくリクエストを処理することを保証し、適切にサイジングし" -"てください。" - -msgid "Ensuring Snapshots of Linux Guests Are Consistent" -msgstr "Linux ゲストのスナップショットの整合性の保証" - -msgid "Ensuring Snapshots of Windows Guests Are Consistent" -msgstr "Windows ゲストのスナップショットの整合性の保証" - -msgid "Ephemeral" -msgstr "エフェメラル" - -msgid "Ephemeral Storage" -msgstr "一時ストレージ" - -msgid "Ephemeral storage" -msgstr "エフェメラルストレージ" - -msgid "Essex" -msgstr "Essex" - -msgid "" -"Evaluate successfully if a field of the resource specified in the current " -"request matches a specific value. For instance, \"field:networks:" -"shared=True\" is successful if the attribute shared of the network " -"resource is set to true." -msgstr "" -"現在のリクエストに指定されたリソースの項目が指定された値と一致すれば、成功と" -"評価されます。たとえば、ネットワークリソースの shared 属性が true に設定されている場合、\"field:networks:shared=True\" が" -"成功します。" - -msgid "" -"Evaluate successfully if the user submitting the request has the specified " -"role. For instance, \"role:admin\" is successful if the user " -"submitting the request is an administrator." -msgstr "" -"リクエストを出したユーザーが指定された役割を持っていれば、成功と評価されま" -"す。たとえば、リクエストを出しているユーザーが管理者ならば、\"role:" -"admin\" が成功します。" - -msgid "" -"Even at smaller-scale testing, look for excess network packets to determine " -"whether something is going horribly wrong in inter-component communication." 
-msgstr "" -"より小規模なテストにおいてさえも、過剰なネットワークパケットを探して、コン" -"ポーネント間の通信で何かとてつもなくおかしくなっていないかどうかを判断しま" -"す。" - -msgid "" -"Ever have one of those days where all of the sudden you get the Google " -"results you were looking for? Well, that's what happened here. I was looking " -"for information on dhclient and why it dies when it can't renew its lease " -"and all of the sudden I found a bunch of OpenStack and dnsmasq discussions " -"that were identical to the problem we were seeing!" -msgstr "" -"探し続けてきた Google の検索結果が突然得られたという事態をお分かりだろうか?" -"えっと、それがここで起こったことだ。私は dhclient の情報と、何故 dhclient が" -"そのリースを更新できない場合に死ぬのかを探していて、我々が遭遇したのと同じ問" -"題についての OpenStack と dnsmasq の議論の束を突然発見した。" - -msgid "Everett Toews" -msgstr "Everett Toews" - -msgid "" -"Everett is a developer advocate at Rackspace making OpenStack and the " -"Rackspace Cloud easy to use. Sometimes developer, sometimes advocate, and " -"sometimes operator, he's built web applications, taught workshops, given " -"presentations around the world, and deployed OpenStack for production use by " -"academia and business." -msgstr "" -"Everett は Rackspace の Developer Advocate で、OpenStack や Rackspace Cloud " -"を使いやすくする仕事をしています。ある時は開発者、ある時は advocate、またある" -"時は運用者です。彼は、ウェブアプリケーションを作成し、ワークショップを行い、" -"世界中で公演を行い、教育界やビジネスでプロダクションユースとして使われる " -"OpenStack を構築しています。" - -msgid "Example" -msgstr "例" - -msgid "Example Architecture—Legacy Networking (nova)" -msgstr "アーキテクチャー例: レガシーネットワーク (nova)" - -msgid "Example Architecture—OpenStack Networking" -msgstr "アーキテクチャー例: OpenStack Networking" - -msgid "Example Component Configuration" -msgstr "コンポーネントの設定例" - -msgid "Example Image service Database Queries" -msgstr "Image service のデータベースクエリーの例" - -msgid "Example hardware" -msgstr "ハードウェアの例" - -msgid "Example of Complexity" -msgstr "複雑な例" - -msgid "Example of typical usage…" -msgstr "典型的な利用例" - -msgid "Example service restoration priority list" -msgstr "サービス復旧優先度一覧の例" - -msgid "Experimental" -msgstr "テスト用" - -msgid "Extensions" -msgstr "API 拡張" - -msgid "" -"External systems such as LDAP or Active Directory " -"require network connectivity between the cloud controller and an external " -"authentication system. Also ensure that the cloud controller has the CPU " -"power to keep up with requests." -msgstr "" -"LDAPやActive Directoryのような外部認証システムを利用す" -"る場合クラウドコントローラと外部認証システムとの間のネットワーク接続性が良好" -"である必要があります。また、クラウドコントローラがそれらのリクエストを処理す" -"るための十分のCPUパワーが必要です。" - -msgid "Extremely simple topology." -msgstr "極めて単純なトポロジー。" - -msgid "" -"Failures of hardware are common in large-scale deployments such as an " -"infrastructure cloud. Consider your processes and balance time saving " -"against availability. For example, an Object Storage cluster can easily live " -"with dead disks in it for some period of time if it has sufficient capacity. " -"Or, if your compute installation is not full, you could consider live " -"migrating instances off a host with a RAM failure until you have time to " -"deal with the problem." -msgstr "" -"クラウドインフラなどの大規模環境では、ハードウェアの故障はよくあることです。" -"作業内容を考慮し、可用性と時間の節約のバランスを取ります。たとえば、オブジェ" -"クトストレージクラスターは、十分な容量がある場合には、ある程度の期間は死んだ" -"ディスクがあっても問題なく動作します。また、(クラウド内の) コンピュートノード" -"に空きがある場合には、問題に対処する時間が取れるまで、ライブマイグレーション" -"で RAM が故障したホストから他のホストへインスタンスを移動させることも考慮する" -"とよいでしょう。" - -msgid "" -"Feature requests typically start their life in Etherpad, a collaborative " -"editing tool, which is used to take coordinating notes at a design summit " -"session specific to the feature. 
This then leads to the creation of a " -"blueprint on the Launchpad site for the particular project, which is used to " -"describe the feature more formally. Blueprints are then approved by project " -"team members, and development can begin." -msgstr "" -"機能追加リクエストは、通常 Etherpad で始まります。Etherpad は共同編集ツール" -"で、デザインサミットのその機能に関するセッションで議論を整理するのに使われま" -"す。続けて、プロジェクトの Launchpad サイトに blueprint が作成され、" -"blueprint を使ってよりきちんとした形で機能が規定されていきます。 この後、" -"blueprint はプロジェクトメンバーによって承認され、開発が始まります。" - -msgid "Feb 13, 2014" -msgstr "2014年2月13日" - -msgid "Feb 3, 2011" -msgstr "2011年2月3日" - -msgid "" -"Felix Lee of Academia Sinica Grid Computing Centre in Taiwan contributed " -"this story." -msgstr "" -"台湾の Academia Sinica Grid Computing Centre の Felix Lee さんがこの話を提供" -"してくれました。" - -msgid "Field-based rules" -msgstr "項目に基づいたルール" - -msgid "File System Backups" -msgstr "ファイルシステムバックアップ" - -msgid "File injection" -msgstr "ファイルインジェクション" - -msgid "File system" -msgstr "ファイルシステム" - -msgid "" -"File system to store files and directories, where all the data lives, " -"including the root partition that starts and runs the system" -msgstr "" -"ファイルやディレクトリを格納するファイルシステム。システムを起動、実行する " -"root パーティションなど、全データが設置される場所。" - -msgid "File-level Storage (for Live Migration)" -msgstr "ファイルレベルのストレージ (ライブマイグレーション用)" - -msgid "File-level" -msgstr "ファイルレベル" - -msgid "Final steps" -msgstr "最終手順" - -msgid "" -"Finally, Alvaro noticed something. When a packet from the outside hits the " -"cloud controller, it should not be configured with a VLAN. We verified this " -"as true. When the packet went from the cloud controller to the compute node, " -"it should only have a VLAN if it was destined for an instance. This was " -"still true. When the ping reply was sent from the instance, it should be in " -"a VLAN. True. When it came back to the cloud controller and on its way out " -"to the Internet, it should no longer have a VLAN. False. Uh oh. It looked as " -"though the VLAN part of the packet was not being removed." -msgstr "" -"遂に、Alvaro が何かを掴んだ。外部からのパケットがクラウドコントローラーを叩い" -"た際、パケットは VLAN で設定されるべきではない。我々はこれが正しいことを検証" -"した。パケットがクラウドコントローラーからコンピュートノードに行く際、パケッ" -"トはインスタンス宛の場合にのみ VLAN を持つべきである。これもまた正しかった。" -"ping のレスポンスがインスタンスから送られる際、パケットは VLAN 中にいるべきで" -"ある。OK。クラウドコントローラーからインターネットにパケットが戻る際、パ" -"ケットには VLAN を持つべきではない。NG。うぉっ。まるで パケットの VLAN 部分" -"が削除されていないように見える。" - -msgid "" -"Finally, I checked StackTach and reviewed the user's events. They had " -"created and deleted several snapshotsmost likely experimenting. Although the " -"timestamps didn't match up, my conclusion was that they launched their " -"instance and then deleted the snapshot and it was somehow removed from /var/" -"lib/nova/instances/_base. None of that made sense, but it was the best I " -"could come up with." -msgstr "" -"最後に、私は StackTack をチェックし、ユーザのイベントを見直した。彼らはいくつ" -"かのスナップショットを作ったり消したりしていた-ありそうな操作ではあるが。タ" -"イムスタンプが一致しないとはいえ、彼らがインスタンスを起動して、その後スナッ" -"プショットを削除し、それが何故か /var/lib/nova/instances/_base から削除された" -"というのが私の結論だった。大した意味は無かったが、それがその時私が得た全て" -"だった。" - -msgid "Finally, mount the disk:" -msgstr "最後に、ディスクをマウントします。" - -msgid "" -"Finally, reattach volumes using the same method described in the section " -"Volumes." 
-msgstr "" -"最後に、ボリュームのセクションで説明されてい" -"るのと同じ方法を用いて、ボリュームを再接続します。" - -msgid "" -"Find the [filter:ratelimit] section in /etc/swift/" -"proxy-server.conf, and copy in the following configuration " -"section after it:" -msgstr "" -"/etc/swift/proxy-server.conf[filter:" -"ratelimit] セクションを探し、その後ろに以下の環境定義セクションを貼り" -"付けてください。" - -msgid "" -"Find the [pipeline:main] section in /etc/swift/proxy-" -"server.conf, and add ip_whitelist after ratelimit to " -"the list like so. When you're done, save and close the file:" -msgstr "" -"/etc/swift/proxy-server.conf\n" -"[pipeline:main] セクションを探し、このように " -"ip_whitelist リストを ratelimit の後ろに追加してください。完了し" -"たら、ファイルを保存して閉じてください。" - -msgid "" -"Find the provider:segmentation_id of the network you're " -"interested in. This is the same field used for the VLAN ID in VLAN-based " -"networks:" -msgstr "" -"興味あるネットワークの provider:segmentation_id を探します。これ" -"は、VLAN ベースのネットワークにおける VLAN ID に使用されるものと同じ項目で" -"す。" - -msgid "Find the scheduler_driver config and change it like so:" -msgstr "" -"scheduler_driver 設定を見つけ、このように変更してください。" - -msgid "" -"Find the external VLAN tag of the network you're interested in. This is the " -"provider:segmentation_id as returned by the networking service:" -msgstr "" -"興味のあるネットワークの外部 VLAN タグを見つけます。これは、ネットワークサー" -"ビスにより返される provider:segmentation_id です。" - -msgid "Finding Additional Information" -msgstr "さらに情報を見つける" - -msgid "Finding a Failure in the Path" -msgstr "経路上の障害を見つける" - -msgid "First, find the UUID of the instance in question:" -msgstr "まず、インスタンスのUUIDを確認します。" - -msgid "First, unmount the disk:" -msgstr "まず、ディスクをアンマウントします。" - -msgid "" -"First, you can discover what servers belong to your OpenStack cloud by " -"running:" -msgstr "" -"まず、あなたのOpenStackクラウドに属し、稼働しているサーバーを把握することがで" -"きます。" - -msgid "" -"Fixed IP addresses are required, whereas it is possible to run OpenStack " -"without floating IPs. One of the most common use cases for floating IPs is " -"to provide public IP addresses to a private cloud, where there are a limited " -"number of IP addresses available. Another is for a public cloud user to have " -"a \"static\" IP address that can be reassigned when an instance is upgraded " -"or moved.IP addressesstaticstatic IP addresses" -msgstr "" -"固定 IP アドレスは必須ですが、フローティング IP アドレスはなくても OpenStack " -"を実行する事が出来ます。フローティング IP の最も一般的な用法の1つは、利用可" -"能なIPアドレス数が限られているプライベートクラウドにパブリックIPアドレスを提" -"供する事です。他にも、インスタンスがアップグレード又は移動した際に割り当て可" -"能な「静的」 IP アドレスを持つパブリック・クラウドユーザ用です。IPアドレス固定固定IPアドレス" - -msgid "" -"Fixed IP addresses can be private for private clouds, or public for public " -"clouds. When an instance terminates, its fixed IP is lost. It is worth " -"noting that newer users of cloud computing may find their ephemeral nature " -"frustrating.IP addressesfixedfixed IP addresses" -msgstr "" -"固定IPアドレスはプライベートクラウドではプライベートに、パブリッククラウドで" -"はパブリックにする事が出来ます。インスタンスが削除される際、そのインスタンス" -"の固定IPは割当を解除されます。クラウドコンピューティングの初心者が一時的にス" -"トレスを感じる可能性がある事に注意しましょう。IPアドレス固定固定IPアドレス" - -msgid "Fixed IPs" -msgstr "固定 IP" - -msgid "Flat" -msgstr "Flat" - -msgid "FlatDHCP" -msgstr "FlatDHCP" - -msgid "FlatDHCP Multi-host with high availability (HA)" -msgstr "高可用性(HA)のためのフラットDHCPマルチホスト構成" - -msgid "Flavor parameters" -msgstr "フレーバーのパラメーター" - -msgid "Flavors" -msgstr "フレーバー" - -msgid "" -"Flavors define a number of parameters, resulting in the user having a choice " -"of what type of virtual machine to run—just like they would have if they " -"were purchasing a physical server. " -"lists the elements that can be set. 
Note in particular extra_specs, which can be used to " -"define free-form characteristics, giving a lot of flexibility beyond just " -"the size of RAM, CPU, and Disk.base " -"image" -msgstr "" -"フレーバーは、数多くのパラメーターを定義します。これにより、ユーザーが実行す" -"る仮想マシンの種類を選択できるようになります。ちょうど、物理サーバーを購入す" -"る場合と同じようなことです。 は、設定" -"できる要素の一覧です。とくに extra_specs, に注意してください。これは、メモ" -"リー、CPU、ディスクの容量以外にもかなり柔軟に、自由形式で特徴を定義するために" -"使用できます。base image" - -msgid "Floating IPs" -msgstr "Floating IP" - -msgid "Folsom" -msgstr "Folsom" - -msgid "" -"For Compute, instance metadata is a collection of key-value pairs associated " -"with an instance. Compute reads and writes to these key-value pairs any time " -"during the instance lifetime, from inside and outside the instance, when the " -"end user uses the Compute API to do so. However, you cannot query the " -"instance-associated key-value pairs with the metadata service that is " -"compatible with the Amazon EC2 metadata service." -msgstr "" -"Compute では、インスタンスのメタデータはインスタンスと関連付けられたキーバ" -"リューペアの集まりです。エンドユーザーがこれらのキーバリューペアを読み書きす" -"るために Compute API を使用するとき、Compute がインスタンスの生存期間中にイン" -"スタンスの内外からこれらを読み書きします。しかしながら、Amazon EC2 メタデータ" -"サービスと互換性のあるメタデータサービスを用いて、インスタンスに関連付けられ" -"たキーバリューペアをクエリーできません。" - -msgid "For Object Storage, each region has a swift environment." -msgstr "オブジェクトストレージ用に、各リージョンには swift 環境があります。" - -msgid "" -"For OpenStack Networking with the neutron project, typical configurations " -"are documented with the idea that any setup you can configure with real " -"hardware you can re-create with a software-defined equivalent. Each tenant " -"can contain typical network elements such as routers, and services such as " -"DHCP." -msgstr "" -"neutronプロジェクトを使用したOpenStackネットワークのための代表的な構成は物理" -"ハードウェアを使った物、ソフトウェア定義の環境を使用して再作成できるものを" -"使ったセットアップ方法を文書化されています。それぞれのテナントはルータやDHCP" -"サービスと言った一般的なネットワーク構成要素を提供する事ができます。" - -msgid "" -"For a discussion of metric tracking, including how to extract metrics from " -"your cloud, see ." -msgstr "" -"クラウドからメトリクスを抽出する方法など、メトリクスの追跡については、 を参照してください。" - -msgid "" -"For an example of instance metadata, users can generate and register SSH " -"keys using the nova command:" -msgstr "" -"インスタンスのメタデータの場合、ユーザーが nova コマンドを" -"使用して SSH 鍵を生成および登録できます。" - -msgid "" -"For compute nodes, nova-scheduler will take care of differences " -"in sizing having to do with core count and RAM amounts; however, you should " -"consider that the user experience changes with differing CPU speeds. When " -"adding object storage nodes, a weight should be " -"specified that reflects the capability of the node." -msgstr "" -"コンピュートノードについては、nova-scheduler がコア数やメモリ量" -"のサイジングに関する違いを吸収しますが、CPUの速度が違うことによって、ユーザー" -"の使用感が変わることを考慮するべきです。オブジェクトストレージノードを追加す" -"る際には、 そのノードのcapability を反映する" -"weight を指定するべきです。" - -msgid "" -"For environments using the OpenStack Networking service (neutron), verify " -"the release version of the database. For example:" -msgstr "" -"OpenStack Networking サービス (neutron) を使用している環境では、リリースバー" -"ジョンのデータベースを検証します。例:" - -msgid "" -"For example, KVM is the most widely adopted hypervisor in the OpenStack " -"community. Besides KVM, more deployments run Xen, LXC, VMware, and Hyper-V " -"than the others listed. However, each of these are lacking some feature " -"support or the documentation on how to use them with OpenStack is out of " -"date." 
-msgstr "" -"例えば、 KVM は OpenStack コミュニティーでは最も多く採用されているハイパーバ" -"イザーです。KVM 以外では、Xen、LXC、VMware、Hyper-V を実行するデプロイメント" -"のほうが、他に リストされているハイパーバイザーよりも多くなっています。しか" -"し、いずれのハイパーバイザーも、機能のサポートがないか、OpenStack を併用する" -"方法に関する文書が古くなっています。" - -msgid "" -"For example, a cloud administrator might be able to list all instances in " -"the cloud, whereas a user can see only those in his current group. Resources " -"quotas, such as the number of cores that can be used, disk space, and so on, " -"are associated with a project." -msgstr "" -"例えば、クラウドのユーザは自分の現在のグループに属するインスタンスのみが見え" -"るのに対して、クラウドの管理者はそのクラウドのすべてのインスタンスの一覧をと" -"ることができるでしょう。利用可能なコア数、ディスク容量等のリソースのクォータ" -"はプロジェクトに対して関連づけられています。" - -msgid "" -"For example, a group of users have instances that are utilizing a large " -"amount of compute resources for very compute-intensive tasks. This is " -"driving the load up on compute nodes and affecting other users. In this " -"situation, review your user use cases. You may find that high compute " -"scenarios are common, and should then plan for proper segregation in your " -"cloud, such as host aggregation or regions." -msgstr "" -"例えば、あるユーザーのグループが、非常に計算負荷の高い作業用に大量のコン" -"ピュートリソースを使うインスタンスを持っているとします。これにより、Compute " -"ノードの負荷が高くなり、他のユーザーに影響を与えます。この状況では、ユーザー" -"のユースケースを精査する必要があります。計算負荷が高いシナリオがよくあるケー" -"スだと判明し、ホスト集約やリージョンなど、クラウドを適切に分割することを計画" -"すべき場合もあるでしょう。" - -msgid "" -"For example, if a physical node has 48 GB of RAM, the scheduler allocates " -"instances to that node until the sum of the RAM associated with the " -"instances reaches 72 GB (such as nine instances, in the case where each " -"instance has 8 GB of RAM)." -msgstr "" -"例えば、物理ノードに 48GB の RAM がある場合、そのノード上のインスタンスに割り" -"当てられた RAM の合計が 72GB に達するまでは、スケジューラーはそのノードにイン" -"スタンスを割り振ることになります (例えば、各インスタンスのメモリが 8GB であれ" -"ば、9 インスタンス割り当てられます)。" - -msgid "" -"For example, if you estimate that your cloud must support a maximum of 100 " -"projects, pick a free VLAN range that your network infrastructure is " -"currently not using (such as VLAN 200–299). You must configure OpenStack " -"with this range and also configure your switch ports to allow VLAN traffic " -"from that range." -msgstr "" -"例えば、あなたのクラウドでサポートする必要があるプロジェクト数が最大で100と見" -"込まれる場合、ネットワークインフラで現在使用されていない VLAN の範囲を選んで" -"下さい (例えば VLAN 200 - 299)。この VLAN の範囲を OpenStack に設定するととも" -"に、この範囲の VLAN トラフィックを許可するようにスイッチポートを設定しなけれ" -"ばいけません。" - -msgid "" -"For example, let's say you have a special authorized_keys file named special_authorized_keysfile that for some reason you " -"want to put on the instance instead of using the regular SSH key injection. " -"In this case, you can use the following command:" -msgstr "" -"例えば、何らかの理由で通常の SSH 鍵の注入ではなく、" -"special_authorized_keysfile という名前の特別な authorized_keys ファイルをインスタンスに置きたいと言うとします。この場合、以下のコ" -"マンドを使用できます:" - -msgid "For example, run the following command:" -msgstr "例えば、以下のコマンドを実行します。" - -msgid "" -"For example, take a deployment that has both OpenStack Compute and Object " -"Storage, with private ranges 172.22.42.0/24 and 172.22.87.0/26 available. " -"One way to segregate the space might be as follows:" -msgstr "" -"例えば、OpenStack コンピュート と オブジェクトストレージ の両方を使用し、プラ" -"イベートアドレス範囲として 172.22.42.0/24 と 172.22.87.0/26 が利用できる場面" -"を考えます。一例として、アドレス空間を以下のように分割することができます。" - -msgid "" -"For example, the EC2 API refers to instances using IDs that contain " -"hexadecimal, whereas the OpenStack API uses names and digits. 
Similarly, the " -"EC2 API tends to rely on DNS aliases for contacting virtual machines, as " -"opposed to OpenStack, which typically lists IP addresses.DNS (Domain Name Server, Service or System)DNS aliasestroubleshootingDNS issues" -msgstr "" -"例えば、EC2 API では、16 進数を含む ID を使ってインスタンスを参照するのに対し" -"て、OpenStack API では名前と数値を使います。同様に、EC2 API は仮想マシンに接" -"続するのに DNS エイリアスに頼る傾向がありますが、OpenStack では典型的には IP " -"アドレスを使います。DNS (Domain Name " -"Server, Service or System)DNS エイリアストラブルシューティングDNS 問題" - -msgid "" -"For example, to monitor disk capacity on a compute node with Nagios, add the " -"following to your Nagios configuration:" -msgstr "" -"例として、コンピュートノード上のディスク容量をNagiosを使って監視する場合、次" -"のようなNagios設定を追加します。" - -msgid "For example, to place a 5 GB quota on an account:" -msgstr "例として、アカウントに 5 GB のクォータを設定します。" - -msgid "For example, to restrict a project's image storage to 5 GB, do this:" -msgstr "" -"たとえば、プロジェクトのイメージストレージを 5GB に制限するには、以下を実行し" -"ます。" - -msgid "" -"For example, you must plan the number of IP addresses that you need for both " -"your guest instances as well as management infrastructure. Additionally, you " -"must research and discuss cloud network connectivity through proxy servers " -"and firewalls." -msgstr "" -"例えば、管理インフラだけでなくゲストインスタンス用のIPアドレスの数も計画しな" -"ければなりません。加えて、プロキシサーバーやファイアウォールを経由してのクラ" -"ウドネットワークの接続性を調査・議論する必要があります。" - -msgid "" -"For example, you usually cannot configure NICs for VLANs when PXE booting. " -"Additionally, you usually cannot PXE boot with bonded NICs. If you run into " -"this scenario, consider using a simple 1 GB switch in a private network on " -"which only your cloud communicates." -msgstr "" -"例えば、PXE ブートの際には、通常は VLAN の設定は行えません。さらに、通常は " -"bonding された NIC から PXE ブートを行うこともできません。このような状況の場" -"合、クラウド内でのみ通信できるネットワークで、シンプルな 1Gbps のスイッチを使" -"うことを検討してください。" - -msgid "For example:" -msgstr "例えば" - -msgid "For example: " -msgstr "例えば、" - -msgid "For example:" -msgstr "例:" - -msgid "" -"For further research about OpenStack deployment, investigate the supported " -"and documented preconfigured, prepackaged installers for OpenStack from " -"companies such as Canonical, Cisco, Cloudscaling, IBM, Metacloud, Mirantis, Piston, Rackspace, Red Hat, SUSE, and SwiftStack." -msgstr "" -"OpenStack の導入に関する詳細は、Canonical, Cisco, Cloudscaling, IBM, Metacloud, " -"Mirantis, Piston, Rackspace, Red Hat, SUSE, SwiftStack などの企業の、サポートされ" -"ドキュメント化された事前設定およびパッケージ化済みのインストーラーを確認して" -"ください。" - -msgid "" -"For many deployments, the cloud controller is a single node. However, to " -"have high availability, you have to take a few considerations into account, " -"which we'll cover in this chapter." -msgstr "" -"多くの構成では、クラウドコントローラーはシングルノードで構成されています。し" -"かし、高可用性のためには、この章で取り上げるいくつかの事項を考慮する必要があ" -"ります。" - -msgid "" -"For more information about updating Block Storage volumes (for example, " -"resizing or transferring), see the OpenStack End User Guide." -msgstr "" -"Block Storage ボリュームの更新に関する詳細は、OpenStack エンドユーザーガイドを参照し" -"てください。" - -msgid "" -"For our example, the cloud controller has a collection of nova-* components that represent the global state of the cloud; talks to " -"services such as authentication; maintains information about the cloud in a " -"database; communicates to all compute nodes and storage workers through a queue; and provides API access. 
Each service running " -"on a designated cloud controller may be broken out into separate nodes for " -"scalability or availability.storagestorage workersworkers" -msgstr "" -"我々の例では、クラウドコントローラはnova-* コンポーネントの集ま" -"りで、それらは認証のようなサービスとのやり取りや、データベース内の情報の管" -"理、キューを通したストレージ ワーカーコンピュートノー" -"ドとのコミュニケーション、APIアクセス、などといったクラウド全体の状態管理を受" -"け持っています。それぞれのサービスは可用性やスケーラビリティを考慮して別々の" -"ノードへ分離する事ができます。storageストレージワーカーワーカー" - -msgid "" -"For readers who need to get a specialized feature into OpenStack, this " -"chapter describes how to use DevStack to write custom middleware or a custom " -"scheduler to rebalance your resources." -msgstr "" -"OpenStack に特別な機能を追加したい読者向けに、この章は、カスタムミドルウェア" -"やカスタムスケジューラーを書いて、リソースを再配置するために、DevStack を使用" -"する方法について説明します。" - -msgid "" -"For stable operations, you want to detect failure promptly and determine " -"causes efficiently. With a distributed system, it's even more important to " -"track the right items to meet a service-level target. Learning where these " -"logs are located in the file system or API gives you an advantage. This " -"chapter also showed how to read, interpret, and manipulate information from " -"OpenStack services so that you can monitor effectively." -msgstr "" -"安定運用のために、障害を即座に検知して、原因を効率的に見つけたいと思います。" -"分散システムを用いると、目標サービスレベルを満たすために、適切な項目を追跡す" -"ることがより重要になります。ログが保存されるファイルシステムの場所、API が与" -"える利点を学びます。本章は、OpenStack のサービスを効率的に監視できるよう、そ" -"れらからの情報を読み、解釈し、操作する方法も説明しました。" - -msgid "" -"For the cloud controller, the good news is if your cloud is using the " -"FlatDHCP multi-host HA network mode, existing instances and volumes continue " -"to operate while the cloud controller is offline. For the storage proxy, " -"however, no storage traffic is possible until it is back up and running." -msgstr "" -"クラウドコントローラーの場合、良いニュースとしては、クラウドが FlatDHCP マル" -"チホスト HA ネットワークモードを使用していれば、既存のインスタンスとボリュー" -"ムはクラウドコントローラーがオフラインの間も動作を継続するという点がありま" -"す。しかしながら、ストレージプロキシの場合には、サーバーが元に戻され動作状態" -"になるまで、ストレージとの通信ができません。" - -msgid "" -"For the second path, you can write new features and plug them in using " -"changes to a configuration file. If the project where your feature would " -"need to reside uses the Python Paste framework, you can create middleware " -"for it and plug it in through configuration. There may also be specific ways " -"of customizing a project, such as creating a new scheduler driver for " -"Compute or a custom tab for the dashboard." -msgstr "" -"2 番目の方法として、新機能を書き、設定ファイルを変更して、それらをプラグイン" -"することもできます。もし、あなたの機能が必要とされるプロジェクトが Python " -"Paste フレームワークを使っているのであれば、そのための ミドルウェアを作成し、" -"環境設定を通じて組み込めばよいのです。例えば、Compute の新しいスケジューラー" -"や dashboard のカスタムタブなど、プロジェクトをカスタマイズする\n" -"を作成するといった、プロジェクトをカスタマイズする具体的な方法もあるかもしれ" -"ません。" - -msgid "" -"For the storage proxy, ensure that the Object Storage service has resumed:" -msgstr "" -"ストレージプロキシの場合、Object Storage サービスが再開していることを確認しま" -"す。" - -msgid "" -"For this example, we will use the Open vSwitch (OVS) back end. Other back-" -"end plug-ins will have very different flow paths. OVS is the most popularly " -"deployed network driver, according to the October 2015 OpenStack User " -"Survey, with 41 percent more sites using it than the Linux Bridge driver. " -"We'll describe each step in turn, with for reference." 
-msgstr "" -"この例のために、Open vSwitch (OVS) バックエンドを使用します。他のバックエンド" -"プラグインは、まったく別のフロー経路になるでしょう。OVS は、最も一般的に配備" -"されているネットワークドライバーです。2015 年 10 月の OpenStack User Survey " -"によると、2 番目の Linux Bridge ドライバーよりも、41% 以上も多くのサイトで使" -"用されています。 を参照しながら、各手" -"順を順番に説明していきます。" - -msgid "" -"Form the only ingress and egress point for instances running on top of " -"OpenStack." -msgstr "" -"OpenStack 上で実行されているインスタンス用の唯一の送受信ポイントを形成しま" -"す。" - -msgid "Freeze the system" -msgstr "システムをフリーズします" - -msgid "" -"From here, click the + icon to add users to the project. " -"Click the - to remove them." -msgstr "" -"ここから、プロジェクトにユーザーを追加するには + アイコン" -"をクリックします。削除するには - をクリックします。" - -msgid "" -"From the command line, you can get a list of security groups for the project " -"you're acting in using the nova command:" -msgstr "" -"コマンドラインからは、以下の nova コマンドを使って、現在の" -"プロジェクトのセキュリティグループのリストを取得できます:" - -msgid "" -"From the vnet NIC, the packet transfers to a bridge on the compute node, " -"such as br100." -msgstr "" -"パケットはvnet NICからコンピュートノードのブリッジ、例えばbr100" -"に転送されます。" - -msgid "" -"From these tables, you can see that a floating IP is technically never " -"directly related to an instance; it must always go through a fixed IP." -msgstr "" -"これらのテーブルから、Floating IPが技術的には直接インスタンスにひも付けられて" -"おらず、固定IP経由であることがわかります。" - -msgid "" -"From this view, you can do a number of useful things, as well as a few " -"dangerous ones." -msgstr "" -"このビューから、数多くの有用な操作、いくつかの危険な操作を実行できます。" - -msgid "" -"From this you see that the DHCP server on that network is using the " -"tape6256f7d-31 device and has an IP address of 10.0.1.100. Seeing the " -"address 169.254.169.254, you can also see that the dhcp-agent is running a " -"metadata-proxy service. Any of the commands mentioned previously in this " -"chapter can be run in the same way. It is also possible to run a shell, such " -"as bash, and have an interactive session within the " -"namespace. In the latter case, exiting the shell returns you to the top-" -"level default namespace." -msgstr "" -"ここから、そのネットワークにある DHCP サーバーが tape6256f7d-31 デバイスを使" -"用していて、IP アドレス 10.0.1.100 を持つことを確認します。アドレス " -"169.254.169.254 を確認することにより、dhcp-agent が metadata-proxy サービスを" -"実行していることも確認できます。この章の前の部分で言及したコマンドは、すべて" -"同じ方法で実行できます。bash などのシェルを実行して、名前" -"空間の中で対話式セッションを持つこともできます。後者の場合、シェルを抜けるこ" -"とにより、最上位のデフォルトの名前空間に戻ります。" - -msgid "" -"Front end web for API requests, the scheduler for choosing which compute " -"node to boot an instance on, Identity services, and the dashboard" -msgstr "" -"APIリクエスト用のフロントエンドウェブ、インスタンスをどのノードで起動するかを" -"決定するスケジューラ、Identity サービス、ダッシュボード" - -msgid "" -"Functional testing like this is not a replacement for proper unit and " -"integration testing, but it serves to get you started." -msgstr "" -"このような機能試験は、正しいユニットテストと結合テストの代わりになるものでは" -"ありませんが、作業を開始することはできます。" - -msgid "" -"Functional testing like this is not a replacement for proper unit and " -"integration testing, but it serves to get you started.testingfunctional testingfunctional " -"testing" -msgstr "" -"このような機能テストは、適切な機能テストや結合テストを置き換えるものではあり" -"ませんが、取り掛かりに良いでしょう。testingfunctional testingfunctional testing" - -msgid "Further Reading" -msgstr "参考資料" - -msgid "" -"Further days go by and we catch The Issue in action more and more. We find " -"that dhclient is not running after The Issue happens. Now we're back to " -"thinking it's a DHCP issue. Running /etc/init.d/networking restart brings everything back up and running." 
-msgstr "" -"それから何日か過ぎ、我々は「あの問題」に度々遭遇した。我々は「あの問題」の発" -"生後、dhclient が実行されていないことを発見した。今、我々は、それが DHCP の問" -"題であるという考えに立ち戻った。/etc/init.d/networking " -"restart を実行すると、全ては元通りに実行されるようになった。" - -msgid "" -"Further troubleshooting showed that libvirt was not running at all. This " -"made more sense. If libvirt wasn't running, then no instance could be " -"virtualized through KVM. Upon trying to start libvirt, it would silently die " -"immediately. The libvirt logs did not explain why." -msgstr "" -"さらなるトラブルシューティングにより、libvirt がまったく動作していないことが" -"わかりました。これは大きな手がかりです。libvirt が動作していないと、KVM によ" -"るインスタンスの仮想化ができません。libvirt を開始させようとしても、libvirt " -"は何も表示せずすぐに停止しました。libvirt のログでは理由がわかりませんでし" -"た。" - -msgid "" -"GRE-based networks are passed with patch-tun to the tunnel " -"bridge br-tun on interface patch-int. This bridge " -"also contains one port for each GRE tunnel peer, so one for each compute " -"node and network node in your network. The ports are named sequentially from " -"gre-1 onward." -msgstr "" -"GRE ベースのネットワークは、patch-tun を用いて、patch-" -"int インターフェースの br-tun トンネルブリッジに渡されま" -"す。このブリッジは、各 GRE トンネルの 1 つのポートにも含まれます。つまり、" -"ネットワーク上の各コンピュートノードとネットワークノードに対して 1 つです。" -"ポートの名前は、gre-1 から順番に増えていきます。" - -msgid "" -"GRE-based networks will be passed to the tunnel bridge br-tun, " -"which behaves just like the GRE interfaces on the compute node." -msgstr "" -"GRE ベースのネットワークは、トンネルブリッジ br-tun に転送されま" -"す。これは、コンピュートノードにおいて GRE インターフェースのように動作しま" -"す。" - -msgid "General Security Groups Configuration" -msgstr "一般的なセキュリティグループの設定" - -msgid "General list" -msgstr "一般向けメーリングリスト" - -msgid "Generate signature of image and convert it to a base64 representation:" -msgstr "イメージの署名を生成して、base64 形式に変換します。" - -msgid "Generic rules" -msgstr "一般的なルール" - -msgid "Geographical Considerations for Object Storage" -msgstr " Object Storage の地理的考慮事項" - -msgid "Getting Credentials" -msgstr "認証情報の取得方法" - -msgid "Getting Help" -msgstr "助けを得る" - -msgid "Getting Started with OpenStack" -msgstr "はじめての OpenStack" - -msgid "" -"Give ownership of the devstack directory to the stack " -"user:" -msgstr "" -"devstack ディレクトリーの所有者を stack ユーザーに変更しま" -"す。" - -msgid "Gluster" -msgstr "Gluster" - -msgid "" -"GlusterGlusterFS" -msgstr "" -"GlusterGlusterFS" - -msgid "GlusterFS" -msgstr "GlusterFS" - -msgid "" -"GlusterFS offers scalable storage. As your environment grows, you can " -"continue to add more storage nodes (instead of being restricted, for " -"example, by an expensive storage array)." -msgstr "" -"GlusterFS は拡張可能なストレージを提供します。環境の拡大に応じて、 (高額なス" -"トレージアレイなどによる制約を受けずに) ストレージノードを継続的に追加するこ" -"とが可能です。" - -msgid "Good Luck!" -msgstr "グッドラック!" - -msgid "" -"Grep for 0x<provider:segmentation_id>, 0x3 in this case, " -"in the output of ovs-ofctl dump-flows br-tun:" -msgstr "" -"この場合、ovs-ofctl dump-flows br-tun の出力で 0x<" -"provider:segmentation_id>, 0x3 を grep します。" - -msgid "" -"Grep for the provider:segmentation_id, 2113 in this case, in " -"the output of ovs-ofctl dump-flows br-int:" -msgstr "" -"この場合、ovs-ofctl dump-flows br-int の出力で " -"provider:segmentation_id, 2113 を grep します。" - -msgid "Grizzly" -msgstr "Grizzly" - -msgid "HDWMY" -msgstr "HDWMY" - -msgid "HTTP" -msgstr "HTTP" - -msgid "Handling a Complete Failure" -msgstr "完全な故障の対処" - -msgid "Hardware Considerations" -msgstr "ハードウェアの考慮事項" - -msgid "Hardware Procurement" -msgstr "ハードウェア調達" - -msgid "" -"Hardware does not have to be consistent, but it should at least have the " -"same type of CPU to support instance migration." 
msgstr ""
"ハードウェアに整合性を持たせる必要はありませんが、インスタンスのマイグレー"
"ションをサポートできるように、最低限、CPU の種類は同じにする必要があります。"

msgid ""
"Hardware for compute nodes. Typically 256 or 144 GB memory, two processors, "
"24 cores. 4–6 TB direct attached storage, typically in a RAID 5 "
"configuration."
msgstr ""
"コンピュートノードのハードウェア。通常、メモリー 256 GB または 144 GB、プロ"
"セッサー 2 個、コア 24 個、通常 RAID 5 設定のダイレクトアタッチストレージ "
"(DAS)。"

msgid ""
"Hardware for controller nodes, used for all stateless OpenStack API "
"services. About 32–64 GB memory, small attached disk, one processor, varied "
"number of cores, such as 6–12."
msgstr ""
"コントローラーノードのハードウェア。ステートレスの OpenStack API サービスすべ"
"てに使用します。メモリー約 32-64GB、接続された容量の小さいディスク、プロセッ"
"サー 1 つ、6-12 個程度のコア。"

msgid ""
"Hardware for storage nodes. Typically for these, the disk space is optimized "
"for the lowest cost per GB of storage while maintaining rack-space "
"efficiency."
msgstr ""
"ストレージノードのハードウェア。通常、ラックスペース効率を確保しつつも、ディ"
"スク容量のコストが GB ベースで最も低く最適化されています。"

msgid "Havana"
msgstr "Havana"

msgid "Havana Haunted by the Dead"
msgstr "Havana 死者の幽霊"

msgid ""
"He re-enabled the switch ports and the two compute nodes immediately came "
"back to life."
msgstr ""
"彼はスイッチポートを再度有効にしたところ、2つのコンピュートノードは即時に復"
"活した。"

msgid ""
"Heavy I/O usage on one compute node does not affect instances on other "
"compute nodes."
msgstr ""
"あるコンピュートノード上での I/O が非常に多い場合でも、他のコンピュートノード"
"のインスタンスに影響がありません。"

msgid ""
"Here are snippets of the default nova policy.json file:"
msgstr ""
"これは標準の nova policy.json ファイルの抜粋です。"

msgid "Here are some other resources:"
msgstr "これは他のいくつかのリソースです。"

msgid "Here is an example error log:"
msgstr "これはエラーログの例です。"

msgid ""
"Here is an example of a CRITICAL log message, with the corresponding TRACE "
"(Python traceback) immediately following:"
msgstr ""
"これは、直後に対応するトレース (Python のトレースバック) が続く、CRITICAL な"
"ログメッセージの例です。"

msgid ""
"Here is an example using the ratios for gathering scalability information "
"for the number of VMs expected as well as the storage needed. The following "
"numbers support (200 / 2) × 16 = 1600 VM instances and require 80 TB of "
"storage for /var/lib/nova/instances:"
msgstr ""
"ここでは、期待される仮想マシン数や必要なストレージ数などの拡張性の情報を収集"
"するために、これらの比率を使用した例を紹介しています。以下の数では、 (200 / "
"2) × 16 = 1600 仮想マシンのインスタンスをサポートし、/var/lib/nova/instances "
"のストレージ 80 TB が必要となります。"

msgid ""
"Here we can see that the request was denied because the remote IP address "
"wasn't in the set of allowed IPs."
msgstr ""
"ここで、リモートIPアドレスが、許可されたIPアドレスの中になかったため、リクエ"
"ストが拒否されていることがわかります。"

msgid ""
"Here you can see packets received on port ID 1 with the VLAN tag 2113 are "
"modified to have the internal VLAN tag 7. Digging a little deeper, you can "
"confirm that port 1 is in fact int-br-eth1:"
msgstr ""
"これで VLAN タグ 2113 を持つポート ID 1 で受信したパケットを参照できます。こ"
"れは変換され、内部 VLAN タグ 7 を持ちます。より深く掘り下げると、ポート 1 が"
"実際に int-br-eth1 であることが確認できます。"

msgid ""
"Here's a quick list of various to-do items for each hour, day, week, month, "
"and year. Please note that these tasks are neither required nor definitive "
"but helpful ideas:"
msgstr ""
"これらは、毎時間、日、週、月および年に実行する To Do 項目の簡単な一覧です。こ"
"れらのタスクは必要なものでも、絶対的なものでもありませんが、役に立つものばか"
"りです。"

msgid ""
"Here, the ID associated with the instance is faf7ded8-4a46-413b-b113-"
"f19590746ffe. If you search for this string on the cloud controller "
"in the /var/log/nova-*.log files, it appears in "
"nova-api.log and nova-scheduler.log. 
If you search for this on the compute nodes in /var/log/" -"nova-*.log, it appears in nova-network.log " -"and nova-compute.log. If no ERROR or CRITICAL messages " -"appear, the most recent log entry that reports this may provide a hint about " -"what has gone wrong." -msgstr "" -"ここで、インスタンスのUUIDは faf7ded8-4a46-413b-b113-f19590746ffeです。クラウドコントローラー上の /var/log/nova-*.logファイルをこの文字列で検索すると、nova-api.log" -"と nova-scheduler.logで見つかります。同様にコンピュート" -"ノードで検索した場合、nova-network.log と " -"nova-compute.logで見つかります。もし、ERRORやCRITICALの" -"メッセージが存在しない場合、最後のログエントリが、何が悪いかのヒントを示して" -"いるかもしれません。" - -msgid "" -"Here, the external server received the ping request and sent a ping reply. " -"On the compute node, you can see that both the ping and ping reply " -"successfully passed through. You might also see duplicate packets on the " -"compute node, as seen above, because tcpdump captured the " -"packet on both the bridge and outgoing interface." -msgstr "" -"外部サーバーはpingリクエストを受信し、pingリプライを送信しています。コン" -"ピュートノード上では、pingとpingリプライがそれぞれ成功していることがわかりま" -"す。また、見ての通り、コンピュートノード上ではパケットが重複していることもわ" -"かるでしょう。なぜならtcpdumpはブリッジと外向けインター" -"フェイスの両方でパケットをキャプチャーするからです。" - -msgid "" -"Here, two floating IPs are available. The first has been allocated to a " -"project, while the other is unallocated." -msgstr "" -"この場合は、2 つの Floating IP アドレスが利用可能です。最初の IP アドレスはプ" -"ロジェクトに確保されていますが、もう一方は確保されていません。" - -msgid "" -"Here, you see three flows related to this GRE tunnel. The first is the " -"translation from inbound packets with this tunnel ID to internal VLAN ID 1. " -"The second shows a unicast flow to output port 53 for packets destined for " -"MAC address fa:16:3e:a6:48:24. The third shows the translation from the " -"internal VLAN representation to the GRE tunnel ID flooded to all output " -"ports. For further details of the flow descriptions, see the man page for " -"ovs-ofctl. As in the previous VLAN example, numeric port " -"IDs can be matched with their named representations by examining the output " -"of ovs-ofctl show br-tun." -msgstr "" -"ここで、この GRE トンネルに関連する 3 つのフローを見つけられます。1 番目は、" -"このトンネル ID を持つ受信パケットから内部 VLAN ID 1 に変換したものです。2 番" -"目は、MAC アドレス fa:16:3e:a6:48:24 宛のパケットに対する送信ポート 53 番への" -"ユニキャストフローです。3 番目は、内部 VLAN 表現から、すべての出力ポートにあ" -"ふれ出した GRE トンネル ID に変換したものです。フローの説明の詳細は " -"ovs-ofctl のマニュアルページを参照してください。前の VLAN " -"の例にあるように、数値ポート ID は、ovs-ofctl show br-tun " -"の出力を検査することにより、それらの名前を付けた表現に対応付けられます。" - -msgid "" -"Herein lies a selection of tales from OpenStack cloud operators. Read, and " -"learn from their wisdom." -msgstr "" -"ここにあるのは、OpenStack クラウドオペレータ達の苦闘の抜粋である。これを読" -"み、彼らの叡智を学ぶが良い。" - -msgid "High Availability" -msgstr "高可用性" - -msgid "Host aggregates" -msgstr "ホスト・アグリゲート" - -msgid "" -"Host aggregates and instance-type extra specs are used to provide two " -"different resource allocation ratios. The default resource allocation ratios " -"we use are 4:1 CPU and 1.5:1 RAM. Compute-intensive workloads use instance " -"types that require non-oversubscribed hosts where cpu_ratio and ram_ratio are both set to 1.0. Since we have " -"hyper-threading enabled on our compute nodes, this provides one vCPU per CPU " -"thread, or two vCPUs per physical core." 
-msgstr "" -"ホストアグリゲートとインスタンス種別の追加スペックが使用され、2 種類の割り当" -"て倍率を提供します。私たちが使用するデフォルトのリソース割り当ては、4:1 CPU " -"と 1.5:1 RAM です。コンピュート中心の処理は、cpu_ratio と " -"ram_ratio をどちらも 1.0 に設定されている、オーバーサブス" -"クライブしていないホストを必要とするインスタンス種別を使用します。コンピュー" -"トノードでハイパースレッディングを有効化しているので、これは CPU スレッドごと" -"に 1 つの仮想 CPU、または、物理 CPU ごとに 2 つの仮想 CPU を提供します。" - -msgid "Host aggregates zone" -msgstr "ホストアグリゲートゾーン" - -msgid "Host operating system" -msgstr "ホストのオペレーティングシステム" - -msgid "Hourly" -msgstr "毎時" - -msgid "How Do I Modify an Existing Flavor?" -msgstr "どのように既存のフレーバーを変更しますか?" - -msgid "How This Book Is Organized" -msgstr "この本の構成" - -msgid "How do I manage the storage operationally?" -msgstr "ストレージの運用管理をどうするか?" - -msgid "How do you manage the storage operationally?" -msgstr "運用上ストレージをどのように管理したいのか?" - -msgid "How long does a single instance run?" -msgstr "1つのインスタンスがどのくらい実行され続けますか?" - -msgid "" -"How many nova-api services do you run at once for your cloud?" -msgstr "" -"あなたのクラウドで、何個のnova-apiサービスを同時に実行しますか?" - -msgid "How many backups to keep?" -msgstr "いくつのバックアップを持つべきか?" - -msgid "How many compute nodes will run at once?" -msgstr "同時にコンピュートノードが何ノード実行されますか?" - -msgid "How many instances will run at once?" -msgstr "同時に何インスタンス実行されますか?" - -msgid "" -"How many users will access the dashboard versus the " -"REST API directly?" -msgstr "" -"REST APIに直接アクセスに対してどのくらい多くのユーザが ダッシュ" -"ボードにアクセスしますか?" - -msgid "How many users will access the API?" -msgstr "どのくらいの数のユーザがAPIにアクセスしますか?" - -msgid "How much storage is required (flavor disk size " -msgstr "必要とされるストレージ容量(flavor disk size " - -msgid "How often should backups be tested?" -msgstr "どの程度の頻度でバックアップをテストすべきか?" - -msgid "" -"How redundant and distributed is the storage? What happens if a storage node " -"fails? To what extent can it mitigate my data-loss disaster scenarios?" -msgstr "" -"ストレージの冗長性と分散をどうするか?ストレージノード障害でどうなるか?災害" -"時、自分のデータ消失をどの程度軽減できるのか?" - -msgid "How to Contribute to This Book" -msgstr "この本の作成に参加するには" - -msgid "How to Contribute to the Documentation" -msgstr "ドキュメント作成への貢献の仕方" - -msgid "" -"However, hardware choice is important for many applications, so if that " -"applies to you, consider that there are several software distributions " -"available that you can run on servers, storage, and network products of your " -"choosing. Canonical (where OpenStack replaced Eucalyptus as the default " -"cloud option in 2011), Red Hat, and SUSE offer enterprise OpenStack " -"solutions and support. You may also want to take a look at some of the " -"specialized distributions, such as those from Rackspace, Piston, SwiftStack, " -"or Cloudscaling." -msgstr "" -"また一方、ハードウェアの選択が多くのアプリケーションにとって重要です。そのた" -"め、アプライアンスを適用する場合、自分で選択したサーバー、ストレージ、ネット" -"ワークの製品で実行できる、ソフトウェアディストリビューションがいくつかあるこ" -"とを考慮してください。Canonical (2011 年に標準のクラウド製品を Eucalyptus か" -"ら OpenStack に置き換えました)、Red Hat、SUSE は、エンタープライズレベルの " -"OpenStack ソリューションとサポートを提供しています。Rackspace、Piston、" -"SwiftStack、Cloudscaling などの専門的なディストリビューションも見たいかもしれ" -"ません。" - -msgid "" -"However, if you are more restricted in the number of physical hosts you have " -"available for creating your cloud and you want to be able to dedicate as " -"many of your hosts as possible to running instances, it makes sense to run " -"compute and storage on the same machines." -msgstr "" -"一方、クラウドの構築に使用できる物理ホスト数に制限があり、できるだけ多くのホ" -"ストをインスタンスの実行に使えるようにしたい場合は、同じマシンでコンピュート" -"ホストとストレージホストを動作させるのは理にかなっています。" - -msgid "" -"However, that is not to say that it needs to be the same size or use " -"identical hardware as the production environment. 
It is important to "
"consider the hardware and scale of the cloud that you are upgrading. The "
"following tips can help you minimise the cost:"
msgstr ""
"しかしながら、言うまでもなく、本番環境と同じ大きさや同一のハードウェアを使用"
"する必要がありません。アップグレードするクラウドのハードウェアや規模を考慮す"
"ることは重要です。以下のヒントは、コストを最小限に抑える役に立つでしょう。"

msgid ""
"However, the enticing part of OpenStack might be to build your own private "
"cloud, and there are several ways to accomplish this goal. Perhaps the "
"simplest of all is an appliance-style solution. You purchase an appliance, "
"unpack it, plug in the power and the network, and watch it transform into an "
"OpenStack cloud with minimal additional configuration."
msgstr ""
"しかしながら、OpenStack の魅力的な部分は、自分のプライベートクラウドを構築す"
"ることかもしれません。この目標を達成するいくつかの方法があります。おそらく最"
"も簡単な方法は、アプライアンス形式のソリューションです。アプライアンスを購入"
"し、それを展開し、電源とネットワークを接続します。そして、最小限の設定だけで "
"OpenStack クラウドが構築されていくことを見ていてください。"

msgid ""
"However, this guide has a different audience—those seeking flexibility from "
"the OpenStack framework by deploying do-it-yourself solutions."
msgstr ""
"しかしながら、このガイドは別の想定読者もいます。独自の自作ソリューションを導"
"入することにより、OpenStack フレームワークの柔軟性を求めている人々です。"

msgid "However, this option has several downsides:"
msgstr "しかし、この方法にはいくつかマイナス点があります。"

msgid ""
"However, you need more than the core count alone to estimate the load that "
"the API services, database servers, and queue servers are likely to "
"encounter. You must also consider the usage patterns of your cloud."
msgstr ""
"しかし、APIサービスやデータベースサーバー、MQサーバーがおそらく遭遇する負荷を"
"見積もるためには、コア数以外の検討も行う必要があります。クラウドの利用パター"
"ンも考慮しなければなりません。"

msgid "However, you use them for different reasons."
msgstr ""
"しかし、アベイラビリティゾーンとホストアグリゲートは別の用途で使用します。"

msgid ""
"Hyper-Threading is Intel's proprietary simultaneous multithreading "
"implementation used to improve parallelization on their CPUs. You might "
"consider enabling Hyper-Threading to improve the performance of "
"multithreaded applications."
msgstr ""
"ハイパースレッディングは、Intel 専用の同時マルチスレッド実装で、CPU の並列化"
"向上に使用されます。マルチスレッドアプリケーションのパフォーマンスを改善する"
"には、ハイパースレッディングを有効にすることを検討してください。"

msgid "Hyper-V"
msgstr "Hyper-V"

msgid "Hypervisor"
msgstr "ハイパーバイザー"

msgid ""
"I checked Glance and noticed that this image was a snapshot that the user "
"created. At least that was good news: this user would have been the only "
"user affected."
msgstr ""
"私は Glance をチェックし、問題のイメージがそのユーザの作成したスナップショッ"
"トであることに注目した。少なくとも、それはグッドニュースだった。このユーザが"
"影響を受けた唯一のユーザだった。"

msgid ""
"I just upgraded OpenStack from Grizzly to Havana 2013.2-2 using the RDO "
"repository and everything was running pretty well, except the EC2 API."
msgstr ""
"RDO リポジトリーを使用して Grizzly から Havana 2013.2-2 に OpenStack をアップ"
"グレードしたところでした。そして、EC2 API を除いて、すべて非常に良く動作して"
"いました。"

msgid ""
"I logged into the cloud controller and was able to both ping and "
"SSH into the problematic compute node which seemed very odd. Usually if I "
"receive this type of alert, the compute node has totally locked up and would "
"be inaccessible."
msgstr ""
"実に奇妙なことだが、私はクラウドコントローラーにログインし、問題のコンピュー"
"トノードに ping と SSH の両方を実行できた。通常、この種の警告を受"
"け取ると、コンピュートノードは完全にロックしていてアクセス不可になる。"

msgid ""
"I looked at the status of both NICs in the bonded pair and saw that neither "
"was able to communicate with the switch port. Seeing as how each NIC in the "
"bond is connected to a separate switch, I thought that the chance of a "
"switch port dying on each switch at the same time was quite improbable. 
I "
"concluded that the 10gb dual port NIC had died and needed to be replaced. I "
"created a ticket for the hardware support department at the data center "
"where the node was hosted. I felt lucky that this was a new node and no one "
"else was hosted on it yet."
msgstr ""
"私は bonding ペアの両方の NIC の状態を確認し、両方ともスイッチポートへの通信"
"ができないことを知った。bond 中の各 NIC が異なるスイッチに接続されていること"
"を知り、私は、各スイッチのスイッチポートが同時に死ぬ可能性はまずないと思っ"
"た。私は 10Gb デュアルポート NIC が死んで、交換が必要だと結論づけた。私は、そ"
"のノードがホスティングされているデータセンターのハードウェアサポート部門に宛"
"てたチケットを作成した。私は、それが新しいノードで、他のインスタンスがまだそ"
"のノード上でホスティングされていないことを幸運に思った。"

msgid ""
"I noticed that the API would suffer from a heavy load and respond slowly to "
"particular EC2 requests such as RunInstances."
msgstr ""
"この API は、RunInstances などの特定の EC2 リクエストに対"
"して、高負荷になり、応答が遅くなることに気がつきました。"

msgid ""
"I reviewed the nova database and saw the instance's entry in "
"the nova.instances table. The image that the instance was using "
"matched what virsh was reporting, so no inconsistency there."
msgstr ""
"私は nova データベースを見直し、nova.instances テー"
"ブル中の当該インスタンスのレコードを見た。インスタンスが使用しているイメージ"
"は virsh が報告したものと一致した。よって、ここでは矛盾は発見されなかった。"

msgid ""
"I was on-site in Kelowna, British Columbia, Canada setting up a new "
"OpenStack cloud. The deployment was fully automated: Cobbler deployed the OS "
"on the bare metal, bootstrapped it, and Puppet took over from there. I had "
"run the deployment scenario so many times in practice and took for granted "
"that everything was working."
msgstr ""
"私は、新しい OpenStack クラウドのセットアップをするため、カナダのブリティッ"
"シュコロンビア州ケロウナの現地にいた。デプロイ作業は完全に自動化されていた。"
"Cobbler が物理マシンに OS をデプロイし、それを起動し、その後は Puppet が引き"
"継いだ。私は練習で幾度もデプロイシナリオを実行してきたし、もちろん全て正常で"
"あった。"

msgid ""
"I was totally confused at this point, so I texted our network admin to see "
"if he was available to help. He logged in to both switches and immediately "
"saw the problem: the switches detected spanning tree packets coming from the "
"two compute nodes and immediately shut the ports down to prevent spanning "
"tree loops: "
msgstr ""
"私はこの時点で完全に混乱した。よって、私はネットワーク管理者に対して、私を助"
"けられるか聞いてみるためメールした。彼は両方のスイッチにログインし、すぐに問"
"題を発見した。そのスイッチは2つのコンピュートノードから来たスパニングツリー"
"パケットを検出し、スパニングツリーループを回避するため、即時にそれらのポート"
"をダウンさせたのだ。"

msgid "IBM (Storwize family/SVC, XIV)"
msgstr "IBM (Storwize family/SVC, XIV)"

msgid "ID"
msgstr "ID"

msgid "IP Address Planning"
msgstr "IP アドレス計画"

msgid ""
"IP addresses for public-facing interfaces on the controller nodes (which end "
"users will access the OpenStack services)"
msgstr ""
"コントローラーノード上のパブリックインターフェースの IP アドレス (エンドユー"
"ザーが OpenStack サービスにアクセスするのに使用)"

msgid "Icehouse"
msgstr "Icehouse"

msgid "Identity"
msgstr "認証"

msgid "Identity (keystone) driver"
msgstr "Identity (keystone) ドライバー"

msgid "Identity driver"
msgstr "Identity ドライバー"

msgid ""
"If nova show does not sufficiently explain the failure, "
"searching for the instance UUID in the nova-compute.log on the "
"compute node it was scheduled on or the nova-scheduler.log on "
"your scheduler hosts is a good place to start looking for lower-level "
"problems."
msgstr ""
"nova show で失敗の理由が十分にわからない場合、そのインスタ"
"ンスがスケジューリングされたコンピュートノードの nova-compute.log やスケジューラーホストの nova-scheduler.log を、インスタン"
"スの UUID で検索するのが、より低レベルの問題を調査する良い出発点となります。"

msgid ""
"If OpenStack is not set up in the right way, it is simple to have scenarios "
"in which users are unable to contact their instances due to having only an "
"incorrect DNS alias. 
Despite this, EC2 compatibility can assist users "
"migrating to your cloud."
msgstr ""
"もし OpenStack が正しく設定されていない場合、不正な DNS エイリアスしかないた"
"めに、ユーザーがインスタンスへアクセスできないという事態が簡単に生じます。そ"
"れにもかかわらず、EC2 互換性は、ユーザーがあなたのクラウドへ移行するのを支援"
"します。"

msgid ""
"If a compute node fails and won't be fixed for a few hours (or at all), you "
"can relaunch all instances that are hosted on the failed node if you use "
"shared storage for /var/lib/nova/instances."
msgstr ""
"コンピュートノードが故障し、2〜3時間もしくはそれ以上たっても復旧できないと見"
"込まれる場合、/var/lib/nova/instances に共有ストレージを使用して"
"いれば、故障したノードで動作していたインスタンスをすべて再スタートすることが"
"できます。"

msgid "If a compute node fails, instances are usually easily recoverable."
msgstr ""
"コンピュートホストが故障した場合、通常インスタンスは簡単に復旧できます。"

msgid "If a compute node fails, the instances running on that node are lost."
msgstr ""
"コンピュートノードが故障すると、そのノードで実行中のインスタンスが失われてし"
"まいます。"

msgid ""
"If a dedicated remote access controller chip is included in servers, often "
"these are on a separate network."
msgstr ""
"専用のリモートアクセスコントローラーチップがサーバーに搭載されている場合、多"
"くの場合、これらは独立したネットワーク上に置かれます。"

msgid ""
"If a hard drive fails in an Object Storage node, replacing it is relatively "
"easy. This assumes that your Object Storage environment is configured "
"correctly, where the data that is stored on the failed drive is also "
"replicated to other drives in the Object Storage environment."
msgstr ""
"Object Storage ノードのハードディスクが故障した場合、その交換は比較的簡単で"
"す。Object Storage 環境が正しく設定され、故障したディスクに保存されているデー"
"タが Object Storage 環境内の他のディスクにも複製されていることを前提にしてい"
"ます。"

msgid ""
"If a storage node requires a reboot, simply reboot it. Requests for data "
"hosted on that node are redirected to other copies while the server is "
"rebooting."
msgstr ""
"ストレージノードを再起動する必要がある場合、単に再起動します。そのノードに配"
"置されているデータへのリクエストは、そのサーバーが再起動している間、別のコ"
"ピーにリダイレクトされます。"

msgid ""
"If a user tries to create a volume and the volume immediately goes into an "
"error state, the best way to troubleshoot is to grep the cinder log files "
"for the volume's UUID. First try the log files on the cloud controller, and "
"then try the storage node where the volume was attempted to be created:"
msgstr ""
"ユーザーがボリュームを作成しようとし、すぐにエラー状態になれば、トラブル解決"
"のために最適な方法は cinder ログファイルをボリュームの UUID で grep すること"
"です。まずクラウドコントローラーにあるログファイルを調べます。次に、ボリュー"
"ムを作成しようとしたストレージノードのログファイルを調べます:"

msgid "If additional storage is required, this option does not scale."
msgstr "追加ストレージが必要な場合、このオプションはスケールしません。"

msgid ""
"If an instance does not boot, meaning virsh list never shows "
"the instance as even attempting to boot, do the following on the compute "
"node:"
msgstr ""
"インスタンスがブートしなければ、つまりブートしようとしても virsh list がインスタンスを表示しなければ、コンピュートノードにおいて以下のとおり"
"実行します。"

msgid ""
"If an instance fails to start and immediately moves to an error state, there "
"are a few different ways to track down what has gone wrong. 
Some of these "
"can be done with normal user access, while others require access to your log "
"server or compute nodes."
msgstr ""
"インスタンスの開始に失敗し、すぐにエラー状態になるならば、何が問題なのかを追"
"跡するために、いくつかの異なる方法があります。いくつかの方法は通常のユーザー"
"アクセスで実行でき、他の方法ではログサーバーやコンピュートノードへのアクセス"
"が必要です。"

msgid ""
"If any of these links is missing or incorrect, it suggests a configuration "
"error. Bridges can be added with ovs-vsctl add-br, and "
"ports can be added to bridges with ovs-vsctl add-port. "
"While running these by hand can be useful for debugging, it is imperative "
"that manual changes that you intend to keep be reflected back into your "
"configuration files."
msgstr ""
"これらのリンクのどれかが存在しない、または誤っている場合、設定エラーを暗示し"
"ています。ブリッジは ovs-vsctl add-br で追加できます。ポー"
"トは ovs-vsctl add-port でブリッジに追加できます。これらを"
"手動で実行することはデバッグに有用ですが、維持することを意図した手動の変更が"
"設定ファイルの中に反映されなければいけません。"

msgid ""
"If ephemeral or block storage is external to the compute node, this network "
"is used."
msgstr ""
"一時ディスクまたはブロックストレージがコンピュートノード以外にある場合、この"
"ネットワークが使用されます。"

msgid ""
"If many users will make multiple requests, make sure that the CPU load for "
"the cloud controller can handle it."
msgstr ""
"もし多数のユーザが複数のリクエストを発行するのであれば、クラウドコントロー"
"ラーがそれらを扱えるよう、CPU負荷を確認してください。"

msgid ""
"If possible, we highly recommend that you dump your production database "
"tables and test the upgrade in your development environment using this data. "
"Several MySQL bugs have been uncovered during database migrations because of "
"slight table differences between a fresh installation and tables that "
"migrated from one version to another. This will have impact on large real "
"datasets, which you do not want to encounter during a production outage."
msgstr ""
"可能ならば、本番環境のデータベースのテーブルをダンプして、このデータを使用し"
"て開発環境においてアップグレードをテストすることを非常に強く推奨します。新規"
"インストールのテーブルと、あるバージョンから別のバージョンへ移行したテーブル"
"とのわずかな違いにより、データベースの移行中に MySQL のバグがいくつか発見され"
"ています。これは、本番環境の停止中には遭遇したくない、大規模な実データセット"
"に影響を与えます。"

msgid ""
"If restarting the dnsmasq process doesn't fix the issue, you might need to "
"use tcpdump to look at the packets to trace where the failure "
"is. The DNS server listens on UDP port 53. You should see the DNS request on "
"the bridge (such as, br100) of your compute node. Let's say you start "
"listening with tcpdump on the compute node:"
msgstr ""
"dnsmasqの再起動でも問題が解決しないときは、tcpdumpで問題がある場"
"所のパケットトレースを行う必要があるでしょう。DNSサーバーはUDPポート53番で"
"リッスンします。あなたのコンピュートノードのブリッジ(br100など)上でDNSリクエ"
"ストをチェックしてください。コンピュートノード上にて、tcpdumpで"
"リッスンを開始すると、"

msgid ""
"If the affected instances also had attached volumes, first generate a list "
"of instance and volume UUIDs:"
msgstr ""
"影響するインスタンスもボリュームを接続していた場合、まずインスタンスとボ"
"リュームの UUID の一覧を生成します。"

msgid ""
"If the bug contains the solution, or a patch, set the bug status to "
"Triaged."
msgstr ""
"バグ報告に解決方法やパッチが書かれている場合、そのバグの状態は "
"Triaged にセットされます。"

msgid ""
"If the error indicates that the problem is with another component, switch to "
"tailing that component's log file. For example, if nova cannot access "
"glance, look at the glance-api log:"
msgstr ""
"エラーから問題が他のコンポーネントにあることが分かる場合には、そのコンポーネ"
"ントのログファイルに表示を切り替えます。nova が glance にアクセスできなけれ"
"ば、glance-api ログを確認します:"

msgid ""
"If the instance fails to resolve the hostname, you have a DNS problem. 
For " -"example:" -msgstr "" -"もしインスタンスがホスト名の解決に失敗するのであれば、DNSに問題があります。例" -"えば、" - -msgid "" -"If the issue is extremely sensitive, send an encrypted email to one of the " -"team's members. Find their GPG keys at OpenStack Security." -msgstr "" -"問題が極めて慎重に扱うべき情報の場合は、脆弱性管理チームのメンバーの誰かに暗" -"号化したメールを送って下さい。メンバーの GPG 鍵は OpenStack Security で" -"入手できます。" - -msgid "" -"If the networking performance of the basic layout is not enough, you can " -"move to , which provides 2 10G network links to " -"all instances in the environment as well as providing more network bandwidth " -"to the storage layer. bandwidth obtaining maximum performance" -msgstr "" -"基本レイアウトのネットワークパフォーマンスが十分でない場合には、 に移行することができます。このレイアウトでは、環境内の全" -"インスタンスに 10G のネットワークリンクを 2 つ提供するだけでなく、ストレージ" -"層にさらなるネットワーク帯域幅を提供し、パフォーマンスを最大化します。" - -msgid "" -"If the package manager prompts you to update configuration files, reject the " -"changes. The package manager appends a suffix to newer versions of " -"configuration files. Consider reviewing and adopting content from these " -"files." -msgstr "" -"パッケージマネージャーがさまざまな設定ファイルの更新をプロンプトで確認する場" -"合、変更を拒否します。パッケージマネージャーは、新しいバージョンの設定ファイ" -"ルにサフィックスを追加します。これらのファイルの内容を確認して適用することを" -"検討してください。" - -msgid "" -"If the problem does not seem to be related to dnsmasq itself, at this point " -"use tcpdump on the interfaces to determine where the packets " -"are getting lost." -msgstr "" -"もし問題がdnsmasqと関係しないようであれば、tcpdumpを使ってパケッ" -"トロスがないか確認してください。" - -msgid "" -"If there is not enough information in the existing logs, you may need to add " -"your own custom logging statements to the nova-* services." -"customizationcustom log statementslogging/monitoringadding " -"custom log statements" -msgstr "" -"十分な情報が既存のログにない場合、独自のロギング宣言を nova-* " -"サービスに追加する必要があるかもしれません。customizationcustom log statementslogging/" -"monitoringadding custom log statements" - -msgid "" -"If there's a suspicious-looking dnsmasq log message, take a look at the " -"command-line arguments to the dnsmasq processes to see if they look correct:" -msgstr "" -"もしdnsmasqのログメッセージで疑わしいものがあれば、コマンドラインにてdnsmasq" -"が正しく動いているか確認してください。" - -msgid "" -"If this is the first time you are deploying a cloud infrastructure in your " -"organization, after reading this section, your first conversations should be " -"with your networking team. Network usage in a running cloud is vastly " -"different from traditional network deployments and has the potential to be " -"disruptive at both a connectivity and a policy level.cloud computingvs. traditional " -"deployments" -msgstr "" -"これがあなたの組織で初めてのクラウド基盤構築であれば、この章を読んだ後、最初" -"にあなたの(組織の)ネットワーク管理チームと相談すべきです。クラウド運用にお" -"けるネットワークの使用は伝統的なネットワーク構築とはかなり異なり、接続性とポ" -"リシーレベルの両面で破壊的な結果をもたらす可能性があります。クラウドコンピューティングVS. 伝統" -"的実装" - -msgid "" -"If you access or view the user's content and data, get approval first!" -"security issuesfailed instance data inspection" -msgstr "" -"ユーザーのコンテンツやデータを参照したい場合、まず許可を得ましょう。" -"security issuesfailed instance data inspection" - -msgid "" -"If you are able to use SSH to log into an instance, but it takes a very long " -"time (on the order of a minute) to get a prompt, then you might have a DNS " -"issue. The reason a DNS issue can cause this problem is that the SSH server " -"does a reverse DNS lookup on the IP address that you are connecting from. If " -"DNS lookup isn't working on your instances, then you must wait for the DNS " -"reverse lookup timeout to occur for the SSH login process to complete." 
-"DNS (Domain Name Server, Service or " -"System)debuggingtroubleshootingDNS issues" -msgstr "" -"あなたが SSH を使用してインスタンスにログインできるけれども、プロンプトが表示" -"されるまで長い時間(約1分)を要する場合、DNS に問題があるかもしれません。SSH " -"サーバーが接続元 IP アドレスの DNS 逆引きすること、それがこの問題の原因です。" -"もしあなたのインスタンスで DNS が正しく引けない場合、SSH のログインプロセスが" -"完了するには、DNS の逆引きがタイムアウトするまで待たなければいけません。" -"DNS (Domain Name Server, Service or " -"System)debuggingtroubleshootingDNS issues" - -msgid "" -"If you are logged in to an instance and ping an external host—for example, " -"Google—the ping packet takes the route shown in .ping packetstroubleshootingnova-network traffic" -msgstr "" -"インスタンスにログインして、外部ホスト (例えば Google) に ping する場合、" -"ping パケットはに示されたルートを通ります。" -"ping packetstroubleshootingnova-network traffic" - -msgid "" -"If you are not using shared storage, you can use the --block-migrate option:" -msgstr "" -"共有ストレージを使用していない場合、--block-migrate オプションを" -"使用できます。" - -msgid "" -"If you are using nova-network and multi-host networking " -"in your cloud environment, nova-compute still requires " -"direct access to the database.multi-" -"host networking" -msgstr "" -"あなたのクラウド環境で、nova-networkとマルチホストネット" -"ワークを使用している場合、 nova-computeは現在もデータベー" -"スへの直接アクセスを必要とします。マル" -"チホストネットワーク" - -msgid "" -"If you are using cinder, run the following command to see a similar listing:" -msgstr "" -"cinder を使用している場合は、次のコマンドを実行して同様の一覧を表示します。" - -msgid "" -"If you are using the nova client, specify --flavor 3 for the nova boot command to get adequate memory and disk " -"sizes." -msgstr "" -"nova コマンドを使っているのであれば、適切なメモリ量とディスクサ" -"イズを得るために nova boot コマンドに --flavor 3 を" -"指定してください。" - -msgid "" -"If you can't ping the IP address of the compute node, the problem is between " -"the instance and the compute node. This includes the bridge connecting the " -"compute node's main NIC with the vnet NIC of the instance." -msgstr "" -"もしコンピュートノードのIPアドレスにpingできないのであれば、問題はインスタン" -"スとコンピュートノード間にあります。これはコンピュートノードの物理NICとインス" -"タンス vnet NIC間のブリッジ接続を含みます。" - -msgid "" -"If you can't, try pinging the IP address of the compute node where the " -"instance is hosted. If you can ping this IP, then the problem is somewhere " -"between the compute node and that compute node's gateway." -msgstr "" -"もしそれができないのであれば、インスタンスがホストされているコンピュートノー" -"ドのIPアドレスへpingを試行してください。もしそのIPにpingできるのであれば、そ" -"のコンピュートノードと、ゲートウェイ間のどこかに問題があります。" - -msgid "" -"If you cannot have any data loss at all, you should also focus on a highly " -"available deployment. The OpenStack High Availability Guide offers suggestions for elimination of a single point of failure " -"that could cause system downtime. While it is not a completely prescriptive " -"document, it offers methods and techniques for avoiding downtime and data " -"loss.datapreventing loss of" -msgstr "" -"すべてのデータをまったく失いたくない場合、高可用性を持たせた導入に注力すべき" -"です。OpenStack High Availability Guide は、システム停止に" -"つながる可能性がある、単一障害点の削減に向けた提案があります。完全に規定され" -"たドキュメントではありませんが、停止時間やデータ損失を避けるための方法や技術" -"を提供しています。datapreventing loss of" - -msgid "" -"If you decide to continue this step manually, don't forget to change " -"neutron to quantum where applicable." -msgstr "" -"これらの手順を手動で続けることを決定した場合、適切なところにある " -"neutronquantum に変更することを忘れないでくださ" -"い。" - -msgid "" -"If you deploy only the OpenStack Compute Service (nova), your users do not " -"have access to any form of persistent storage by default. The disks " -"associated with VMs are \"ephemeral,\" meaning that (from the user's point " -"of view) they effectively disappear when a virtual machine is terminated." 
msgstr ""
"OpenStack Compute Service (nova) のみをデプロイする場合、デフォルトでは、どの"
"ような形式の永続ストレージにもアクセスできません。仮想マシンに関連付けされて"
"いるディスクは一時的です。つまり、(ユーザー視点から) 仮想マシンが終了されると"
"このディスクは事実上消えてしまいます。"

msgid ""
"If you do not follow steps 4 through 6, OpenStack Compute cannot manage the "
"instance any longer. It fails to respond to any command issued by OpenStack "
"Compute, and it is marked as shut down."
msgstr ""
"手順 4-6 を省略すると、OpenStack Compute がインスタンスを管理できなくなりま"
"す。OpenStack Compute により発行されるすべてのコマンドに対する応答が失敗し、"
"シャットダウンしているように見えます。"

msgid ""
"If you do not need a share any more, you can delete it using a command "
"like: "
msgstr ""
"共有が必要なくなった場合、次のように コマンドを使用して削除"
"できます。"

msgid ""
"If you do not see the DHCPDISCOVER, a problem exists with "
"the packet getting from the instance to the machine running dnsmasq. If you "
"see all of the preceding output and your instances are still not able to "
"obtain IP addresses, then the packet is able to get from the instance to the "
"host running dnsmasq, but it is not able to make the return trip."
msgstr ""
"もしDHCPDISCOVERが見つからなければ、dnsmasqが動いているマ"
"シンがインスタンスからパケットを受け取れない何らかの問題があります。もし上記"
"の出力が全て確認でき、かついまだにIPアドレスを取得できないのであれば、パケッ"
"トはインスタンスからdnsmasq稼働マシンに到達していますが、その復路に問題があり"
"ます。"

msgid ""
"If you find a bug and can't fix it or aren't sure it's really a doc bug, log "
"a bug at OpenStack Manuals. Tag the bug under Extra "
"options with the ops-guide tag to indicate that the bug "
"is in this guide. You can assign the bug to yourself if you know how to fix "
"it. Also, a member of the OpenStack doc-core team can triage the doc bug."
msgstr ""
"バグを見つけたが、どのように直せばよいか分からない場合や本当にドキュメントの"
"バグか自信が持てない場合は、OpenStack Manuals にバグを登録して、バグの "
"Extra オプションで ops-guide タグを付"
"けて下さい。 ops-guide タグは、そのバグがこのガイドに関す"
"るものであることを示します。どのように直せばよいか分かる場合には、そのバグの"
"担当者を自分に割り当てることもできます。また、OpenStack doc-core チームのメン"
"バーがドキュメントバグを分類することもできます。"

msgid ""
"If you find that you have reached or are reaching the capacity limit of your "
"computing resources, you should plan to add additional compute nodes. Adding "
"more nodes is quite easy. The process for adding compute nodes is the same "
"as when the initial compute nodes were deployed to your cloud: use an "
"automated deployment system to bootstrap the bare-metal server with the "
"operating system and then have a configuration-management system install and "
"configure OpenStack Compute. Once the Compute service has been installed and "
"configured in the same way as the other compute nodes, it automatically "
"attaches itself to the cloud. The cloud controller notices the new node(s) "
"and begins scheduling instances to launch there."
msgstr ""
"コンピューティングリソースのキャパシティ限界に達した、または達しそうとわかれ"
"ば、さらなるコンピュートノードの追加を計画すべきです。さらなるコンピュート"
"ノードを追加することは簡単です。ノードを追加する手順は、最初にコンピュート"
"ノードをクラウドに導入したときと同じです。自動配備システムを使ってベアメタル"
"サーバーにオペレーティングシステムのインストールと起動を行い、次に構成管理シ"
"ステムにより OpenStack Compute サービスのインストールと設定を行います。他のコ"
"ンピュートノードと同じ方法で Compute サービスのインストールと設定が終わると、"
"自動的にクラウドに接続されます。クラウドコントローラーが新しいノードを検知"
"し、そこにインスタンスを起動するようスケジュールし始めます。"

msgid ""
"If you have an OpenStack Object Storage service, we recommend using this as "
"a scalable place to store your images. You can also use a file system with "
"sufficient performance or Amazon S3—unless you do not need the ability to "
"upload new images through OpenStack."
-msgstr "" -"OpenStack Storageサービスがある場合は、イメージを保存するためにスケーラブルな" -"場所を利用する事を推奨します。また、OpenStackをとおして新しいイメージをアップ" -"ロードする必要がない場合を除いて、十分な性能を備えたファイルシステムかAmazon " -"S3を使用する事もできます。" - -msgid "" -"If you have previously prepared block storage with a bootable file system " -"image, it is even possible to boot from persistent block storage. The " -"following command boots an image from the specified volume. It is similar to " -"the previous command, but the image is omitted and the volume is now " -"attached as /dev/vda:" -msgstr "" -"ブート可能なファイルシステムイメージでブロックストレージを事前に準備している" -"と、永続ブロックストレージからブートすることもできます。以下のコマンドは、指" -"定したボリュームからイメージを起動します。前のコマンドに似ていますが、イメー" -"ジが省略され、ボリュームが /dev/vda として接続されます。" - -msgid "" -"If you modify the configuration, it reverts the next time you restart " -"nova-network or neutron-server. You " -"must use OpenStack to manage iptables." -msgstr "" -"もしiptablesの構成を変更した場合、次のnova-networkや" -"neutron-serverの再起動時に前の状態に戻ります。iptablesの管" -"理にはOpenStackを使ってください。" - -msgid "" -"If you need even newer versions of the clients, pip can install directly " -"from the upstream git repository using the -e flag. You must " -"specify a name for the Python egg that is installed. For example:" -msgstr "" -"もし新しいバージョンのクライアントが必要な場合、-eフラグを指定す" -"ることで、アップストリームのgitリポジトリから直接導入できます。その際は、" -"Python egg名を指定しなければいけません。例えば、" - -msgid "" -"If you need to reboot a compute node due to planned maintenance (such as a " -"software or hardware upgrade), first ensure that all hosted instances have " -"been moved off the node. If your cloud is utilizing shared storage, use the " -"nova live-migration command. First, get a list of instances " -"that need to be moved:compute nodesmaintenancemaintenance/debuggingcompute node " -"planned maintenance" -msgstr "" -"(ソフトウェアやハードウェアのアップグレードのように) 計画されたメンテナンスの" -"ために、コンピュートノードを再起動する必要があれば、まずホストしている全イン" -"スタンスがノード外に移動していることを確認します。クラウドが共有ストレージを" -"利用していれば、nova live-migration コマンドを使用します。初め" -"に、移動させる必要があるインスタンスの一覧を取得します。compute nodesmaintenancemaintenance/" -"debuggingcompute node planned maintenance" - -msgid "" -"If you need to shut down a storage node for an extended period of time (one " -"or more days), consider removing the node from the storage ring. For example:" -"maintenance/debuggingstorage node shut down" -msgstr "" -"ストレージノードを少し長い間 ( 1 日以上) シャットダウンする必要があれば、ノー" -"ドをストレージリングから削除することを検討します。例:maintenance/debuggingstorage node " -"shut down" - -msgid "If you only want to backup a single database, you can instead run:" -msgstr "" -"もし、単一のデータベースのみバックアップする場合は次のように実行します。" - -msgid "" -"If you run FlatDHCPManager, one bridge is on the compute node. If you run " -"VlanManager, one bridge exists for each VLAN." -msgstr "" -"もしFlatDHCPManagerを使っているのであれば、ブリッジはコンピュートノード上に一" -"つです。VlanManagerであれば、VLANごとにブリッジが存在します。" - -msgid "" -"If you support the EC2 API on your cloud, you should also install the " -"euca2ools package or some other EC2 API tool so that you can get the same " -"view your users have. Using EC2 API-based tools is mostly out of the scope " -"of this guide, though we discuss getting credentials for use with it." 
msgstr ""
"クラウド上で EC2 API をサポートする場合には、ユーザーと同じビューを表示できる"
"ように、euca2ools パッケージまたはその他の EC2 API ツールもインストールする必"
"要があります。EC2 API ベースのツールの使用に関する内容の大半は本ガイドの対象"
"範囲外となりますが、このツールで使用する認証情報の取得方法についての説明は記"
"載しています。"

msgid ""
"If you use a configuration-management system, such as Puppet, that ensures "
"the nova-compute service is always running, you can temporarily "
"move the init files:"
msgstr ""
"Puppet などの構成管理システムを使って、nova-compute サービスが確"
"実に実行されているようにしている場合、init ファイルを一時"
"的に移動します。"

msgid ""
"If you use an external storage plug-in or shared file system with your "
"cloud, you can test whether it works by creating a second share or endpoint. "
"This allows you to test the system before entrusting the new version on to "
"your storage."
msgstr ""
"クラウドに外部ストレージプラグインや共有ファイルシステムを使用している場合、"
"2 つ目の共有やエンドポイントを作成することにより、正常に動作するかどうかをテ"
"ストできます。これにより、ストレージに新しいバージョンを信頼させる前に、シス"
"テムをテストできるようになります。"

msgid ""
"If you use libvirt version 1.2.2, you may experience "
"intermittent problems with live snapshot creation."
msgstr ""
"libvirt バージョン 1.2.2 を使用している場合、ライブスナッ"
"プショットの作成において、断続的な問題を経験するかもしれません。"

msgid ""
"If you use separate compute and storage hosts, you can treat your compute "
"hosts as \"stateless.\" As long as you don't have any instances currently "
"running on a compute host, you can take it offline or wipe it completely "
"without having any effect on the rest of your cloud. This simplifies "
"maintenance for the compute hosts."
msgstr ""
"コンピュートホストとストレージホストを分離して使用すると、コンピュートホスト"
"を「ステートレス」として処理できます。コンピュートホストで実行中のインスタン"
"スがなければ、クラウドの他のアイテムに影響を与えることなく、ノードをオフライ"
"ンにしたり、完全に消去したりすることができます。これにより、コンピュートホス"
"トのメンテナンスが簡素化されます。"

msgid ""
"If you use the GlusterFS native client, no virtual IP is needed, since the "
"client knows all about nodes after initial connection and automatically "
"routes around failures on the client side."
msgstr ""
"GlusterFS ネイティブクライアントを使用する場合には、そのクライアントは初回接"
"続後にノードに関する全情報を認識し、クライアント側の障害を迂回するように自動"
"的にルーティングするため、仮想 IP は必要ありません。"

msgid ""
"If you use the NFS or SMB adaptor, you will need a virtual IP on which to "
"mount the GlusterFS volumes."
msgstr ""
"NFS または SMB のアダプターを使用する場合には、GlusterFS ボリュームをマウント"
"する仮想 IP が必要となります。"

msgid ""
"If you want to back up the root file system, you can't simply run the "
"preceding command because it will freeze the prompt. Instead, run the "
"following one-liner, as root, inside the instance:"
msgstr ""
"ルートファイルシステムをバックアップしたければ、プロンプトがフリーズしてしま"
"うので、上のコマンドを単純に実行できません。代わりに、インスタンスの中で "
"root として以下の 1 行を実行します。"

msgid ""
"If you want to manage your object and block storage within a single system, "
"or if you want to support fast boot-from-volume, you should consider Ceph."
msgstr ""
"単一システムでオブジェクトストレージとブロックストレージを管理する場合、また"
"はボリュームから素早く起動するサポートが必要な場合、Ceph の使用を検討してくだ"
"さい。"

msgid ""
"If you want to support shared-storage live migration, you need to configure "
"a distributed file system."
msgstr ""
"共有ストレージのライブマイグレーションをサポートする場合、分散ファイルシステ"
"ムを設定する必要があります。"

msgid ""
"If you're encountering any sort of networking difficulty, one good initial "
"sanity check is to make sure that your interfaces are up. 
For example:"
msgstr ""
"もしあなたがネットワークの問題に直面した場合、まず最初にするとよいのは、イン"
"ターフェイスがUPになっているかを確認することです。例えば、"

msgid ""
"If you're running the Cirros image, it doesn't have the \"host\" program "
"installed, in which case you can use ping to try to access a machine by "
"hostname to see whether it resolves. If DNS is working, the first line of "
"ping would be:"
msgstr ""
"もしあなたがCirrosイメージを使っているのであれば、\"host\"プログラムはインス"
"トールされていません。その場合はpingを使い、ホスト名が解決できているか判断で"
"きます。もしDNSが動いていれば、ping結果の先頭行はこうなるはずです。"

msgid ""
"If your OpenStack Block Storage nodes are separate from your compute nodes, "
"the same procedure still applies because the same queuing and polling system "
"is used in both services."
msgstr ""
"OpenStack ブロックストレージノードがコンピュートノードから分離している場合、"
"同じキュー管理とポーリングのシステムが両方のサービスで使用されるので、同じ手"
"順が適用できます。"

msgid ""
"If your environment is using Neutron, you can configure security group "
"settings using the neutron command. Get a list of "
"security groups for the project you are acting in, by using the following "
"command:"
msgstr ""
"お使いの環境で Neutron を使用している場合、neutron コマン"
"ドを使用して、セキュリティグループを設定できます。以下のコマンドを使用して、"
"作業しているプロジェクトのセキュリティグループの一覧を取得します。"

msgid ""
"If your instance failed to obtain an IP through DHCP, some messages should "
"appear in the console. For example, for the Cirros image, you see output "
"that looks like the following:"
msgstr ""
"もしインスタンスがDHCPからのIP取得に失敗していれば、いくつかのメッセージがコ"
"ンソールで確認できるはずです。例えば、Cirrosイメージでは、以下のような出力に"
"なります。"

msgid ""
"If your instances are still not able to obtain IP addresses, the next thing "
"to check is whether dnsmasq is seeing the DHCP requests from the instance. "
"On the machine that is running the dnsmasq process, which is the compute "
"host if running in multi-host mode, look at /var/log/syslog to see the dnsmasq output. If dnsmasq is seeing the request "
"properly and handing out an IP, the output looks like this:"
msgstr ""
"もしまだインスタンスがIPアドレスを取得できない場合、次はdnsmasqがインスタンス"
"からのDHCPリクエストを見えているか確認します。dnsmasqプロセスが動いているマシ"
"ンで、/var/log/syslogを参照し、dnsmasqの出力を確認します。"
"なお、マルチホストモードで動作している場合は、dnsmasqプロセスはコンピュート"
"ノードで動作します。もしdnsmasqがリクエストを正しく受け取り、処理していれば、"
"以下のような出力になります。"

msgid ""
"If your new object storage node has a different number of disks than the "
"original nodes have, the command to add the new node is different from the "
"original commands. These parameters vary from environment to environment."
msgstr ""
"新しいオブジェクトストレージノードのディスク数が元々のノードのディスク数と異"
"なる場合には、新しいノードを追加するコマンドが元々のコマンドと異なります。こ"
"れらのパラメーターは環境により異なります。"

msgid ""
"If your operating system doesn't have a version of fsfreeze available, you can use xfs_freeze instead, which "
"is available on Ubuntu in the xfsprogs package. Despite the \"xfs\" in the "
"name, xfs_freeze also works on ext3 and ext4 if you are using a Linux kernel "
"version 2.6.29 or greater, since it works at the virtual file system (VFS) "
"level starting at 2.6.29. The xfs_freeze version supports the same command-"
"line arguments as fsfreeze."
msgstr ""
"お使いのオペレーティングシステムに利用可能なバージョンの fsfreeze がなければ、代わりに xfs_freeze を使用できます。"
"これは Ubuntu の xfsprogs パッケージにおいて利用可能です。\"xfs\" という名前"
"にもかかわらず、xfs_freeze は Linux カーネル 2.6.29 またはそれ以降を使用して"
"いれば ext3 や ext4 においても動作します。それは 2.6.29 において開始された仮"
"想ファイルシステム (VFS) レベルで動作するためです。この xfs_freeze のバージョ"
"ンは fsfreeze と同じ名前のコマンドライン引数をサポートしま"
"す。"

msgid ""
"If your preference is to build your own OpenStack expertise internally, a "
"good way to kick-start that might be to attend or arrange a training "
"session. 
The OpenStack Foundation has a Training Marketplace where you " -"can look for nearby events. Also, the OpenStack community is working to produce open source training materials." -msgstr "" -"自分たち内部の OpenStack 専門性を高めることを優先する場合、それを始める良い方" -"法は、トレーニングに参加または手配することかもしれません。OpenStack " -"Foundation は、お近くのイベントを見つけられる Training Marketplace を開いていま" -"す。また、OpenStack コミュニティーは、オープンソースのトレーニング教材を" -"作成する" -"グループがあります。" - -msgid "Image Catalog and Delivery" -msgstr "イメージカタログと配布" - -msgid "Image Catalog and Delivery services" -msgstr "イメージカタログとイメージ配信のサービス" - -msgid "Image service (glance) back end" -msgstr "Image service (glance) バックエンド" - -msgid "Image service back end" -msgstr "Image service バックエンド" - -msgid "Image service for the image management" -msgstr "イメージ管理のための Image serviceイメージサービス" - -msgid "Image usage" -msgstr "イメージ使用量" - -msgid "Image-management services" -msgstr "イメージ管理サービス" - -msgid "Image: Ubuntu 14.04 LTS" -msgstr "イメージ: Ubuntu 14.04 LTS" - -msgid "Images" -msgstr "イメージ" - -msgid "" -"Imagine a scenario where you have public access to one of your containers, " -"but what you really want is to restrict access to that to a set of IPs based " -"on a whitelist. In this example, we'll create a piece of middleware for " -"swift that allows access to a container from only a set of IP addresses, as " -"determined by the container's metadata items. Only those IP addresses that " -"you explicitly whitelist using the container's metadata will be able to " -"access the container." -msgstr "" -"お使いのコンテナーの 1 つにパブリックにアクセスできるシナリオを想像してくださ" -"い。しかし、本当にやりたいことは、ホワイトリストに基づいてアクセスできる IP " -"を制限することです。この例では、コンテナーのメタデータ項目により決められるよ" -"う、ある IP アドレス群だけからコンテナーにアクセスを許可する、swift 向けのミ" -"ドルウェア部品を作成します。コンテナーのメタデータを使用して、明示的にホワイ" -"トリストに入っている IP アドレスのみが、コンテナーにアクセスできます。" - -msgid "" -"Immediately after create, the security group has only an allow egress rule. " -"To make it do what we want, we need to add some rules:" -msgstr "" -"作成後すぐでは、セキュリティグループは送信ルールのみを許可します。やりたいこ" -"とを行うために、いくつかのルールを追加する必要があります。" - -msgid "Implementing Periodic Tasks" -msgstr "定期タスクの実装" - -msgid "Importance: <Bug impact>" -msgstr "Importance: <バグ影響度>" - -msgid "" -"In nova.conf, vlan_interface specifies " -"what interface OpenStack should attach all VLANs to. The correct setting " -"should have been: " -msgstr "" -"nova.conf 中で、vlan_interface は " -"OpenStack が全ての VLAN をアタッチすべきインターフェースがどれかを指定する。" -"正しい設定はこうだった。" - -msgid "" -"In OpenStack user interfaces and documentation, a group of users is referred " -"to as a project or tenant. " -"These terms are interchangeable.user " -"managementterminology fortenantdefinition ofprojectsdefinition of" -msgstr "" -"OpenStack ユーザーインターフェースとドキュメントでは、ユーザーのグループは " -"プロジェクト または テナント と" -"呼ばれます。これらの用語は同義です。" -"ユーザー管理用語テナント定義プロジェクト定義" - -msgid "" -"In a default OpenStack deployment, there is a single nova-network service that runs within the cloud (usually on the cloud controller) " -"that provides services such as network address translation (NAT), DHCP, and " -"DNS to the guest instances. If the single node that runs the nova-" -"network service goes down, you cannot access your instances, and the " -"instances cannot access the Internet. 
The single node that runs the " -"nova-network service can become a bottleneck if excessive " -"network traffic comes in and goes out of the cloud.networksmulti-hostmulti-host networkinglegacy networking " -"(nova)benefits of multi-host networking" -msgstr "" -"デフォルトの OpenStack デプロイメントでは、単一の nova-network " -"サービスがクラウド内 (通常はクラウドコントローラー) で実行され、ネットワーク" -"アドレス変換 (NAT)、DHCP、DNS などのサービスをゲストインスタンスにd提供しま" -"す。nova-network サービスを実行する単一のノードが停止した場合に" -"は、インスタンスにアクセスできなくなり、またインスタンスはインターネットにア" -"クセスできません。クラウドでネットワークトラフィックが過剰に送受信されると、" -"nova-network サービスを実行する単一ノードがボトルネックと" -"なる可能性があります。ネットワークマルチホストマルチホストネットワークレガシーネットワーク(nova)マルチホストネットワークの利点" - -msgid "" -"In a multi-tenant cloud environment, users sometimes want to share their " -"personal images or snapshots with other projects.projectssharing images betweenimagessharing between projects This can " -"be done on the command line with the glance tool by the " -"owner of the image." -msgstr "" -"マルチテナントクラウド環境において、ユーザーはときどき、自分のイメージやス" -"ナップショットを他のプロジェクトと共有したいことがあります。projectssharing images betweenimagessharing between projects これは、" -"イメージの所有者がコマンドラインから glance ツールを使用す" -"ることによりできます。" - -msgid "" -"In addition to the open source technologies, there are a number of " -"proprietary solutions that are officially supported by OpenStack Block " -"Storage.storagestorage driver support They are " -"offered by the following vendors:" -msgstr "" -"オープンソースのテクノロジーに加え、OpenStack Block Storage で正式にサポート" -"される専用ソリューションが多数存在します。ストレージストレージドライバーのサポート以下のベンダーによりサポートされています。" - -msgid "" -"In addition to this book, there are many other sources of information about " -"OpenStack. The OpenStack " -"website is a good starting point, with OpenStack Docs and OpenStack API Docs providing technical " -"documentation about OpenStack. The OpenStack wiki contains a lot of general " -"information that cuts across the OpenStack projects, including a list of " -"recommended tools. Finally, there are a number of blogs aggregated " -"at Planet OpenStack." -"OpenStack communityadditional information" -msgstr "" -"この本以外にも、OpenStack に関する情報源はたくさんあります。 OpenStack ウェブサイト は最初に見る" -"といいでしょう。ここには、OpenStack ドキュメント と OpenStack API ドキュメント があり、OpenStack に関す" -"る技術情報が提供されています。OpenStack wiki には、OpenStack の各プロジェクトの範囲" -"にとどらまらない汎用的な情報が多数あり、例えば 推奨ツール といった情報も" -"載っています。最後に、Planet OpenStack には多くのブログが集められています。OpenStack communityadditional information" - -msgid "" -"In addition, consider remote power control as well. While IPMI usually " -"controls the server's power state, having remote access to the PDU that the " -"server is plugged into can really be useful for situations when everything " -"seems wedged." -msgstr "" -"さらに、リモート電源管理装置も検討してください。通常、IPMI はサーバーの電源状" -"態を制御しますが、サーバーが接続されている PDU にリモートアクセスできれば、す" -"べてが手詰まりに見えるような状況で非常に役に立ちます。" - -msgid "" -"In addition, database migrations are now tested with the Turbo Hipster tool. " -"This tool tests database migration performance on copies of real-world user " -"databases." -msgstr "" -"さらに、データベースの移行が Turbo Hipster ツールを用いてテストされます。この" -"ツールは、実世界のユーザーのデータベースのコピーにおいて、データベースの移行" -"パフォーマンスをテストします。" - -msgid "" -"In an OpenStack cloud, the dnsmasq process acts as the DNS server for the " -"instances in addition to acting as the DHCP server. A misbehaving dnsmasq " -"process may be the source of DNS-related issues inside the instance. As " -"mentioned in the previous section, the simplest way to rule out a " -"misbehaving dnsmasq process is to kill all the dnsmasq processes on the " -"machine and restart nova-network. 
-"machine and restart nova-network. However, be aware that "
-"this command affects everyone running instances on this node, including "
-"tenants that have not seen the issue. As a last resort, as root:"
-msgstr ""
-"OpenStack クラウドにおいて、dnsmasq プロセスは DHCP サーバーに加えて DNS サー"
-"バーの役割を担っています。dnsmasq プロセスの不具合は、インスタンスにおける "
-"DNS 関連問題の原因となりえます。前節で述べたように、dnsmasq の不具合を切り分"
-"けるもっともシンプルな方法は、マシン上のすべての dnsmasq プロセスを kill し、"
-"nova-network を再起動することです。しかしながら、このコマン"
-"ドは、問題が起きていないテナントを含め、該当ノード上でインスタンスを動かして"
-"いるすべての利用者に影響します。最終手段として、root で以下を実行します。"
-
-msgid ""
-"In general, the questions you should ask when selecting storage are as "
-"follows:"
-msgstr "一般的に、ストレージを選択する際に以下の点を考慮する必要があります。"
-
-msgid ""
-"In most cases, the error is the result of something in libvirt's XML file "
-"(/etc/libvirt/qemu/instance-xxxxxxxx.xml) that no longer "
-"exists. You can enforce re-creation of the XML file as well as rebooting the "
-"instance by running the following command:"
-msgstr ""
-"多くの場合、このエラーの原因は、libvirt の XML ファイル (/etc/libvirt/"
-"qemu/instance-xxxxxxxx.xml) の中に、もはや存在しないものへの参照が"
-"残っていることです。以下のコマンドを実行することにより、インスタンスを再起動"
-"するのと同時に、強制的に XML ファイルを再作成できます:"
-
-msgid ""
-"In our experience, most operators don't sit right next to the servers "
-"running the cloud, and many don't necessarily enjoy visiting the data "
-"center. OpenStack should be entirely remotely configurable, but sometimes "
-"not everything goes according to plan.provisioning/deploymentremote management"
-msgstr ""
-"経験上、多くのオペレーターはクラウドを動かすサーバーのそばにいるわけではあり"
-"ませんし、多くの人が必ずしも楽しんでデータセンターを訪問しているわけではあり"
-"ません。OpenStack は、完全にリモート設定できるはずですが、計画通りにいかない"
-"こともあります。プロビジョニング/デプロイメ"
-"ント遠隔管理"
-
-msgid ""
-"In some cases, some operations should be restricted to administrators only. "
-"Therefore, as a further example, let us consider how this sample policy file "
-"could be modified in a scenario where we enable users to create their own "
-"flavors:"
-msgstr ""
-"場合によっては、一部の操作を管理者のみに制限すべきです。そこで、次の"
-"例では、ユーザーが自分のフレーバーを作成できるようにするシナリオの場合に、こ"
-"のサンプルのポリシーファイルをどのように変更すればよいかを示します。"
-
-msgid ""
-"In some scenarios, instances are running but are inaccessible through SSH "
-"and do not respond to any command. The VNC console could be displaying a "
-"boot failure or kernel panic error messages. This could be an indication of "
-"file system corruption on the VM itself. If you need to recover files or "
-"inspect the content of the instance, qemu-nbd can be used to mount the disk."
-"datainspecting/"
-"recovering failed instances"
-msgstr ""
-"いくつかのシナリオでは、インスタンスが実行中であるにも関わらず、SSH 経由でア"
-"クセスできず、あらゆるコマンドに反応がありません。VNC コンソールがブート失敗"
-"やカーネルパニックのエラーメッセージを表示している可能性があります。これは仮"
-"想マシン自身におけるファイルシステム破損を意味している可能性があります。ファ"
-"イルを復旧したりインスタンスの中身を調査したりする必要があれば、qemu-nbd を"
-"使ってディスクをマウントできます。datainspecting/recovering failed instances"
-
-msgid ""
-"In the bug comments, you can contribute instructions on how to fix a given "
-"bug, and set it to Triaged. Or you can directly fix it: "
-"assign the bug to yourself, set it to In progress, "
-"branch the code, implement the fix, and propose your change for merging. But "
-"let's not get ahead of ourselves; there are bug triaging tasks as well."
-msgstr ""
-"バグのコメントでは、既知のバグの修正方法に関して案を出したり、バグの状態を "
-"Triaged (有効な報告か、対応が必要かなどを分類した状態) "
-"にセットすることができます。バグを直接修正することもできます。その場合は、バ"
-"グを自分に割り当てて、状態を In progress (進行中) にセッ"
-"トし、コードのブランチを作り、修正を実装し、マージしてもらうように変更を提案"
-"するという流れになります。でも、先走りし過ぎないようにしましょう。バグを分類"
-"するという仕事もありますから。"
-
-msgid ""
-"In the previous version of OpenStack, all nova-compute "
-"services required direct access to the database hosted on the cloud "
-"controller. This was problematic for two reasons: security and performance. "
-"With regard to security, if a compute node is compromised, the attacker "
-"inherently has access to the database. With regard to performance, "
-"nova-compute calls to the database are single-threaded "
-"and blocking. This creates a performance bottleneck because database "
-"requests are fulfilled serially rather than in parallel.conductorsdesign considerationsconductor "
-"services"
-msgstr ""
-"以前のバージョンの OpenStack では、すべての nova-compute "
-"サービスはクラウドコントローラーに搭載されたデータベースに直接アクセスする必"
-"要がありました。これはセキュリティとパフォーマンスという 2 つの問題を抱えてい"
-"ました。セキュリティに関しては、もしコンピュートノードに侵入された場合、攻撃"
-"者はデータベースにアクセスできてしまいます。パフォーマンスに関しては、"
-"nova-compute によるデータベースの呼び出しはシングルスレッド"
-"で行われ、他の処理をブロックします。データベースリクエストが並列ではなく直列"
-"に処理されるため、この点はパフォーマンスのボトルネックになります。コンダクター設計上の考慮事項コンダクターサービス"
-
-msgid ""
-"In the remainder of this guide, we assume that you have successfully "
-"deployed an OpenStack cloud and are able to perform basic operations such as "
-"adding images, booting instances, and attaching volumes."
-msgstr ""
-"このガイドの残りの部分では、OpenStack クラウドのデプロイが無事に行われ、イ"
-"メージの追加、インスタンスの起動、ボリュームの追加が行えるようになっているも"
-"のとします。"
-
-msgid ""
-"In the very common case where the underlying snapshot is done via LVM, the "
-"filesystem freeze is automatically handled by LVM."
-msgstr ""
-"ベースとするスナップショットが LVM 経由で取得されている非常に一般的な場合、"
-"ファイルシステムのフリーズは、LVM により自動的に処理されます。"
-
-msgid ""
-"In this case, gre-1 is a tunnel from IP 10.10.128.21, which "
-"should match a local interface on this node, to IP 10.10.128.16 on the "
-"remote side."
-msgstr ""
-"この場合、gre-1 が IP 10.10.128.21 からリモートの IP "
-"10.10.128.16 へのトンネルです。これは、このノードのローカルインターフェースと"
-"一致します。"
-
-msgid ""
-"In this case, looking at the fault message shows "
-"NoValidHost, indicating that the scheduler was unable to "
-"match the instance requirements."
-msgstr ""
-"この場合、fault メッセージに NoValidHost が表示されています。NoValidHost はスケジューラー"
-"がインスタンスの要件を満たせなかったことを意味します。"
-
-msgid ""
-"In this chapter, we discuss some of the choices you need to consider when "
-"building out your compute nodes. Compute nodes form the resource core of the "
-"OpenStack Compute cloud, providing the processing, memory, network and "
-"storage resources to run instances."
-msgstr ""
-"本章では、コンピュートノードの構築時に考慮する必要のある選択肢について説明し"
-"ます。コンピュートノードは、OpenStack Compute クラウドのリソースコアを構成"
-"し、プロセッシング、メモリー、ネットワーク、ストレージの各リソースを提供して"
-"インスタンスを実行します。"
-
-msgid ""
-"In this chapter, we'll give some examples of network implementations to "
-"consider and provide information about some of the network layouts that "
-"OpenStack uses. Finally, we have some brief notes on the networking services "
-"that are essential for stable operation."
-msgstr ""
-"この章では、いくつかのネットワーク構成の例を挙げながら、OpenStack が使用する"
-"ネットワークレイアウトについての情報を提供します。最後に、安定稼働のために重"
-"要なネットワークサービスに関する簡単な注意事項を記載します。"
-
-msgid ""
-"In this error, a nova service has failed to connect to the RabbitMQ server "
-"because it got a connection refused error."
-msgstr ""
-"このエラーでは、nova サービスが RabbitMQ サーバーへの接続に失敗していました。"
-"接続が拒否されたというエラーが出力されています。"
-
-msgid ""
-"In this example, cinder-volumes failed to start and has "
-"provided a stack trace, since its volume back end has been unable to set up "
-"the storage volume—probably because the LVM volume that is expected from the "
-"configuration does not exist."
-msgstr "" -"この例では、ボリュームのバックエンドがストレージボリュームをセットアップがで" -"きなかったため、cinder-volumes が起動に失敗し、スタックト" -"レースを出力しています。おそらく、設定ファイルで指定された LVM ボリュームが存" -"在しないためと考えられます。" - -msgid "In this example, these locations have the following IP addresses:" -msgstr "例では、この環境には以下のIPアドレスが存在します" - -msgid "" -"In this instance, having an out-of-band access into nodes running OpenStack " -"components is a boon. The IPMI protocol is the de facto standard here, and " -"acquiring hardware that supports it is highly recommended to achieve that " -"lights-out data center aim." -msgstr "" -"この場合、OpenStack が動くノードに対して外側からアクセスできるようにすること" -"が重要です。ここでは、IPMIプロトコルが事実上標準となっています。完全自動の" -"データセンタを実現するために、IPMIをサポートしたハードウェアを入手することを" -"強く推奨します。" - -msgid "" -"In this option, each compute node is specified with a significant amount of " -"disk space, but a distributed file system ties the disks from each compute " -"node into a single mount." -msgstr "" -"このオプションでは、各コンピュートノードに、多くのディスク容量が指定されてい" -"ますが、分散ファイルシステムにより、それぞれのコンピュートノードからのディス" -"クが 1 つのマウントとしてまとめられます。" - -msgid "" -"In this option, each compute node is specified with enough disks to store " -"the instances it hosts.file systemsnonshared" -msgstr "" -"このオプションでは、ホストするインスタンスを格納するのに十分なディスク容量が" -"各コンピュートノードに指定されます。" -"ファイルシステム非共有" - -msgid "" -"In this option, the disks storing the running instances are hosted in " -"servers outside of the compute nodes.shared storagefile systemsshared" -msgstr "" -"このオプションでは、実行中のインスタンスを格納するディスクはコンピュートノー" -"ド外のサーバーでホストされます。共有ス" -"トレージファイル" -"システム共有" - -msgid "In-band remote management" -msgstr "帯域内リモート管理" - -msgid "In-memory key-value Store (a simplified internal storage structure)" -msgstr "メモリ内のキーバリュー型ストア(シンプルな内部ストレージ構造)" - -msgid "" -"Indicates new terms, URLs, email addresses, filenames, and file extensions." -msgstr "" -"新しい用語、URL、電子メール、ファイル名、ファイルの拡張子を意味します。" - -msgid "" -"Indicates which resources to use first; for example, spreading out where " -"instances are launched based on an algorithm" -msgstr "" -"どのリソースを最初に使用するかを示します。例えば、インスタンスをどこで起動す" -"るかをアルゴリズムに乗っ取って展開します。" - -msgid "" -"Indicates which users can do what actions on certain cloud resources; quota " -"management is spread out among services, howeverauthentication" -msgstr "" -"ある特定のクラウドリソースで誰が何をしようとしているか示します。しかし、ク" -"オータ管理は全体のサービスに展開されます。認証" - -msgid "Influencing the Roadmap" -msgstr "ロードマップへの影響" - -msgid "Information Available to You" -msgstr "利用できる情報" - -msgid "Initial deployment" -msgstr "初期デプロイメント" - -msgid "" -"Initially, the connection setup should revolve around keeping the " -"connectivity simple and straightforward in order to minimize deployment " -"complexity and time to deploy. The deployment shown in aims to have 1 10G connectivity available to all compute nodes, " -"while still leveraging bonding on appropriate nodes for maximum performance." -msgstr "" -"デプロイメントの複雑度と所要時間を最小限に抑えるためには、初めに、接続性を単" -"純かつ簡潔に保つことを中心に接続の設定を展開する必要があります。 に示したデプロイメントでは、全コンピュートノードに 10G " -"の接続を 1 つずつ提供する一方で、適切なノードにはボンディングを活用してパ" -"フォーマンスを最大化することを目的としています。" - -msgid "Injected file content bytes" -msgstr "注入ファイルのコンテンツ (バイト)" - -msgid "Injected file path bytes" -msgstr "注入ファイルのパス (バイト)" - -msgid "Injected files" -msgstr "注入ファイル" - -msgid "" -"Inside the book sprint room with us each day was our book sprint facilitator " -"Adam Hyde. Without his tireless support and encouragement, we would have " -"thought a book of this scope was impossible in five days. Adam has proven " -"the book sprint method effectively again and again. 
He creates both tools " -"and faith in collaborative authoring at www.booksprints.net." -msgstr "" -"私たちは、ブックスプリント部屋の中で、ファシリテーターの Adam Hyde と毎日過" -"ごしました。彼の精力的なサポートと励ましがなければ、この範囲のドキュメントを " -"5 日間で作成できなかったでしょう。Adam は、ブックスプリントの手法が効果的であ" -"ることを何回も実証しました。彼は、www.booksprints.net にあるコラボレーションにおいて、ツールと信" -"念の両方を作成しました。" - -msgid "Inspecting API Calls" -msgstr "API コールの検査" - -msgid "Inspecting and Recovering Data from Failed Instances" -msgstr "故障したインスタンスからの検査とデータ復旧" - -msgid "Install git: " -msgstr "git をインストールします: " - -msgid "" -"Install the keystone and swift clients on your " -"local machine:" -msgstr "" -"ローカルマシンに keystoneswift クライアントをイ" -"ンストールします。" - -msgid "Installation Guide for Red Hat Enterprise Linux 7 and CentOS 7" -msgstr "インストールガイド Red Hat Enterprise Linux 7、CentOS 7 版" - -msgid "" -"Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 22" -msgstr "インストールガイド Red Hat Enterprise Linux 7、CentOS 7、Fedora 22" - -msgid "Installation Guide for Ubuntu 14.04 (LTS) Server" -msgstr "インストールガイド Ubuntu 14.04 (LTS) Server 版" - -msgid "" -"Installation Guide for openSUSE 13.2 and SUSE Linux Enterprise Server 12" -msgstr "インストールガイド openSUSE 13.2、SUSE Linux Enterprise Server 12 版" - -msgid "Installing the Tools" -msgstr "ツールの導入" - -msgid "" -"Installs a virtual environment and runs tests.script modules" -msgstr "" -"仮想環境をインストールし、テストを実行します。スクリプトモジュール" - -msgid "Instance \"ignores\" the response and re-sends the renewal request." -msgstr "インスタンスはそのレスポンスを「無視」して、更新リクエストを再送する。" - -msgid "Instance Boot Failures" -msgstr "インスタンスの起動失敗" - -msgid "Instance Storage Solutions" -msgstr "インスタンスストレージのソリューション" - -msgid "" -"Instance begins sending a renewal request to 255.255.255.255 " -"since it hasn't heard back from the cloud controller." -msgstr "" -"インスタンスはクラウドコントローラーからのレスポンスを受信しなかったため、更" -"新リクエストを255.255.255.255に送信し始める。" - -msgid "Instance metadata" -msgstr "インスタンスメタデータ" - -msgid "Instance tries to renew IP." -msgstr "インスタンスはIPアドレスを更新しようとする。" - -msgid "Instance user data" -msgstr "インスタンスのユーザーデータ" - -msgid "Instances" -msgstr "インスタンス" - -msgid "" -"Instances are the running virtual machines within an OpenStack cloud. This " -"section deals with how to work with them and their underlying images, their " -"network properties, and how they are represented in the database.user traininginstances" -msgstr "" -"インスタンスは OpenStack クラウドの中で実行中の仮想マシンです。このセクション" -"は、インスタンス、インスタンスが使用するイメージ、インスタンスのネットワーク" -"プロパティを扱うための方法について取り扱います。また、それらがデータベースで" -"どのように表現されているかについて取り扱います。user traininginstances" - -msgid "Instances in the Database" -msgstr "データベースにあるインスタンス" - -msgid "Intelligent Alerting" -msgstr "インテリジェントなアラート" - -msgid "" -"Intelligent alerting can be thought of as a form of continuous integration " -"for operations. 
For example, you can easily check to see whether the Image "
-"service is up and running by ensuring that the glance-api and "
-"glance-registry processes are running or by seeing whether "
-"glance-api is responding on port 9292.monitoringintelligent alertingalertsintelligentlogging/monitoringintelligent "
-"alertinglogging/"
-"monitoringintelligent alerting"
-msgstr ""
-"インテリジェントなアラートは、運用における一種の継続的インテグレーション"
-"と考えることができます。例えば、glance-api プロセスと "
-"glance-registry プロセスが動作していること、または glance-"
-"api が 9292 ポートで応答していることを確認することにより、Image "
-"service が起動して動作していることを簡単に確認できます。monitoringintelligent alertingalertsintelligentlogging/monitoringintelligent "
-"alertinglogging/"
-"monitoringintelligent alerting"
-
-msgid ""
-"Intelligent alerting takes considerably more time to plan and implement than "
-"the other alerts described in this chapter. A good outline to implement "
-"intelligent alerting is:"
-msgstr ""
-"インテリジェントなアラートは、この章で述べられているの他のアラートよりも計"
-"画、実装にかなり時間を要します。インテリジェントなアラートを実装する流れは次"
-"のようになります。"
-
-msgid "Interacting with share networks."
-msgstr "共有ネットワークを操作します。"
-
-msgid "Internal network connectivity"
-msgstr "内部ネットワーク接続性"
-
-msgid "Introduction to OpenStack"
-msgstr "OpenStack の概要"
-
-msgid "Is_Public"
-msgstr "Is_Public"
-
-msgid ""
-"Isolated tenant networks implement some form of isolation of layer 2 traffic "
-"between distinct networks. VLAN tagging is a key concept, where traffic is "
-"“tagged” with an ordinal identifier for the VLAN. Isolated network "
-"implementations may or may not include additional services like DHCP, NAT, "
-"and routing."
-msgstr ""
-"独立テナントネットワークは、個々のネットワーク間のレイヤー 2 トラフィックを何"
-"らかの形で分離することで実装されます。VLAN タギングが鍵となる概念で、トラ"
-"フィックには VLAN を示す番号の識別子が「タグ」として付けられます。独立テナン"
-"トネットワークには、DHCP、NAT、ルーティングといった追加のサービスが含まれる場"
-"合も、含まれない場合もあります。"
-
-msgid "Issues with Live Migration"
-msgstr "ライブマイグレーションに関する問題"
-
-msgid ""
-"It is also possible to add and remove security groups when an instance is "
-"running. Currently this is only available through the command-line tools. "
-"Here is an example:"
-msgstr ""
-"インスタンスを実行中にセキュリティグループを追加および削除することもできま"
-"す。現在、コマンドラインツールからのみ利用可能です。例:"
-
-msgid ""
-"It is also possible to add project members and adjust the project quotas. "
-"We'll discuss those actions later, but in practice, it can be quite "
-"convenient to deal with all these operations at one time."
-msgstr ""
-"プロジェクトメンバーの追加やプロジェクトのクォータの調節も可能です。このよう"
-"なアクションについては後ほど説明しますが、実際にこれらの操作を扱うと非常に便"
-"利です。"
-
-msgid ""
-"It is also possible to run multiple hypervisors in a single deployment using "
-"host aggregates or cells. However, an individual compute node can run only a "
-"single hypervisor at a time.hypervisorsrunning multiple"
-msgstr ""
-"ホストアグリゲートやセルを使用すると、1 つのデプロイメントで複数のハイパーバ"
-"イザーを実行することも可能です。しかし、個別のコンピュートノードでは 1 度につ"
-"き 1 つのハイパーバイザーしか実行することができません。hypervisors複数実行"
-
-msgid ""
-"It is essentially an object storage system that manages disks and aggregates "
-"the space and performance of disks linearly in hyper scale on commodity "
-"hardware in a smart way. On top of its object store, Sheepdog provides "
-"elastic volume service and http service. Sheepdog does not assume anything "
-"about kernel version and can work nicely with xattr-supported file systems."
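# Editor's note: a hedged sketch of the command-line workflow referenced by
# the "add and remove security groups when an instance is running" entry
# above, using the legacy nova client of this guide's era. The instance name
# (test-instance) and group name (web) are placeholders.
#   # Attach a security group to a running instance
#   $ nova add-secgroup test-instance web
#   # Detach it again
#   $ nova remove-secgroup test-instance web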
-msgstr "" -"ディスクを管理すること、コモディティーハードウェアでスマートにハイパースケー" -"ルするディスクの容量および性能を集約することがオブジェクトストレージシステム" -"の本質です。そのオブジェクトストアの上で、Sheepdog は伸縮自在なボリュームサー" -"ビスと HTTP サービスを提供します。Sheepdog は、カーネルバージョンについて何も" -"前提無く、xattr をサポートするファイルシステムできちんと動作します。" - -msgid "" -"It is important to note that powering off an instance does not terminate it " -"in the OpenStack sense." -msgstr "" -"注意すべき大事な点は、インスタンスの電源オフは、OpenStack 的な意味でのインス" -"タンスの終了ではないということです。" - -msgid "It is possible to define other roles, but doing so is uncommon." -msgstr "他の役割を定義できますが、一般的にはそうしません。" - -msgid "" -"It is possible to watch packets on internal interfaces, but it does take a " -"little bit of networking gymnastics. First you need to create a dummy " -"network device that normal Linux tools can see. Then you need to add it to " -"the bridge containing the internal interface you want to snoop on. Finally, " -"you need to tell Open vSwitch to mirror all traffic to or from the internal " -"port onto this dummy port. After all this, you can then run " -"tcpdump on the dummy interface and see the traffic on the " -"internal port." -msgstr "" -"内部インターフェースにおいてパケットを監視することもできますが、少しネット" -"ワークを操作する必要があります。まず、通常の Linux ツールが参照できるダミー" -"ネットワークデバイスを作成する必要があります。次に、監視したい内部インター" -"フェースを含むブリッジにそれを追加する必要があります。最後に、内部ポートのす" -"べての通信をこのダミーポートにミラーするよう Open vSwitch に通知する必要があ" -"ります。これをすべて終えた後、ダミーインターフェースで tcpdump を実行して、内部ポートの通信を参照できます。" - -msgid "It may be possible to share the external storage for other purposes." -msgstr "外部ストレージを他の用途と共有できる可能性があります。" - -msgid "" -"It turns out the reason that this compute node locked up was a hardware " -"issue. We removed it from the DAIR cloud and called Dell to have it " -"serviced. Dell arrived and began working. Somehow or another (or a fat " -"finger), a different compute node was bumped and rebooted. Great." -msgstr "" -"コンピュートノードがロックアップした原因はハードウェアの問題だったことが判明" -"した。我々はそのハードウェアを DAIR クラウドから取り外し、修理するよう Dell " -"に依頼した。Dell が到着して作業を開始した。何とかかんとか(あるいはタイプミ" -"ス)で、異なるコンピュートノードを落としてしまい、再起動した。素晴らしい。" - -msgid "" -"It was funny to read the report. It was full of people who had some strange " -"network problem but didn't quite explain it in the same way." -msgstr "" -"レポートを読むのは楽しかった。同じ奇妙なネットワーク問題にあった人々であふれ" -"ていたが、全く同じ説明はなかった。" - -msgid "" -"It's also helpful to allocate a specific numeric range for custom and " -"private flavors. On UNIX-based systems, nonsystem accounts usually have a " -"UID starting at 500. A similar approach can be taken with custom flavors. " -"This helps you easily identify which flavors are custom, private, and public " -"for the entire cloud." -msgstr "" -"カスタムフレーバーとプライベートフレーバーに特別な数値範囲を割り当てることも" -"有用です。UNIX 系システムでは、システムアカウント以外は通常 500 から始まりま" -"す。同様の方法をカスタムフレーバーにも使用できます。これは、フレーバーがカス" -"タムフレーバー、プライベートフレーバー、パブリックフレーバーであるかをクラウ" -"ド全体で簡単に識別する役に立ちます。" - -msgid "" -"It's also possible to use virtual machines for all or some of the services " -"that the cloud controller manages, such as the message queuing. In this " -"guide, we assume that all services are running directly on the cloud " -"controller." -msgstr "" -"クラウドコントローラーが管理するすべての、または一部のサービス、たとえばメッ" -"セージキューに対して仮想マシンを使うことも可能です。このガイドでは、すべての" -"サービスが直接クラウドコントローラー上で実行されるものと仮定します。" - -msgid "" -"It's worth mentioning this directory in the context of failed compute nodes. " -"This directory contains the libvirt KVM file-based disk images for the " -"instances that are hosted on that compute node. 
If you are not running your " -"cloud in a shared storage environment, this directory is unique across all " -"compute nodes./var/lib/nova/instances " -"directorymaintenance/debugging/var/lib/nova/" -"instances" -msgstr "" -"コンピュートノードの故障の話題に関連して、このディレクトリについては説明して" -"おく価値があるでしょう。このディレクトリには、コンピュートノードにホストされ" -"ているインスタンス用の libvirt KVM のファイル形式のディスクイメージが置かれま" -"す。共有ストレージ環境でクラウドを実行していなければ、このディレクトリはコン" -"ピュートノード全体で一つしかありません。/var/lib/nova/instances directorymaintenance/debugging/var/lib/nova/instances" - -msgid "Italic" -msgstr "斜体" - -msgid "" -"Items to monitor for RabbitMQ include the number of items in each of the " -"queues and the processing time statistics for the server." -msgstr "" -"RabbitMQで監視すべき項目としては、各キューでのアイテムの数と、サーバーでの処" -"理時間の統計情報があります。" - -msgid "Jan 19, 2012" -msgstr "2012年1月19日" - -msgid "Jan 31, 2013" -msgstr "2013年1月31日" - -msgid "Joe Topjian" -msgstr "Joe Topjian" - -msgid "" -"Joe has designed and deployed several clouds at Cybera, a nonprofit where " -"they are building e-infrastructure to support entrepreneurs and local " -"researchers in Alberta, Canada. He also actively maintains and operates " -"these clouds as a systems architect, and his experiences have generated a " -"wealth of troubleshooting skills for cloud environments." -msgstr "" -"Joe は Cybera で複数のクラウドの設計と構築を行って来ました。 Cybera は、非営" -"利でカナダのアルバータ州の起業家や研究者を支援する電子情報インフラを構築して" -"います。また、システムアーキテクトとしてこれらのクラウドの維持・運用を活発に" -"行なっており、その経験からクラウド環境でのトラブルシューティングの豊富な知識" -"を持っています。" - -msgid "Join the OpenStack Community" -msgstr "OpenStack コミュニティーに参加する" - -msgid "" -"Jon has been piloting an OpenStack cloud as a senior technical architect at " -"the MIT Computer Science and Artificial Intelligence Lab for his researchers " -"to have as much computing power as they need. He started contributing to " -"OpenStack documentation and reviewing the documentation so that he could " -"accelerate his learning." -msgstr "" -"Jon は MIT Computer Science and Artificial Intelligence Lab で上級技術アーキ" -"テクトとして OpenStack クラウドを運用し、研究者が必要なだけの計算能力を使える" -"ようにしています。 OpenStack の勉強を加速しようと思い、OpenStack ドキュメント" -"への貢献とドキュメントのレビューを始めました。" - -msgid "Jonathan Proulx" -msgstr "Jonathan Proulx" - -msgid "Jump to the VM specific chain." -msgstr "仮想マシン固有チェインへのジャンプ。" - -msgid "Jun 22, 2012" -msgstr "2012年6月22日" - -msgid "Jun 6, 2013" -msgstr "2013年6月6日" - -msgid "Jun 9, 2014" -msgstr "2014年6月9日" - -msgid "Juno" -msgstr "Juno" - -msgid "" -"Just as important as a backup policy is a recovery policy (or at least " -"recovery testing)." -msgstr "" -"バックアップポリシーと同じくらい大事なことは、リカバリーポリシーです (少なく" -"ともリカバリーのテストは必要です)。" - -msgid "" -"Just running sync is not enough to ensure that the file system " -"is consistent. We recommend that you use the fsfreeze tool, " -"which halts new access to the file system, and create a stable image on disk " -"that is suitable for snapshotting. The fsfreeze tool supports " -"several file systems, including ext3, ext4, and XFS. If your virtual machine " -"instance is running on Ubuntu, install the util-linux package to get " -"fsfreeze:" -msgstr "" -"ファイルシステムが整合性を持つことを保証するためには、単に sync " -"を実行するだけでは不十分です。fsfreeze ツールを使用することを推" -"奨します。これは、ファイルシステムに対する新規アクセスを停止し、スナップ" -"ショットに適した安定したイメージをディスクに作成します。fsfreeze は ext3, ext4 および XFS を含むいくつかのファイルシステムをサポートしま" -"す。仮想マシンのインスタンスが Ubuntu において実行されていれば、" -"fsfreeze を取得するために util-linux パッケージをインス" -"トールします:" - -msgid "Justification" -msgstr "理由" - -msgid "KVM" -msgstr "KVM" - -msgid "" -"KVM is the supported hypervisor of choice for Red Hat Enterprise Linux (and " -"included in distribution). 
It is feature complete and free from licensing "
-"charges and restrictions."
-msgstr ""
-"KVM は、Red Hat Enterprise Linux に最適なサポート対象ハイパーバイザーです (ま"
-"た、ディストリビューションにも含まれます)。機能が完成されており、ライセンスの"
-"料金や制限は課せられません。"
-
-msgid "Kerberos"
-msgstr "Kerberos"
-
-msgid "Key pairs"
-msgstr "キーペア"
-
-msgid "Keystone"
-msgstr "Keystone"
-
-msgid ""
-"Keystone is handled a little differently. To modify the logging level, edit "
-"the /etc/keystone/logging.conf file and look at the "
-"logger_root and handler_file sections."
-msgstr ""
-"Keystone は少し異なる方法で扱います。ロギングレベルを変更するためには、"
-"/etc/keystone/logging.conf を編集し、logger_roothandler_file のセクションを確認します。"
-
-msgid "Keystone services"
-msgstr "Keystone サービス"
-
-msgid "Kilo"
-msgstr "Kilo"
-
-msgid ""
-"L3-agent router namespaces are named qrouter-<"
-"router_uuid>, and dhcp-agent name spaces are "
-"named qdhcp-<net_uuid>. This output shows a network node with four networks "
-"running dhcp-agents, one of which is also running an l3-agent router. It's "
-"important to know which network you need to be working in. A list of "
-"existing networks and their UUIDs can be obtained by running "
-"neutron net-list with administrative credentials."
-msgstr ""
-"L3 エージェントのルーター名前空間は qrouter-<"
-"router_uuid>、DHCP エージェントの名前空間は "
-"qdhcp-<net_uuid> という名前です。この出力は、DHCP エージェントが動いている "
-"4 つのネットワークを持つネットワークノードを表しており、そのうち 1 つのネット"
-"ワークでは L3 エージェントのルーターも動作しています。どのネットワークで作業"
-"する必要があるのかを理解することは重要です。既存のネットワークおよび UUID の"
-"一覧は、管理クレデンシャルを用いて neutron net-list を実行"
-"することにより得られます。"
-
-msgid "LDAP"
-msgstr "LDAP"
-
-msgid "LDAP (such as OpenLDAP or Microsoft's Active Directory)"
-msgstr "LDAP (OpenLDAP や Microsoft の Active Directory)"
-
-msgid "LVM"
-msgstr "LVM"
-
-msgid ""
-"LVM does not provide any replication. Typically, "
-"administrators configure RAID on nodes that use LVM as block storage to "
-"protect against failures of individual hard drives. However, RAID does not "
-"protect against a failure of the entire host."
-msgstr ""
-"LVM はレプリケーションを提供しません。通常、管理者はブ"
-"ロックストレージとして LVM を利用するホスト上に RAID を構成し、個々のハード"
-"ディスク障害からブロックストレージを保護します。しかし、RAID ではホストそのも"
-"のの障害には対応できません。"
-
-msgid "LVM/iSCSI"
-msgstr "LVM/iSCSI"
-
-msgid ""
-"LVMLVM (Logical Volume Manager)"
-msgstr ""
-"LVMLVM (Logical Volume Manager)"
-
-msgid "LXC"
-msgstr "LXC"
-
-msgid "Large instances"
-msgstr "大きなインスタンス"
-
-msgid "Lay of the Land"
-msgstr "環境の把握"
-
-msgid ""
-"Learn more about how to contribute to the OpenStack docs at OpenStack Documentation "
-"Contributor Guide."
-msgstr ""
-"OpenStack ドキュメントに貢献する方法は OpenStack Documentation Contributor "
-"Guide にあります。"
-
-msgid "Liberty"
-msgstr "Liberty"
-
-msgid ""
-"Like all major system upgrades, your upgrade could fail for one or more "
-"reasons. You should prepare for this situation by having the ability to roll "
-"back your environment to the previous release, including databases, "
-"configuration files, and packages. We provide an example process for rolling "
-"back your environment in ."
-"upgradingprocess "
-"overviewrollbackspreparing forupgradingpreparation for"
-msgstr ""
-"すべてのシステムのメジャーアップグレードと同じく、いくつかの理由により、アッ"
-"プグレードに失敗する可能性があります。データベース、設定ファイル、パッケージ"
-"など、お使いの環境をロールバックできるようにしておくことにより、この状況に備"
-"えるべきです。お使いの環境をロールバックするためのプロセス例がにあります。upgradingprocess overviewrollbackspreparing forupgradingpreparation for"
-
-msgid ""
-"Limit the total size (in bytes) or number of objects that can be stored in a "
-"single container."
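# Editor's note: an illustrative example of the per-container limits described
# in the entry above, assuming the standard Swift container_quotas middleware
# is enabled. The container name and values are placeholders.
#   # Limit a container to roughly 5 GB and 1000 objects
#   $ swift post mycontainer -H "X-Container-Meta-Quota-Bytes:5368709120"
#   $ swift post mycontainer -H "X-Container-Meta-Quota-Count:1000"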
-msgstr "" -"1 つのコンテナーに保存できる、オブジェクトの容量 (バイト単位) や個数の合計を" -"制限します。" - -msgid "" -"Limit the total size (in bytes) that a user has available in the Object " -"Storage service." -msgstr "" -"ユーザーが Object Storage サービス で" -"利用できる合計容量 (バイト単位) を制限します。" - -msgid "Limitations applied by Administrator" -msgstr "管理者により適用される制限" - -msgid "" -"Linux network namespaces are a kernel feature the networking service uses to " -"support multiple isolated layer-2 networks with overlapping IP address " -"ranges. The support may be disabled, but it is on by default. If it is " -"enabled in your environment, your network nodes will run their dhcp-agents " -"and l3-agents in isolated namespaces. Network interfaces and traffic on " -"those interfaces will not be visible in the default namespace.network namespaces, troubleshootingnamespaces, " -"troubleshootingtroubleshootingnetwork namespaces" -msgstr "" -"Linux のネットワーク名前空間は、ネットワークサービスが、重複する IP アドレス" -"範囲を持つ、複数の独立した L2 ネットワークをサポートするために使用する、カー" -"ネルの機能です。この機能のサポートが無効化されている可能性がありますが、デ" -"フォルトで有効になっています。お使いの環境で有効化されている場合、ネットワー" -"クノードが DHCP エージェントと L3 エージェントを独立した名前空間で動作しま" -"す。ネットワークインターフェース、それらのインターフェースにおける通信は、デ" -"フォルトの名前空間で見えなくなります。network namespaces, troubleshootingnamespaces, " -"troubleshootingtroubleshootingnetwork namespaces" - -msgid "List all default quotas for all tenants, as follows:" -msgstr "" -"全テナントに対するクォータのデフォルト値を全て表示するには、以下のようにしま" -"す。" - -msgid "List of individual code changes under review" -msgstr "レビュー中の個々のコードの変更の一覧" - -msgid "List the currently set quota values for a tenant, as follows:" -msgstr "テナントの現在のクォータ値を一覧表示します。" - -msgid "Live Migration back end" -msgstr "ライブマイグレーションのバックエンド" - -msgid "Live Snapshots" -msgstr "ライブスナップショット" - -msgid "" -"Live migration can also be done with nonshared storage, using a feature " -"known as KVM live block migration. While an earlier " -"implementation of block-based migration in KVM and QEMU was considered " -"unreliable, there is a newer, more reliable implementation of block-based " -"live migration as of QEMU 1.4 and libvirt 1.0.2 that is also compatible with " -"OpenStack. However, none of the authors of this guide have first-hand " -"experience using live block migration.block migration" -msgstr "" -"ライブマイグレーションは、KVM ライブブロックマイグレーション という機能を使用して、非共有ストレージでも行うことができます。KVM " -"や QEMU でのブロックベースのマイグレーションは当初、信頼できませんでしたが、" -"OpenStack との互換性もある QEMU 1.4 および libvirt 1.0.2 では、より新しく、信" -"頼性の高いブロックベースのライブマイグレーション実装ができるようになっていま" -"す。ただし、本書の執筆者は、ライブブロックマイグレーションを実際に使用してい" -"ません。ブロックマイグレーション" - -msgid "" -"Live snapshots is a feature that allows users to snapshot the running " -"virtual machines without pausing them. These snapshots are simply disk-only " -"snapshots. Snapshotting an instance can now be performed with no downtime " -"(assuming QEMU 1.3+ and libvirt 1.0+ are used).live snapshots" -msgstr "" -"ライブスナップショットは、ユーザーが実行中の仮想マシンのスナップショットを一" -"時停止なしで取得できる機能です。このスナップショットは単にディスクのみのス" -"ナップショットです。現在はインスタンスのスナップショットは停止時間なしで実行" -"できます (QEMU 1.3+ と libvirt 1.0+ が使用されていることが前提です)。" -"live snapshots" - -msgid "" -"Log in and set up DevStack. Here's an example of the commands you can use to " -"set up DevStack on a virtual machine:" -msgstr "" -"ログインして、DevStack をセットアップします。これは、仮想マシンに DevStack を" -"セットアップするために使用できるコマンドの例です。" - -msgid "Log in as an administrative user." -msgstr "管理ユーザーとしてログインします。" - -msgid "Log in to the instance: " -msgstr "インスタンスにログインします: " - -msgid "Log location" -msgstr "ログの場所" - -msgid "Logging" -msgstr "ロギング" - -msgid "Logging and Monitoring" -msgstr "ロギングと監視" - -msgid "" -"Logging is detailed more fully in . 
" -"However, it is an important design consideration to take into account before " -"commencing operations of your cloud.logging/monitoringcompute nodes andcompute nodeslogging" -msgstr "" -"ロギングの詳細は、 で包括的に記載してい" -"ます。ただし、クラウドの運用を開始する前に、重要な設計に関する事項について考" -"慮していく必要があります。logging/" -"monitoringコンピュートノードおよびコンピュートノードロギング" - -msgid "" -"Logical separation within your nova deployment for physical isolation or " -"redundancy." -msgstr "" -"物理的な隔離や冗長性のために、Nova デプロイメントの中で論理的な分離が必要な場" -"合" - -msgid "Look at your OpenStack service catalog:" -msgstr "" -"それではOpenStack サービスカタログを見てみましょう。" - -msgid "" -"Look for any errors or traces in the log file. For more information, see " -"." -msgstr "" -"何らかのエラーまたはトレースをログファイルで探します。詳細は を参照してください。" - -msgid "" -"Look for the vnet NIC. You can also reference nova.conf " -"and look for the flat_interface_bridge option." -msgstr "" -"vnet NICを探してください。また、nova.confの" -"flat_interface_bridgeオプションも参考になります。" - -msgid "Lorin Hochstein" -msgstr "Lorin Hochstein" - -msgid "" -"Loss of the database leads to errors. As a result, we recommend that you " -"cluster your database to make it failure tolerant. Configuring and " -"maintaining a database cluster is done outside OpenStack and is determined " -"by the database software you choose to use in your cloud environment. MySQL/" -"Galera is a popular option for MySQL-based databases." -msgstr "" -"データベースの消失はエラーにつながります。結論としては、データベースを冗長化" -"するためにクラスター構成とする事を推奨します。使クラウド環境で使用するデータ" -"ベースをクラスタ化しをOpenStack外に配置します。MySQLベースのデータベースでは" -"MySQL/Galeraが人気です。" - -msgid "Lustre" -msgstr "Lustre" - -msgid "MIT CSAIL" -msgstr "MIT CSAIL" - -msgid "" -"MTU is maximum transmission unit. It specifies the maximum number of bytes " -"that the interface accepts for each packet. If two interfaces have two " -"different MTUs, bytes might get chopped off and weird things happensuch as " -"random session lockups." -msgstr "" -"MTU とは最大転送単位(Maximum Transmission Unit)である。これは、各パケットに" -"対してそのインターフェースが受け取る最大バイト数を指定する。もし2つのイン" -"ターフェースが異なる MTU であった場合、バイトは尻切れトンボとなって変なことが" -"起こり始める…例えばセッションのランダムなロックアップとか。" - -msgid "" -"Maintaining an OpenStack cloud requires that you manage multiple physical " -"servers, and this number might grow over time. Because managing nodes " -"manually is error prone, we strongly recommend that you use a configuration-" -"management tool. These tools automate the process of ensuring that all your " -"nodes are configured properly and encourage you to maintain your " -"configuration information (such as packages and configuration options) in a " -"version-controlled repository.configuration managementnetworksconfiguration managementmaintenance/" -"debuggingconfiguration management" -msgstr "" -"OpenStack クラウドをメンテナンスするには、複数の物理サーバーを管理することが" -"必要です。そして、この数は日々増えていきます。ノードを手動で管理することはエ" -"ラーを起こしやすいので、構成管理ツールを使用することを強く推奨します。これら" -"のツールはすべてのノードが適切に設定されていることを保証するプロセスを自動化" -"します。また、これらを使うことで、(パッケージや設定オプションといった) 構成情" -"報のバージョン管理されたリポジトリでの管理が行いやすくなります。configuration managementnetworksconfiguration managementmaintenance/debuggingconfiguration management" - -msgid "Maintenance, Failures, and Debugging" -msgstr "メンテナンス、故障およびデバッグ" - -msgid "" -"Make a full database backup of your production data. As of Kilo, database " -"downgrades are not supported, and the only method available to get back to a " -"prior database version will be to restore from backup." 
-msgstr "" -"本番データの完全バックアップを取得します。Kilo 時点では、データベースのダウン" -"グレードはサポートされません。以前のバージョンのデータベースに戻す唯一の方法" -"は、バックアップからリストアすることです。" - -msgid "Make another storage endpoint on the same system" -msgstr "同じシステムに別のストレージエンドポイントの作成" - -msgid "Make sure you're in the devstack directory:" -msgstr "devstack ディレクトリーにいることを確認します。" - -msgid "Make sure you're in the devstack directory:" -msgstr "devstack ディレクトリーにいることを確認します。" - -msgid "Manage Access To Shares" -msgstr "共有へのアクセス権の管理" - -msgid "Manage Shares" -msgstr "共有の管理" - -msgid "Manage a Share Network" -msgstr "共有ネットワークの管理" - -msgid "Manage a share network" -msgstr "共有ネットワークの管理" - -msgid "Manage access to shares" -msgstr "共有へのアクセス権の管理" - -msgid "Manage repositories" -msgstr "リポジトリーの管理" - -msgid "Managed by…" -msgstr "管理元" - -msgid "Management Network" -msgstr "管理ネットワーク" - -msgid "Managing Projects" -msgstr "プロジェクトの管理" - -msgid "Managing Projects and Users" -msgstr "プロジェクトとユーザーの管理" - -msgid "Manually Disassociating a Floating IP" -msgstr "Floating IP の手動割り当て解除" - -msgid "" -"Many OpenStack projects allow for customization of specific features using a " -"driver architecture. You can write a driver that conforms to a particular " -"interface and plug it in through configuration. For example, you can easily " -"plug in a new scheduler for Compute. The existing schedulers for Compute are " -"feature full and well documented at Scheduling. However, depending on your user's use cases, the " -"existing schedulers might not meet your requirements. You might need to " -"create a new scheduler.customizationOpenStack Compute (nova) Schedulerschedulerscustomization ofDevStackcustomizing OpenStack " -"Compute (nova) scheduler" -msgstr "" -"多くの OpenStack のプロジェクトでは、ドライバ・アーキテクチャーを使うことに" -"よって、特定の機能をカスタマイズすることができます。特定のインターフェースに" -"適合するドライバを書き、環境定義によって組み込むことができます。例えば、簡単" -"に Compute に新しいスケジューラーを組み込むことができます。Compute の既存のス" -"ケジューラーは、完全な機能を持ち、Scheduling によくまとめられています。しかし、あなたのユーザのユー" -"ス・ケースに依存して、既存のスケジューラで要件が満たせないかもしれません。こ" -"の場合は、新しいスケジューラーを作成する必要があるでしょう。customizationOpenStack Compute " -"(nova) Schedulerschedulerscustomization ofDevStackcustomizing OpenStack Compute (nova) scheduler" - -msgid "" -"Many OpenStack projects implement a driver layer, and each of these drivers " -"will implement its own configuration options. For example, in OpenStack " -"Compute (nova), there are various hypervisor drivers implemented—libvirt, " -"xenserver, hyper-v, and vmware, for example. Not all of these hypervisor " -"drivers have the same features, and each has different tuning requirements." -"hypervisorsdifferences betweendriversdifferences between" -msgstr "" -"多くの OpenStack プロジェクトではドライバー層が実装され、各ドライバーはドライ" -"バー固有の設定オプションが実装されています。例えば、OpenStack Compute (nova) " -"では、libvirt, xenserver, hyper-v, vmware などの種々のハイパーバイザードライ" -"バーが実装されていますが、これらのハイパーバイザードライバーすべてが同じ機能" -"を持っている訳ではなく、異なるチューニング要件もドライバー毎に異なります。" -"hypervisorsdifferences betweendriversdifferences between" - -msgid "" -"Many deployments use the SQL database; however, LDAP is also a popular " -"choice for those with existing authentication infrastructure that needs to " -"be integrated." -msgstr "" -"多くのデプロイメントで SQL データベースが使われていますが、既存の認証インフラ" -"とインテグレーションする必要のある環境では、LDAP もポピュラーな選択肢です。" - -msgid "" -"Many individual efforts keep a community book alive. Our community members " -"updated content for this book year-round. Also, a year after the first " -"sprint, Jon Proulx hosted a second two-day mini-sprint at MIT with the goal " -"of updating the book for the latest release. 
Since the book's inception, "
-"more than 30 contributors have supported this book. We have a tool chain for "
-"reviews, continuous builds, and translations. Writers and developers "
-"continuously review patches, enter doc bugs, edit content, and fix doc bugs. "
-"We want to recognize their efforts!"
-msgstr ""
-"数多くの方々の努力がコミュニティのドキュメントを維持しています。私たちのコ"
-"ミュニティーのメンバーは、一年を通じて、このドキュメントの内容を更新しまし"
-"た。また、最初のスプリントの 1 年後、Jon Proulx さんが 2 回目となる 2 日間の"
-"ミニスプリントを主催しました。これは、MIT で行われ、最新リリースに向けた更新"
-"を目標としました。ドキュメント作成以降、30 人以上の貢献者がこのドキュメントを"
-"サポートしてきました。レビュー、継続的ビルド、翻訳のツールチェインがありま"
-"す。執筆者や開発者は継続的に、パッチをレビューし、ドキュメントバグを記入し、"
-"内容を編集し、そのバグを修正します。その方々の努力を認めたいと思います。"
-
-msgid ""
-"Many operators use separate compute and storage hosts. Compute services and "
-"storage services have different requirements, and compute hosts typically "
-"require more CPU and RAM than storage hosts. Therefore, for a fixed budget, "
-"it makes sense to have different configurations for your compute nodes and "
-"your storage nodes. Compute nodes will be invested in CPU and RAM, and "
-"storage nodes will be invested in block storage."
-msgstr ""
-"多くの運用者はコンピュートホストとストレージホストを分離して使用しています。"
-"コンピュートサービスとストレージサービスには異なる要件があり、コンピュートホ"
-"ストでは通常はストレージホストよりも多くの CPU と RAM が必要です。そのため、"
-"一定の予算の中では、コンピュートホストとストレージホストの構成が異なることは"
-"理にかなっています。コンピュートホストでは、CPU や RAM を、ストレージノードで"
-"はブロックストレージを多く使用します。"
-
-msgid ""
-"Many sites run with users being associated with only one project. This is a "
-"more conservative and simpler choice both for administration and for users. "
-"Administratively, if a user reports a problem with an instance or quota, it "
-"is obvious which project this relates to. Users needn't worry about what "
-"project they are acting in if they are only in one project. However, note "
-"that, by default, any user can affect the resources of any other user within "
-"their project. It is also possible to associate users with multiple projects "
-"if that makes sense for your organization.Project Members tabuser managementassociating users "
-"with projects"
-msgstr ""
-"多くのサイトは一つのプロジェクトのみに割り当てられているユーザーで実行してい"
-"ます。これは、管理者にとってもユーザーにとっても、より保守的で分かりやすい選"
-"択です。管理の面では、ユーザーからインスタンスやクォータに関する問題の報告が"
-"あった場合、どのプロジェクトに関するものかが明確です。ユーザーが一つのプロ"
-"ジェクトのみに所属している場合、ユーザーがどのプロジェクトで操作しているのか"
-"を気にする必要がありません。ただし、既定の設定では、どのユーザーも同じプロ"
-"ジェクトにいる他のユーザーのリソースに影響を与えることができることに注意して"
-"ください。あなたの組織にとって意味があるならば、ユーザーを複数のプロジェクト"
-"に割り当てることも可能です。Project "
-"Members tabuser "
-"managementassociating users with projects"
-
-msgid "Mar 20, 2015"
-msgstr "2015年3月20日"
-
-msgid ""
-"Matching gre-<n> interfaces to tunnel endpoints is "
-"possible by looking at the Open vSwitch state:"
-msgstr ""
-"Open vSwitch の状態を見ることで、gre-<n> インターフェース"
-"とトンネルエンドポイントを対応付けることができます。"
-
-msgid "Max number of share snapshots"
-msgstr "共有のスナップショットの最大数"
-
-msgid "Max number of shared networks"
-msgstr "共有ネットワークの最大数"
-
-msgid "Max number of shares"
-msgstr "共有の最大数"
-
-msgid "Max total amount of all snapshots"
-msgstr "すべてのスナップショットの合計数"
-
-msgid "May 9, 2013"
-msgstr "2013年5月9日"
-
-msgid "Megabytes of instance RAM allowed per tenant."
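# Editor's note: an illustrative command for the per-tenant RAM quota entry
# above; the value and tenant ID are placeholders.
#   # Allow 51200 MB of instance RAM for a tenant
#   $ nova quota-update --ram 51200 <tenant-id>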
-msgstr "テナントごとのインスタンスの RAM 容量(メガバイト単位)" - -msgid "Memcached (a distributed memory object caching system)" -msgstr "Memcached (分散メモリーオブジェクトキャッシュシステム)" - -msgid "Memory" -msgstr "メモリ" - -msgid "Memory Size: 4 GB RAM" -msgstr "メモリー容量: 4 GB RAM" - -msgid "Memory usage" -msgstr "メモリー使用量" - -msgid "Memory: 128 GB" -msgstr "メモリー: 128 GB" - -msgid "Memory: 32 GB" -msgstr "メモリー: 32 GB" - -msgid "Memory: 64 GB" -msgstr "メモリー: 64 GB" - -msgid "Memory_MB" -msgstr "Memory_MB" - -msgid "Message Queue" -msgstr "メッセージキュー" - -msgid "Message Service (zaqar)" -msgstr "Message Service (zaqar)" - -msgid "Message queue" -msgstr "メッセージキュー" - -msgid "Message queue and database services" -msgstr "メッセージキューとデータベースのサービス" - -msgid "Message queue services" -msgstr "メッセージキューサービス" - -msgid "Metadata items" -msgstr "メタデータ項目" - -msgid "Microsoft Active Directory" -msgstr "Microsoft Active Directory" - -msgid "" -"Migrations of instances from one node to another are more complicated and " -"rely on features that may not continue to be developed." -msgstr "" -"ノード間のインスタンスのマイグレーションがより複雑になり、今後開発が継続され" -"ない可能性のある機能に依存することになります。" - -msgid "Milestone: Milestone the bug was fixed in" -msgstr "Milestone: このバグの修正が入ったマイルストーン" - -msgid "Model: Dell R620" -msgstr "モデル: Dell R620" - -msgid "Model: Dell R720xd" -msgstr "モデル: Dell R720xd" - -msgid "" -"Modifying users is also done from this Users page. If you have a large " -"number of users, this page can get quite crowded. The Filter search box at " -"the top of the page can be used to limit the users listing. A form very " -"similar to the user creation dialog can be pulled up by selecting Edit from " -"the actions dropdown menu at the end of the line for the user you are " -"modifying." -msgstr "" -"ユーザー情報の変更は、この \"ユーザー” ページから実行することもできます。かな" -"り多くのユーザーがいるならば、このページにはたくさんのユーザーが表示されるこ" -"とでしょう。ページの上部にある \"フィルター\" 検索ボックスを使うと、表示され" -"るユーザーの一覧を絞り込むことができます。変更しようとしているユーザーの行末" -"にあるアクションドロップダウンメニューの ”編集” を選択することにより、ユー" -"ザー作成ダイアログと非常に似ているフォームを表示できます。" - -msgid "Monitoring" -msgstr "監視" - -msgid "" -"Monitoring the resource usage and user growth will enable you to know when " -"to procure. details some useful " -"metrics." -msgstr "" -"リソース利用状況の監視とユーザー増加の監視によって、(追加機材の)調達時期を" -"知ることができます。 でいくつかの有用な" -"監視項目を詳しく解説します。" - -msgid "Monthly" -msgstr "月次" - -msgid "MooseFS" -msgstr "MooseFS" - -msgid "" -"More complex rule sets can be built up through multiple invocations of " -"neutron security-group-rule-create. For example, if you " -"want to pass both http and https traffic, do this:" -msgstr "" -"より複雑なルールセットは、neutron security-group-rule-create の複数呼び出しにより構築できます。例えば、HTTP 通信と HTTPS 通信を通" -"過させたい場合、このようにします。" - -msgid "More complex to set up." -msgstr "少し複雑な構成。" - -msgid "More proxies means more bandwidth, if your storage can keep up." -msgstr "" -"ストレージ側の性能が十分であれば、proxy server の増加は帯域の増加になります。" - -msgid "" -"Most OpenStack services communicate with each other using the " -"message queue.messagesdesign considerationsdesign considerationsmessage queues For example, " -"Compute communicates to block storage services and networking services " -"through the message queue. Also, you can optionally enable notifications for " -"any service. RabbitMQ, Qpid, and 0mq are all popular choices for a message-" -"queue service. In general, if the message queue fails or becomes " -"inaccessible, the cluster grinds to a halt and ends up in a read-only state, " -"with information stuck at the point where the last message was sent. " -"Accordingly, we recommend that you cluster the message queue. 
Be aware that "
-"clustered message queues can be a pain point for many OpenStack deployments. "
-"While RabbitMQ has native clustering support, there have been reports of "
-"issues when running it at a large scale. While other queuing solutions are "
-"available, such as 0mq and Qpid, 0mq does not offer stateful queues. Qpid is "
-"the messaging system of choice for "
-"Red Hat and its derivatives. Qpid does not have native clustering "
-"capabilities and requires a supplemental service, such as Pacemaker or "
-"Corosync. For your message queue, you need to determine what level of data "
-"loss you are comfortable with and whether to use an OpenStack project's "
-"ability to retry multiple MQ hosts in the event of a failure, such as using "
-"Compute's ability to do so.0mqQpidRabbitMQmessage queue"
-msgstr ""
-"多くの OpenStack サービスはメッセージキューを使用してお互"
-"いと通信をしています。メッセージ設計上の考慮事項設計上の考慮事項メッセージキュー例えば、コンピュートはメッセージキューを通じてブロック"
-"ストレージサービスとネットワークサービスと通信をしています。また、どのような"
-"サービスでも通知を有効にする事ができます。メッセージキューサービスの選択とし"
-"て RabbitMQ、Qpid、0mq がポピュラーです。一般的には、メッセージキューが障害あ"
-"るいはアクセス不能となった場合、クラスターはメッセージを最後に送信した状態の"
-"ままスタックした状態で停止し、リードオンリーの状態となります。ですので、メッ"
-"セージキューはクラスター構成にする事をお勧めします。クラスター化されたメッ"
-"セージキューは多くの OpenStack 構成で弱点になることを覚えておく必要がありま"
-"す。RabbitMQ は標準でクラスターに対応していますが、大規模になった場合にいくつ"
-"かの問題が報告されています。0mq や Qpid といった他のメッセージングソリュー"
-"ションも存在していますが、0mq はステートフルなキューを提供していません。Qpid "
-"は Red Hat 系ディストリビューションで"
-"のメッセージングシステムの選択となります。Qpid は標準でクラスターをサポートし"
-"ておらず、Pacemaker や Corosync といった補助的なサービスが必要になります。"
-"メッセージキューに関しては、どの程度のデータ損失であれば許容できるか、また、"
-"障害時に複数の MQ ホストへ再試行する OpenStack プロジェクトの機能 (例えば "
-"Compute のこの機能) を利用するかどうかを決める必要があります。0mqQpidRabbitMQメッセージキュー"
-
-msgid ""
-"Most block storage drivers allow the instance to have direct access to the "
-"underlying storage hardware's block device. This helps increase the overall "
-"read/write IO. However, support for utilizing files as volumes is also well "
-"established, with full support for NFS, GlusterFS and others."
-msgstr ""
-"多くのブロックストレージドライバーは、インスタンスが直接ストレージハードウェ"
-"アのブロックデバイスへアクセスできるようにします。これは リード/ライト I/O 性"
-"能の向上に役立ちます。しかしながら、ボリュームとしてのファイルの利用も、"
-"NFS、GlusterFS などの完全サポートとともに、確立された手法です。"
-
-msgid ""
-"Most instances are size m1.medium (two virtual cores, 50 GB of storage)."
-msgstr ""
-"ほとんどのインスタンスのサイズは m1.medium (仮想コア数2、ストレージ50GB) とし"
-"ます。"
-
-msgid ""
-"Most services use the convention of writing their log files to "
-"subdirectories of the /var/log directory, as listed in .cloud controllerslog informationlogging/"
-"monitoringlog location"
-msgstr ""
-"多くのサービスは、慣習に従い、ログファイルを /var/log ディレクト"
-"リーのサブディレクトリーに書き込みます。 に一覧化されています。cloud controllerslog informationlogging/"
-"monitoringlog location"
-
-msgid "Mount the qemu-nbd device."
-msgstr "qemu-nbd デバイスをマウントします。"
-
-msgid ""
-"Much of OpenStack is driver-oriented, so you can plug in different solutions "
-"to the base set of services. This chapter describes some advanced "
-"configuration topics."
-msgstr ""
-"ほとんどの OpenStack は、ドライバーを用いて動作します。そのため、サービスの基"
-"本セットに別のソリューションをプラグインできます。この章は、いくつかの高度な"
-"設定に関する話題を取り扱います。"
-
-msgid "Multi-Host and Single-Host Networking"
-msgstr "マルチホストあるいはシングルホストネットワーキング"
-
-msgid "Multi-NIC Provisioning"
-msgstr "マルチNICプロビジョニング"
-
-msgid "Multithread Considerations"
-msgstr "マルチスレッドの課題"
-
-msgid ""
-"Must be IPv4 or IPv6, and addresses "
-"represented in CIDR must match the ingress or egress rules."
-msgstr ""
-"IPv4 または IPv6 である必要があります。"
-"CIDR 形式のアドレスが受信ルールまたは送信ルールに一致する必要があります。"
-
-msgid "MySQL"
-msgstr "MySQL"
-
-msgid ""
-"MySQL is used as the database back end for all databases in the OpenStack "
-"environment. MySQL is the supported database of choice for Red Hat "
-"Enterprise Linux (and included in distribution); the database is open "
-"source, scalable, and handles memory well."
-msgstr ""
-"MySQL は、OpenStack 環境の全データベースのデータベースバックエンドとして使用"
-"されます。MySQL は、Red Hat Enterprise Linux に最適なサポート対象データベース"
-"です (また、ディストリビューションにも同梱されています)。このデータベースは、"
-"オープンソースで、拡張が可能な上、メモリーの処理を効率的に行います。"
-
-msgid "MySQL*"
-msgstr "MySQL*"
-
-msgid "NFS"
-msgstr "NFS"
-
-msgid "NFS (default for Linux)"
-msgstr "NFS (Linux でのデフォルト)"
-
-msgid "NTP"
-msgstr "NTP"
-
-msgid "Nagios"
-msgstr "Nagios"
-
-msgid ""
-"Nagios alerts you with a WARNING when any disk on the compute node is 80 "
-"percent full and CRITICAL when 90 percent is full."
-msgstr ""
-"Nagios は、コンピュートノードのいずれかのディスク使用率が 80% になると "
-"WARNING を、90% になると CRITICAL を警告します。"
-
-msgid ""
-"Nagios checks that at least one nova-compute service is "
-"running at all times."
-msgstr ""
-"Nagiosは常に 1 つ以上の nova-compute サービスが動作してい"
-"るかをチェックします。"
-
-msgid ""
-"Nagios is an open source monitoring service. It's capable of executing "
-"arbitrary commands to check the status of server and network services, "
-"remotely executing arbitrary commands directly on servers, and allowing "
-"servers to push notifications back in the form of passive monitoring. Nagios "
-"has been around since 1999. Although newer monitoring services are "
-"available, Nagios is a tried-and-true systems administration staple."
-"Nagios"
-msgstr ""
-"Nagios は、オープンソースソフトウェアの監視サービスです。任意のコマンドを実行"
-"して、サーバーやネットワークサービスの状態を確認できます。また、任意のコマン"
-"ドをリモートのサーバーで直接実行できます。サーバーが受動的な監視形態で通知を"
-"送信することもできます。Nagios は 1999 年ごろにできました。より新しい監視"
-"サービスもありますが、Nagios は実績豊富なシステム管理ツールです。Nagios"
-
-msgid "Name"
-msgstr "名前"
-
-msgid "Name: devstack"
-msgstr "名前: devstack"
-
-msgid "NeCTAR"
-msgstr "NeCTAR"
-
-msgid "NeCTAR website"
-msgstr "NeCTAR Web サイト"
-
-msgid "NeCTAR-RC GitHub"
-msgstr "NeCTAR-RC GitHub"
-
-msgid "NetApp"
-msgstr "NetApp"
-
-msgid "Network"
-msgstr "ネットワーク"
-
-msgid "Network Configuration"
-msgstr "ネットワーク設定"
-
-msgid "Network Configuration in the Database for nova-network"
-msgstr "nova-network 用データベースにあるネットワーク設定"
-
-msgid "Network Considerations"
-msgstr "ネットワークの考慮事項"
-
-msgid "Network Design"
-msgstr "ネットワーク設計"
-
-msgid "Network I/O"
-msgstr "ネットワーク I/O"
-
-msgid "Network Inspection"
-msgstr "ネットワークの検査"
-
-msgid "Network Topology"
-msgstr "ネットワークトポロジー"
-
-msgid "Network Troubleshooting"
-msgstr "ネットワークのトラブルシューティング"
-
-msgid ""
-"Network configuration is a very large topic that spans multiple areas of "
-"this book. For now, make sure that your servers can PXE boot and "
-"successfully communicate with the deployment server.networksconfiguration of"
-msgstr ""
-"ネットワーク設定は、本書でも複数の箇所で取り上げられている大きいトピックで"
-"す。ここでは、お使いのサーバーが PXE ブートでき、デプロイメントサーバーと正常"
-"に通信できることを確認しておいてください。ネットワーク設定"
-
-msgid "Network deployment model"
-msgstr "ネットワーク構成モデル"
-
-msgid "Network manager"
-msgstr "ネットワークマネージャー"
-
-msgid "Network node"
-msgstr "ネットワークノード"
-
-msgid ""
-"Network nodes are responsible for doing all the virtual networking needed "
-"for people to create public or private networks and uplink their virtual "
-"machines into external networks. Network nodes:"
-msgstr ""
-"ネットワークノードは、ユーザーがパブリックまたはプライベートネットワークを作"
-"成し、仮想マシンを外部ネットワークにアップリンクするために必要なすべての仮想"
-"ネットワーキングを実行する役割を果たします。ネットワークノードは以下のような"
-"操作を実行します。"
-
-msgid "Network traffic is distributed to the compute nodes."
-msgstr "ネットワークトラフィックをコンピュートノード全体に分散できる。"
-
-msgid ""
-"Network troubleshooting can unfortunately be a very difficult and confusing "
-"procedure. 
A network issue can cause a problem at several points in the " -"cloud. Using a logical troubleshooting procedure can help mitigate the " -"confusion and more quickly isolate where exactly the network issue is. This " -"chapter aims to give you the information you need to identify any issues for " -"either nova-network or OpenStack Networking (neutron) " -"with Linux Bridge or Open vSwitch.OpenStack Networking (neutron)troubleshootingLinux Bridgetroubleshootingnetwork " -"troubleshootingtroubleshooting" -msgstr "" -"ネットワークのトラブルシューティングは、残念ながら、非常に難しくややこしい作" -"業です。ネットワークの問題は、クラウドのいくつかの場所で問題となりえます。論" -"理的な問題解決手順を用いることは、混乱の緩和や迅速な切り分けに役立つでしょ" -"う。この章は、nova-network、Linux ブリッジや Open vSwitch " -"を用いた OpenStack Networking (neutron) に関する何らかの問題を識別するために" -"必要となる情報を提供することを目的とします。" - -msgid "Network usage (bandwidth and IP usage)" -msgstr "ネットワーク使用量 (帯域および IP 使用量)" - -msgid "Network: five 10G network ports" -msgstr "ネットワーク: 10G ネットワークポート x 5" - -msgid "Network: four 10G network ports (For future proofing expansion)" -msgstr "ネットワーク: 10G ネットワークポート x 4 (将来を保証する拡張性のため)" - -msgid "Network: two 10G network ports" -msgstr "ネットワーク: 10G のネットワークポート x 2" - -msgid "Networking" -msgstr "ネットワーク" - -msgid "Networking Guide" -msgstr "ネットワークガイド" - -msgid "Networking configuration just for PXE booting" -msgstr "PXE ブート用のネットワーク設定" - -msgid "Networking deployment options" -msgstr "ネットワーク構成オプション" - -msgid "" -"Networking failure is isolated to the VMs running on the affected hypervisor." -msgstr "ネットワーク障害は影響を受けるハイパーバイザーのVMに限定されます。" - -msgid "" -"Networking in OpenStack is a complex, multifaceted challenge. See .compute " -"nodesnetworking" -msgstr "" -"OpenStack でのネットワークは複雑で、多くの課題があります。を参照してください。コンピュートノードネットワーキング" - -msgid "Networking layout" -msgstr "ネットワークのレイアウト" - -msgid "Networking service" -msgstr "ネットワークサービス" - -msgid "Neutron equivalent" -msgstr "Neutronでの実装" - -msgid "Neutron network paths" -msgstr "Neutron ネットワーク経路" - -msgid "New API Versions" -msgstr "新しい API " - -msgid "NewValue" -msgstr "NewValue" - -msgid "Nexenta" -msgstr "Nexenta" - -msgid "" -"Next, create /etc/rsyslog.d/client.conf with the " -"following line:" -msgstr "" -"次に、/etc/rsyslog.d/client.conf を作成して、以下の行を" -"書き込みます。" - -msgid "Next, find the fixed IP entry for that UUID:" -msgstr "次に、そのUUIDから固定IPのエントリーを探します。" - -msgid "" -"Next, manually detach and reattach the volumes, where X is the proper mount " -"point:" -msgstr "" -"次に、ボリュームを手動で切断し、再接続します。ここで X は適切なマウントポイン" -"トです。" - -msgid "Next, migrate them one by one:" -msgstr "次に、それらを一つずつマイグレーションします。" - -msgid "" -"Next, open a new shell to the instance and then ping the external host where " -"tcpdump is running. If the network path to the external " -"server and back is fully functional, you see something like the following:" -msgstr "" -"次に、新しいシェルを開いて tcpdump の動いている外部ホスト" -"へpingを行います。もし外部サーバーとのネットワーク経路に問題がなければ、以下" -"のように表示されます。" - -msgid "" -"Next, physically remove the disk from the server and replace it with a " -"working disk." -msgstr "" -"次に、ディスクを物理的にサーバーから取り外し、正常なディスクと入れ替えます。" - -msgid "Next, redistribute the ring files to the other nodes:" -msgstr "次に、ring ファイルを他のノードに再配布します。" - -msgid "" -"Next, shut down your compute node, perform your maintenance, and turn the " -"node back on. You can reenable the nova-compute service by " -"undoing the previous commands:" -msgstr "" -"続けて、コンピュートノードを停止し、メンテナンスを実行し、ノードを元に戻しま" -"す。先のコマンドを逆に実行することにより、nova-compute サービス" -"を再び有効化できます:" - -msgid "" -"Next, the libvirtd daemon was run on the command line. 
Finally "
-"a helpful error message: it could not connect to d-bus. As ridiculous as it "
-"sounds, libvirt, and thus nova-compute, relies on d-bus and "
-"somehow d-bus crashed. Simply starting d-bus set the entire chain back on "
-"track, and soon everything was back up and running."
-msgstr ""
-"次に、libvirtd デーモンをコマンドラインにおいて実行しました。最"
-"終的に次のような役に立つエラーメッセージが得られました。d-bus に接続できませ"
-"んでした。滑稽に聞こえるかもしれませんが、libvirt、その結果として "
-"nova-compute も D-Bus に依存しており、どういう訳か D-Bus がク"
-"ラッシュしていました。単に D-Bus を起動するだけで一連のプログラムがうまく動く"
-"ようになり、すぐにすべてが元どおり動作するようになりました。"
-
-msgid ""
-"Next, the nova database contains three tables that store usage "
-"information."
-msgstr ""
-"次に nova データベースは 利用情報に関して3つのテーブルを持ってい"
-"ます。"
-
-msgid ""
-"Next, the internal bridge, br-int, contains int-eth1-br, which pairs with phy-eth1-br to connect to the physical "
-"network shown in the previous bridge, patch-tun, which is used "
-"to connect to the GRE tunnel bridge and the TAP devices that connect to the "
-"instances currently running on the system:"
-msgstr ""
-"次に、内部ブリッジ br-int には、int-eth1-br があり"
-"ます。これは phy-eth1-br とペアになり、前のブリッジで示した物理"
-"ネットワークに接続されます。また、GRE トンネルブリッジへの接続に使用される "
-"patch-tun と、システム上で現在動作しているインスタンスに接続さ"
-"れる TAP デバイスもあります:"
-
-msgid ""
-"Next, the packets from either input go through the integration bridge, again "
-"just as on the compute node."
-msgstr ""
-"次に、何かしらの入力パケットは統合ブリッジ経由で送信されます。繰り返します"
-"が、コンピュートノードと同じようなものです。"
-
-msgid ""
-"Next, update the nova database to indicate that all instances that used to "
-"be hosted on c01.example.com are now hosted on c02.example.com:"
-msgstr ""
-"次に、c01.example.com においてホストされていたすべてのインスタンスが、今度は "
-"c02.example.com でホストされることを伝えるために nova データベースを更新しま"
-"す。"
-
-msgid "No DHCP overhead."
-msgstr "DHCPによるオーバーヘッドがありません。"
-
-msgid "Node connectivity"
-msgstr "ノードの接続性"
-
-msgid "Node diagrams"
-msgstr "ノードの図"
-
-msgid "Node type"
-msgstr "ノード種別"
-
-msgid "Node types"
-msgstr "ノードのタイプ"
-
-msgid ""
-"Not all packets have a size of 1500. Running the command "
-"over SSH might only create a single packets less than 1500 bytes. However, "
-"running a command with heavy output, such as requires "
-"several packets of 1500 bytes."
-msgstr ""
-"すべてのパケットサイズが 1500 に収まるわけではない。SSH 経由の "
-" コマンド実行は 1500 バイト未満のサイズのパケット1つで収まる"
-"かもしれない。しかし、 のように多大な出力を行うコマンドを実"
-"行する場合、1500 バイトのパケットが複数必要となる。"
-
-msgid ""
-"Not entirely network specific, but it contains information about the "
-"instance that is utilizing the fixed_ip and optional "
-"floating_ip."
-msgstr ""
-"ネットワーク特有のテーブルではありませんが、fixed_ipと"
-"floating_ipを使っているインスタンスの情報を管理します。"
-
-msgid "Not yet available"
-msgstr "まだ利用不可"
-
-msgid ""
-"Note that the arguments are positional, and the from-port "
-"and to-port arguments specify the allowed local port "
-"range connections. These arguments are not indicating source and destination "
-"ports of the connection. More complex rule sets can be built up through "
-"multiple invocations of nova secgroup-add-rule. For "
-"example, if you want to pass both http and https traffic, do this:"
-msgstr ""
-"引数の順番が決まっていることに注意してください。そして、from-portto-port の引数は許可されるローカルのポート範囲"
-"を指定し、接続の送信元ポートと宛先ポートではないことに注意してください。"
-"nova secgroup-add-rule を複数回呼び出すことで、より複雑な"
-"ルールセットを構成できます。たとえば、http と https の通信を通過させたい場合:"
-
-msgid "Nov 29, 2012"
-msgstr "2012年11月29日"
-
-msgid ""
-"Nova-compute and nova-conductor services, which run on the compute nodes, "
-"are only needed to run services on that node, so availability of those "
-"services is coupled tightly to the nodes that are available. As long as a "
-"compute node is up, it will have the needed services running on top of it."
-msgstr ""
-"コンピュートノード上で実行される nova-compute サービスと nova-conductor サー"
-"ビスは、そのノード上でのみサービスを実行する必要があるので、これらのサービス"
-"の可用性はそのノードの稼働状態と密接に連結しています。コンピュートノードが稼"
-"働している限りは、そのノード上で必要なサービスが実行されます。"
-
-msgid ""
-"Now that you have an OpenStack development environment, you're free to hack "
-"around without worrying about damaging your production deployment. provides a working environment for running "
-"OpenStack Identity, Compute, Block Storage, Image service, the OpenStack "
-"dashboard, and Object Storage as the starting point."
-msgstr ""
-"以上で OpenStack の開発環境を準備できましたので、運用環境にダメージを与えるこ"
-"とを心配せずに自由にハックできます。 は、手始め"
-"として OpenStack Identity、Compute、Block Storage、Image service、dashboard、"
-"Object Storage を実行するための作業環境を提供します。"
-
-msgid ""
-"Now that you see the myriad designs for controlling your cloud, read more "
-"about the further considerations to help with your design decisions."
-msgstr ""
-"クラウドコントローラーの設計には無数の選択肢があることが分かったところで、設"
-"計の決定に役立つさらなる考慮事項をお読みください。"
-
-msgid ""
-"Now try the command from Step 10 again and it succeeds. There are no objects "
-"in the container, so there is nothing to list; however, there is also no "
-"error to report."
-msgstr ""
-"ここで手順 10 のコマンドを再度実行すると、今度は成功します。コンテナーにオブ"
-"ジェクトがないため、一覧には何も表示されません。しかしながら、レポートされる"
-"エラーもありません。"
-
-msgid "Now you can import a previously backed-up database:"
-msgstr "以前にバックアップしたデータベースをインポートします。"
-
-msgid ""
-"Now you can refer to your token on the command line as $TOKEN."
-msgstr ""
-"これで、コマンドラインでトークンを $TOKEN として参照できる"
-"ようになりました。"
-
-msgid "Number of Block Storage snapshots allowed per tenant."
-msgstr "テナントごとのブロックストレージスナップショット数"
-
-msgid "Number of Block Storage volumes allowed per tenant"
-msgstr "テナントごとのブロックストレージボリューム数"
-
-msgid "Number of bytes allowed per injected file path."
-msgstr "injected file のパス長の最大バイト数"
-
-msgid "Number of content bytes allowed per injected file."
-msgstr "injected file あたりの最大バイト数"
-
-msgid ""
-"Number of fixed IP addresses allowed per tenant. This number must be equal "
-"to or greater than the number of allowed instances."
-msgstr ""
-"テナント毎の固定 IP アドレスの最大数。この数はテナント毎の最大インスタンス数"
-"以上にしなければなりません。"
-
-msgid "Number of floating IP addresses allowed per tenant."
-msgstr "テナントごとの最大 Floating IP 数"
-
-msgid "Number of injected files allowed per tenant."
-msgstr "injected file の最大数"
-
-msgid "Number of instance cores allowed per tenant."
-msgstr "テナントごとのインスタンスのコア数"
-
-msgid "Number of instances allowed per tenant."
-msgstr "テナントごとの最大インスタンス数"
-
-msgid "Number of key pairs allowed per user."
-msgstr "ユーザーごとの最大キーペア数"
-
-msgid "Number of metadata items allowed per instance."
-msgstr "インスタンスごとのメタデータ項目数"
-
-msgid "Number of physical cores"
-msgstr "物理コア数"
-
-msgid "Number of rules per security group."
-msgstr "セキュリティグループごとのセキュリティルール数"
-
-msgid "Number of security groups per tenant."
-msgstr "テナントごとのセキュリティグループ数"
-
-msgid "Number of virtual CPUs presented to the instance."
-msgstr "インスタンスに存在する仮想 CPU 数。"
-
-msgid "Number of virtual cores per instance"
-msgstr "インスタンスごとの仮想コア数"
-
-msgid "Number of volume gigabytes allowed per tenant"
-msgstr "テナントごとのボリューム容量の最大値(単位はギガバイト)"
-
-msgid ""
-"OK, so where is the MTU issue coming from? Why haven't we seen this in any "
-"other deployment? What's new in this situation? Well, new data center, new "
-"uplink, new switches, new model of switches, new servers, first time using "
-"this model of servers… so, basically everything was new. Wonderful. 
-"around with raising the MTU at various areas: the switches, the NICs on the "
-"compute nodes, the virtual NICs in the instances, we even had the data "
-"center raise the MTU for our uplink interface. Some changes worked, some "
-"didn't. This line of troubleshooting didn't feel right, though. We shouldn't "
-"have to be changing the MTU in these areas."
-msgstr ""
-"OK。では MTU の問題はどこから来るのか?なぜ我々は他のデプロイでこの問題に遭遇"
-"しなかったのか?この状況は何が新しいのか?えっと、新しいデータセンター、新し"
-"い上位リンク、新しいスイッチ、スイッチの新機種、新しいサーバー、サーバーの新"
-"機種…つまり、基本的に全てが新しいものだった。素晴らしい。我々は様々な領域で "
-"MTU の増加を試してみた。スイッチ、コンピュートノードの NIC、インスタンスの仮"
-"想 NIC、データセンターの上位リンク用のインターフェースの MTU までいじってみ"
-"た。いくつかの変更ではうまくいったが、他はダメだった。しかし、この線の障害対"
-"策は正しくないように感じられた。本来、これらの領域で MTU を変更しなければなら"
-"ないはずがないのだ。"
-
-msgid "OR"
-msgstr "OR"
-
-msgid "Object"
-msgstr "オブジェクトストレージ"
-
-msgid "Object Storage"
-msgstr "オブジェクトストレージ"
-
-msgid "Object Storage cluster internal communications"
-msgstr "Object Storage クラスタ内の通信"
-
-msgid ""
-"Object Storage is very \"chatty\" among servers hosting data—even a small "
-"cluster does megabytes/second of traffic, which is predominantly, “Do you "
-"have the object?”/“Yes I have the object!” Of course, if the answer to the "
-"aforementioned question is negative or the request times out, replication of "
-"the object begins."
-msgstr ""
-"オブジェクトストレージは、データをホストするサーバー間での通信が非常に多く、"
-"小さいクラスターでも毎秒数 MB のトラフィックがあります。その大半は「オブジェ"
-"クトがありますか?」「はい、あります!」というやり取りです。当然、この質問に"
-"対する答えが No の場合、または要求がタイムアウトした場合、オブジェクトの複製"
-"が開始されます。"
-
-msgid ""
-"Object Storage's network patterns might seem unfamiliar at first. Consider "
-"these main traffic flows: objectsstorage decisions andcontainersstorage decisions "
-"andaccount "
-"server"
-msgstr ""
-"オブジェクトストレージのネットワークパターンは、最初は見慣れないかもしれませ"
-"ん。これらのメイントラフィックの流れを見てみましょう。 objectsストレージの決定コンテナーストレージの決定アカウントサーバー"
-
-msgid "Object storage"
-msgstr "オブジェクトストレージ"
-
-msgid "Obtain the UUID of the image:"
-msgstr "イメージの UUID を取得します。"
-
-msgid ""
-"Obtain the UUID of the project with which you want to share your image. "
-"Unfortunately, non-admin users are unable to use the keystone command to do this. The easiest solution is to obtain the UUID "
-"either from an administrator of the cloud or from a user located in the "
-"project."
-msgstr ""
-"イメージを共有したいプロジェクトの UUID を取得します。残念ながら、非管理者"
-"は、これを実行するために keystone コマンドを使用できませ"
-"ん。最も簡単な解決方法は、クラウドの管理者やそのプロジェクトのユーザーから "
-"UUID を教えてもらうことです。"
-
-msgid "Obtain the tenant ID, as follows:"
-msgstr "次のように、テナント ID を取得します。"
-
-msgid ""
-"Obtaining consistent snapshots of Windows VMs is conceptually similar to "
-"obtaining consistent snapshots of Linux VMs, although it requires additional "
-"utilities to coordinate with a Windows-only subsystem designed to facilitate "
-"consistent backups."
-msgstr ""
-"Windows 仮想マシンの整合性あるスナップショットの取得は、Linux マシンのスナッ"
-"プショットの場合と同じようなものです。ただし、整合性のあるバックアップを実現"
-"するために設計された Windows 専用のサブシステムと連携するための追加のユーティ"
-"リティーが必要になります。"
-
-msgid "Oct 12, 2012"
-msgstr "2012年10月12日"
-
-msgid "Oct 16, 2014"
-msgstr "2014年10月16日"
-
-msgid "Oct 17, 2013"
-msgstr "2013年10月17日"
-
-msgid "Oct 2, 2014"
-msgstr "2014年10月2日"
-
-msgid "Oct 21, 2010"
-msgstr "2010年10月21日"
-
-msgid "Oct, 2015"
-msgstr "2015年10月"
-
-msgid "Off Compute Node Storage—Shared File System"
-msgstr "コンピュートノード外のストレージ (共有ファイルシステム)"
-
-msgid "Off compute node storage—shared file system"
-msgstr "コンピュートノード外のストレージ (共有ファイルシステム)"
-
-msgid ""
-"Offers each service's REST API access, where the API endpoint catalog is "
-"managed by the Identity service"
-msgstr ""
-"それぞれのサービスの REST API アクセスを提供します。API エンドポイントカタロ"
-"グは Identity サービスが管理しています。"
-
-msgid ""
-"Often, an extra (such as 1 GB) interface on compute or storage nodes is used "
-"for system administrators or monitoring tools to access the host instead of "
-"going through the public interface."
-msgstr ""
-"システム管理者や監視ツールがパブリックインターフェース経由の代わりにコン"
-"ピュートノードやストレージノードのアクセスに使用するための追加インターフェー"
-"ス(1GBなど)"
-
-msgid ""
-"Oisin Feeley read it, made some edits, and provided emailed feedback right "
-"when we asked."
-msgstr ""
-"Oisin Feeley は、このマニュアルを読んで、いくつかの編集をし、私たちが問い合わ"
-"せをした際には、E-mailでのフィードバックをくれました。"
-
-msgid "On Compute Node Storage—Nonshared File System"
-msgstr "コンピュートノード上のストレージ (非共有ファイルシステム)"
-
-msgid "On Compute Node Storage—Shared File System"
-msgstr "コンピュートノード上のストレージ (共有ファイルシステム)"
-
-msgid ""
-"On Wednesday night we had a fun happy hour with the Austin OpenStack Meetup "
-"group and Racker Katie Schmidt took great care of our group."
-msgstr ""
-"水曜日の夜、オースチン OpenStack ミートアップグループと楽しく幸せな時間を過ご"
-"し、Racker の Katie Schmidt は私たちのグループの面倒をとてもよく見てくれまし"
-"た。"
-
-msgid ""
-"On all distributions, you must perform some final tasks to complete the "
-"upgrade process.upgradingfinal steps"
-msgstr ""
-"すべてのディストリビューションにおいて、アップグレードプロセスを完了するため"
-"に、いくつかの最終作業を実行する必要があります。upgradingfinal steps"
-
-msgid "On all nodes:"
-msgstr "すべてのノード:"
-
-msgid "On compute node storage—nonshared file system"
-msgstr "コンピュートノード上のストレージ (非共有ファイルシステム)"
-
-msgid "On compute node storage—shared file system"
-msgstr "コンピュートノード上のストレージ (共有ファイルシステム)"
-
-msgid ""
-"On compute nodes and nodes running nova-network, use the "
-"following command to see information about interfaces, including information "
-"about IPs, VLANs, and whether your interfaces are up:ip a commandinterface states, checkingtroubleshootingchecking interface states"
-msgstr ""
-"コンピュートノードおよび nova-network を実行しているノード"
-"において、以下のコマンドを使用して、IP、VLAN、起動状態などのインターフェース"
-"に関する情報を参照します。ip a "
-"commandinterface states, checkingtroubleshootingchecking interface "
-"states"
-
-msgid ""
-"On each host that will house block storage, an administrator must initially "
-"create a volume group dedicated to Block Storage volumes. Blocks are created "
-"from LVM logical volumes."
-msgstr ""
-"ブロックストレージを収容する各ホストでは、管理者は事前にブロックストレージ専"
-"用のボリュームグループを作成しておく必要があります。ブロックストレージはLVM論"
-"理ボリュームから作られます。"
-
-msgid ""
-"On my last day in Kelowna, I was in a conference call from my hotel. In the "
-"background, I was fooling around on the new cloud. I launched an instance "
-"and logged in. Everything looked fine. Out of boredom, I ran and all of the sudden the instance locked up."
-msgstr ""
-"ケロウナの最終日、私はホテルから電話会議に参加していた。その裏で、私は新しい"
-"クラウドをいじっていた。私はインスタンスを1つ起動し、ログインした。全ては正"
-"常に思えた。退屈しのぎに、私は を実行したところ、突然そのイ"
-"ンスタンスがロックアップしてしまった。"
-
-msgid "On the command line, do this:"
-msgstr "コマンドラインで、このようにします。"
-
-msgid "On the compute node, add the following to your NRPE configuration:"
-msgstr "コンピュートノード上では、次のようなNRPE設定を追加します。"
-
-msgid "On the compute node:"
-msgstr "コンピュートノード上"
-
-msgid "On the external server:"
-msgstr "外部サーバー上"
-
-msgid ""
-"On the first day, we filled white boards with colorful sticky notes to start "
-"to shape this nebulous book about how to architect and operate clouds:"
-""
-msgstr ""
-"最初の日に、アイデアを色とりどりのポストイットでホワイトボードいっぱいに書き"
-"出し、クラウドを設計し運用するという漠然とした話題を扱った本の作成を開始しま"
-"した。"
-
-msgid "On the instance:"
-msgstr "インスタンス上"
-
-msgid ""
-"On the integration bridge, networks are distinguished using internal VLANs "
-"regardless of how the networking service defines them. This allows instances "
-"on the same host to communicate directly without transiting the rest of the "
-"virtual, or physical, network. These internal VLAN IDs are based on the "
-"order they are created on the node and may vary between nodes. These IDs are "
-"in no way related to the segmentation IDs used in the network definition and "
-"on the physical wire."
-msgstr ""
-"統合ブリッジにおいて、ネットワークサービスがどのように定義されているかによら"
-"ず、ネットワークは内部 VLAN を使用して区別されます。これにより、同じホストに"
-"あるインスタンスが、残りの仮想ネットワークや物理ネットワークを経由することな"
-"く直接通信できるようになります。これらの内部 VLAN ID は、ノードにおいて作成さ"
-"れた順番に基づき、ノード間で異なる可能性があります。これらの ID は、ネット"
-"ワーク定義および物理結線において使用されるセグメント ID にまったく関連しませ"
-"ん。"
-
-msgid ""
-"Once access to a flavor has been restricted, no other projects besides the "
-"ones granted explicit access will be able to see the flavor. This includes "
-"the admin project. Make sure to add the admin project in addition to the "
-"original project."
-msgstr ""
-"フレーバーへのアクセスが制限されると、明示的にアクセスを許可されたプロジェク"
-"ト以外は、フレーバーを参照できなくなります。これには admin プロジェクトも含ま"
-"れます。元のプロジェクトに加えて、きちんと admin プロジェクトを追加してくださ"
-"い。"
-
-msgid ""
-"Once allocated, a floating IP can be assigned to running instances from the "
-"dashboard either by selecting Associate Floating IP "
-"from the actions drop-down next to the IP on the Floating IPs tab of the Access & Security page or by "
-"making this selection next to the instance you want to associate it with on "
-"the Instances page. The inverse action, "
-"Dissociate Floating IP, is available from the "
-"Floating IPs tab of the Access & "
-"Security page and from the Instances page."
-msgstr ""
-"一度確保すると、Floating IP を実行中のインスタンスに割り当てることができま"
-"す。ダッシュボードでは、アクセスとセキュリティページの "
-"Floating IPs タブにある IP の隣にある、アクションドロッ"
-"プダウンからFloating IP の割り当てを選択することにより"
-"実行できます。または、インスタンスページにおいて割り当て"
-"たいインスタンスの隣にある、リストからこれを選択することにより実行できます。"
-"逆の動作 Floating IP の割り当て解除 は、アクセ"
-"スとセキュリティページのFloating IPs タブから、"
-"およびインスタンスページからも実行できます。"
-
-msgid "Once the files are restored, start everything back up:"
-msgstr "ファイルをリストア後、サービスを起動します。"
-
-msgid ""
-"Once the volume has been frozen, do not attempt to read from or write to the "
-"volume, as these operations hang. The operating system stops every I/O "
-"operation and any I/O attempts are delayed until the file system has been "
-"unfrozen."
-msgstr ""
-"ボリュームがフリーズ状態になったら、ボリュームの読み書き命令が止まってしまう"
-"ので、ボリュームの読み書きを行わないようにしてください。オペレーティングシス"
-"テムがすべての I/O 操作を停止し、すべての I/O 試行は、ファイルシステムのフ"
-"リーズが解除されるまで遅延させられます。"
-
-msgid ""
-"Once you have a Launchpad account, reporting a bug is as simple as "
-"identifying the project or projects that are causing the issue. Sometimes "
-"this is more difficult than expected, but those working on the bug triage "
-"are happy to help relocate issues if they are not in the right place "
-"initially:"
-msgstr ""
-"Launchpad のアカウントを作った後は、バグを報告するのは、問題の原因となったプ"
-"ロジェクトを特定するのと同じくらい簡単です。場合によっては、思っていたよりも"
-"難しいこともありますが、最初のバグ報告が適切な場所でなかったとしても、バグの"
-"分類を行う人がバグ報告を適切なプロジェクトに割り当て直してくれます。"
-
-msgid ""
-"Once you have both pieces of information, run the glance "
-"command:"
-msgstr "情報がそろったら、glance コマンドを実行します。"
-
-msgid ""
-"Once you have completed the inspection, unmount the mount point and release "
-"the qemu-nbd device:"
-msgstr ""
-"調査を完了すると、マウントポイントをアンマウントし、qemu-nbd デバイスを解放し"
-"ます。"
-
-msgid ""
-"Once you have issued the fsfreeze command, it is safe to "
-"perform the snapshot. For example, if your instance was named mon-"
-"instance and you wanted to snapshot it to an image named "
-"mon-snapshot, you could now run the following:"
-msgstr ""
-"fsfreeze コマンドを発行すると、スナップショットを実行して"
-"も安全です。たとえば、インスタンスが mon-instance という名"
-"前で、mon-snapshot という名前のイメージにスナップショット"
-"を取得したければ、以下のとおり実行します:"
-
-msgid ""
-"Once you have reproduced the issue (or are 100 percent confident that this "
-"is indeed a valid bug) and have permissions to do so, set:"
-msgstr ""
-"問題を再現できた場合 (もしくは、これが本当に有効なバグであると 100% 確信でき"
-"る場合) で、設定する権限を持っているならば、次のように設定します。"
-
-msgid ""
-"Once you mount the disk file, you should be able to access it and treat it "
-"as a collection of normal directories with files and a directory structure. "
-"However, we do not recommend that you edit or touch any files because this "
-"could change the access control lists (ACLs) that are used to determine "
-"which accounts can perform what operations on files and directories. "
-"Changing ACLs can make the instance unbootable if it is not already."
-"access control list (ACL)"
-msgstr ""
-"ディスクファイルをマウントすると、それにアクセスでき、ファイルとディレクトリ"
-"構造を持つ通常のディレクトリのように取り扱えます。しかしながら、どのファイル"
-"の編集も操作もしないことをお薦めします。なぜなら、どのアカウントがファイルや"
-"ディレクトリーにどの操作を実行できるのかを判断するために使用されるアクセス制"
-"御リスト (ACL) を変更してしまう可能性があるからです。ACL を変更すると、インス"
-"タンスが起動できなくなる場合があります。access "
-"control list (ACL)"
-
-msgid ""
-"Once you've determined which namespace you need to work in, you can use any "
-"of the debugging tools mentioned earlier by prefixing the command with "
-"ip netns exec <namespace>. For example, to see what "
-"network interfaces exist in the first qdhcp namespace returned above, do "
-"this:"
-msgstr ""
-"作業する必要のある名前空間を決めると、コマンドの前に ip netns exec "
-"<namespace> を付けることにより、前に言及したデバッグツールを"
-"すべて使用できます。例えば、上で返された最初の qdhcp 名前空間に存在するネット"
-"ワークインターフェースを参照する場合、このように実行します。"
-
-msgid ""
-"Once you've gathered this information, creating the user in the dashboard is "
-"just another web form similar to what we've seen before and can be found by "
-"clicking the Users link in the Identity navigation bar and then clicking the "
-"Create User button at the top right."
-msgstr ""
-"一度この情報を収集すると、ダッシュボードでのユーザーの作成は、これまでに見て"
-"きた他の Web フォームと同じです。\"ユーザー管理\" ナビゲーションバーの \"ユー"
-"ザー\" リンクにあります。そして、右上にある \"ユーザーの作成\" ボタンをクリッ"
-"クします。"
-
-msgid ""
-"One choice that always comes up is whether to virtualize. Some services, "
-"such as nova-compute, swift-proxy and swift-"
-"object servers, should not be virtualized. However, control servers "
-"can often be happily virtualized—the performance penalty can usually be "
-"offset by simply running more of the service."
-msgstr ""
-"仮想化するかどうかについてはいつも問題になります。nova-compute、"
-"swift-proxy、swift-object といったいくつかのサーバーは仮想化すべき"
-"ではありません。しかし、制御系のサーバーは問題なく仮想化できる場合が多く、そ"
-"れによる性能のペナルティは、単純により多くのサービスを動かす事で相殺する事が"
-"できます。"
-
-msgid ""
-"One cloud controller acted as a gateway to all compute nodes. VlanManager "
-"was used for the network config. This means that the cloud controller and "
-"all compute nodes had a different VLAN for each OpenStack project. We used "
-"the -s option of to change the packet size. We watched as "
-"sometimes packets would fully return, sometimes they'd only make it out and "
-"never back in, and sometimes the packets would stop at a random point. We "
-"changed to start displaying the hex dump of the packet. We "
-"pinged between every combination of outside, controller, compute, and "
-"instance."
-msgstr ""
-"1つのクラウドコントローラーが全コンピュートノードのゲートウェイの役割を果た"
-"していた。ネットワーク設定には VlanManager が使われていた。これは、クラウドコ"
-"ントローラーと全コンピュートノードで、各 OpenStack プロジェクトが異なる VLAN "
-"を持つことを意味する。パケットサイズ変更のため、 の -s オプ"
-"ションを使用していた。パケットが全て戻ってくる時もあれば、パケットが出ていっ"
-"たきり全く戻って来ない時もあれば、パケットはランダムな場所で止まってしまう時"
-"もある、という状況だった。 を変更し、パケットの16進ダンプを表"
-"示するようにした。外部、コントローラー、コンピュート、インスタンスのあらゆる"
-"組み合わせの間で ping を実行した。"
-
-msgid ""
-"One common networking problem is that an instance boots successfully but is "
-"not reachable because it failed to obtain an IP address from dnsmasq, which "
-"is the DHCP server that is launched by the nova-network "
-"service.DHCP (Dynamic Host "
-"Configuration Protocol)debuggingtroubleshootingnova-network DHCP"
-msgstr ""
-"よくあるネットワークの問題に、インスタンスが起動しているにも関わらず、dnsmasq"
-"からのIPアドレス取得に失敗し、到達できないという現象があります。dnsmasqは"
-"nova-networkサービスから起動されるDHCPサーバです。"
-"DHCP (Dynamic Host Configuration "
-"Protocol)debuggingtroubleshootingnova-network "
-"DHCP"
-
-msgid ""
-"One great, although very in-depth, way of troubleshooting network issues is "
-"to use tcpdump. We recommend using tcpdump at several points along the network path to correlate where a "
-"problem might be. If you prefer working with a GUI, either live or by using "
-"a tcpdump capture, do also check out Wireshark."
-"tcpdump"
-msgstr ""
-"ネットワーク問題の解決を徹底的に行う方法のひとつは、tcpdumpです。tcpdumpを使い、ネットワーク経路上の数点、問"
-"題のありそうなところから情報を収集することをおすすめします。もしGUIが好みであ"
-"れば、Wiresharkを試してみてはいかがでしょう。tcpdump"
-
-msgid ""
-"One interesting example is modifying the table of images and the owner of "
-"that image. This can be easily done if you simply display the unique ID of "
-"the owner. Image servicedatabase queriesThis example goes "
-"one step further and displays the readable name of the owner:"
-msgstr ""
-"興味深い例の一つは、イメージとそのイメージの所有者の表の表示内容を変更するこ"
-"とです。これは、所有者のユニーク ID を表示するようにするだけで実現できます。"
-"Image serviceデー"
-"タベースクエリーこの例はさらに一歩進め、所有者の読み"
-"やすい形式の名前を表示します:"
-
-msgid ""
-"One key element of systems administration that is often overlooked is that "
-"end users are the reason systems administrators exist. Don't go the BOFH "
-"route and terminate every user who causes an alert to go off. Work with "
-"users to understand what they're trying to accomplish and see how your "
-"environment can better assist them in achieving their goals. Meet your users' "
-"needs by organizing your users into projects, applying policies, managing "
-"quotas, and working with them.systems "
-"administrationuser management"
-msgstr ""
-"システム管理の見過ごされがちな大事な要素の一つに、エンドユーザのためにシステ"
-"ム管理者が存在するという点があります。BOFH (Bastard Operator From Hell; 「地"
-"獄から来た最悪の管理者」) の道に入って、問題の原因となっているユーザーを全員"
-"停止させるようなことはしないでください。ユーザーがやりたいことを一緒になって"
-"理解し、どうするとあなたの環境がユーザーが目的を達成するのにもっと支援できる"
-"かを見つけてください。ユーザーをプロジェクトの中に組織して、ポリシーを適用し"
-"て、クォータを管理して、彼らと一緒に作業することにより、ユーザーのニーズを満"
-"たしてください。systems "
-"administrationuser management"
-
-msgid ""
-"One last test is to launch a second instance and see whether the two "
-"instances can ping each other. If they can, the issue might be related to "
-"the firewall on the compute node.path "
-"failurestroubleshootingdetecting path failures"
-msgstr ""
-"最後のテストは、2 つ目のインスタンスを起動して、2 つのインスタンスがお互いに "
-"ping できることを確認することです。もしできる場合、問題はコンピュートノードの"
-"ファイアウォールに関連するものでしょう。path failurestroubleshootingdetecting path failures"
-
-msgid ""
-"One morning, a compute node failed to run any instances. The log files were "
-"a bit vague, claiming that a certain instance was unable to be started. This "
-"ended up being a red herring because the instance was simply the first "
-"instance in alphabetical order, so it was the first instance that "
-"nova-compute would touch."
-msgstr ""
-"ある朝、あるコンピュートノードでインスタンスの実行がすべて失敗するようになり"
-"ました。ログファイルがすこしあいまいでした。特定のインスタンスが起動できな"
-"かったことを示していました。これは最終的に偽の手掛かりであることがわかりまし"
-"た。単にそのインスタンスがアルファベット順で最初のインスタンスだったので、"
-"nova-compute が最初に操作したのがそのインスタンスだったとい"
-"うだけでした。"
-
-msgid ""
-"One of the long-time complaints surrounding OpenStack Networking was the "
-"lack of high availability for the layer 3 components. The Juno release "
-"introduced Distributed Virtual Router (DVR), which aims to solve this "
-"problem."
-msgstr ""
-"OpenStack Networking を長く取り巻く不満の 1 つは、L3 コンポーネントの高可用性"
-"の不足でした。Juno リリースは、これを解決することを目指した、分散仮想ルー"
-"ター (DVR) を導入しました。"
-
-msgid ""
-"One of the most complex aspects of an OpenStack cloud is the networking "
-"configuration. You should be familiar with concepts such as DHCP, Linux "
-"bridges, VLANs, and iptables. You must also have access to a network "
-"hardware expert who can configure the switches and routers required in your "
-"OpenStack cloud."
-msgstr ""
-"OpenStack クラウドの最も複雑な点の一つにネットワーク設定があります。DHCP、"
-"Linux ブリッジ、VLAN、iptables といった考え方をよく理解していなければなりませ"
-"ん。OpenStack クラウドで必要となるスイッチやルータを設定できるネットワーク"
-"ハードウェアの専門家と話をする必要もあります。"
-
-msgid ""
-"One of the most requested features since OpenStack began (for components "
-"other than Object Storage, which tends to \"just work\"): easier upgrades. "
-"In all recent releases internal messaging communication is versioned, "
-"meaning services can theoretically drop back to backward-compatible "
-"behavior. This allows you to run later versions of some components, while "
-"keeping older versions of others."
-msgstr ""
-"より簡単なアップグレードは、OpenStack の開始以来、もっとも要望されている機能"
-"の 1 つです (「そのまま動く」傾向がある Object Storage 以外のコンポーネントに"
-"ついて)。最近のリリースではすべて、内部メッセージ通信がバージョン付けされて"
-"います。つまり、サービスは理論的には後方互換の動作に戻ることができます。これ"
-"により、他のコンポーネントで古いバージョンを残しながら、いくつかのコンポーネ"
-"ントで新しいバージョンを実行できるようになります。"
-
-msgid ""
-"One way to plan for cloud controller or storage proxy maintenance is to "
-"simply do it off-hours, such as at 1 a.m. or 2 a.m. This strategy affects "
-"fewer users. 
If your cloud controller or storage proxy is too important to " -"have unavailable at any point in time, you must look into high-availability " -"options.cloud controllersplanned maintenance ofmaintenance/debuggingcloud " -"controller planned maintenance" -msgstr "" -"クラウドコントローラーやストレージプロキシのメンテナンスを計画する一つの方法" -"は、単に午前 1 時や 2 時のような利用の少ない時間帯に実行することです。この戦" -"略はあまり多くのユーザーに影響を与えません。クラウドコントローラーやストレー" -"ジプロキシが、いかなる時間帯においても、サービスが利用できないことによる影響" -"が大きければ、高可用性オプションについて検討する必要があります。cloud controllersplanned " -"maintenance ofmaintenance/debuggingcloud controller " -"planned maintenance" - -msgid "" -"Open a bug in Launchpad and mark it as a \"security bug.\" This makes the " -"bug private and accessible to only the Vulnerability Management Team." -msgstr "" -"Launchpad でバグ報告を行い、「security bug」のマークを付ける。マークを付ける" -"と、そのバグは一般には公開されなくなり、脆弱性管理チームだけが参照できるよう" -"になります。" - -msgid "" -"Open vSwitch, as used in the previous OpenStack Networking examples is a " -"full-featured multilayer virtual switch licensed under the open source " -"Apache 2.0 license. Full documentation can be found at the project's website. In practice, given " -"the preceding configuration, the most common issues are being sure that the " -"required bridges (br-int, br-tun, and br-ex) exist and have the proper ports connected to them.Open vSwitchtroubleshootingtroubleshootingOpen vSwitch" -msgstr "" -"前の OpenStack Networking の例で使用されていたように、Open vSwitch は、オープ" -"ンソースの Apache 2.0 license にてライセンスされている、完全な機能を持つマル" -"チレイヤー仮想スイッチです。ドキュメント全体はプロジェクトの Web サイトにあります。実際のところ、" -"前述の設定を用いた場合、最も一般的な問題は、必要となるブリッジ (br-" -"intbr-tunbr-ex) が存在し、それらに接続さ" -"れる適切なポートを持つことを確認することです。Open vSwitchtroubleshootingtroubleshootingOpen vSwitch" - -msgid "OpenStack" -msgstr "OpenStack" - -msgid "OpenStack API Guide" -msgstr "OpenStack API ガイド" - -msgid "OpenStack Admin User Guide" -msgstr "OpenStack 管理ユーザーガイド" - -msgid "OpenStack Block Storage" -msgstr "OpenStack Block Storage" - -msgid "OpenStack Block Storage (cinder)" -msgstr "OpenStack Block Storage (cinder)" - -msgid "" -"OpenStack Block Storage - Updating the Block Storage service only requires " -"restarting the service." -msgstr "" -"OpenStack Block Storage - Block Storage サービスの更新は、サービスの再起動の" -"みを必要とします。" - -msgid "" -"OpenStack Block Storage also allows creating snapshots of volumes. Remember " -"that this is a block-level snapshot that is crash consistent, so it is best " -"if the volume is not connected to an instance when the snapshot is taken and " -"second best if the volume is not in use on the instance it is attached to. " -"If the volume is under heavy use, the snapshot may have an inconsistent file " -"system. In fact, by default, the volume service does not take a snapshot of " -"a volume that is attached to an image, though it can be forced to. 
To take a " -"volume snapshot, either select Create Snapshot from the " -"actions column next to the volume name on the dashboard volume page, or run " -"this from the command line:" -msgstr "" -"OpenStack Block Storage では、ボリュームのスナップショットを作成することもで" -"きます。これはブロックレベルのスナップショットであることを覚えておいてくださ" -"い。これはクラッシュに対する整合性があります。そのため、スナップショットが取" -"得されるとき、ボリュームがインスタンスに接続されていないことが最良です。ボ" -"リュームが接続されたインスタンスにおいて使用されていなければ、次に良いです。" -"ボリュームが高負荷にある場合、スナップショットによりファイルシステムの不整合" -"が起こる可能性があります。実際、デフォルト設定では、Volume Service はイメージ" -"に接続されたボリュームのスナップショットを取得しません。ただし、強制的に実行" -"することができます。ボリュームのスナップショットを取得するには、ダッシュボー" -"ドの \"ボリューム” ページにおいて、ボリューム名の隣にあるアクション項目から" -"スナップショットの作成を選択します。または、コマンドライ" -"ンから次のようにします:" - -msgid "OpenStack Cloud Administrator Guide" -msgstr "OpenStack クラウド管理者ガイド" - -msgid "OpenStack Cloud Computing Cookbook" -msgstr "OpenStack Cloud Computing Cookbook" - -msgid "OpenStack Compute (nova)" -msgstr "OpenStack Compute (nova)" - -msgid "" -"OpenStack Compute - Edit the configuration file and restart the service." -msgstr "OpenStack Compute - 設定ファイルを編集して、サービスを再起動します。" - -msgid "" -"OpenStack Compute cells are designed to allow running the cloud in a " -"distributed fashion without having to use more complicated technologies, or " -"be invasive to existing nova installations. Hosts in a cloud are partitioned " -"into groups called cells. Cells are configured in a " -"tree. The top-level cell (\"API cell\") has a host that runs the nova-" -"api service, but no nova-compute services. Each child " -"cell runs all of the other typical nova-* services found in a " -"regular installation, except for the nova-api service. Each " -"cell has its own message queue and database service and also runs nova-" -"cells, which manages the communication between the API cell and child " -"cells.scalingcells and regionscellscloud segregationregion" -msgstr "" -"OpenStack Compute のセルによって、より複雑な技術を持ち込むことなしに、また既" -"存のNovaシステムに悪影響を与えることなしに、クラウドを分散された環境で運用す" -"ることができます。1つのクラウドの中のホストは、セル と" -"呼ばれるグループに分割されます。セルは、木構造に構成されてます。最上位のセル " -"(「API セル」) はnova-apiサービスを実行するホストを持ちますが、" -"nova-compute サービスを実行するホストは持ちません。それぞれの子" -"セルは、nova-apiサービス以外の、普通のNovaシステムに見られる他の" -"すべての典型的な nova-* サービスを実行します。それぞれのセルは自" -"分のメッセージキューとデータベースサービスを持ち、またAPIセルと子セルの間の通" -"信を制御する nova-cells サービスを実行します。スケーリングセルとリージョンセルクラウド分割リージョン" - -msgid "" -"OpenStack Compute uses an SQL database to store and retrieve stateful " -"information. MySQL is the popular database choice in the OpenStack community." -"databasesdesign " -"considerationsdesign considerationsdatabase choice" -msgstr "" -"OpenStackコンピュートはステートフルな情報を保存、取得するためにSQLデータベー" -"スを使用します。MySQLはOpenStackコミュニティでポピュラーな選択です。" -"データベース設計" -"上の考慮事項設" -"計上の考慮事項データベースの選択" - -msgid "" -"OpenStack Compute with nova-network provides predefined " -"network deployment models, each with its own strengths and weaknesses. The " -"selection of a network manager changes your network topology, so the choice " -"should be made carefully. You also have a choice between the tried-and-true " -"legacy nova-network settings or the neutron project for OpenStack Networking. 
Both offer "
-"networking for launched instances with different implementations and "
-"requirements.networksdeployment optionsnetworksnetwork managersnetwork designnetwork topologydeployment options"
-msgstr ""
-" nova-networkを使用したOpenStackコンピュートはいくつかのあ"
-"らかじめ定義されたネットワークの実装モデルを提供しますが、それぞれ強みと弱み"
-"があります。ネットワークマネージャの選択はあなたのネットワークトポロジーを変"
-"更するので、慎重に選択するべきです。また、実証済みでレガシーなnova-"
-"networkによる設定か、OpenStackネットワーキングのためのneutronプロジェクトを採用するか決定する必要があり"
-"ます。どちらも、起動されたインスタンスにネットワークを提供しますが、実装と要"
-"件が異なります。ネットワーク"
-"デプロイオプション"
-"ネットワークネットワークマネージャー"
-"ネットワークデザインネットワークトポロジーデプロイ"
-"オプション"
-
-msgid "OpenStack Compute, including networking components."
-msgstr "OpenStack Compute、ネットワークコンポーネントを含む。"
-
-msgid "OpenStack Configuration Reference"
-msgstr "OpenStack 設定レファレンス"
-
-msgid "OpenStack End User Guide"
-msgstr "OpenStack エンドユーザーガイド"
-
-msgid "OpenStack Foundation"
-msgstr "OpenStack Foundation"
-
-msgid "OpenStack Guides"
-msgstr "OpenStack の各種ガイド"
-
-msgid "OpenStack High Availability Guide"
-msgstr "OpenStack 高可用性ガイド"
-
-msgid ""
-"OpenStack Identity - Clear any expired tokens before synchronizing the "
-"database."
-msgstr ""
-"OpenStack Identity - データベースの同期前に期限切れのトークンを削除します。"
-
-msgid ""
-"OpenStack Identity provides authentication decisions and user attribute "
-"information, which is then used by the other OpenStack services to perform "
-"authorization. The policy is set in the policy.json "
-"file. For information on how to "
-"configure these, see .Identityauthentication decisionsIdentityplug-in support"
-msgstr ""
-"OpenStack Identity は、認証の判定とユーザーの属性情報を提供し、これらは他の "
-"OpenStack サービスが認可を行うために使用します。ポリシーは policy."
-"json で記述されます。これらを設定するための情報については、 を参照"
-"してください。Identity認証の判定Identityプラグインサポート"
-
-msgid ""
-"OpenStack Identity supports different plug-ins for authentication decisions "
-"and identity storage. Examples of these plug-ins include:"
-msgstr ""
-"OpenStack Identity は、認証の判定と ID 情報の保存のために種々のプラグインをサ"
-"ポートしています。これらのプラグインの例には以下があります。"
-
-msgid "OpenStack Image service"
-msgstr "OpenStack Image service"
-
-msgid "OpenStack Installation Guides"
-msgstr "OpenStack インストールガイド"
-
-msgid ""
-"OpenStack Logical Architecture ()"
-msgstr ""
-"OpenStack 論理アーキテクチャー ()"
-
-msgid "OpenStack Networking"
-msgstr "OpenStack Networking"
-
-msgid ""
-"OpenStack Networking - Edit the configuration file and restart the service."
-msgstr ""
-"OpenStack Networking - 設定ファイルを編集して、サービスを再起動します。"
-
-msgid ""
-"OpenStack Networking has many more degrees of freedom than nova-"
-"network does because of its pluggable back end. It can be "
-"configured with open source or vendor proprietary plug-ins that control "
-"software defined networking (SDN) hardware or plug-ins that use Linux native "
-"facilities on your hosts, such as Open vSwitch or Linux Bridge.troubleshootingOpenStack traffic"
-msgstr ""
-"OpenStack Networking は、バックエンドをプラグインできるので、nova-"
-"network よりも自由度が大きいです。SDN ハードウェアを制御するオープ"
-"ンソースやベンダー製品のプラグイン、ホストで動作する Open vSwitch や Linux "
-"Bridge などの Linux ネイティブの機能を使用するプラグインを用いて設定できま"
-"す。troubleshootingOpenStack traffic"
-
-msgid ""
-"OpenStack Networking offers sophisticated networking functionality, "
-"including Layer 2 (L2) network segregation and provider networks."
-msgstr ""
-"OpenStack Networking は、レイヤー 2 (L2) ネットワークの分離やプロバイダーネッ"
-"トワークを含む高度なネットワーク機能を提供します。"
-
-msgid ""
-"OpenStack Networking with neutron and OpenStack Compute "
-"with nova-network have the ability to assign multiple NICs to instances. For "
-"nova-network this can be done on a per-request basis, with each additional "
-"NIC using up an entire subnet or VLAN, reducing the total number of "
-"supported projects.MultiNicnetwork designnetwork topologymulti-NIC "
-"provisioning"
-msgstr ""
-"neutron を用いた OpenStack Networking と nova-network を用"
-"いた OpenStack Compute は、インスタンスに複数の NIC を割り当てる機能を持ちま"
-"す。nova-network の場合、これはリクエスト単位で実行できますが、追加の NIC ご"
-"とにサブネットや VLAN を丸ごと 1 つ消費するため、サポートできるプロジェクトの"
-"総数が減ります。MultiNicnetwork designnetwork topologymulti-NIC "
-"provisioning"
-
-msgid ""
-"OpenStack Networking's ovs-agent, l3-agent, dhcp-agent, and metadata-agent services run on the network nodes, as lsb "
-"resources inside of Pacemaker. This means that in the case of network node "
-"failure, services are kept running on another node. Finally, the "
-"ovs-agent service is also run on all compute nodes, and "
-"in case of compute node failure, the other nodes will continue to function "
-"using the copy of the service running on them."
-msgstr ""
-"OpenStack Networking の ovs-agent、l3-agent、dhcp-agent、および metadata-agent のサービスは、ネットワークノード上で Pacemaker 内の lsb リソースとして稼働します。これは、ネットワークノードに障害が発生した"
-"場合にサービスが別のノードで稼働し続けることを意味します。最後に "
-"ovs-agent サービスも全コンピュートノードで稼働し、コン"
-"ピュートノードに障害が発生した場合に、その他のノードが、そこで実行されている"
-"サービスのコピーを使用して機能し続けます。"
-
-msgid "OpenStack Object Storage"
-msgstr "OpenStack オブジェクトストレージ"
-
-msgid "OpenStack Object Storage (swift)"
-msgstr "OpenStack Object Storage (swift)"
-
-msgid ""
-"OpenStack Object Storage provides a highly scalable, highly available "
-"storage solution by relaxing some of the constraints of traditional file "
-"systems. In designing and procuring for such a cluster, it is important to "
-"understand some key concepts about its operation. Essentially, this type of "
-"storage is built on the idea that all storage hardware fails, at every "
-"level, at some point. Infrequently encountered failures that would hamstring "
-"other storage systems, such as issues taking down RAID cards or entire "
-"servers, are handled gracefully with OpenStack Object Storage.scalingObject Storage and"
-msgstr ""
-"OpenStack Object Storage は、従来のファイルシステムの制約の一部を緩和すること"
-"で、高可用性かつ高拡張性のストレージソリューションを提供します。このようなク"
-"ラスターの設計、調達には、操作に関する主なコンセプトを理解することが重要で"
-"す。基本的に、このタイプのストレージハードウェアはすべて、どこかの段階で、ど"
-"のレベルであっても故障するというコンセプトをベースに、この種類のストレージは"
-"構築されています。RAID カードやサーバー全体の停止など、他のストレージシステム"
-"であれば機能不全に陥るような、まれにしか遭遇しない障害も、OpenStack Object "
-"Storage では滞りなく処理されます。スケーリングオブジェクトストレージ"
-
-msgid ""
-"OpenStack Object Storage, known as swift when reading the code, is based on "
-"the Python Paste "
-"framework. The best introduction to its architecture is A Do-It-Yourself "
-"Framework. 
Because of the swift project's use of this framework, you " -"are able to add features to a project by placing some custom code in a " -"project's pipeline without having to change any of the core code.Paste frameworkPythonswiftswift middlewareObject Storagecustomization ofcustomizationObject StorageDevStackcustomizing Object Storage (swift)" -msgstr "" -"OpenStack Object Storage は、コード参照時に swift としても知られ、Python " -"Paste フレームワークに基" -"づいています。そのアーキテクチャーは、A Do-It-Yourself Framework から始" -"めると最も良いでしょう。swift プロジェクトはこのフレームワークを使用している" -"ので、コアのコードを変更することなく、プロジェクトのパイプラインにカスタム" -"コードをいくつか配置することにより、プロジェクトに機能を追加できます。" -"Paste frameworkPythonswiftswift middlewareObject Storagecustomization ofcustomizationObject StorageDevStackcustomizing Object " -"Storage (swift)" - -msgid "OpenStack Operations Guide" -msgstr "OpenStack 運用ガイド" - -msgid "OpenStack Ops Guide" -msgstr "OpenStack 運用ガイド" - -msgid "OpenStack Orchestration" -msgstr "OpenStack Orchestration" - -msgid "OpenStack Security Guide" -msgstr "OpenStack セキュリティガイド" - -msgid "OpenStack Shared File System Storage (manila)" -msgstr "OpenStack Shared File System Storage (manila)" - -msgid "OpenStack Storage Concepts" -msgstr "ストレージのコンセプト" - -msgid "OpenStack Telemetry" -msgstr "OpenStack Telemetry" - -msgid "" -"OpenStack Telemetry - In typical environments, updating the Telemetry " -"service only requires restarting the service." -msgstr "" -"OpenStack Telemetry - 一般的な環境では、Telemetry を更新するために、サービス" -"の再起動のみが必要となります。" - -msgid "" -"OpenStack allows you to overcommit CPU and RAM on compute nodes. This allows " -"you to increase the number of instances you can have running on your cloud, " -"at the cost of reducing the performance of the instances.RAM overcommitCPUs (central processing units)overcommittingovercommittingcompute nodesovercommitting OpenStack Compute uses the following ratios by " -"default:" -msgstr "" -"OpenStack は、コンピュートノードで CPU および RAM をオーバーコミットすること" -"ができます。これにより、インスタンスのパフォーマンスを落とすことでクラウドで" -"実行可能なインスタンス数を増やすことができます。RAM オーバーコミットCPU (central processing unit)overcommittingオーバーコミットコンピュートノードオーバーコミット" -" OpenStack Compute はデフォルトで以下の比率を使用しま" -"す。" - -msgid "" -"OpenStack believes in open source, open design, and open development, all in " -"an open community that encourages participation by anyone. The long-term " -"vision for OpenStack is to produce a ubiquitous open source cloud computing " -"platform that meets the needs of public and private cloud providers " -"regardless of size. OpenStack services control large pools of compute, " -"storage, and networking resources throughout a data center." -msgstr "" -"OpenStack は、オープンソース、オープン設計、オープン開発を信じています。すべ" -"ては、あらゆる人の参加を奨励するオープンコミュニティにより行われています。" -"OpenStack の長期ビジョンは、規模によらず、パブリッククラウドやプライベートク" -"ラウドのプロバイダーの要求を満たす、ユビキタスなオープンソースのクラウドコン" -"ピューティングソフトウェアを作成することです。OpenStack のサービスは、データ" -"センター全体のコンピュート、ストレージ、ネットワークの大規模なリソースプール" -"を制御します。" - -msgid "" -"OpenStack can be deployed on any hardware supported by an OpenStack-" -"compatible Linux distribution." -msgstr "" -"OpenStack は、OpenStack と互換性のある Linux ディストリビューションによりサ" -"ポートされているハードウェアにデプロイすることができます。" - -msgid "" -"OpenStack clouds do not present file-level storage to end users. However, it " -"is important to consider file-level storage for storing instances under " -"/var/lib/nova/instances when designing your cloud, since you " -"must have a shared file system if you want to support live migration." 
-msgstr ""
-"OpenStack クラウドは、ファイルレベルのストレージをエンドユーザーに提供しませ"
-"ん。しかし、ライブマイグレーションをサポートするには共有ファイルシステムが必"
-"要になるため、クラウドの設計時には、/var/lib/nova/instances の配"
-"下にインスタンスを格納するファイルレベルのストレージを検討することが重要で"
-"す。"
-
-msgid "OpenStack community members"
-msgstr "OpenStack コミュニティーメンバー"
-
-msgid ""
-"OpenStack dashboard - In typical environments, updating the dashboard only "
-"requires restarting the Apache HTTP service."
-msgstr ""
-"OpenStack dashboard - 一般的な環境では、ダッシュボードを更新するのに必要な作"
-"業は Apache HTTP サービスの再起動のみです。"
-
-msgid "OpenStack default flavors"
-msgstr "OpenStack デフォルトのフレーバー"
-
-msgid ""
-"OpenStack documentation efforts encompass operator and administrator docs, "
-"API docs, and user docs.OpenStack "
-"communitycontributing to"
-msgstr ""
-"OpenStack のドキュメント作成活動は、運用管理ドキュメント、API ドキュメント、"
-"ユーザードキュメントなどに渡ります。OpenStack communitycontributing to"
-
-msgid ""
-"OpenStack does not currently provide DNS services, aside from the dnsmasq "
-"daemon, which resides on nova-network hosts. You could consider "
-"providing a dynamic DNS service to allow instances to update a DNS entry "
-"with new IP addresses. You can also consider making a generic forward and "
-"reverse DNS mapping for instances' IP addresses, such as vm-203-0-113-123."
-"example.com.DNS (Domain Name Server, "
-"Service or System)DNS service choices"
-msgstr ""
-"OpenStack は現在のところ、nova-network ホストで動作している "
-"dnsmasq デーモンを除き、DNS サービスを提供していません。インスタンスが新しい "
-"IP アドレスで DNS エントリーを更新できるよう、ダイナミック DNS サービスの提供"
-"を検討できます。また、インスタンスの IP アドレスに対応する vm-203-0-113-123."
-"example.com のような、一般的な正引き・逆引きの DNS マッピングの作成も検討でき"
-"ます。DNS (Domain Name Server、"
-"Service、System)DNS サービスの選択"
-
-msgid ""
-"OpenStack follows a six month release cycle, typically releasing in April/"
-"May and October/November each year. At the start of each cycle, the "
-"community gathers in a single location for a design summit. At the summit, "
-"the features for the coming releases are discussed, prioritized, and "
-"planned. shows an example release "
-"cycle, with dates showing milestone releases, code freeze, and string freeze "
-"dates, along with an example of when the summit occurs. Milestones are "
-"interim releases within the cycle that are available as packages for "
-"download and testing. Code freeze is putting a stop to adding new features "
-"to the release. String freeze is putting a stop to changing any strings "
-"within the source code."
-msgstr ""
-"OpenStack は6ヶ月のリリースサイクルを取っており、通常は4/5月と10/11月にリリー"
-"スが行われます。各リリースサイクルの最初に、OpenStack コミュニティは一ヶ所に"
-"集まりデザインサミットを行います。サミットでは、次のリリースでの機能が議論さ"
-"れ、優先度付けと計画が行われます。 "
-"は、リリースサイクルの一例で、サミットが行われて以降のマイルストーンリリー"
-"ス、Code Freeze、String Freeze などが記載されています。マイルストーンはリリー"
-"スサイクル内での中間リリースで、テスト用にパッケージが作成され、ダウンロード"
-"できるようになります。 Code Freeze では、そのリリースに向けての新機能の追加が"
-"凍結されます。String Freeze は、ソースコード内の文字列の変更が凍結されること"
-"を意味します。"
-
-msgid ""
-"OpenStack images can often be thought of as \"virtual machine templates.\" "
-"Images can also be standard installation media such as ISO images. "
-"Essentially, they contain bootable file systems that are used to launch "
-"instances.user trainingimages"
-msgstr ""
-"OpenStack のイメージはしばしば「仮想マシンテンプレート」と考えることができま"
-"す。イメージは ISO イメージのような標準的なインストールメディアの場合もありま"
-"す。基本的に、インスタンスを起動するために使用されるブート可能なファイルシス"
-"テムを含みます。user trainingimages"
-
-msgid "OpenStack internal network"
-msgstr "OpenStack 内部ネットワーク"
-
-msgid ""
-"OpenStack is an open source platform that lets you build an Infrastructure "
-"as a Service (IaaS) cloud that runs on commodity hardware."
-msgstr "" -"OpenStack はオープンソースプラットフォームで、OpenStack を使うと、コモディ" -"ティハードウェア上で動作する Infrastructure as a Service (IaaS) クラウドを自" -"分で構築できます。" - -msgid "" -"OpenStack is designed for horizontal scalability, so you can easily add new " -"compute, network, and storage resources to grow your cloud over time. In " -"addition to the pervasiveness of massive OpenStack public clouds, many " -"organizations, such as PayPal, Intel, and Comcast, build large-scale private " -"clouds. OpenStack offers much more than a typical software package because " -"it lets you integrate a number of different technologies to construct a " -"cloud. This approach provides great flexibility, but the number of options " -"might be daunting at first." -msgstr "" -"OpenStack は水平的にスケールするよう設計されているため、クラウドの拡大に合わ" -"せて新しいコンピュート、ネットワーク、ストレージのリソースを簡単に追加できま" -"す。大規模 OpenStack パブリッククラウドの広播性に加えて、PayPal、Intel、" -"Comcast などの多くの組織が大規模なプライベートクラウドを構築しています。" -"OpenStack は、クラウドを構築するためのいくつかの異なる技術を統合できるので、" -"一般的なソフトウェアよりも多くのものを提供します。このアプローチにより、素晴" -"らしい柔軟性を提供しますが、始めは数多くのオプションにより圧倒されるかもしれ" -"ません。" - -msgid "" -"OpenStack is designed to be massively horizontally scalable, which allows " -"all services to be distributed widely. However, to simplify this guide, we " -"have decided to discuss services of a more central nature, using the concept " -"of a cloud controller. A cloud controller is just a " -"conceptual simplification. In the real world, you design an architecture for " -"your cloud controller that enables high availability so that if any node " -"fails, another can take over the required tasks. In reality, cloud " -"controller tasks are spread out across more than a single node.design considerationscloud " -"controller servicescloud controllersconcept of" -msgstr "" -"OpenStackはすべてのサービスが広く分散できるよう水平方向に大規模にスケーリング" -"できるように設計されています。しかし、このガイドではシンプルにクラ" -"ウドコントローラーの利用についてより中心的な性質を持つサービスにつ" -"いて議論する事にしました。クラウドコントローラーという言葉はその概念をシンプ" -"ルに表現した物に過ぎません。実際にはあなたはクラウドコントローラーは冗長構成" -"としてどのノードが障害となっても他のノードで運用ができるような設計にデザイン" -"します。実際にはクラウドコントローラーのタスクは1つ以上のノードにまたがって展" -"開されます。設計上の考慮点クラウドコントローラーサービスクラウドコントローラーコンセプト" - -msgid "" -"OpenStack is designed to increase in size in a straightforward manner. " -"Taking into account the considerations that we've mentioned in this chapter—" -"particularly on the sizing of the cloud controller—it should be possible to " -"procure additional compute or object storage nodes as needed. New nodes do " -"not need to be the same specification, or even vendor, as existing nodes." -"capabilityscaling andweightcapacity planningscalingcapacity planning" -msgstr "" -"OpenStack は、直線的に規模が拡大するよう設計されています。本章で議論したこ" -"と、とくにクラウドコントローラーのサイジングを考慮しながら、追加のコンピュー" -"トノードやオブジェクトストレージノードを必要に応じて調達できます。新しいノー" -"ドは、既存のノードと同じ仕様・ベンダーである必要がありません。capabilityscaling andweightcapacity " -"planningscalingcapacity planning" - -msgid "" -"OpenStack is founded on a thriving community that is a source of help and " -"welcomes your contributions. This chapter details some of the ways you can " -"interact with the others involved." -msgstr "" -"OpenStack は非常に活発なコミュニティに支えられており、あなたの参加をいつでも" -"待っています。この章では、コミュニティーの他の人と交流する方法について詳しく" -"説明します。" - -msgid "" -"OpenStack is intended to work well across a variety of installation flavors, " -"from very small private clouds to large public clouds. To achieve this, the " -"developers add configuration options to their code that allow the behavior " -"of the various components to be tweaked depending on your needs. 
" -"Unfortunately, it is not possible to cover all possible deployments with the " -"default configuration values.advanced " -"configurationconfiguration optionsconfiguration optionswide availability of" -msgstr "" -"OpenStack は、非常に小さなプライベートクラウドから大規模なパブリッククラウド" -"まで様々な構成でうまく動くことを意図して作られています。これを実現するため、" -"開発者はコードに設定オプションを用意し、要件にあわせて種々のコンポーネントの" -"振る舞いを細かく調整できるようにしています。しかし、残念ながら、デフォルトの" -"設定値ですべてのデプロイメントに対応することはできません。advanced configurationconfiguration " -"optionsconfiguration optionswide availability of" - -msgid "OpenStack log locations" -msgstr "OpenStack のログの場所" - -msgid "" -"OpenStack might not do everything you need it to do out of the box. To add a " -"new feature, you can follow different paths.customizationpaths available" -msgstr "" -"OpenStack はあなたが必要とするすべてのことをしてくれるわけではないかもしれま" -"せん。新しい機能を追加するために、いくつかの方法に従うことができます。" -"customizationpaths available" - -msgid "OpenStack modules are one of the following types:" -msgstr "OpenStack のモジュールは、以下の種別のいずれかです。" - -msgid "OpenStack on OpenStack (TripleO)" -msgstr "OpenStack on OpenStack (TripleO)" - -msgid "OpenStack package repository" -msgstr "OpenStack パッケージリポジトリ" - -msgid "" -"OpenStack produces a great deal of useful logging information, however; but " -"for the information to be useful for operations purposes, you should " -"consider having a central logging server to send logs to, and a log parsing/" -"analysis system (such as logstash)." -msgstr "" -"OpenStack は、便利なロギング情報を多く生成しますが、運用目的でその情報を有効" -"活用するには、ログの送信先となる中央ロギングサーバーや、ログの解析/分析システ" -"ム (logstash など) の導入を検討する必" -"要があります。" - -msgid "" -"OpenStack provides a rich networking environment, and this chapter details " -"the requirements and options to deliberate when designing your cloud." -"network designfirst stepsdesign considerationsnetwork " -"design" -msgstr "" -"OpenStackは様々なネットワーク環境を提供します。この章ではクラウドを設計する際" -"に必要な事項と慎重に決定すべきオプションの詳細を説明します。ネットワーク設計第1段階設計上の考慮事項" -"ネットワーク設計" - -msgid "OpenStack release" -msgstr "OpenStack リリース" - -msgid "OpenStack segregation methods" -msgstr "OpenStack 分離の手法" - -msgid "" -"OpenStack services use the standard logging levels, at increasing severity: " -"DEBUG, INFO, AUDIT, WARNING, ERROR, CRITICAL, and TRACE. That is, messages " -"only appear in the logs if they are more \"severe\" than the particular log " -"level, with DEBUG allowing all log statements through. For example, TRACE is " -"logged only if the software has a stack trace, while INFO is logged for " -"every message including those that are only for information.logging/monitoringlogging levels" -msgstr "" -"OpenStack サービスは標準のロギングレベルを利用しています。重要度のレベルは次" -"の通りです(重要度の低い順): DEBUG、INFO、AUDIT、WARNING、ERROR、CRTICAL、" -"TRACE。特定のログレベルより「重要」な場合のみメッセージはログに出力されます。" -"ログレベルDEBUGの場合、すべてのログが出力されます。TRACEの場合、ソフトウェア" -"がスタックトレースを持つ場合にのみログに出力されます。INFOの場合、情報のみの" -"メッセージも含めて出力されます。.logging/monitoringlogging levels" - -msgid "OpenStack storage" -msgstr "OpenStack ストレージ" - -msgid "" -"OpenStack truly welcomes your ideas (and contributions) and highly values " -"feedback from real-world users of the software. 
By learning a little about " -"the process that drives feature development, you can participate and perhaps " -"get the additions you desire.OpenStack communityworking with roadmapsinfluencing" -msgstr "" -"OpenStack は、あなたのアイディア (およびコントリビューション) を本当に歓迎し" -"ています。また、実世界のソフトウェアのユーザーからのフィードバックに高い価値" -"をおきます。機能開発を推進するプロセスについて少し理解することにより、参加で" -"き、あなたの要望を追加できるかもしれません。OpenStack communityworking with roadmapsinfluencing" - -msgid "" -"OpenStack volumes are persistent block-storage devices that may be attached " -"and detached from instances, but they can be attached to only one instance " -"at a time. Similar to an external hard drive, they do not provide shared " -"storage in the way a network file system or object store does. It is left to " -"the operating system in the instance to put a file system on the block " -"device and mount it, or not. block " -"storagestorageblock storageuser trainingblock storage" -msgstr "" -"OpenStack のボリュームは、インスタンスから接続および切断できる、永続的なブ" -"ロックストレージデバイスです。ただし、一度に接続できるのは 1 インスタンスだけ" -"です。外部ハードディスクと似ています。ネットワークファイルシステムやオブジェ" -"クトストアがしているような共有ストレージは提供されません。ブロックデバイス上" -"にファイルシステムを構築し、それをマウントするかどうかは、インスタンス内のオ" -"ペレーティングシステムに任されます。block storagestorageblock storageuser trainingblock storage" - -msgid "" -"OpenStack's collection of different components interact with each other " -"strongly. For example, uploading an image requires interaction from " -"nova-api, glance-api, glance-registry, keystone, and potentially swift-proxy. As a result, it " -"is sometimes difficult to determine exactly where problems lie. Assisting in " -"this is the purpose of this section.logging/monitoringtailing logsmaintenance/debuggingdetermining component affected" -msgstr "" -"OpenStack は、異なるコンポーネント同士が互いに強く連携して動作しています。た" -"とえば、イメージのアップロードでは、nova-api, glance-api, glance-registry, keystone が連携する必要があります。 " -"swift-proxy も関係する場合があります。その結果、時として問題が発" -"生している箇所を正確に特定することが難しくなります。これを支援することがこの" -"セクションの目的です。logging/" -"monitoringtailing logsmaintenance/debuggingdetermining component affected" - -msgid "" -"OpenStack, like any network application, has a number of standard " -"considerations to apply, such as NTP and DNS.network designservices for networking" -msgstr "" -"多くのネットワークアプリケーションがそうであるようにOpenStackでもNTPやDNS と" -"言った適用するための数多くの検討事項があります。ネットワークデザインネットワークサービス" - -msgid "OpenStack-Specific Resources" -msgstr "OpenStack 固有のリソース" - -msgid "OpenStack.org case study" -msgstr "OpenStack.org ケーススタディー" - -msgid "Operate with consistency groups" -msgstr "整合性グループの操作" - -msgid "Operating with a share" -msgstr "共有の運用" - -msgid "Operation based" -msgstr "操作ベース" - -msgid "Operations" -msgstr "運用" - -msgid "Operators list" -msgstr "運用者向けメーリングリスト" - -msgid "Option 1" -msgstr "オプション 1" - -msgid "Option 2" -msgstr "オプション 2" - -msgid "Option 3" -msgstr "オプション 3" - -msgid "Optional Extensions" -msgstr "さらなる拡張" - -msgid "" -"Optional property that allows created servers to have a different " -"bandwidthbandwidthcapping cap from that defined in " -"the network they are attached to. This factor is multiplied by the rxtx_base " -"property of the network. Default value is 1.0 (that is, the same as the " -"attached network)." -msgstr "" -"作成したサーバーが接続されたネットワークにおける定義と異なる帯域帯域制限制限を持てるようにするプロパティ。これはオプションです。この要素は" -"ネットワークの rxtx_base プロパティの倍数です。既定の値は 1.0 です (つまり、" -"接続されたネットワークと同じです)。" - -msgid "Optional swap space allocation for the instance." -msgstr "インスタンスに割り当てられるスワップ空間。これはオプションです。" - -msgid "" -"Options must be carefully configured for live migration to work with " -"networking services." 
-msgstr "" -"ネットワークサービスが正しく動くようにライブマイグレーションを設定するために" -"は注意してオプションを構成する必要があります。" - -msgid "Other CLI Options" -msgstr "他の CLI オプション" - -msgid "Other backup considerations include:" -msgstr "さらにバックアップの考慮点として以下があげられます。" - -msgid "" -"Other services follow the same process, with their respective directories " -"and databases." -msgstr "" -"他のサービスもそれぞれのディレクトリとデータ" -"ベースで同じ処理となります。" - -msgid "Out-of-band remote management" -msgstr "帯域外管理リモート管理" - -msgid "Output from /var/log/nova/nova-api.log on Grizzly:" -msgstr "" -"Grizzly における /var/log/nova/nova-api.log の出力:" - -msgid "Output from /var/log/nova/nova-api.log on Havana:" -msgstr "" -"Havana における /var/log/nova/nova-api.log の出力:" - -msgid "Overcommitting" -msgstr "オーバーコミット" - -msgid "Overhead" -msgstr "オーバーヘッド" - -msgid "Overview" -msgstr "概要" - -msgid "PC" -msgstr "PC" - -msgid "" -"Pacemaker is the clustering software used to ensure the availability of " -"services running on the controller and network nodes: " -msgstr "" -"Pacemaker とは、コントローラーノードおよびネットワークノードで実行されている" -"サービスの可用性を確保するために使用するクラスタリングソフトウェアです: " -"" - -msgid "" -"Packets leaving the subnet go via this address, which could be a dedicated " -"router or a nova-network service." -msgstr "" -"パケットが出て行く際に通るIPアドレスで、これは専用のルータかnova-" -"network サービスです。" - -msgid "" -"Packets, now tagged with the external VLAN tag, then exit onto the physical " -"network via eth1. The Layer2 switch this interface is connected " -"to must be configured to accept traffic with the VLAN ID used. The next hop " -"for this packet must also be on the same layer-2 network." -msgstr "" -"パケットは、いま外部 VLAN タグを付けられ、eth1 経由で物理ネット" -"ワークに出ていきます。このインターフェースが接続されている L2 スイッチは、使" -"用される VLAN ID を持つ通信を許可するよう設定する必要があります。このパケット" -"の次のホップも、同じ L2 ネットワーク上になければいけません。" - -msgid "Parameter in nova.conf" -msgstr "nova.conf のパラメーター" - -msgid "Part I:" -msgstr "パート I:" - -msgid "Part II:" -msgstr "パート II:" - -msgid "Parting Thoughts for Provisioning and Deploying OpenStack" -msgstr "OpenStack のプロビジョニングおよびデプロイメントの概念" - -msgid "Parting Thoughts on Architecture Examples" -msgstr "アーキテクチャー例についての章の結び" - -msgid "" -"Partition all drives in the same way in a horizontal fashion, as shown in " -"." -msgstr "" -" にあるように、すべてのドライブを同" -"じように並列してパーティショニングにします。" - -msgid "Partition setup of drives" -msgstr "ドライブのパーティション設定" - -msgid "" -"Partitioning, which provides greater flexibility for layout of operating " -"system and swap space, as described below." -msgstr "" -"パーティショニング。以下に説明されている通り、オペレーティングシステムと " -"Swap 領域のレイアウトにおける柔軟性がはるかに高くになります。" - -msgid "Password" -msgstr "パスワード" - -msgid "Perform a backup" -msgstr "バックアップの実行" - -msgid "" -"Perform some cleaning of the environment prior to starting the upgrade " -"process to ensure a consistent state. For example, instances not fully " -"purged from the system after deletion might cause indeterminate behavior." -msgstr "" -"確実に整合性のある状態にするために、アップグレード作業を始める前にいくつか環" -"境のクリーンアップを実行します。例えば、削除後に完全削除されていないインスタ" -"ンスにより、不確実な挙動を引き起こす可能性があります。" - -msgid "Perform source NAT on outgoing traffic." -msgstr "送信方向にソース NAT 実行。" - -msgid "Perform the day-to-day tasks required to administer a cloud." -msgstr "クラウドを管理する上で必要となる日々のタスクの実行。" - -msgid "Performance and Optimizing" -msgstr "パフォーマンスと最適化" - -msgid "" -"Performance increased greatly after deleting the old records and my new " -"deployment continues to behave well." 
-msgstr "" -"古いレコードの削除後、パフォーマンスが大幅に向上しました。新しい環境は順調に" -"動作しつづけています。" - -msgid "Performance node deployment" -msgstr "パフォーマンスノードのデプロイメント" - -msgid "" -"Periodic tasks are important to understand because of limitations in the " -"threading model that OpenStack uses. OpenStack uses cooperative threading in " -"Python, which means that if something long and complicated is running, it " -"will block other tasks inside that process from running unless it " -"voluntarily yields execution to another cooperative thread.cooperative threading" -msgstr "" -"周期的タスクは、OpenStack が使用しているスレッドモデルにおける制限を理解する" -"上で重要です。OpenStack は Python の協調スレッドを使用しています。このこと" -"は、何か長く複雑な処理が実行された場合、その処理が自発的に別の協調スレッドに" -"実行を譲らない限り、そのプロセス内の他のタスクの実行が停止されることを意味し" -"ます。cooperative threading" - -msgid "Persistent Storage" -msgstr "永続ストレージ" - -msgid "Persistent file-based storage support" -msgstr "永続ファイルベースのストレージサポート" - -msgid "" -"Persistent storage means that the storage resource outlives any other " -"resource and is always available, regardless of the state of a running " -"instance." -msgstr "" -"永続ストレージとは、実行中のインスタンスの状態が何であっても、ストレージリ" -"ソースが他のリソースよりも長く存在して、常に利用できる状態のストレージを指し" -"ます。" - -msgid "Persists until…" -msgstr "データの残存期間" - -msgid "" -"Pick a service endpoint from your service catalog, such as compute. Try a " -"request, for example, listing instances (servers):" -msgstr "" -"サービスカタログから、サービスエンドポイント (例: コンピュート) を選択しま" -"す。要求を試します。例えば、インスタンス (サーバー) の一覧表示を行います。" - -msgid "Place the tenant ID in a variable:" -msgstr "テナント ID を変数に格納します。" - -msgid "Planned Maintenance" -msgstr "計画メンテナンス" - -msgid "Plug and Play OpenStack" -msgstr "プラグアンドプレイ OpenStack" - -msgid "" -"Policies are triggered by an OpenStack policy engine whenever one of them " -"matches an OpenStack API operation or a specific attribute being used in a " -"given operation. For instance, the engine tests the create:compute policy every time a user sends a POST /v2/{tenant_id}/servers request to the OpenStack Compute API server. Policies can be also " -"related to specific API extensions. For instance, if " -"a user needs an extension like compute_extension:rescue, the " -"attributes defined by the provider extensions trigger the rule test for that " -"operation." -msgstr "" -"ポリシーのいずれかが OpenStack API 操作、もしくは指定された操作で使用されてい" -"る特定の属性に一致する場合、ポリシーが OpenStack ポリシーエンジンにより呼び出" -"されます。たとえば、ユーザーが POST /v2/{tenant_id}/servers リク" -"エストを OpenStack Compute API サーバーに送信したときに必ず、エンジンが " -"create:compute ポリシーを評価します。ポリシーは特定の " -"API 拡張に関連づけることもできます。たとえば、ユーザー" -"が compute_extension:rescue のような拡張に対して要求を行った場" -"合、プロバイダー拡張により定義された属性は、その操作に対するルールテストを呼" -"び出します。" - -msgid "" -"Policies specify access criteria for specific operations, possibly with fine-" -"grained control over specific attributes." -msgstr "" -"特定の操作に対するアクセス基準を指定するポリシー。特定の属性に対する詳細な制" -"御も可能です。" - -msgid "Portions of your log files so that you include only relevant excerpts" -msgstr "関連部分のみを抜粋したログファイル" - -msgid "Possible options include:" -msgstr "次のような選択肢があります。" - -msgid "Pre-upgrade considerations" -msgstr "アップグレード前の考慮事項" - -msgid "Pre-upgrade testing environment" -msgstr "テスト環境の事前アップグレード" - -msgid "Preface" -msgstr "はじめに" - -msgid "Prepare any quarterly reports on usage and statistics." -msgstr "使用量と統計に関する四半期レポートを準備します。" - -msgid "Prerequisites" -msgstr "前提" - -msgid "" -"Press CtrlA followed " -"by 0 to go to the first screen window." -msgstr "" -"CtrlA に続けて 0 を押" -"して、1 番目の screen ウィンドウに移動します。" - -msgid "" -"Press CtrlA followed " -"by 0." 
-msgstr "" -"CtrlA に続けて " -"0 を押します。" - -msgid "" -"Press CtrlA followed " -"by 3 to check the log output." -msgstr "" -"CtrlA に続けて " -"3 を押して、ログ出力を確認します。" - -msgid "" -"Press CtrlA followed " -"by 3 to check the log output. Look at the swift log " -"statements again, and among the log statements, you'll see the lines:" -msgstr "" -"CtrlA に続けて " -"3 を押して、ログ出力を確認します。再び swift のログを確認す" -"ると、ログの中に以下の行があるでしょう。" - -msgid "" -"Press CtrlA followed " -"by 3." -msgstr "" -"CtrlA に続けて " -"3 を押します。" - -msgid "" -"Press CtrlA followed " -"by 9." -msgstr "" -"CtrlA に続けて " -"9 を押します。" - -msgid "" -"Press CtrlA followed " -"by N until you reach the n-sch screen." -msgstr "" -"n-sch 画面が表示されるまで、CtrlA に続けて N を押します。" - -msgid "" -"Press CtrlC to kill " -"the service." -msgstr "" -"CtrlC を押し、サービス" -"を強制停止します。" - -msgid "Press Enter to run it." -msgstr "Enter キーを押し、実行します。" - -msgid "Press Up Arrow to bring up the last command." -msgstr "上矢印キーを押し、最後のコマンドを表示させます。" - -msgid "" -"Press CtrlA followed " -"by 0." -msgstr "" -"CtrlA に続けて 0 を押" -"します。" - -msgid "" -"Press CtrlA followed " -"by 0." -msgstr "" -"CtrlA に続けて " -"0 を押します。" - -msgid "Prevent DHCP Spoofing by VM." -msgstr "仮想マシンによる DHCP スプーフィングの防止。" - -msgid "" -"Previously, all services had an availability zone. Currently, only the " -"nova-compute service has its own availability zone. " -"Services such as nova-scheduler, nova-network, and nova-conductor have always spanned all " -"availability zones." -msgstr "" -"以前のバージョンでは、全サービスにアベイラビリティゾーンがありました。現在" -"は、nova-compute サービスには独自のアベイラビリティゾーン" -"があります。 nova-schedulernova-networknova-conductor などのサービスは、常にすべてのア" -"ベイラビリティゾーンに対応します。" - -msgid "Primary project" -msgstr "主プロジェクト" - -msgid "Priority" -msgstr "優先度" - -msgid "Private Flavors" -msgstr "プライベートフレーバー" - -msgid "Pro Puppet" -msgstr "Pro Puppet" - -msgid "" -"Probably the most important factor in your choice of hypervisor is your " -"current usage or experience. Aside from that, there are practical concerns " -"to do with feature parity, documentation, and the level of community " -"experience." -msgstr "" -"おそらく、ハイパーバイザーの選択で最も重要な要素は、現在の使用法やこれまでの" -"経験でしょう。それ以外では、同等の機能の実用上の懸念、ドキュメント、コミュニ" -"ティでの経験量などあると思います。" - -msgid "Process Monitoring" -msgstr "プロセス監視" - -msgid "" -"Profit. You can now see traffic on patch-tun by running " -"tcpdump -i snooper0." -msgstr "" -"これでうまくいきます。tcpdump -i snooper0 を実行して、" -"patch-tun の通信を参照できます。" - -msgid "" -"Project 771ed149ef7e4b2b88665cc1c98f77ca will now have access to image " -"733d1c44-a2ea-414b-aca7-69decf20d810." -msgstr "" -"これで、プロジェクト 771ed149ef7e4b2b88665cc1c98f77ca がイメージ 733d1c44-" -"a2ea-414b-aca7-69decf20d810 にアクセスできます。" - -msgid "Projects or Tenants?" -msgstr "プロジェクトかテナントか?" - -msgid "Property name" -msgstr "プロパティ名" - -msgid "" -"Provide the front door that people access as well as the API services that " -"all other components in the environment talk to." -msgstr "" -"ユーザーがアクセスするフロントドアに加えて、環境内その他すべてのコンポーネン" -"トが通信する API サービスを提供します。" - -msgid "" -"Provide what is known as \"persistent storage\" through services run on the " -"host as well. This persistent storage is backed onto the storage nodes for " -"reliability." 
-msgstr "" -"ホストでも実行されるサービスを介して、いわゆる「永続ストレージ」を提供しま" -"す。この永続ストレージは、信頼性のためにストレージノードにバッキングされま" -"す。" - -msgid "" -"Provides a web-based front end for users to consume OpenStack cloud services" -msgstr "" -"利用ユーザ用のOpenStackクラウドサービスのウェブインターフェースを提供します。" - -msgid "" -"Provides best practices and conceptual information about securing an " -"OpenStack cloud" -msgstr "" -"OpenStack クラウドを安全にするためのベストプラクティスと基本的な考え方につい" -"て書かれています" - -msgid "Provisioning and Deployment" -msgstr "プロビジョニングとデプロイメント" - -msgid "Proxy requests to a database" -msgstr "データベースリクエストのプロクシ" - -msgid "Public Addressing Options" -msgstr "パブリックアドレスの選択肢" - -msgid "Public Network" -msgstr "パブリックネットワーク" - -msgid "" -"Public access to swift-proxy, nova-api, " -"glance-api, and horizon come to these addresses, which could be " -"on one side of a load balancer or pointing at individual machines." -msgstr "" -"swift-proxy, nova-api, glance-api, " -"horizon へのパブリックアクセスはこれらのアドレス宛にアクセスしてきます。これ" -"らのアドレスはロードバランサの片側か、個々の機器を指しています。" - -msgid "Public network connectivity for user virtual machines" -msgstr "ユーザーの仮想マシンに対するパブリックネットワーク接続性" - -msgid "Puppet Labs Documentation" -msgstr "Puppet Labs ドキュメント" - -msgid "" -"Put the image ID for the only installed image into an environment variable:" -msgstr "インストール済みイメージのみのイメージ ID を環境変数に設定します。" - -msgid "Python" -msgstr "Python" - -msgid "QEMU" -msgstr "QEMU" - -msgid "" -"QEMU provides a guest agent that can be run in guests running on KVM " -"hypervisors. This guest agent, on Windows VMs, coordinates with the Windows " -"VSS service to facilitate a workflow which ensures consistent snapshots. " -"This feature requires at least QEMU 1.7. The relevant guest agent commands " -"are:" -msgstr "" -"QEMU は、KVM ハイパーバイザーにおいて動作しているゲストで実行できるゲストエー" -"ジェントを提供しています。Windows 仮想マシンの場合、このゲストエージェント" -"は、Windows VSS サービスと連携して、スナップショットの整合性を保証する流れを" -"楽にします。この機能は QEMU 1.7 以降を必要とします。関連するゲストエージェン" -"トのコマンドは次のとおりです。" - -msgid "Qpid" -msgstr "Qpid" - -msgid "Quarterly" -msgstr "四半期ごと" - -msgid "Quota" -msgstr "クォータ" - -msgid "Quotas" -msgstr "クォータ" - -msgid "" -"RAID is not used in this simplistic one-drive setup because generally for " -"production clouds, you want to ensure that if one disk fails, another can " -"take its place. Instead, for production, use more than one disk. The number " -"of disks determine what types of RAID arrays to build." -msgstr "" -"通常、本番環境のクラウドでは、1 つのディスクに問題が発生した場合、別のディス" -"クが必ず稼働するようにするため、RAID は、このシンプルな、ドライブ 1 つの設定" -"では使用されません。本番環境では、ディスクを 1 つ以上使用します。ディスク数に" -"より、どのようなタイプの RAID 配列を構築するか決定します。" - -msgid "RAM" -msgstr "メモリー" - -msgid "RAM allocation ratio: 1.5:1" -msgstr "RAM 割当比: 1.5:1" - -msgid "RDO" -msgstr "RDO" - -msgid "RELEASE_NAME" -msgstr "RELEASE_NAME" - -msgid "RXTX_Factor" -msgstr "RXTX_Factor" - -msgid "RabbitMQ Web Management Interface or rabbitmqctl" -msgstr "RabbitMQ Web管理インターフェイス および rabbitmqctl" - -msgid "RabbitMQ for Ubuntu; Qpid for Red Hat Enterprise Linux and derivatives" -msgstr "" -"Ubuntu には RabbitMQ、Red Hat Enterprise Linux には Qpid、 および派生物" - -msgid "Raid Controller: PERC H710P Integrated RAID Controller, 1 GB NV Cache" -msgstr "" -"RAID コントローラー: PERC H710P Integrated RAID Controller、1 GB NV キャッ" -"シュ" - -msgid "Ramification" -msgstr "派生問題" - -msgid "Rate limits" -msgstr "レートリミット" - -msgid "Rationale" -msgstr "設定指針" - -msgid "" -"Read about how to track the OpenStack roadmap through the open and " -"transparent development processes." 
-msgstr "" -"オープンかつ透明な OpenStack の開発プロセスからロードマップを把握する方法をま" -"とめています。" - -msgid "" -"Read more detailed instructions for launching an instance from a bootable " -"volume in the OpenStack End User Guide." -msgstr "" -"詳細は、OpenStack エンドユーザーガイドの「ボリュームからのインスタンスの起動」にある説明を参照してください。" - -msgid "" -"Read through the JSON response to get a feel for how the catalog is laid out." -msgstr "JSONレスポンスを読むことで、カタログを把握することができます。" - -msgid "Reading the Logs" -msgstr "ログの読み方" - -msgid "Rebooting a Cloud Controller or Storage Proxy" -msgstr "クラウドコントローラーとストレージプロキシの再起動" - -msgid "Rebooting a Storage Node" -msgstr "ストレージノードの再起動" - -msgid "" -"Recall that a cloud controller node runs several different services. You can " -"install services that communicate only using the message queue internally—" -"nova-scheduler and nova-console—on a new server " -"for expansion. However, other integral parts require more care." -msgstr "" -"クラウドコントローラは、異なるサービスを複数実行することを思い出してくださ" -"い。拡張のための新しいサーバには、nova-schedulernova-" -"console のようなメッセージキューのみを使用して内部通信を行うサービスを" -"インストールすることができます。しかし、その他の不可欠な部分はさらに細心の注" -"意が必要です。" - -msgid "Recovering Backups" -msgstr "バックアップのリカバリー" - -msgid "" -"Recovering backups is a fairly simple process. To begin, first ensure that " -"the service you are recovering is not running. For example, to do a full " -"recovery of nova on the cloud controller, first stop all " -"nova services:recoverybackup/recoverybackup/recoveryrecovering " -"backups" -msgstr "" -"バックアップのリカバリーは単純です。始めにリカバリー対象のサービスが停止して" -"いることを確認します。例を挙げると、クラウドコントローラー上の " -"nova の完全リカバリーを行なう場合、最初に全ての " -"nova サービスを停止します。recoverybackup/recoverybackup/recoveryrecovering backups" - -msgid "Recovery of instances is complicated by depending on multiple hosts." -msgstr "複数の物理ホストが関係するため、インスタンスの復旧が複雑になります。" - -msgid "Red Hat Distributed OpenStack (RDO)" -msgstr "Red Hat Distributed OpenStack (RDO)" - -msgid "Red Hat Enterprise Linux" -msgstr "Red Hat Enterprise Linux" - -msgid "Red Hat Enterprise Linux 6.5" -msgstr "Red Hat Enterprise Linux 6.5" - -msgid "Regions" -msgstr "リージョン" - -msgid "Relatively simple to deploy." -msgstr "比較的シンプルな構成" - -msgid "Release cycle diagram" -msgstr "リリースサイクル図" - -msgid "" -"Release notes are maintained on the OpenStack wiki, and also shown here:" -msgstr "リリースノートは OpenStack Wiki で管理され、以下で公開されています。" - -msgid "Releases" -msgstr "リリース番号" - -msgid "Remote Management" -msgstr "リモート管理" - -msgid "" -"RemoteGroups are a dynamic way of defining the CIDR of allowed sources. The " -"user specifies a RemoteGroup (security group name) and then all the users' " -"other instances using the specified RemoteGroup are selected dynamically. " -"This dynamic selection alleviates the need for individual rules to allow " -"each new member of the cluster." -msgstr "" -"リモートグループは許可されたソースの CIDR を動的に定義する方法です。ユーザー" -"がリモートグループ (セキュリティグループ名) を指定します。これにより、指定さ" -"れたリモートグループを使用する、ユーザーの他のインスタンスが動的にすべて選択" -"されます。この動的な選択により、クラスターのそれぞれの新しいメンバーを許可する、個別のルールの必要性を軽減できま" -"す。" - -msgid "Remove all packages." -msgstr "すべてのパッケージを削除します。" - -msgid "Remove databases." -msgstr "データベースを削除します。" - -msgid "Remove remaining files." -msgstr "残っているファイルを削除します。" - -msgid "Remove shares." -msgstr "共有を削除します。" - -msgid "Remove the repository for the previous release packages." 
-msgstr "旧リリースのパッケージのリポジトリーを削除します。" - -msgid "" -"Replacement of Open vSwitch Plug-in with Modular Layer 2" -msgstr "" -"Modular Layer 2 プラグインによる " -"Open vSwitch プラグインの置換" - -msgid "Replacing Components" -msgstr "コンポーネントの交換" - -msgid "Replacing a Swift Disk" -msgstr "Swift ディスクの交換" - -msgid "" -"Report a bug in cinder." -msgstr "" -"cinder のバグ報告。" - -msgid "" -"Report a bug in glance." -msgstr "" -"glance のバグ報告。" - -msgid "" -"Report a bug in horizon." -msgstr "" -"horizon のバグ報告。" - -msgid "" -"Report a bug in keystone." -msgstr "" -"keystone のバグ報告。" - -msgid "" -"Report a bug in manila." -msgstr "" -"manila のバグ報告。" - -msgid "" -"Report a bug in neutron." -msgstr "" -"neutron のバグ報告。" - -msgid "" -"Report a bug in nova." -msgstr "" -"nova のバグ報告。" - -msgid "" -"Report a bug in python-cinderclient." -msgstr "" -"python-cinderclient のバグ報告。" - -msgid "" -"Report a bug in python-glanceclient." -msgstr "" -"python-glanceclient のバグ報告。" - -msgid "" -"Report a bug in python-keystoneclient." -msgstr "" -"python-keystoneclient のバグ報告。" - -msgid "" -"Report a bug in python-manilaclient." -msgstr "" -"python-manilaclient のバグ報告。" - -msgid "" -"Report a bug in python-neutronclient." -msgstr "" -"python-neutronclient のバグ報告。" - -msgid "" -"Report a bug in python-novaclient." -msgstr "" -"python-novaclient のバグ報告。" - -msgid "" -"Report a bug in python-openstackclient." -msgstr "" -"python-openstackclient のバグ報告。" - -msgid "" -"Report a bug in python-swiftclient." -msgstr "" -"python-swiftclient のバグ報告。" - -msgid "" -"Report a bug in swift." -msgstr "" -"swift のバグ報告。" - -msgid "" -"Report a bug with the API documentation." -msgstr "" -"API ドキュメント のバグ報告。" - -msgid "" -"Report a bug with the documentation." -msgstr "" -"ドキュメント のバグ報告。" - -msgid "Reporting Bugs" -msgstr "バグ報告" - -msgid "Requests for extension" -msgstr "延長申請" - -msgid "" -"Requires file injection into the instance to configure network interfaces." -msgstr "" -"ネットワークインターフェースの設定にはインスタンスへのファイルの注入が必須で" -"す。" - -msgid "Requires its own DHCP broadcast domain." -msgstr "専用の DHCP ブロードキャストドメインが必要。" - -msgid "Requires many VLANs to be trunked onto a single port." -msgstr "一つのポートに多数の VLAN をトランクが必要。" - -msgid "Reset Share State" -msgstr "共有状態のリセット" - -msgid "Resource Alerting" -msgstr "リソースのアラート" - -msgid "" -"Resource alerting provides notifications when one or more resources are " -"critically low. While the monitoring thresholds should be tuned to your " -"specific OpenStack environment, monitoring resource usage is not specific to " -"OpenStack at all—any generic type of alert will work fine.monitoringresource alertingalertsresourceresourcesresource alertinglogging/" -"monitoringresource alerting" -msgstr "" -"リソースアラート機能は、1 つ以上のリソースが致命的に少なくなった際に通知しま" -"す。閾値監視がお使いの OpenStack 環境で有効化されているべきですが、リソース使" -"用状況の監視は、まったく OpenStack 固有のことではありません。あらゆる汎用のア" -"ラート機能が適切に動作するでしょう。monitoringresource alertingalertsresourceresourcesresource alertinglogging/" -"monitoringresource alerting" - -msgid "Resource based" -msgstr "リソースベース" - -msgid "Resources" -msgstr "情報源" - -msgid "" -"Resources such as memory, disk, and CPU are generic resources that all " -"servers (even non-OpenStack servers) have and are important to the overall " -"health of the server. When dealing with OpenStack specifically, these " -"resources are important for a second reason: ensuring that enough are " -"available to launch instances. There are a few ways you can see OpenStack " -"resource usage.monitoringOpenStack-specific resourcesresourcesgeneric vs. 
OpenStack-specificlogging/monitoringOpenStack-specific resources The " -"first is through the nova command:" -msgstr "" -"メモリ、ディスク、CPUのような一般的なリソースは、全てのサーバー(OpenStackに関" -"連しないサーバーにも)に存在するため、サーバーの状態監視において重要です。" -"OpenStackの場合、インスタンスを起動するために必要なリソースが確実に存在するか" -"の確認という点でも重要です。OpenStackのリソースを見るためには幾つかの方法が存" -"在します。monitoringOpenStack-specific resourcesresourcesgeneric vs. OpenStack-specificlogging/monitoringOpenStack-specific resources 1 番" -"目は nova コマンド経由です。" - -msgid "Responsible disclosure" -msgstr "Responsible Disclosure(責任ある情報公開)" - -msgid "" -"Restart the swift proxy service to make swift use your " -"middleware. Start by switching to the swift-proxy screen:" -msgstr "" -"swift proxy にこのミドルウェアを使わせるために、Swift プロ" -"キシサービスを再起動します。swift-proxy の screen セッショ" -"ンに切り替えてはじめてください。" - -msgid "" -"Restart the nova scheduler service to make nova use your scheduler. Start by " -"switching to the n-sch screen:" -msgstr "" -"Nova にこのスケジューラーを使わせるために、Nova スケジューラーサービスを再起" -"動します。 n-sch screen セッションに切り替えてはじめてください。" - -msgid "Restore databases from backup." -msgstr "バックアップからデータベースを" - -msgid "" -"Restore databases from the RELEASE_NAME-" -"db-backup.sql backup file that you created with the " -" command during the upgrade process:" -msgstr "" -"アップグレードプロセス中に コマンドを用いて作成した、" -"RELEASE_NAME-db-backup.sql " -"バックアップファイルからデータベースを復元します。" - -msgid "" -"Resume I/O to the disks, similar to the Linux fsfreeze -u " -"operation." -msgstr "" -"ディスクへの I/O を再開します。Linux の fsfreeze -u 処理と" -"似ています。" - -msgid "Resume the instance using virsh:" -msgstr "virsh を使用して、インスタンスを再開します。" - -msgid "Resume the instance." -msgstr "インスタンスを再開します。" - -msgid "" -"Reverse the direction to see the path of a ping reply. From this path, you " -"can see that a single packet travels across four different NICs. If a " -"problem occurs with any of these NICs, a network issue occurs." -msgstr "" -"ping 応答のパスを確認するために、方向を反転させます。この経路説明によって、あ" -"なたはパケットが4つの異なるNICの間を行き来していることがわかったでしょう。こ" -"れらのどのNICに問題が発生しても、ネットワークの問題となるでしょう。" - -msgid "Review and plan any major OpenStack upgrades." -msgstr "OpenStack のメジャーアップグレードの内容を確認し、その計画を立てます。" - -msgid "Review and plan any necessary cloud additions." -msgstr "クラウドの追加の必要性を検討し、計画を立てます。" - -msgid "Review common actions in your cloud." -msgstr "構築したクラウドにおいて一般的なアクションをレビューする" - -msgid "" -"Review the component's configuration file to see how each OpenStack " -"component accesses its corresponding database. Look for either " -"sql_connection or simply connection. The following " -"command uses grep to display the SQL connection string for " -"nova, glance, cinder, and keystone:" -msgstr "" -"コンポーネントの設定ファイルを確認して、それぞれの OpenStack コンポーネントが" -"対応するデータベースにどのようにアクセスするかを把握ください。" -"sql_connection またはただの connection を探します。" -"以下のコマンドは、grep を使用して、nova、glance、cinder、" -"keystone の SQL 接続文字列を表示します。" - -msgid "Review usage and trends over the past quarter." -msgstr "この四半期における使用量および傾向を確認します。" - -msgid "Role" -msgstr "役割" - -msgid "Role-based rules" -msgstr "ロールに基づいたルール" - -msgid "Roll Your Own OpenStack" -msgstr "自分の OpenStack の展開" - -msgid "Roll back configuration files." -msgstr "設定ファイルをロールバックします。" - -msgid "Roll back packages." -msgstr "パッケージをロールバックします。" - -msgid "Roll these tests into an alerting system." -msgstr "それらのテストをアラートシステムに組み込む" - -msgid "Rolling back a failed upgrade" -msgstr "失敗したアップグレードのロールバック" - -msgid "" -"Rough-draft design discussions (\"etherpads\") from the last design summit" -msgstr "直近のデザインサミットからの大まかな設定に関する議論 (etherpad)" - -msgid "Routers for private networks created within OpenStack." 
-msgstr "OpenStack 内に作成されるプライベートネットワーク用のルーター" - -msgid "" -"Run glance-* servers on the swift-proxy server." -msgstr "swift-proxy サーバでglance-*サーバを稼働する" - -msgid "Run a central dedicated database server." -msgstr "中央データベースサーバを構成する" - -msgid "" -"Run a number of services in a highly available fashion, utilizing Pacemaker " -"and HAProxy to provide a virtual IP and load-balancing functions so all " -"controller nodes are being used." -msgstr "" -"全コントローラーノードが使用されるように Pacemaker と HAProxy を利用して仮想 " -"IP および負荷分散機能を提供して、多数のサービスを高可用性で実行します。" - -msgid "" -"Run all of the environment's networking services, with the exception of the " -"networking API service (which runs on the controller node)." -msgstr "" -"環境の全ネットワークサービスを実行します。ただし、ネットワーク API サービス " -"(コントローラーノードで実行される) を除きます。" - -msgid "Run one VM per service." -msgstr "1サービスにつき1つのVMを稼働させる" - -msgid "Run operating system and scratch space" -msgstr "OS を起動し、空き領域に記録する" - -msgid "Run the DevStack script to create the stack user:" -msgstr "DevStack のスクリプトを実行して、stack ユーザーを作成します。" - -msgid "Run the bare minimum of services needed to facilitate these instances." -msgstr "" -"それらのインスタンスを円滑に稼働するために必要な最低限のサービスを実行しま" -"す。" - -msgid "Run the following command to view the current iptables configuration:" -msgstr "iptablesの現在の構成を見るには、以下のコマンドを実行します。" - -msgid "Run the following command to view the properties of existing images:" -msgstr "" -"既存のイメージのプロパティを表示するために、以下のコマンドを実行します。" - -msgid "Run the stack script that will install OpenStack: " -msgstr "" -"OpenStack をインストールする stack スクリプトを実行します。" - -msgid "Run this on the command line of the following areas:" -msgstr "このコマンドは以下の場所で実行します。" - -msgid "" -"Running sync writes dirty buffers (buffered blocks that have " -"been modified but not written yet to the disk block) to disk." -msgstr "" -"sync を実行することにより、ダーティーバッファー (変更されたが、" -"ディスクに書き込まれていないバッファー済みブロック) をディスクに書き込みま" -"す。" - -msgid "Running Daemons on the CLI" -msgstr "コマンドラインでのデーモンの実行" - -msgid "Running Instances" -msgstr "稼働中のインスタンス" - -msgid "Running a dedicated storage system can be operationally simpler." -msgstr "専用のストレージシステムを動作させることで、運用がシンプルになります。" - -msgid "" -"Running a distributed file system can make you lose your data locality " -"compared with nonshared storage." -msgstr "" -"分散ファイルシステムを実行すると、非共有ストレージに比べデータの局所性が失わ" -"れる可能性があります。" - -msgid "" -"Running a shared file system on a storage system apart from the computes " -"nodes is ideal for clouds where reliability and scalability are the most " -"important factors. Running a shared file system on the compute nodes " -"themselves may be best in a scenario where you have to deploy to preexisting " -"servers for which you have little to no control over their specifications. " -"Running a nonshared file system on the compute nodes themselves is a good " -"option for clouds with high I/O requirements and low concern for reliability." -"scalingfile " -"system choice" -msgstr "" -"信頼性と拡張性が最も重要な要因とするクラウドでは、コンピュートノードと分離し" -"てストレージシステムで共有ファイルシステムを実行することが理想的です。仕様の" -"コントロールをほぼできない、または全くできない既存のサーバーにデプロイシなけ" -"ればならない場合などは、コンピュートノード自体で共有ファイルシステムを実行す" -"るとベストです。また、I/O 要件が高く、信頼性にあまり配慮しなくてもいいクラウ" -"ドには、コンピュートノード上で非共有ファイルシステムを実行すると良いでしょ" -"う。スケーリング" -"ファイルシステムの選択" - -msgid "Running programs have written their contents to disk" -msgstr "実行中のプログラムがコンテンツをディスクに書き込んだこと" - -msgid "" -"Runs as a background process. 
On Linux platforms, a daemon is usually " -"installed as a service.daemonsbasics of" -msgstr "" -"バックグラウンドプロセスとして実行されます。Linux プラットフォームでは、デー" -"モンは通常サービスとしてインストールされます。デーモン基本" - -msgid "S3" -msgstr "S3" - -msgid "SQL" -msgstr "SQL" - -msgid "SQL database (such as MySQL or PostgreSQL)" -msgstr "SQL データベース (MySQL や PostgreSQL など)" - -msgid "Save the configuration files on all nodes. For example:" -msgstr "すべてのノードで設定ファイルを保存します。例:" - -msgid "Scalable Hardware" -msgstr "スケーラブルハードウェア" - -msgid "Scaling" -msgstr "スケーリング" - -msgid "Scenario" -msgstr "シナリオ" - -msgid "Scheduler Improvements" -msgstr "スケジューラーの改善" - -msgid "Scheduling" -msgstr "スケジューリング" - -msgid "Scheduling services" -msgstr "スケジュールサービス" - -msgid "Scheduling to hosts with trusted hardware support." -msgstr "" -"トラステッドコンピューティング機能に対応したホスト群に対してスケジューリング" -"したい場合" - -msgid "Script" -msgstr "スクリプト" - -msgid "" -"Secondly, DAIR's shared /var/lib/nova/instances " -"directory contributed to the problem. Since all compute nodes have access to " -"this directory, all compute nodes periodically review the _base directory. " -"If there is only one instance using an image, and the node that the instance " -"is on is down for a few minutes, it won't be able to mark the image as still " -"in use. Therefore, the image seems like it's not in use and is deleted. When " -"the compute node comes back online, the instance hosted on that node is " -"unable to start." -msgstr "" -"次に、DAIR の共有された /var/lib/nova/instances が問題を" -"助長した。全コンピュートノードがこのディレクトリにアクセスするため、全てのコ" -"ンピュートノードは定期的に _base ディレクトリを見直していた。あるイメージを使" -"用しているインスタンスが1つだけあり、そのインスタンスが存在するノードが数分" -"間ダウンした場合、そのイメージが使用中であるという印を付けられなくなる。それ" -"ゆえ、イメージは使用中に見えず、削除されてしまったのだ。そのコンピュートノー" -"ドが復帰した際、そのノード上でホスティングされていたインスタンスは起動できな" -"い。" - -msgid "Security Configuration for Compute, Networking, and Storage" -msgstr "Compute、Networking、Storage のセキュリティ設定" - -msgid "Security Groups" -msgstr "セキュリティグループ" - -msgid "Security Information" -msgstr "セキュリティー情報" - -msgid "Security group rules" -msgstr "セキュリティグループルール" - -msgid "Security groups" -msgstr "セキュリティグループ" - -msgid "" -"Security groups are sets of IP filter rules that are applied to an " -"instance's networking. They are project specific, and project members can " -"edit the default rules for their group and add new rules sets. All projects " -"have a \"default\" security group, which is applied to instances that have " -"no other security group defined. Unless changed, this security group denies " -"all incoming traffic." -msgstr "" -"セキュリティグループは、インスタンスのネットワークに適用される、IP フィルター" -"ルールの組です。それらはプロジェクト固有です。プロジェクトメンバーがそれらの" -"グループの標準ルールを編集でき、新しいルールを追加できます。すべてのプロジェ" -"クトが \"default\" セキュリティグループを持ちます。他のセキュリティグループが" -"定義されていないインスタンスには \"default\" セキュリティグループが適用されま" -"す。\"default\" セキュリティグループは、ルールを変更しない限り、すべての受信" -"トラフィックを拒否します。" - -msgid "" -"Security groups for the current project can be found on the OpenStack " -"dashboard under Access & Security. To see details " -"of an existing group, select the edit action for that " -"security group. Obviously, modifying existing groups can be done from this " -"edit interface. There is a Create Security " -"Group button on the main Access & Security page for creating new groups. We discuss the terms used in these " -"fields when we explain the command-line equivalents." 
-msgstr "" -"現在のプロジェクトのセキュリティグループが、OpenStack dashboard の " -"アクセスとセキュリティ にあります。既存のグループの詳細を表示する" -"には、セキュリティグループの 編集 を選択します。自明です" -"が、この 編集 インターフェースから既存のグループを変更で" -"きます。新しいグループを作成するための セキュリティグループの作成" -" ボタンが、メインの アクセスとセキュリティ " -"ページにあります。同等のコマンドラインを説明するとき、これらの項目において使" -"用される用語について説明します。" - -msgid "" -"Security groups, as discussed earlier, are typically required to allow " -"network traffic to an instance, unless the default security group for a " -"project has been modified to be more permissive.security groupsuser trainingsecurity groups" -msgstr "" -"セキュリティグループは、前に記載したように、プロジェクトのデフォルトのセキュ" -"リティグループがより許可するよう変更されていない限り、インスタンスへのネット" -"ワーク通信を許可するために一般的に必要となります。security groupsuser trainingsecurity groups" - -msgid "Security-supported" -msgstr "セキュリティサポート" - -msgid "See ." -msgstr " を参照してください。" - -msgid "See ." -msgstr " を参照してください。" - -msgid "See ." -msgstr " を参照してください。" - -msgid "See ." -msgstr " を参照してください。" - -msgid "" -"See the How To Contribute page on the wiki for more information on the " -"steps you need to take to submit your first documentation review or change." -msgstr "" -"ドキュメントのレビューや変更を最初に行うのに必要な手続きについての詳しい情報" -"は、Wiki の How To Contribute ページを参照して下さい。" - -msgid "Segregating Your Cloud" -msgstr "クラウドの分離" - -msgid "" -"Select the Identity tab in the left navigation bar." -msgstr "" -"左側にあるナビゲーションバーの ユーザー管理 タブを選択し" -"ます。" - -msgid "Semiannually" -msgstr "1 年に 2 回" - -msgid "Send unmatched traffic to the fallback chain." -msgstr "一致しない通信のフォールバックチェインへの送信。" - -msgid "Sep 22, 2011" -msgstr "2011年9月22日" - -msgid "Sep 22, 2014" -msgstr "2014年9月22日" - -msgid "Sep 27, 2012" -msgstr "2012年9月27日" - -msgid "Separation of Services" -msgstr "サービスの分離" - -msgid "Series" -msgstr "シリーズ" - -msgid "Seriously, Google." -msgstr "マジ?Google。" - -msgid "Server load" -msgstr "サーバー負荷" - -msgid "Servers and Services" -msgstr "サーバーとサービス" - -msgid "Service" -msgstr "サービス" - -msgid "Services" -msgstr "サービス" - -msgid "Services for Networking" -msgstr "ネットワーク関係のサービス" - -msgid "Set Block Storage Quotas" -msgstr "Block Storage のクォータの設定" - -msgid "Set Compute Service Quotas" -msgstr "コンピュートサービスのクォータの設定" - -msgid "Set Image Quotas" -msgstr "イメージクォータの設定" - -msgid "Set Object Storage Quotas" -msgstr "Object Storage のクォータの設定" - -msgid "Set some permissions you can use to view the DevStack screen later:" -msgstr "" -"後から DevStack の画面を表示するために使用できるよう、いくつかのパーミッショ" -"ンを設定します。" - -msgid "Setting with neutron command" -msgstr "neutron コマンドを用いたセットアップ方法" - -msgid "Setting with nova command" -msgstr "nova コマンドを用いたセットアップ方法" - -msgid "" -"Several minutes after nova-network is restarted, you " -"should see new dnsmasq processes running:" -msgstr "" -"nova-networkの再起動から数分後、新たなdnsmasqプロセスが動" -"いていることが確認できるでしょう。" - -msgid "" -"Several options are available for MySQL load balancing, and the supported " -"AMQP brokers have built-in clustering support. Information on how to " -"configure these and many of the other services can be found in .Advanced Message Queuing Protocol (AMQP)" -msgstr "" -"MySQL の負荷分散には複数のオプションがあり、サポートされている AMQP ブロー" -"カーにはクラスタリングサポートが含まれています。これらの設定方法やその他多く" -"のサービスに関する情報は、 を参照してください。Advanced Message Queuing Protocol (AMQP)" - -msgid "" -"Several pre-made images exist and can easily be imported into the Image " -"service. 
A common image to add is the CirrOS image, which is very small and "
-"used for testing purposes.imagesadding To add this image, simply "
-"do:"
-msgstr ""
-"いくつかの構築済みイメージが存在します。簡単に Image service の中にインポート"
-"できます。追加する一般的なイメージは、非常に小さく、テスト目的に使用される "
-"CirrOS イメージです。イメージ追加このイメージを追加するには、単"
-"に次のようにします。"
-
-msgid "Shared File System storage"
-msgstr "Shared File System ストレージ"
-
-msgid "Shared File Systems Service"
-msgstr "Shared File Systems サービス"
-
-msgid "Shared services"
-msgstr "共有サービス"
-
-msgid "Shared storage using NFS*"
-msgstr "NFS を使用する共有ストレージ* "
-
-msgid "Sharing Images Between Projects"
-msgstr "プロジェクト間のイメージの共有"
-
-msgid "Sheepdog"
-msgstr "Sheepdog"
-
-msgid ""
-"Sheepdog is a userspace distributed storage system. Sheepdog scales to "
-"several hundred nodes, and has powerful virtual disk management features "
-"like snapshot, cloning, rollback, thin provisioning."
-msgstr ""
-"Sheepdog は、ユーザー空間の分散ストレージシステムです。数百ノードまでスケール"
-"します。スナップショット、複製、ロールバック、シンプロビジョニングなどの高度"
-"な仮想ディスク管理機能を持ちます。"
-
-msgid "Should backups be kept off-site?"
-msgstr "オフサイトにバックアップを置くべきか?"
-
-msgid ""
-"Should my persistent storage drives be contained in my compute nodes, or "
-"should I use external storage?"
-msgstr ""
-"永続的ストレージをコンピュートノード内に持つべきか?それとも外部ストレージに"
-"持つべきか?"
-
-msgid ""
-"Shows OpenStack administrators how to create and manage resources in an "
-"OpenStack cloud with the OpenStack dashboard and OpenStack client commands"
-msgstr ""
-"OpenStack の管理者が、OpenStack Dashboard と OpenStack クライアントコマンドを使って、OpenStack クラウドのリソー"
-"スの作成・管理を行う方法を説明しています。"
-
-msgid ""
-"Shows OpenStack end users how to create and manage resources in an OpenStack "
-"cloud with the OpenStack dashboard and OpenStack client commands"
-msgstr ""
-"OpenStack のエンドユーザーが、OpenStack Dashboard と OpenStack クライアントコ"
-"マンドを使って、OpenStack クラウドのリソースの作成・管理を行う方法を説明して"
-"います"
-
-msgid ""
-"Shows a policy restricting the ability to manipulate flavors to "
-"administrators using the Admin API only.admin API"
-msgstr ""
-"インスタンスタイプを操作する権限を、管理 API を使用する管理者だけに限定するポ"
-"リシーを表します。admin API"
-
-msgid ""
-"Shows a rule that evaluates successfully if the current user is an "
-"administrator or the owner of the resource specified in the request (tenant "
-"identifier is equal)."
-msgstr ""
-"現在のユーザーが、管理者、またはリクエストで指定されたリソースの所有者 (テナ"
-"ント識別子が同じ) であれば、成功であると評価されるルールを表します。"
-
-msgid ""
-"Shows commands or other text that should be typed literally by the user."
-msgstr ""
-"ユーザーにより書いてあるとおりに入力されるべきコマンドや文字列を表します。"
-
-msgid ""
-"Shows text that should be replaced with user-supplied values or by values "
-"determined by context."
-msgstr ""
-"ユーザーが指定した値、または文脈により決められる値で置き換えるべき文字列を表"
-"します。"
-
-msgid ""
-"Shows the default policy, which is always evaluated if an API operation does "
-"not match any of the policies in policy.json." 
-msgstr "" -"API 操作が policy.json のどのポリシーとも一致しなかった場合に、" -"必ず評価される規定のポリシーを表します。" - -msgid "" -"Shows you how to obtain, create, and modify virtual machine images that are " -"compatible with OpenStack" -msgstr "" -"OpenStack で利用可能な仮想マシンイメージを取得、作成、更新する方法について説" -"明されています" - -msgid "Shutting Down a Storage Node" -msgstr "ストレージノードのシャットダウン" - -msgid "Signature hash method = SHA-256" -msgstr "署名のハッシュ方法 = SHA-256" - -msgid "Signature hash methods: SHA-224, SHA-256, SHA-384, and SHA-512" -msgstr "署名のハッシュ方法: SHA-224、SHA-256、SHA-384、SHA-512" - -msgid "Signature key type = RSA-PSS" -msgstr "署名の鍵形式 = RSA-PSS" - -msgid "" -"Signature key types: DSA, ECC_SECT571K1, ECC_SECT409K1, ECC_SECT571R1, " -"ECC_SECT409R1, ECC_SECP521R1, ECC_SECP384R1, and RSA-PSS" -msgstr "" -"署名の鍵形式: DSA、ECC_SECT571K1、ECC_SECT409K1、ECC_SECT571R1、" -"ECC_SECT409R1、ECC_SECP521R1、ECC_SECP384R1、RSA-PSS" - -msgid "Signature verification will occur when Compute boots the signed image" -msgstr "Compute が署名付きイメージを起動するとき、署名が検証されます。" - -msgid "" -"Similarly, if you have an existing cloud and are looking to upgrade from " -"nova-network to OpenStack Networking, you should have the " -"option to delay the upgrade for this period of time. However, each release " -"of OpenStack brings significant new innovation, and regardless of your use " -"of networking methodology, it is likely best to begin planning for an " -"upgrade within a reasonable timeframe of each release." -msgstr "" -"同様に、既存のクラウドを持ち、nova-network から OpenStack " -"Networking にアップグレードするつもりである場合、この期間中のアップグレードを" -"遅らせる選択肢もあるでしょう。しかしながら、OpenStack の各リリースは新しい重" -"要なイノベーションをもたらします。ネットワークの使用法によらず、各リリースの" -"合理的な時間枠の中でアップグレードの計画を始めることが最も良いでしょう。" - -msgid "" -"Similarly, the default RAM allocation ratio of 1.5:1 means that the " -"scheduler allocates instances to a physical node as long as the total amount " -"of RAM associated with the instances is less than 1.5 times the amount of " -"RAM available on the physical node." -msgstr "" -"同様に、RAM 割当比のデフォルト1.5:1 は、インスタンスに関連づけられた RAM の総" -"量が物理ノードで利用できるメモリ量の1.5倍未満であれば、スケジューラーがその物" -"理ノードにインスタンスを割り当てることを意味します。" - -msgid "" -"Since my database contained many recordsover 1 million metadata records and " -"over 300,000 instance records in \"deleted\" or \"errored\" stateseach " -"search took a long time. I decided to clean up the database by first " -"archiving a copy for backup and then performing some deletions using the " -"MySQL client. For example, I ran the following SQL command to remove rows of " -"instances deleted for over a year:" -msgstr "" -"データベースに 100 万以上のメタデータおよび 300,000 インスタンスのレコードが" -"「削除済み」または「エラー」状態で含まれていました。MySQL クライアントを使用" -"して、まずバックアップを取得し、データベースをクリーンアップし、いくつか削除" -"を実行することにしました。例えば、以下の SQL コマンドを実行して、1 年以上の間" -"に削除されたインスタンスの行を削除しました。" - -msgid "" -"Since you've made it this far in the book, you should consider becoming an " -"official individual member of the community and join the OpenStack Foundation. The " -"OpenStack Foundation is an independent body providing shared resources to " -"help achieve the OpenStack mission by protecting, empowering, and promoting " -"OpenStack software and the community around it, including users, developers, " -"and the entire ecosystem. We all share the responsibility to make this " -"community the best it can possibly be, and signing up to be a member is the " -"first step to participating. 
Like the software, individual membership within " -"the OpenStack Foundation is free and accessible to anyone.OpenStack communityjoining" -msgstr "" -"この本をここまで読んで来たので、あなたはコミュニティーの正式な個人メンバーに" -"なって、OpenStack " -"Foundation に加入する (https://www.openstack.org/join/) ことを考えてい" -"ることでしょう。 OpenStack Foundation は、共有リソースを提供し OpenStack の目" -"的の達成を支援する独立組織で、OpenStack ソフトウェア、およびユーザ、開発者、" -"エコシステム全体といった OpenStack を取り巻くコニュニティーを守り、活力を与" -"え、普及の促進を行います。我々全員がこのコミュニティーをできる限りよいものに" -"していく責任を共有します。メンバーになることはコミュニティーに参加する最初の" -"ステップです。ソフトウェア同様、 OpenStack Foundation の個人会員は無料で誰で" -"もなることができます。OpenStack コミュ" -"ニティー参加" - -msgid "Single nova-network or multi-host?" -msgstr "単一の nova-network またはマルチホスト?" - -msgid "" -"Size your database server accordingly, and scale out beyond one cloud " -"controller if many instances will report status at the same time and " -"scheduling where a new instance starts up needs computing power." -msgstr "" -"データベースサーバーを負荷に応じてサイジングしてください。もし、多数のインス" -"タンスが同時に状態を報告したり、CPU能力が必要な新規インスタンス起動のスケ" -"ジューリングを行う場合は、1台のクラウドコントローラーを超えてスケールアウト" -"してください。" - -msgid "Sizing determined by…" -msgstr "容量の指定" - -msgid "So it was a qemu/kvm bug." -msgstr "つまり、これは qemu/kvm のバグである。" - -msgid "" -"So many OpenStack resources are available online because of the fast-moving " -"nature of the project, but there are also resources listed here that the " -"authors found helpful while learning themselves." -msgstr "" -"プロジェクトの急速な成熟のため、数多くの OpenStack の情報源がオンラインで利用" -"可能ですが、執筆者が学習している間に有用だったリソースを一覧化しています。" - -msgid "" -"So, I found myself wondering what changed in the EC2 API on Havana that " -"might cause this to happen. Was it a bug or a normal behavior that I now " -"need to work around?" -msgstr "" -"そのため、この問題を引き起こしているかもしれない、Havana で EC2 API に行われ" -"た変更を自分で探しはじめました。これはバグなのか、回避策が必要となる通常の動" -"作なのか?" - -msgid "SolidFire" -msgstr "SolidFire" - -msgid "Some of the resources that you want to monitor include:" -msgstr "監視項目に含む幾つかのリソースをあげます。" - -msgid "Some other examples for Intelligent Alerting include:" -msgstr "インテリジェントなアラートのその他の例としては以下があります。" - -msgid "" -"Sometimes a compute node either crashes unexpectedly or requires a reboot " -"for maintenance reasons." -msgstr "" -"コンピュートノードは、予期せずクラッシュしたり、メンテナンスのために再起動が" -"必要になったりすることがときどきあります。" - -msgid "" -"Sometimes a user and a group have a one-to-one mapping. This happens for " -"standard system accounts, such as cinder, glance, nova, and swift, or when " -"only one user is part of a group." -msgstr "" -"ユーザーとグループは、一対一でマッピングされる場合があります。このようなマッ" -"ピングは cinder、glance、nova、swift などの標準システムアカウントや、グループ" -"にユーザーが 1 人しかいない場合に発生します。" - -msgid "" -"Sometimes an instance is terminated but the floating IP was not correctly de-" -"associated from that instance. Because the database is in an inconsistent " -"state, the usual tools to disassociate the IP no longer work. To fix this, " -"you must manually update the database.IP addressesfloatingfloating IP address" -msgstr "" -"しばしば、フローティングIPを正しく開放しないままインスタンスが終了されること" -"があります。するとデータベースは不整合状態となるため、通常のツールではうまく" -"開放できません。解決するには、手動でデータベースを更新する必要があります。" -"IP addressesfloatingfloating IP address" - -msgid "" -"Source openrc to set up your environment variables for " -"the CLI:" -msgstr "openrc を読み込み、CLI の環境変数を設定します。" - -msgid "Source openrc to set up your environment variables for the CLI:" -msgstr "openrc を読み込み、CLI の環境変数を設定します。" - -msgid "" -"SourceGroups are a special dynamic way of defining the CIDR of allowed " -"sources. 
The user specifies a SourceGroup (security group name) and then all "
-"the users' other instances using the specified SourceGroup are selected "
-"dynamically. This dynamic selection alleviates the need for individual rules "
-"to allow each new member of the cluster."
-msgstr ""
-"ソースグループは許可されたソースの CIDR を動的に定義する特別な方法です。ユー"
-"ザーがソースグループ (セキュリティグループ名) を指定します。これにより、指定"
-"されたソースグループを使用する、ユーザーの他のインスタンスが動的にすべて選択"
-"されます。この動的な選択により、クラスターのそれぞれの新しいメンバーを許可する、個別のルールの必要性を軽減できま"
-"す。"
-
-msgid "Spare space for future growth"
-msgstr "将来のための余剰"
-
-msgid "Specific Configuration Topics"
-msgstr "設定に関する個別のトピック"
-
-msgid ""
-"Specifies the size of a secondary ephemeral data disk. This is an empty, "
-"unformatted disk and exists only for the life of the instance."
-msgstr ""
-"二次的な一時データディスクの容量を指定します。これは空の、フォーマットされて"
-"いないディスクです。インスタンスの生存期間だけ存在します。"
-
-msgid "Specify access rules and security services for existing shares."
-msgstr "既存の共有のアクセスルールおよびセキュリティーサービスを指定します。"
-
-msgid "Specify quotas for existing users or tenants"
-msgstr "既存のユーザまたはテナントのクォータを指定します"
-
-msgid "StackTach"
-msgstr "StackTach"
-
-msgid "Standard VLAN number limitation."
-msgstr "標準的な VLAN 数の上限。"
-
-msgid ""
-"Standard backup best practices apply when creating your OpenStack backup "
-"policy. For example, how often to back up your data is closely related to "
-"how quickly you need to recover from data loss.backup/recoveryconsiderations"
-msgstr ""
-"OpenStackバックアップポリシーを作成する際、標準的なバックアップのベストプラク"
-"ティスが適用できます。例えば、どの程度の頻度でバックアップを行なうかは、どの"
-"くらい早くデータロスから復旧させる必要があるかに密接に関連しています。"
-"backup/recoveryconsiderations"
-
-msgid "Standard networking."
-msgstr "標準的なネットワーク。"
-
-msgid "Starting Instances"
-msgstr "インスタンスの起動"
-
-msgid ""
-"Starting instances and deleting instances is demanding on the compute node "
-"but also demanding on the controller node because of all the API queries and "
-"scheduling needs."
-msgstr ""
-"インスタンスの起動と削除は、コンピュートノードに負荷をかけますが、それだけで"
-"なく、すべてのAPI処理とスケジューリングの必要性のために、コントローラーノード"
-"にも負荷をかけます。"
-
-msgid "Status"
-msgstr "状態"
-
-msgid "Status: Confirmed"
-msgstr "Status: Confirmed"
-
-msgid "Status: Fix Committed"
-msgstr "Status: Fix Committed"
-
-msgid "Status: In Progress"
-msgstr "Status: In Progress"
-
-msgid "Status: Incomplete"
-msgstr "Status: Incomplete"
-
-msgid "Status: New"
-msgstr "Status: New"
-
-msgid "Status: Fix Released"
-msgstr "Status: Fix Released"
-
-msgid "Steps to reproduce the bug, including what went wrong"
-msgstr "バグの再現手順。何がおかしいかも含めてください"
-
-msgid "Stop all OpenStack services."
-msgstr "すべての OpenStack サービスを停止します。"
-
-msgid "Storage"
-msgstr "ストレージ"
-
-msgid "Storage Decisions"
-msgstr "ストレージ選定"
-
-msgid "Storage Driver Support"
-msgstr "ストレージドライバーのサポート"
-
-msgid "Storage Node Failures and Maintenance"
-msgstr "ストレージノードの故障とメンテナンス"
-
-msgid ""
-"Storage is found in many parts of the OpenStack stack, and the differing "
-"types can cause confusion to even experienced cloud engineers. This section "
-"focuses on persistent storage options you can configure with your cloud. "
-"It's important to understand the distinction between ephemeral storage and persistent storage." 
-msgstr "" -"ストレージは、OpenStack のスタックの多くの箇所で使用されており、ストレージの" -"種別が異なると、経験豊かなエンジニアでも混乱する可能性があります。本章は、お" -"使いのクラウドで設定可能な永続ストレージオプションにフォーカスします。" -" 一時 ストレージと " -" 永続 ストレージの相違" -"点を理解することが重要です。" - -msgid "Storage node" -msgstr "ストレージノード" - -msgid "Storage nodes" -msgstr "ストレージノード" - -msgid "" -"Storage nodes store all the data required for the environment, including " -"disk images in the Image service library, and the persistent storage volumes " -"created by the Block Storage service. Storage nodes use GlusterFS technology " -"to keep the data highly available and scalable." -msgstr "" -"ストレージノードは、環境に必要な全データを保管します。これには、Image " -"service ライブラリ内のディスクイメージや、Block Storage によって作成された永" -"続ストレージボリュームが含まれます。ストレージノードは GlusterFS テクノロジー" -"を使用してデータの高可用性とスケーラビリティを確保します。" - -msgid "Store data, including VM images" -msgstr "データを保存する(VMイメージも含む)" - -msgid "" -"Stores and serves images with metadata on each, for launching in the cloud" -msgstr "" -"クラウド内で起動するための各メタデータが付属したイメージデータを蓄え、提供し" -"ます" - -msgid "Strengths" -msgstr "長所" - -msgid "Subnet router" -msgstr "サブネットルーター" - -msgid "Summary" -msgstr "概要" - -msgid "" -"Supply highly available \"infrastructure\" services, such as MySQL and Qpid, " -"that underpin all the services." -msgstr "" -"高可用性の「インフラストラクチャー」サービス (全サービスの基盤となる MySQL " -"や Qpid など) を提供します。 " - -msgid "" -"Support for global clustering of object storage servers is available for all " -"supported releases. You would implement these global clusters to ensure " -"replication across geographic areas in case of a natural disaster and also " -"to ensure that users can write or access their objects more quickly based on " -"the closest data center. You configure a default region with one zone for " -"each cluster, but be sure your network (WAN) can handle the additional " -"request and response load between zones as you add more zones and build a " -"ring that handles more zones. Refer to Geographically Distributed Clusters in the documentation " -"for additional information.Object " -"Storagegeographical considerationsstoragegeographical considerationsconfiguration optionsgeographical storage considerations" -msgstr "" -"オブジェクトストレージサーバーのグローバルクラスターが、すべてのサポートされ" -"ているリリースで利用できます。自然災害の発生時に備えて地理的な地域をまたがっ" -"て確実に複製するために、またユーザーが最も近いデータセンターに基づいてより迅" -"速にオブジェクトにアクセスできるようにするために、これらのグローバルクラス" -"ターを導入できるでしょう。各クラスターに 1 つのゾーンを持つデフォルトのリー" -"ジョンを設定します。しかし、より多くのゾーンを追加して、より多くのゾーンを処" -"理するリングを構築するので、お使いのネットワーク (WAN) が、ゾーン間の追加リク" -"エストとレスポンスの負荷を処理できることを確認してください。詳細は Geographically Distributed " -"Clusters にあるドキュメントを参照してください。Object Storagegeographical " -"considerationsstoragegeographical considerationsconfiguration " -"optionsgeographical storage considerations" - -msgid "" -"Sure enough, the user had been periodically refreshing the console log page " -"on the dashboard and the 5G file was traversing the Rabbit cluster to get to " -"the dashboard." -msgstr "" -"思った通り、ユーザはダッシュボード上のコンソールログページを定期的に更新して" -"おり、ダッシュボードに向けて5GB のファイルが RabbitMQ クラスタを通過してい" -"た。" - -msgid "" -"Suspend I/O to the disks, similar to the Linux fsfreeze -f operation." -msgstr "" -"ディスクへの I/O を一時停止します。Linux の fsfreeze -f 処" -"理と似ています。" - -msgid "" -"Suspend the instance using the virsh command, taking note " -"of the internal ID:" -msgstr "" -"virsh コマンドを使用してインスタンスを一時停止します。内" -"部 ID を記録します。" - -msgid "Suspend the instance using the virsh command." 
-msgstr "" -"virsh コマンドを使用して、インスタンスを一時停止します。" - -msgid "Swap" -msgstr "スワップ" - -msgid "" -"Swap space to free up memory for processes, as an independent area of the " -"physical disk used only for swapping and nothing else" -msgstr "" -"プロセス用にメモリーを空ける Swap 領域。物理ディスクから独立した、スワップの" -"みに使用される領域。" - -msgid "Swift" -msgstr "Swift" - -msgid "" -"Swift should notice the new disk and that no data exists. It then begins " -"replicating the data to the disk from the other existing replicas." -msgstr "" -"Swift は新しいディスクを認識します。また、データが存在しないことを認識しま" -"す。そうすると、他の既存の複製からディスクにデータを複製しはじめます。" - -msgid "" -"Switch back to the n-sch screen. Among the log statements, " -"you'll see the line:" -msgstr "" -"n-sch 画面に切り替えます。ログ出力の中に、以下の行を見つけられま" -"す。" - -msgid "Switch to the stack user:" -msgstr "stack ユーザーに変更します。" - -msgid "Switches must support 802.1q VLAN tagging." -msgstr "802.1q VLAN タギングに対応したスイッチが必要。" - -msgid "Systems Administration" -msgstr "システム管理" - -msgid "" -"Sébastien Han has written excellent blogs and generously gave his permission " -"for re-use." -msgstr "" -"Sébastien Han は素晴らしいブログを書いてくれて、寛大にも再利用の許可を与えて" -"くれました。" - -msgid "TCP/IP Illustrated, Volume 1: The Protocols, 2/E" -msgstr "TCP/IP Illustrated, Volume 1: The Protocols, 2/E" - -msgid "Tailing Logs" -msgstr "最新ログの確認" - -msgid "Taking Snapshots" -msgstr "スナップショットの取得" - -msgid "" -"Taking the first 11 characters, we can construct a device name of " -"tapff387e54-9e from this output." -msgstr "" -"この出力から最初の 11 文字をとり、デバイス名 tapff387e54-9e を作ることができ" -"ます。" - -msgid "Tales From the Cryp^H^H^H^H Cloud" -msgstr "ハリウッド^H^H^H^H^Hクラウドナイトメア" - -msgid "Tenant Network Separation" -msgstr "テナントネットワークの分離" - -msgid "Terminal 1:" -msgstr "端末 1:" - -msgid "Terminal 2:" -msgstr "端末 2:" - -msgid "" -"Test the middleware from outside DevStack on a remote machine that has " -"access to your DevStack instance:" -msgstr "" -" DevStack 環境の外の、DevStack 用インスタンスにアクセス可能なリモートマシンか" -"らミドルウェアをテストします。" - -msgid "" -"Test your middleware with the swift CLI. Start by switching to " -"the shell screen and finish by switching back to the swift-proxy screen to check the log output:" -msgstr "" -"swift の CLI でミドルウェアのテストをしてください。shell の " -"screen セッションに切り替えてテストを開始し、swift-proxy の " -"screen セッションにもどってログ出力をチェックして終了します。" - -msgid "" -"Test your scheduler with the nova CLI. Start by switching to the " -"shell screen and finish by switching back to the n-sch screen to check the log output:" -msgstr "" -"nova の CLI でスケジューラーのテストをしてください。shell の " -"screen セッションに切り替えてテストを開始し、n-sch screen セッ" -"ションにもどってログ出力をチェックして終了します。" - -msgid "That made no sense." -msgstr "これでは意味が無かった。" - -msgid "Thaw (unfreeze) the system" -msgstr "システムを解凍 (フリーズ解除) します" - -msgid "" -"The \"cluster\" rule allows SSH access from any other instance that uses the " -"global-http group." -msgstr "" -"\"cluster\" ルールにより、global-http グループを使用する他" -"のすべてのインスタンスから SSH アクセスが許可されます。" - -msgid "The 1gb NIC was still alive and active" -msgstr "1Gb NICはまだ生きていて、有効だった。" - -msgid "" -"The -s flag used in the cURL commands above are used to prevent " -"the progress meter from being shown. If you are having trouble running cURL " -"commands, you'll want to remove it. Likewise, to help you troubleshoot cURL " -"commands, you can include the -v flag to show you the verbose " -"output. There are many more extremely useful features in cURL; refer to the " -"man page for all the options." 
-msgstr "" -"上記の cURL コマンドで使用している -s flag は、進行状況メーター" -"が表示されないようにするために使用します。cURL コマンドの実行で問題が生じた場" -"合には、このオプションを削除してください。また、cURL コマンドのトラブルシュー" -"ティングを行う場合には、-v フラグを指定してより詳細な出力を表示" -"すると役立ちます。cURL には他にも多数の役立つ機能があります。全オプションは、" -"man ページで参照してください。" - -msgid "" -"The copy-from option copies the image from the location " -"specified into the /var/lib/glance/images directory. The same " -"thing is done when using the STDIN redirection with <, as shown in the " -"example." -msgstr "" -"copy-from オプションは、指定された位置から /var/lib/" -"glance/images ディレクトリの中にコピーします。例に示されたように " -"STDIN リダイレクションを使用するときに、同じことが実行されます。" - -msgid "" -"The glance-api part is an abstraction layer that allows a " -"choice of back end. Currently, it supports:" -msgstr "" -"glance-api部はバックエンドを選択する事ができる抽象的なレイヤーで" -"す。現在、以下をサポートしています:" - -msgid "" -"The location option is important to note. It does not copy the " -"entire image into the Image service, but references an original location " -"where the image can be found. Upon launching an instance of that image, the " -"Image service accesses the image from the location specified." -msgstr "" -"location オプションは注意する意味があります。Image service にイ" -"メージ全体のコピーを行わず、そのイメージがある元の位置への参照を保持します。" -"イメージのインスタンスを起動するとき、Image service が指定された場所からイ" -"メージにアクセスします。" - -msgid "" -"The nova flavor-create command allows authorized users to " -"create new flavors. Additional flavor manipulation commands can be shown " -"with the command: " -msgstr "" -"nova flavor-create コマンドにより、権限のあるユーザーが新しいフ" -"レーバーを作成できます。さらなるフレーバーの操作コマンドは次のコマンドを用い" -"て表示できます: " - -msgid "" -"The nova.conf option allow_same_net_traffic (which " -"defaults to true) globally controls whether the rules " -"apply to hosts that share a network. When set to true, " -"hosts on the same subnet are not filtered and are allowed to pass all types " -"of traffic between them. On a flat network, this allows all instances from " -"all projects unfiltered communication. With VLAN networking, this allows " -"access between instances within the same project. If " -"allow_same_net_traffic is set to false, " -"security groups are enforced for all connections. In this case, it is " -"possible for projects to simulate allow_same_net_traffic by " -"configuring their default security group to allow all traffic from their " -"subnet." -msgstr "" -"nova.conf のオプション allow_same_net_traffic (標準" -"で true) は、同じネットワークを共有するホストにルールを適" -"用するかを制御します。このオプションはシステム全体に影響するグローバルオプ" -"ションです。true に設定したとき、同じサブネットにあるホス" -"トはフィルターされず、それらの間ですべての種類の通信が通過できるようになりま" -"す。フラットなネットワークでは、これにより、全プロジェクトの全インスタンスが" -"通信をフィルターされなくなります。VLAN ネットワークでは、これにより、同じプロ" -"ジェクト内のインスタンス間でアクセスが許可されます。" -"allow_same_net_trafficfalse に設定されて" -"いると、セキュリティグループがすべての通信に対して強制されます。この場合、既" -"定のセキュリティグループをそれらのサブネットからのすべての通信を許可するよう" -"設定することにより、プロジェクトが allow_same_net_traffic をシ" -"ミュレートできます。" - -msgid "" -"The nova.quota_usages table keeps track of how many resources " -"the tenant currently has in use:" -msgstr "" -"nova.quota_usagesテーブルはどのくらいリソースをテナントが利用し" -"ているかを記録しています。" - -msgid "" -"The nova.quotas and nova.quota_usages tables store " -"quota information. If a tenant's quota is different from the default quota " -"settings, its quota is stored in the nova.quotas table. For example:" -msgstr "" -"nova.quotasnova.quota_usages テーブルはクォータ" -"の情報が保管されています。もし、テナントのクォータがデフォルト設定と異なる場" -"合、nova.quotas テーブル" -"に保管されます。以下に例を示します。" - -msgid "" -"The qg-<n> interface in the l3-agent router namespace " -"sends the packet on to its next hop through device eth2 on the " -"external bridge br-ex. 
This bridge is constructed similarly to " -"br-eth1 and may be inspected in the same way." -msgstr "" -"l3-agent のルーターの名前空間にある qg-<n> インターフェー" -"スは、外部ブリッジ br-ex にある eth2 デバイス経由で" -"次のホップにパケットを送信します。このブリッジは、br-eth1 と同じ" -"ように作られ、同じ方法で検査できるでしょう。" - -msgid "" -"The stack.sh script takes a while to run. Perhaps you can take " -"this opportunity to join the OpenStack Foundation." -msgstr "" -"stack.sh スクリプトは、実行にしばらく時間がかかります。おそら" -"く、この機会に OpenStack Foundation に参加できます。" - -msgid "" -"The user-data key is a special key in the metadata service that " -"holds a file that cloud-aware applications within the guest instance can " -"access. For example, cloudinit is an " -"open source package from Ubuntu, but available in most distributions, that " -"handles early initialization of a cloud instance that makes use of this user " -"data.user data" -msgstr "" -"user-data 鍵は、メタデータサービス内の特別キーです。ゲストインス" -"タンスにあるクラウド対応アプリケーションがアクセス可能なファイルを保持しま" -"す。たとえば、cloudinit システムは、" -"Ubuntu 発祥のオープンソースパッケージですが、ほとんどのディストリビューション" -"で利用可能です。このユーザーデータを使用するクラウドインスタンスの初期設定を" -"処理します。user data" - -msgid "" -"The OpenStack High Availability Guide offers " -"suggestions for elimination of a single point of failure that could cause " -"system downtime. While it is not a completely prescriptive document, it " -"offers methods and techniques for avoiding downtime and data loss.high availabilityconfiguration optionshigh availability" -msgstr "" -"OpenStack High Availability Guide は、システム停止につな" -"がる可能性がある、単一障害点の削減に向けた提案があります。完全に規定されたド" -"キュメントではありませんが、停止時間やデータ損失を避けるための方法や技術を提" -"供しています。high availabilityconfiguration " -"optionshigh availability" - -msgid "" -"The OpenStack " -"Security Guide provides a deep dive into securing an " -"OpenStack cloud, including SSL/TLS, key management, PKI and certificate " -"management, data transport and privacy concerns, and compliance.security issuesconfiguration optionsconfiguration optionssecurity" -msgstr "" -"OpenStack " -"Security Guide は、OpenStack クラウドのセキュア化に関する深" -"い考察を提供します。SSL/TLS、鍵管理、PKI および証明書管理、データ転送およびプ" -"ライバシーの懸念事項、コンプライアンスなど。security issuesconfiguration optionsconfiguration " -"optionssecurity" - -msgid "" -"The /etc/nova directory on both the cloud controller " -"and compute nodes should be regularly backed up.cloud controllersfile system backups andcompute nodesbackup/recovery of" -msgstr "" -"クラウドコントローラーとコンピュートノードの /etc/nova " -"ディレクトリーは、定期的にバックアップすべきです。cloud controllersfile system backups andcompute nodesbackup/recovery of" - -msgid "" -"The API Specifications define the core " -"actions, capabilities, and mediatypes of the OpenStack API. A client can " -"always depend on the availability of this core API, and implementers are " -"always required to support it in its entirety. Requiring strict adherence to the core API allows " -"clients to rely upon a minimal level of functionality when interacting with " -"multiple implementations of the same API.extensionsdesign considerationsdesign " -"considerationsextensions" -msgstr "" -"API仕様にはOpenStack APIのコアアクショ" -"ン、ケイパビリティ、メタタイプについて定義されています。クライアントはいつで" -"もこのコアAPIの可用性に頼る事ができ、実装者は常にそれら全体をサポートする必要があります。コアAPIの遵守を要求する" -"事は同じAPIで複数の実装と相互作用する際に、クライアントに機能の最小レベルに依" -"存する事ができます。拡張機能設計上の考慮事項設計上の考慮事項拡張機能" - -msgid "" -"The -H flag is required when running the daemons with " -"sudo because some daemons will write files relative to the user's home " -"directory, and this write may fail if -H is left off." 
-msgstr "" -"sudo を用いてデーモンを実行するとき、-H フラグが必要です。" -"いくつかのデーモンは、ユーザーのホームディレクトリーからの相対パスのファイル" -"に書き込みを行うため、-H がないと、この書き込みが失敗して" -"しまいます。" - -msgid "" -"The deleted field is set to 1 if the " -"instance has been deleted and NULL if it has not been " -"deleted. This field is important for excluding deleted instances from your " -"queries." -msgstr "" -"deleted フィールドは、インスタンスが削除されていると " -"1 がセットされます。削除されていなければ NULL です。このフィールドは、クエリーから削除済みインスタンスを除外するた" -"めに重要です。" - -msgid "" -"The host field tells which compute node is hosting the " -"instance." -msgstr "" -"host フィールドは、どのコンピュートノードがインスタンスを" -"ホストしているかを示します。" - -msgid "" -"The hostname field holds the name of the instance when it " -"is launched. The display-name is initially the same as hostname but can be " -"reset using the nova rename command." -msgstr "" -"hostname フィールドは、インスタンスが起動したときのインス" -"タンス名を保持します。display-name は、最初は hostname と同じですが、nova " -"rename コマンドを使って再設定することができます。" - -msgid "" -"The nova-conductor service is horizontally scalable. To " -"make nova-conductor highly available and fault tolerant, " -"just launch more instances of the nova-conductor process, " -"either on the same server or across multiple servers." -msgstr "" -"nova-conductorサービスは水平方向にスケーラブルです。" -"nova-conductorを冗長構成になるためには、nova-" -"conductor プロセスを同一サーバまたは複数のサーバに渡って複数起動するだ" -"けです。" - -msgid "" -"The nova-manage tool can provide some additional details:" -msgstr "" -"nova-manage ツールは、追加の情報を提供することが可能です:" - -msgid "" -"The nova-network service has the ability to operate in a " -"multi-host or single-host mode. Multi-host is when each compute node runs a " -"copy of nova-network and the instances on that compute " -"node use the compute node as a gateway to the Internet. The compute nodes " -"also host the floating IPs and security groups for instances on that node. " -"Single-host is when a central server—for example, the cloud controller—runs " -"the nova-network service. All compute nodes forward traffic " -"from the instances to the cloud controller. The cloud controller then " -"forwards traffic to the Internet. The cloud controller hosts the floating " -"IPs and security groups for all instances on all compute nodes in the cloud." -"single-host networkingnetworksmulti-hostmulti-host networkingnetwork designnetwork " -"topologymulti- vs. single-host networking" -msgstr "" -"nova-networkはマルチホストまたはシングルホストモードで運用" -"する事ができます。マルチホスト構成はそれぞれのコンピュートノードで" -"nova-network の複製とそのコンピュートノードのインスタンス" -"が稼働している時、コンピュートノードをインターネットへのゲートウェイとして利" -"用します。また、そのコンピュートノードはそこで稼働しているインスタンスのフ" -"ローティングIPアドレスとセキュリティグループを受け持ちます。シングルホスト構" -"成はクラウドコントローラと言った中央サーバがnova-network サービ" -"スを受け持ちます。すべてのコンピュートノードはインターネットへのトラフィック" -"をコントローラに転送します。クラウドコントローラがすべてのコントローラ上のす" -"べてのインスタンスのフローティングIPアドレスとセキュリティグループを受け持ち" -"ます。シングルホストネットワークネットワークマルチホストマルチホストネットワークネットワークデザインネットワークトポロジーマルチvsシング" -"ルホストネットワーク" - -msgid "" -"The uuid field is the UUID of the instance and is used " -"throughout other tables in the database as a foreign key. This ID is also " -"reported in logs, the dashboard, and command-line tools to uniquely identify " -"an instance." -msgstr "" -"uuid フィールドはインスタンスの UUID です。データベースに" -"ある他の表において外部キーとして使用されます。この ID は、インスタンスを一意" -"に識別するために、ログ、ダッシュボードおよびコマンドラインツールにおいて表示" -"されます。" - -msgid "" -"The command wasn't working, so I used , but " -"it immediately came back with an error saying it was unable to find the " -"backing disk. In this case, the backing disk is the Glance image that is " -"copied to /var/lib/nova/instances/_base when the image " -"is used for the first time. Why couldn't it find it? 
I checked the directory " -"and sure enough it was gone." -msgstr "" -" コマンドは機能しなかったので、 を使用した" -"が、すぐに仮想ディスクが見つからないとのエラーが返ってきた。この場合、仮想" -"ディスクは Glance イメージで、イメージが最初に使用する際に /var/" -"lib/nova/instances/_base にコピーされていた。何故イメージが見つか" -"らないのか?私はそのディレクトリをチェックし、イメージがないことを知った。" - -msgid "" -"The looked very, very weird. In short, it looked as though " -"network communication stopped before the instance tried to renew its IP. " -"Since there is so much DHCP chatter from a one minute lease, it's very hard " -"to confirm it, but even with only milliseconds difference between packets, " -"if one packet arrives first, it arrived first, and if that packet reported " -"network issues, then it had to have happened before DHCP." -msgstr "" -" の結果は非常に奇妙だった。一言で言えば、インスタンスが IP ア" -"ドレスを更新しようとする前に、まるでネットワーク通信が停止しているように見え" -"た。1分間のリース期間で大量の DHCP ネゴシエーションがあるため、確認作業は困" -"難を極めた。しかし、パケット間のたった数ミリ秒の違いであれ、あるパケットが最" -"初に到着する際、そのパケットが最初に到着し、そのパケットがネットワーク障害を" -"報告した場合、DHCP より前にネットワーク障害が発生していることになる。" - -msgid "The Book of Xen" -msgstr "The Book of Xen" - -msgid "" -"The CSAIL cloud is currently 64 physical nodes with a total of 768 physical " -"cores and 3,456 GB of RAM. Persistent data storage is largely outside the " -"cloud on NFS, with cloud resources focused on compute resources. There are " -"more than 130 users in more than 40 projects, typically running 2,000–2,500 " -"vCPUs in 300 to 400 instances." -msgstr "" -"CSAIL クラウドは現在 64 物理ノード、768 物理コア、3,456 GB のメモリがありま" -"す。クラウドリソースがコンピュータリソースに焦点をあてているため、永続データ" -"ストレージの大部分は、クラウド外の NFS 上にあります。40 以上のプロジェクトに " -"130 以上のユーザーがいます。一般的に、300 ~ 400 インスタンスで 2,000 ~ " -"2,500 仮想 CPU が動作しています。" - -msgid "The I/O statistics of your storage services" -msgstr "ストレージサービスの I/O の統計" - -msgid "" -"The ID of the volume to boot from, as shown in the output of nova " -"volume-list" -msgstr "" -"起動するボリュームの ID。nova volume-list の出力に表示され" -"ます。" - -msgid "The Image service and the Database" -msgstr "Image service とデータベース" - -msgid "" -"The Logical Volume Manager is a Linux-based system that provides an " -"abstraction layer on top of physical disks to expose logical volumes to the " -"operating system. The LVM back-end implements block storage as LVM logical " -"partitions." -msgstr "" -"論理ボリュームマネージャー (LVM) は Linux ベースのシステムで、物理ディスク上" -"に抽象層を提供して論理ボリュームをオペレーティングシステムに公開します。LVM " -"バックエンドは、LVM 論理パーティションとしてブロックストレージを実装します。" - -msgid "" -"The Modular Layer 2 plug-in is a framework allowing OpenStack Networking to " -"simultaneously utilize the variety of layer-2 networking technologies found " -"in complex real-world data centers. It currently works with the existing " -"Open vSwitch, Linux Bridge, and Hyper-V L2 agents and is intended to replace " -"and deprecate the monolithic plug-ins associated with those L2 agents." -msgstr "" -"Modular Layer 2 プラグインは、OpenStack Networking が複雑な実世界のデータセン" -"ターに見られるさまざまな L2 ネットワーク技術を同時に利用できるようにするフ" -"レームワークです。現在、既存の Open vSwitch、Linux Bridge、Hyper-V L2 エー" -"ジェントと一緒に動作します。それらの L2 エージェントと関連付けられたモノリ" -"シックなプラグインを置き換えて廃止することを意図しています。" - -msgid "" -"The Open vSwitch driver should and usually does manage this automatically, " -"but it is useful to know how to do this by hand with the ovs-vsctl command. This command has many more subcommands than we will use " -"here; see the man page or use ovs-vsctl --help for the " -"full listing." 
-msgstr "" -"Open vSwitch ドライバーは、これを自動的に管理すべきです。また、一般的に管理し" -"ます。しかし、ovs-vsctl コマンドを用いて、これを手動で実行" -"する方法を知ることは有用です。このコマンドは、ここで使用している以上に、数多" -"くのサブコマンドがあります。完全な一覧は、マニュアルページを参照するか、" -"ovs-vsctl --help を使用してください。" - -msgid "" -"The OpenStack Compute API is extensible. An extension adds capabilities to " -"an API beyond those defined in the core. The introduction of new features, " -"MIME types, actions, states, headers, parameters, and resources can all be " -"accomplished by means of extensions to the core API. This allows the " -"introduction of new features in the API without requiring a version change " -"and allows the introduction of vendor-specific niche functionality." -msgstr "" -"OpenStack Compute API は拡張可能です。ある拡張は、ある API にコア定義を超えた" -"ケイパビリティを追加します。新機能、新しい MIME タイプ、アクション、状態、" -"ヘッダ、パラメータ、そしてリソースの導入は、コア API の拡張によって達成するこ" -"とができます。これにより、API に対してバージョンを変更することなく新機能を導" -"入することができ、ベンダー固有の特定の機能を導入することもできます。" - -msgid "" -"The OpenStack Foundation supported the creation of this book with plane " -"tickets to Austin, lodging (including one adventurous evening without power " -"after a windstorm), and delicious food. For about USD $10,000, we could " -"collaborate intensively for a week in the same room at the Rackspace Austin " -"office. The authors are all members of the OpenStack Foundation, which you " -"can join. Go to the Foundation web site at http://openstack.org/join." -msgstr "" -"OpenStack Foundationは、オースチンへの航空券、(暴風後の停電によるドキドキの夜" -"を含む)宿、そして美味しい食事で、この本の作成をサポートしました。約10,000USド" -"ルで、Rackspaceのオースチンオフィスの同じ部屋の中で、私たちは1週間で集中的に" -"共同作業をすることができました。著者たちはすべて OpenStack Foundation のメン" -"バーであり、あなたも OpenStack Foundation に参加できます。Foundationのウェブサイト " -"(http://openstack.org/join) に行ってみてください。" - -msgid "" -"The OpenStack Image service consists of two parts: glance-api " -"and glance-registry. The former is responsible for the delivery " -"of images; the compute node uses it to download images from the back end. " -"The latter maintains the metadata information associated with virtual " -"machine images and requires a database.glanceglance registryglanceglance API servermetadataOpenStack Image service " -"andImage " -"servicedesign considerationsdesign considerationsimages" -msgstr "" -"OpenStack Image service はglance-apiglance-registryの2つのパートから成り立っています。前者はコンピュートノードがバックエン" -"ドからイメージをダウンロードする際の、イメージの配信に責任を持ちます。後者は" -"仮想マシンに属し、データベースを必要とするメタデータの情報をメンテナンスしま" -"す。glanceglance " -"registryglanceglance API サーバメタデータOpenStack イメージサービスイメージサービス設計上の考慮事項設計上の考慮事項イメージ" - -msgid "" -"The OpenStack Networking service is run on all controller nodes, ensuring at " -"least one instance will be available in case of node failure. It also sits " -"behind HAProxy, which detects if the software fails and routes requests " -"around the failing instance." -msgstr "" -"OpenStack Networking Service は、全コントローラーノード上で実行され、ノードの" -"障害が発生した場合に少なくとも 1 インスタンスが利用可能となるようにします。" -"また、このサービスは、ソフトウェアの障害を検出して、障害の発生したインスタン" -"スを迂回するように要求をルーティングする HAProxy の背後に配置されます。" - -msgid "" -"The OpenStack community has had a database-as-a-service tool in development " -"for some time, and we saw the first integrated release of it in Icehouse. " -"From its release it was able to deploy database servers out of the box in a " -"highly available way, initially supporting only MySQL. Juno introduced " -"support for Mongo (including clustering), PostgreSQL and Couchbase, in " -"addition to replication functionality for MySQL. 
In Kilo, more advanced " -"clustering capability was delivered, in addition to better integration with " -"other OpenStack components such as Networking. Junodatabase-as-a-service tool" -msgstr "" -"OpenStack コミュニティーは、何回か開発中の database-as-a-service ツールがあり" -"ました。Icehouse において最初の統合リリースがありました。そのリリース以降、高" -"可用な方法でそのまま使えるデータベースサーバーを配備できます。最初は MySQL の" -"みをサポートしていました。Juno では、Mongo (クラスターを含む)、PostgreSQL、" -"Couchbase、MySQL の複製機能をサポートしました。Kilo では、さらに高度なクラス" -"ター機能が導入されました。また、Networking などの他の OpenStack コンポーネン" -"トとより統合されました。Junodatabase-as-a-service tool" - -msgid "" -"The OpenStack dashboard (horizon) can be configured to use multiple regions. " -"This can be configured through the parameter." -msgstr "" -"OpenStack dashboard (horizon) は、複数のリージョンを使用するよう設定できま" -"す。これは、 パラメーターにより設定できます。" - -msgid "" -"The OpenStack dashboard (horizon) provides a web-based user interface to the " -"various OpenStack components. The dashboard includes an end-user area for " -"users to manage their virtual infrastructure and an admin area for cloud " -"operators to manage the OpenStack environment as a whole.dashboarddesign considerationsdashboard" -msgstr "" -"OpenStackダッシュボード(horizon)は様々なOpenStackコンポーネントのウェブベース" -"ユーザーインターフェースを提供します。ダッシュボードにはエンドユーザの仮想イ" -"ンフラを管理するための領域と、OpenStack環境全体を管理するためのクラウド管理者" -"のための管理者領域が含まれます。ダッ" -"シュボード設計上" -"の考慮事項ダッシュボード" - -msgid "" -"The OpenStack dashboard simulates the ability to modify a flavor by deleting " -"an existing flavor and creating a new one with the same name." -msgstr "" -"OpenStack dashboard は、既存のフレーバーを削除し、同じ名前の新しいものを作成" -"することにより、フレーバーを変更する機能を模倣しています。" - -msgid "" -"The OpenStack service's policy engine matches a policy directly. A rule " -"indicates evaluation of the elements of such policies. For instance, in a " -"compute:create: [[\"rule:admin_or_owner\"]] statement, the " -"policy is compute:create, and the rule is admin_or_owner." -msgstr "" -"OpenStack サービスのポリシーエンジンがポリシーと直接照合を行います。ルールは" -"そのようなポリシーの要素の評価を意味します。たとえば、compute:create: " -"[[\"rule:admin_or_owner\"]] 文において、ポリシーは compute:" -"create で、ルールは admin_or_owner です。" - -msgid "" -"The OpenStack snapshot mechanism allows you to create new images from " -"running instances. This is very convenient for upgrading base images or for " -"taking a published image and customizing it for local use. To snapshot a " -"running instance to an image using the CLI, do this:base imagesnapshotuser trainingsnapshots" -msgstr "" -"OpenStack のスナップショット機能により、実行中のインスタンスから新しいイメー" -"ジを作成することもできます。これは、ベースイメージをアップグレードするため、" -"公開されているイメージをカスタマイズするために、非常に便利です。このように " -"CLI を使用して、実行中のインスタンスをイメージにスナップショットをとります。" - -msgid "The REST API" -msgstr "REST API" - -msgid "" -"The RabbitMQ web management interface is accessible on your cloud controller " -"at http://localhost:55672." -msgstr "" -"RabbitMQ Web管理インターフェイスは、クラウドコントローラーから " -"http://localhost:55672 でアクセスできます。" - -msgid "" -"The Real Estate team at Rackspace in Austin, also known as \"The Victors,\" " -"were super responsive." -msgstr "" -"「The Victors」としても知られている、オースチンの Rackspace の不動産チーム" -"は、素晴らしい応答をしてくれました。" - -msgid "" -"The Red Hat Distributed OpenStack package offers an easy way to download the " -"most current OpenStack release that is built for the Red Hat Enterprise " -"Linux platform." -msgstr "" -"Red Hat Distributed OpenStack パッケージは、Red Hat Enterprise Linux プラット" -"フォーム用に構築された最新の OpenStack リリースを容易にダウンロードする方法を" -"提供します。" - -msgid "" -"The Solaris iSCSI driver for OpenStack Block Storage implements blocks as " -"ZFS entities. 
ZFS is a file system that also has the functionality of a "
-"volume manager. This is unlike on a Linux system, where there is a "
-"separation of volume manager (LVM) and file system (such as ext3, ext4, "
-"xfs, and btrfs). ZFS has a number of advantages over ext4, including "
-"improved data-integrity checking."
-msgstr ""
-"OpenStack Block Storage 用の Solaris iSCSI ドライバーは、ZFS エンティティーと"
-"してブロックを実装します。ZFS は、ボリュームマネージャーの機能が備わっている"
-"ファイルシステムです。これは、ボリュームマネージャー (LVM) およびファイルシス"
-"テム (ext3、ext4、xfs、btrfs など) が分離している Linux システムとは違いま"
-"す。ZFS は、改善されたデータ整合性チェックなど、ext4 に比べて多くの利点があり"
-"ます。"
-
-msgid "The Starting Point"
-msgstr "出発点"
-
-msgid ""
-"The TAP device is connected to the integration bridge, br-int. "
-"This bridge connects all the instance TAP devices and any other bridges on "
-"the system. In this example, we have int-br-eth1 and "
-"patch-tun. int-br-eth1 is one half of a veth pair "
-"connecting to the bridge br-eth1, which handles VLAN networks "
-"trunked over the physical Ethernet device eth1. patch-"
-"tun is an Open vSwitch internal port that connects to the br-"
-"tun bridge for GRE networks."
-msgstr ""
-"TAP デバイスは統合ブリッジ br-int に接続されます。このブリッジ"
-"は、すべてのインスタンスの TAP デバイスや、システム上の他のブリッジを接続しま"
-"す。この例では、int-br-eth1 と patch-tun がありま"
-"す。int-br-eth1 は、ブリッジ br-eth1 に接続してい"
-"る veth ペアの片側です。これは、物理イーサネットデバイス eth1 経"
-"由でトランクされる VLAN ネットワークを処理します。patch-tun は、"
-"GRE ネットワークの br-tun ブリッジに接続している Open vSwitch 内"
-"部ポートです。"
-
-msgid ""
-"The TAP device name is constructed using the first 11 characters of the port "
-"ID (10 hex digits plus an included '-'), so another means of finding the "
-"device name is to use the neutron command. This returns a "
-"pipe-delimited list, the first item of which is the port ID. For example, to "
-"get the port ID associated with IP address 10.0.0.10, do this:"
-msgstr ""
-"TAP デバイス名は、ポート ID の先頭 11 文字 (10 桁の16進数とハイフン) を使用し"
-"て作られます。そのため、デバイス名を見つける別の手段として、"
-"neutron コマンドを使用できます。これは、パイプ区切りの一覧"
-"を返し、最初の項目がポート ID です。例えば、次のように IP アドレス 10.0.0.10 "
-"に関連づけられているポート ID を取得します。"
-
-msgid ""
-"The TAP devices and veth devices are normal Linux network devices and may be "
-"inspected with the usual tools, such as ip and "
-"tcpdump. Open vSwitch internal devices, such as "
-"patch-tun, are only visible within the Open vSwitch "
-"environment. If you try to run tcpdump -i patch-tun, it "
-"will raise an error, saying that the device does not exist."
-msgstr ""
-"TAP デバイスと veth デバイスは、通常の Linux ネットワークデバイスです。"
-"ip や tcpdump などの通常のツールを用い"
-"て調査できるでしょう。patch-tun のような Open vSwitch 内部デバイ"
-"スは、Open vSwitch 環境の中だけで参照できます。tcpdump -i patch-"
-"tun を実行しようとした場合、デバイスが存在しないというエラーが発生"
-"するでしょう。"
-
-msgid "The TCP/IP Guide"
-msgstr "The TCP/IP Guide"
-
-msgid "The Valentine's Day Compute Node Massacre"
-msgstr "バレンタインデーのコンピュートノード大虐殺"
-
-msgid ""
-"The ZFS back end for OpenStack Block Storage supports only Solaris-based "
-"systems, such as Illumos. While there is a Linux port of ZFS, it is not "
-"included in any of the standard Linux distributions, and it has not been "
-"tested with OpenStack Block Storage. As with LVM, ZFS does not provide "
-"replication across hosts on its own; you need to add a replication solution "
-"on top of ZFS if your cloud needs to be able to handle storage-node failures."
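The TAP-name rule quoted above (the first 11 characters of the port ID, ten hex digits plus the embedded '-') is easy to check in code. A tiny sketch, with a made-up port UUID for illustration:

```python
# Derive a TAP device name from a Neutron port UUID, following the
# "first 11 characters of the port ID" rule described above.
def tap_device_name(port_id: str) -> str:
    return "tap" + port_id[:11]

# Made-up UUID: ten hex digits plus the first '-' survive the cut.
print(tap_device_name("690466bc-9257-4b2d-80f6-ae6b7a4d3f6e"))  # tap690466bc-92
```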
-msgstr "" -"OpenStack Block Storage の ZFS バックエンドは、Illumos などの Solaris ベース" -"のシステムのみをサポートします。ZFS の Linux ポートは存在するものの、標準の " -"Linux ディストリビューションには含まれておらず、OpenStack Block Storage では" -"テストされていません。LVM では、ZFS はこれだけではホスト間の複製ができませ" -"ん。つまり、お使いのクラウドでストレージノードの問題を処理する機能が必要な場" -"合、ZFS に複製ソリューションを追加する必要があります。" - -msgid "" -"The admin is global, not per project, so granting a user the admin role in " -"any project gives the user administrative rights across the whole cloud." -msgstr "" -"管理者はプロジェクトごとではなく、グローバルです。そのため、ユーザーに管理者" -"の役割を与えることにより、クラウド全体にわたるユーザー管理権限を与えることに" -"なります。" - -msgid "" -"The asterisk * indicates which screen window you are viewing. This example " -"shows we are viewing the key (for keystone) screen window:" -msgstr "" -"アスタリスク (*) は、表示している screen ウィンドウを表しています。この例は、" -"keystone 用の key という screen ウィンドウを表示していることを表しています。" - -msgid "" -"The authors have spent too much time looking at packet dumps in order to " -"distill this information for you. We trust that, following the methods " -"outlined in this chapter, you will have an easier time! Aside from working " -"with the tools and steps above, don't forget that sometimes an extra pair of " -"eyes goes a long way to assist." -msgstr "" -"執筆者は、この情報を読者のために蒸留するために、パケットダンプを見ることに多" -"大な時間を費やしました。本章に概略をまとめた方法に従って、より簡単な時間を過" -"ごせると信じています。このツールと上の手順を扱うことから離れて、ときどき別の" -"人が支援すれば十分であることを忘れないでください。" - -msgid "" -"The bare-metal deployment has been widely lauded, and development continues. " -"The Juno release brought the OpenStack Bare metal drive into the Compute " -"project, and it was aimed to deprecate the existing bare-metal driver in " -"Kilo. If you are a current user of the bare metal driver, a particular " -"blueprint to follow is Deprecate the bare metal driver.KiloCompute bare-metal deployment" -msgstr "" -"ベアメタル配備は幅広く叫ばれていて、開発が続いています。Juno リリースは、" -"OpenStack Bare metal ドライブを Compute プロジェクトの中に持ち込みました。" -"Kilo において、既存のベアメタルドライバーを目指していました。現在ベアメタルド" -"ライバーを使用している場合、従うべき具体的なブループリントは Deprecate the bare metal driver です。KiloCompute bare-metal deployment" - -msgid "" -"The best information available to support your choice is found on the Hypervisor Support Matrix and in the " -"configuration reference." -msgstr "" -"選択を行う上で参考になるベストな情報は、Hypervisor Support Matrix や configuration reference で確認いただけます。" - -msgid "The block device mapping format is " -msgstr "ブロックデバイスマッピングのフォーマットは " - -msgid "The bonded 10gb network device (bond0) was in a DOWN state" -msgstr "冗長化された 10Gb ネットワークデバイス(bond0)は DOWN 状態だった。" - -msgid "The bug impacts are categorized as follows:" -msgstr "バグ影響度は以下のカテゴリに分かれています。" - -msgid "" -"The chances of failure for the server's hardware are high at the start and " -"the end of its life. As a result, dealing with hardware failures while in " -"production can be avoided by appropriate burn-in testing to attempt to " -"trigger the early-stage failures. The general principle is to stress the " -"hardware to its limits. Examples of burn-in tests include running a CPU or " -"disk benchmark for several days.testingburn-in testingtroubleshootingburn-in testingburn-in testingscalingburn-in testing" -msgstr "" -"サーバーハードウェアの故障確率は、そのライフタイムの最初と最後に高くなりま" -"す。結論として、初期故障を誘発する適切なエージングテストを行うことによって、" -"運用中の故障に対応するための多くの労力を避けることができます。一般的な原則" -"は、限界まで負荷をかけることです。エージング試験の例としては、数日間にわたっ" -"てCPUやディスクベンチマークを走行させることが含まれます。testingburn-in testingtroubleshootingburn-in testingburn-in testingscalingburn-in testing" - -msgid "" -"The chassis size of the compute node can limit the number of spindles able " -"to be used in a compute node." 
-msgstr "" -"コンピュートノードの筐体サイズによって、コンピュートノードに搭載できるディス" -"ク数が制限されます。" - -msgid "" -"The choice of RabbitMQ over other AMQP compatible " -"options that are gaining support in OpenStack, such as ZeroMQ and Qpid, is " -"due to its ease of use and significant testing in production. It also is the " -"only option that supports features such as Compute cells. We recommend " -"clustering with RabbitMQ, as it is an integral component of the system and " -"fairly simple to implement due to its inbuilt nature.Advanced Message Queuing Protocol (AMQP)" -msgstr "" -"OpenStack では AMQP 互換の選択肢として ZeroMQ や Qpid などのサポートが進んで" -"いますが、RabbitMQ を選んだのは、その使いやすさと本番環" -"境で十分にテストされているのが理由です。また、RabbitMQ は Compute Cell といっ" -"た機能でサポートされている唯一の選択肢です。メッセージキューは OpenStack シス" -"テムで不可欠のコンポーネントで、RabbitMQ 自体で元々サポートされているため、極" -"めて簡単に実装できます。このため、RabbitMQ は クラスター構成にすることを推奨" -"します。Advanced Message Queuing " -"Protocol (AMQP)" - -msgid "" -"The cloud controller and storage proxy are very similar to each other when " -"it comes to expected and unexpected downtime. One of each server type " -"typically runs in the cloud, which makes them very noticeable when they are " -"not running." -msgstr "" -"想定内の場合も想定外の場合も停止時間が発生した場合の挙動が、クラウドコント" -"ローラーとストレージプロキシは互いに似ています。クラウドコントローラーとスト" -"レージプロキシはそれぞれクラウドで一つ実行されるので、動作していない場合、非" -"常に目立ちます。" - -msgid "" -"The cloud controller could completely fail if, for example, its motherboard " -"goes bad. Users will immediately notice the loss of a cloud controller since " -"it provides core functionality to your cloud environment. If your " -"infrastructure monitoring does not alert you that your cloud controller has " -"failed, your users definitely will. Unfortunately, this is a rough " -"situation. The cloud controller is an integral part of your cloud. If you " -"have only one controller, you will have many missing services if it goes " -"down.cloud controllerstotal failure ofmaintenance/debuggingcloud " -"controller total failure" -msgstr "" -"クラウドコントローラーは、例えばマザーボードがおかしくなった場合に、完全に故" -"障するでしょう。これはクラウド環境の中核的な機能を提供しているため、ユーザー" -"はクラウドコントローラーの損失にすぐに気づくでしょう。お使いのインフラ監視機" -"能が、クラウド環境の障害のアラートを上げなかった場合でも、ユーザーは絶対に気" -"づきます。残念ながら、これは大まかな状況です。クラウドコントローラーは、クラ" -"ウドの必須部分です。コントローラーが 1 つだけの場合、ダウンした際に多くのサー" -"ビスが失われるでしょう。cloud " -"controllerstotal failure ofmaintenance/debuggingcloud controller total failure" - -msgid "" -"The cloud controller is an invention for the sake of consolidating and " -"describing which services run on which nodes. This chapter discusses " -"hardware and network considerations as well as how to design the cloud " -"controller for performance and separation of services." -msgstr "" -"クラウドコントローラーとは、各ノードで実行するサービスを集約して説明するため" -"の便宜的なものです。この章は、ハードウェア、ネットワーク、およびパフォーマン" -"スとサービスの分離のためにクラウドコントローラーを設定する方法について議論し" -"ます。" - -msgid "" -"The cloud controller manages the following services for the cloud:cloud controllersservices " -"managed by" -msgstr "" -"クラウドコントローラーはクラウドの次のサービスを管理します:クラウドコントローラー管理対象サー" -"ビス" - -msgid "" -"The cloud controller provides the central management system for OpenStack " -"deployments. Typically, the cloud controller manages authentication and " -"sends messaging to all the systems through a message queue." -msgstr "" -"クラウドコントローラは、複数ノードで構成されるOpenStack構成に対する集中管理機" -"能を提供します。典型的には、クラウドコントローラは認証および、メッセージ" -"キューを通じたメッセージのやりとりを管理します。" - -msgid "" -"The cloud controller receives the 255.255.255.255 request and " -"sends a third response." 
-msgstr "" -"クラウドコントローラーは 255.255.255.255 宛のリクエストを受信" -"し、3番めのレスポンスを返す。" - -msgid "" -"The cloud controller runs the dashboard, the API services, the database " -"(MySQL), a message queue server (RabbitMQ), the scheduler for choosing " -"compute resources (nova-scheduler), Identity services " -"(keystone, nova-consoleauth), Image services (glance-api, glance-registry), services for console access of guests, " -"and Block Storage services, including the scheduler for storage resources " -"(cinder-api and cinder-scheduler).cloud controllersduties of" -msgstr "" -"クラウドコントローラーは、ダッシュボード、API サービス、データベース " -"(MySQL)、メッセージキューサーバー (RabbitMQ)、コンピュートリソースを選択する" -"スケジューラー (nova-scheduler)、Identity Service " -"(keystone、nova-consoleauth)、Image Service (glance-apiglance-registry)、ゲストのコンソールアクセスのためのサー" -"ビス、ストレージリソースのスケジューラーを含む Block Storage Service " -"(cinder-api および cinder-scheduler) を実行します。" -"クラウドコントローラー役割" - -msgid "" -"The code for OpenStack lives in /opt/stack, so go to the " -"nova directory and edit your scheduler module. Change to " -"the directory where nova is installed:" -msgstr "" -"OpenStack のコードは /opt/stack にあるので、nova ディレクトリに移動してあなたのスケジューラーモジュールを編集します。" -"nova をインストールしたディレクトリーに移動します。" - -msgid "" -"The code in is a driver that will schedule " -"servers to hosts based on IP address as explained at the beginning of the " -"section. Copy the code into ip_scheduler.py. When " -"you're done, save and close the file." -msgstr "" -" にあるコードはドライバーです。セクションの最" -"初に説明されているように IP アドレスに基づいて、サーバーをホストにスケジュー" -"ルします。コードを ip_scheduler.py にコピーします。完了" -"すると、ファイルを保存して閉じます。" - -msgid "" -"The code is similar to the above example of security-group-rule-" -"create. To use RemoteGroup, specify --remote-group-id instead of --remote-ip-prefix. For example:" -msgstr "" -"コードは、上の security-group-rule-create の例と似ていま" -"す。RemoteGroup を使用するために、--remote-ip-prefix の代" -"わりに --remote-group-id を指定します。例:" - -msgid "" -"The code is structured like this: nova secgroup-add-group-rule <" -"secgroup> <source-group> <ip-proto> <from-port> <to-" -"port>. An example usage is shown here:" -msgstr "" -"コードはこのような形になります。nova secgroup-add-group-rule <" -"secgroup> <source-group> <ip-proto> <from-port> <to-" -"port>。使用例は次のとおりです。" - -msgid "" -"The command-line tools can be made to show the OpenStack API calls they make " -"by passing the --debug flag to them.API (application programming interface)API " -"calls, inspectingcommand-line toolsinspecting API calls For example:" -msgstr "" -"コマンドラインツールに --debug フラグを渡すことにより、実行す" -"る OpenStack API コールを表示することができます。API (Application Programming Interface)API コール、検査コマンドラインツールAPI コールの検" -"査 例えば、以下のようになります。" - -msgid "" -"The command-line tools for managing users are inconvenient to use directly. " -"They require issuing multiple commands to complete a single task, and they " -"use UUIDs rather than symbolic names for many items. In practice, humans " -"typically do not use these tools directly. Fortunately, the OpenStack " -"dashboard provides a reasonable interface to this. 
In addition, many sites " -"write custom tools for local needs to enforce local policies and provide " -"levels of self-service to users that aren't currently available with " -"packaged tools.user managementcreating new users" -msgstr "" -"直接コマンドラインツールを使ってユーザーを管理することは面倒です。一つの作業" -"を完了するために、複数のコマンドを実行する必要があります。多くの項目に対し" -"て、シンボル名の代わりに UUID を使用します。現実的に、人間はこれらのツールを" -"そのまま使用しません。幸運なことに、OpenStack dashboard が便利なインター" -"フェースを提供しています。さらに、多くのサイトは個別の要求を満たすために独自" -"ツールを作成し、サイト固有のポリシーを適用し、パッケージツールでは実現できな" -"いレベルのセルフサービスをユーザーに提供しています。user managementcreating new " -"users" - -msgid "" -"The concepts supporting OpenStack's authentication and authorization are " -"derived from well-understood and widely used systems of a similar nature. " -"Users have credentials they can use to authenticate, and they can be a " -"member of one or more groups (known as projects or tenants, interchangeably)." -"credentialsauthorizationauthenticationdesign considerationsauthentication/authorization" -msgstr "" -"OpenStackの認証と承認は良く知られ、幅広いシステムで良く利用されている物から来" -"ています。ユーザは認証のためにクレデンシャルを持ち、1つ以上のグループ(プロ" -"ジェクトまたはテナントと呼ばれます)のメンバーとなる事ができます。クレデンシャル認証承認設計上の考慮事項認証/承認" - -msgid "" -"The conductor service resolves both of these issues by acting as a proxy for " -"the nova-compute service. Now, instead of nova-" -"compute directly accessing the database, it contacts the " -"nova-conductor service, and nova-conductor accesses the database on nova-compute's behalf. " -"Since nova-compute no longer has direct access to the " -"database, the security issue is resolved. Additionally, nova-" -"conductor is a nonblocking service, so requests from all compute " -"nodes are fulfilled in parallel." -msgstr "" -"コンダクターサービスはnova-computeサービスのプロクシとして" -"次の両方の問題を解決します。現在は、 nova-computeが直接" -"データベースにアクセスするのに変わってnova-conductorサービ" -"スにアクセスしnova-conductorサービスがnova-" -"computeサービスの代理でデータベースにアクセスをします。これにより" -"nova-computeはもう直接データベースにアクセスする必要はなく" -"セキュリティ問題は解消されます。加えて nova-conductor はノ" -"ンブロッキングサービスなのですべてのコンピュートからのリクエストが並列で処理" -"できます。" - -msgid "The connection strings take this format:" -msgstr "connection 文字列は以下の形式をとります。" - -msgid "" -"The currently implemented hypervisors are listed on the OpenStack documentation website. You can see a " -"matrix of the various features in OpenStack Compute (nova) hypervisor " -"drivers on the OpenStack wiki at the Hypervisor support matrix page." -msgstr "" -"現在実装されているハイパーバイザーは、OpenStack ドキュメントの Web サイトに一覧化されています。the Hypervisor support matrix page にある OpenStack wiki に、" -"OpenStack Compute (nova) ハイパーバイザーのドライバーにおけるさまざまな機能の" -"組み合わせ表があります。" - -msgid "" -"The dangerous possibility comes with the ability to change member roles. " -"This is the dropdown list below the username in the Project " -"Members list. In virtually all cases, this value should be set to " -"Member. This example purposefully shows an administrative user where this " -"value is admin." -msgstr "" -"危険な点としては、メンバーの役割を変更する機能があることです。これは" -"プロジェクトメンバー 一覧のユーザー名の後ろにあるドロッ" -"プダウンリストです。事実上すべての場合で、この値はメンバーに設定されていま" -"す。この例では意図的に、この値が管理者になっている管理ユーザーを示していま" -"す。" - -msgid "" -"The dashboard interface for snapshots can be confusing because the snapshots " -"and images are displayed in the Images page. However, " -"an instance snapshot is an image. The only difference " -"between an image that you upload directly to the Image Service and an image " -"that you create by snapshot is that an image created by snapshot has " -"additional properties in the glance database. 
These properties are found in " -"the image_properties table and include:" -msgstr "" -"ダッシュボードのインターフェースでは、スナップショットとイメージが両方とも" -"イメージのページに表示されるため、まぎらわしいかもしれま" -"せん。しかしながら、インスタンスのスナップショットはイメージです。Image service に直接アップロードしたイメージと、スナップショットに" -"より作成したイメージとの唯一の違いは、スナップショットにより作成されたイメー" -"ジが glance データベースにおいて追加のプロパティを持つことです。これらのプロ" -"パティは image_properties テーブルで確認でき、次の項目を含" -"みます:" - -msgid "" -"The dashboard is based on the Python Django web application framework. The best guide " -"to customizing it has already been written and can be found at Building on Horizon.DjangoPythondashboardDevStackcustomizing dashboardcustomizationdashboard" -msgstr "" -"dashboard は、Python Django Web アプリケーションフレームワークを利用しています。カスタマ" -"イズする最高のガイドがすでに書かれていて、Building on Horizon にあります。DjangoPythondashboardDevStackcustomizing dashboardcustomizationdashboard" - -msgid "" -"The dashboard is implemented as a Python web application that normally runs " -"in Apachehttpd. Therefore, you may treat " -"it the same as any other web application, provided it can reach the API " -"servers (including their admin endpoints) over the network.Apache" -msgstr "" -"ダッシュボードはPythonのウェブアプリケーションとして実装され、通常は" -"Apachehttpdサーバ上で稼働します。したがっ" -"て、そこから ネットワーク経由で (管理" -"者 エンドポイントを含む) API サーバーにアクセスできるという条件の下、他の任意" -"の Web アプリケーションと同じように取り扱うことができます。Apache" - -msgid "" -"The dashboard makes many requests, even more than the API access, so add " -"even more CPU if your dashboard is the main interface for your users." -msgstr "" -"ダッシュボードは、APIアクセスよりもさらに多くのリクエストを発行します。そのた" -"め、もしユーザに対するインタフェースがダッシュボードなのであれば、より多くの" -"CPUを追加してください。" - -msgid "" -"The decisions you make with respect to provisioning and deployment will " -"affect your day-to-day, week-to-week, and month-to-month maintenance of the " -"cloud. Your configuration management will be able to evolve over time. " -"However, more thought and design need to be done for upfront choices about " -"deployment, disk partitioning, and network configuration." -msgstr "" -"プロビジョニングやデプロイメントでの意思決定は、クラウドの日次、週次、月次の" -"メンテナンスに影響を与えます。設定管理は時が経つにつれ進化することができま" -"す。しかし、デプロイメント、ディスクのパーティショニング、ネットワーク設定を" -"事前に選択するには、さらに検討、設計が必要になります。" - -msgid "" -"The default authorization settings allow " -"administrative users only to create resources on behalf of a different " -"project. OpenStack handles two kinds of authorization policies:authorization" -msgstr "" -"デフォルトの認可設定では、管理ユーザーのみが他のプロ" -"ジェクトのリソースを作成できます。OpenStack では以下の 2 種類の認可ポリシーを" -"使うことができます。authorization" - -msgid "" -"The default CPU allocation ratio of 16:1 means that the scheduler allocates " -"up to 16 virtual cores per physical core. For example, if a physical node " -"has 12 cores, the scheduler sees 192 available virtual cores. With typical " -"flavor definitions of 4 virtual cores per instance, this ratio would provide " -"48 instances on a physical node." -msgstr "" -"デフォルトの CPU 割当比は 16:1 で、これは、物理コア 1 つにつき仮想コアが最大 " -"16 個までスケジューラーにより割り当てることができることを意味します。例えば、" -"物理ノードにコアが 12 個ある場合、スケジューラーには、使用可能な仮想コアが " -"192 個あることになります。インスタンス 1 個に仮想コア 4 個という通常のフレー" -"バーの定義では、この比率をもとにすると、1 つの物理にインスタンスが 48 個割り" -"当てらることになります。" - -msgid "" -"The default OpenStack flavors are shown in ." -msgstr "" -"デフォルトの OpenStack フレーバーは に表" -"示されています。" - -msgid "" -"The direction in which the security group rule is applied. Valid values are " -"ingress or egress." -msgstr "" -"セキュリティグループルールが適用される通信方向。有効な値は ingressegress です。" - -msgid "" -"The environment is largely based on Scientific Linux 6, which is Red Hat " -"compatible. 
We use KVM as our primary hypervisor, although tests are ongoing " -"with Hyper-V on Windows Server 2008." -msgstr "" -"この環境は、大部分は Red Hat 互換の Scientific Linux 6 ベースです。主なハイ" -"パーバイザとして KVM を使用していますが、一方 Windows Server 2008 上の Hyper-" -"V を使用したテストも進行中です。" - -msgid "" -"The example OpenStack architecture designates the cloud controller as the " -"MySQL server. This MySQL server hosts the databases for nova, glance, " -"cinder, and keystone. With all of these databases in one place, it's very " -"easy to create a database backup:databasesbackup/recovery ofbackup/recoverydatabases" -msgstr "" -"参考アーキテクチャーでは、クラウドコントローラーを MySQL サーバにしています。" -"この MySQL サーバーは nova, glance, cinder, そして keystone のデータベースを" -"保持しています。全てのデータベースが一か所にある場合、データベースバックアッ" -"プは非常に容易となります。databasesbackup/recovery ofbackup/recoverydatabases" - -msgid "" -"The existence of the *-manage tools is a legacy issue. It is a " -"goal of the OpenStack project to eventually migrate all of the remaining " -"functionality in the *-manage tools into the API-based tools. " -"Until that day, you need to SSH into the cloud controller node to perform some maintenance operations that require one of the " -"*-manage " -"tools.cloud controller " -"nodescommand-line tools and" -msgstr "" -"*-manage ツールの存在は、レガシーの問題です。OpenStack プロジェ" -"クトでは、最終的には *-manage ツールの残りの機能をすべて API " -"ベースのツールに移行することを目標としています。移行が完了するまで、*-manage ツール を必要とするメンテナンス操作は、クラウドコントローラーノー" -"ド に SSH 接続して実行する必要があります。クラウドコントローラーノードコマン" -"ドラインツール" - -msgid "" -"The file system does not have any \"dirty\" buffers: where programs have " -"issued the command to write to disk, but the operating system has not yet " -"done the write" -msgstr "" -"ファイルシステムが「ダーティー」バッファーを持たないこと: 「ダーティー」バッ" -"ファーがあるとは、プログラムがディスクに書き込むためにコマンドを発行しました" -"が、オペレーティングシステムがまだ書き込みを完了していないことです。" - -msgid "" -"The first column of this form, named All Users, includes a list of all the " -"users in your cloud who are not already associated with this project. The " -"second column shows all the users who are. These lists can be quite long, " -"but they can be limited by typing a substring of the username you are " -"looking for in the filter field at the top of the column." -msgstr "" -"\"すべてのユーザー (All Users)\" という見出しが付けられた、このフォームの最初" -"の列に、このプロジェクトにまだ割り当てられていない、クラウドのすべてのユー" -"ザーが一覧表示されます。2 列目には、すべての割り当て済みユーザーが一覧表示さ" -"れます。これらの一覧は非常に長い可能性があります。しかし、それぞれのの上部にあるフィルターフィールドに、探して" -"いるユーザー名の部分文字列を入力することにより、表示を絞り込むことができま" -"す。" - -msgid "" -"The first is the _base directory. This contains all the cached " -"base images from glance for each unique image that has been launched on that " -"compute node. Files ending in _20 (or a different number) are " -"the ephemeral base images." -msgstr "" -"一つ目は _base ディレクトリです。ここには、そのコンピュートノー" -"ドで起動されたそれぞれのイメージに関して、glance から取得したすべてのベースイ" -"メージのキャッシュが置かれます。_20 (または他の番号) で終わる" -"ファイルは一時ディスクのベースイメージです。" - -msgid "" -"The first place to look is the log file related to the command you are " -"trying to run. 
For example, if nova list is failing, try "
-"tailing a nova log file and running the command again:tailing logs"
-msgstr ""
-"最初に確認する場所は、実行しようとしているコマンドに関連するログファイルで"
-"す。たとえば、nova list が失敗していれば、nova ログファイルを "
-"tail 表示しながら、次のコマンドを再実行してください:tailing logs"
-
-msgid ""
-"The first step in finding the source of an error is typically to search for "
-"a CRITICAL, TRACE, or ERROR message in the log starting at the bottom of the "
-"log file.logging/monitoringreading log messages"
-msgstr ""
-"エラーの原因を見つけるための典型的な最初のステップは、CRITICAL、TRACE、ERROR "
-"などのメッセージがログファイルの終わりで出力されていないかを確認することで"
-"す。logging/monitoringreading log messages"
-
-msgid ""
-"The first step is setting the aggregate metadata keys "
-"cpu_allocation_ratio and "
-"ram_allocation_ratio to a floating-point value. The "
-"filter schedulers AggregateCoreFilter and "
-"AggregateRamFilter will use those values rather than "
-"the global defaults in nova.conf when scheduling to "
-"hosts in the aggregate. It is important to be cautious when using this "
-"feature, since each host can be in multiple aggregates but should have only "
-"one allocation ratio for each resource. It is up to you to avoid putting a "
-"host in multiple aggregates that define different values for the same "
-"resource."
-msgstr ""
-"最初のステップは、アグリゲートメタデータキー cpu_allocation_ratio と "
-"ram_allocation_ratio を浮動小数点の値に設定することです。フィルタースケ"
-"ジューラーの AggregateCoreFilter と AggregateRamFilter は、アグリゲート内の"
-"ホストにスケジューリングする場合、nova.conf のグローバルの初期値を使用するの"
-"ではなく、この値を使用します。各ホストは複数のアグリゲートに含まれる可能性が"
-"ありますが、リソースごとに 1 つの割当比率しか持つべきではないため、この機能"
-"の使用時は注意が必要です。同じリソースに対して別の値が定義されている複数のア"
-"グリゲートにホストを設置しないように注意してください。"
-
-msgid ""
-"The first thing you must do is authenticate with the cloud using your "
-"credentials to get an authentication token."
-msgstr ""
-"まずはじめに、クラウドの認証が必要です。あなたの認証情報を用いて認"
-"証トークンを入手してください。"
-
-msgid ""
-"The following command requires you to have your shell environment configured "
-"with the proper administrative variables:"
-msgstr ""
-"以下のコマンドを実行するには、管理系の変数を正しく設定したシェル環境が必要で"
-"す。"
-
-msgid ""
-"The following command will boot a new instance and attach a volume at the "
-"same time. The volume of ID 13 will be attached as /dev/vdc. It "
-"is not a snapshot, does not specify a size, and will not be deleted when the "
-"instance is terminated:"
-msgstr ""
-"以下のコマンドは、新しいインスタンスを起動して、同時にボリュームを接続しま"
-"す。ID 13 のボリュームが /dev/vdc として接続されます。これは、ス"
-"ナップショットではなく、容量を指定せず、インスタンスの削除時にも削除されませ"
-"ん。"
-
-msgid "The following comments are added to the rule set as appropriate:"
-msgstr "以下のコメントが、適切にルールセットに追加されます。"
-
-msgid ""
-"The following diagrams ( through "
-") include logical information about "
-"the different types of nodes, indicating what services will be running on "
-"top of them and how they interact with each other. The diagrams also "
-"illustrate how the availability and scalability of services are achieved."
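The aggregate-metadata step above can be scripted. A minimal sketch, assuming the nova CLI is installed, admin credentials are exported in the shell environment (as the nearby entry requires), and a host aggregate with the made-up name fast-cpu already exists:

```python
# Hedged sketch: set per-aggregate allocation ratios via the nova CLI,
# as described above. The aggregate name "fast-cpu" is hypothetical.
import subprocess

def set_aggregate_ratios(aggregate, cpu_ratio, ram_ratio):
    subprocess.check_call([
        "nova", "aggregate-set-metadata", aggregate,
        "cpu_allocation_ratio=%s" % cpu_ratio,
        "ram_allocation_ratio=%s" % ram_ratio,
    ])

set_aggregate_ratios("fast-cpu", 4.0, 1.5)
```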
-msgstr "" -"以下の図 ( through ) には、異なるタイプのノードについての論理的情報が" -"含まれます。この図には、実行されるサービスやそれらがどのように相互に対話する" -"かが示されています。また、サービスの可用性とスケーラビリティがどのように実現" -"されるかについても図示しています。" - -msgid "" -"The following features of OpenStack are supported by the example " -"architecture documented in this guide, but are optional:" -msgstr "" -"以下にあげる OpenStack の機能は、本ガイドに記載のアーキテクチャーではサポート" -"されていますが、必須項目ではありません。" - -msgid "" -"The following implicit values are being used to create the signature in this " -"example: " -msgstr "" -"以下の暗黙的な値が、この例において署名を作成するために使用されます。 " -"" - -msgid "The following options are currently supported: " -msgstr "以下のオプションが現在サポートされます。 " - -msgid "" -"The following people have contributed to this book: Akihiro Motoki, " -"Alejandro Avella, Alexandra Settle, Andreas Jaeger, Andy McCallum, Benjamin " -"Stassart, Chandan Kumar, Chris Ricker, David Cramer, David Wittman, Denny " -"Zhang, Emilien Macchi, Gauvain Pocentek, Ignacio Barrio, James E. Blair, Jay " -"Clark, Jeff White, Jeremy Stanley, K Jonathan Harker, KATO Tomoyuki, Lana " -"Brindley, Laura Alves, Lee Li, Lukasz Jernas, Mario B. Codeniera, Matthew " -"Kassawara, Michael Still, Monty Taylor, Nermina Miller, Nigel Williams, Phil " -"Hopkins, Russell Bryant, Sahid Orentino Ferdjaoui, Sandy Walsh, Sascha " -"Peilicke, Sean M. Collins, Sergey Lukjanov, Shilla Saebi, Stephen Gordon, " -"Summer Long, Uwe Stuehler, Vaibhav Bhatkar, Veronica Musso, Ying Chun \"Daisy" -"\" Guo, Zhengguang Ou, and ZhiQiang Fan." -msgstr "" -"以下の方々がこのドキュメントに貢献しています: Akihiro Motoki, Alejandro " -"Avella, Alexandra Settle, Andreas Jaeger, Andy McCallum, Benjamin Stassart, " -"Chandan Kumar, Chris Ricker, David Cramer, David Wittman, Denny Zhang, " -"Emilien Macchi, Gauvain Pocentek, Ignacio Barrio, James E. Blair, Jay Clark, " -"Jeff White, Jeremy Stanley, K Jonathan Harker, KATO Tomoyuki, Lana Brindley, " -"Laura Alves, Lee Li, Lukasz Jernas, Mario B. Codeniera, Matthew Kassawara, " -"Michael Still, Monty Taylor, Nermina Miller, Nigel Williams, Phil Hopkins, " -"Russell Bryant, Sahid Orentino Ferdjaoui, Sandy Walsh, Sascha Peilicke, Sean " -"M. Collins, Sergey Lukjanov, Shilla Saebi, Stephen Gordon, Summer Long, Uwe " -"Stuehler, Vaibhav Bhatkar, Veronica Musso, Ying Chun \"Daisy\" Guo, " -"Zhengguang Ou, ZhiQiang Fan." - -msgid "" -"The following section details how the nodes are connected to the different " -"networks (see ) and what other " -"considerations need to take place (for example, bonding) when connecting " -"nodes to the networks." -msgstr "" -"以下のセクションでは、ノードを異なるネットワークに接続する方法 ( を参照) と、ノードをネットワークに接続する際" -"に他に考慮すべき点 (例: ボンディング) について説明します。" - -msgid "" -"The following section is from Sébastien Han's “OpenStack: Perform Consistent " -"Snapshots” blog entry." -msgstr "" -"以下のセクションは、Sébastien Han さんの “OpenStack: Perform Consistent " -"Snapshots” ブログ記事からの引用です。" - -msgid "The following signature properties are used: " -msgstr "以下の署名のプロパティーが使用されます。 " - -msgid "" -"The following steps described for Ubuntu have worked on at least one " -"production environment, but they might not work for all environments." -msgstr "" -"以下の手順は、Ubuntu 向けに記載しています。少なくとも 1 つの本番環境で動作し" -"ましたが、すべての環境で動作するとは限りません。" - -msgid "The following typographical conventions are used in this book:" -msgstr "以下の表記規則がこのドキュメントで使用されます。" - -msgid "" -"The formula for the number of virtual instances on a compute node is " -"(OR*PC)/VC, where:" -msgstr "" -"コンピュートノード上の仮想インスタンス数の公式は、 (OR*PC)/VC です。それぞれ以下を意味します。" - -msgid "" -"The frequency is defined separately for each periodic task. 
Therefore, to " -"disable every periodic task in OpenStack Compute (nova), you would need to " -"set a number of configuration options to zero. The current list of " -"configuration options you would need to set to zero are:" -msgstr "" -"実行頻度は周期的タスク別に定義されています。したがって、OpenStack Compute " -"(nova) ではすべての周期的タスクは無効にするためには、多くの設定オプションを " -"0 に設定する必要があることでしょう。現在のところ 0 に設定する必要がある設定オ" -"プションの一覧は以下のとおりです。" - -msgid "" -"The general case for this is setting key-value pairs in the aggregate " -"metadata and matching key-value pairs in flavor's extra_specs metadata. The AggregateInstanceExtraSpecsFilter in the filter scheduler will enforce that instances be scheduled " -"only on hosts in aggregates that define the same key to the same value." -msgstr "" -"この一般的なケースは、アグリゲートメタデータで key-value ペアを設定して、フ" -"レーバーの extra_specs メタデータで key-value ペアを一" -"致させます。フィルタースケジューラーの " -"AggregateInstanceExtraSpecsFilter は、強制的にインスタ" -"ンスが、同じ値に同じキーが定義されているアグリゲートのホストに対してのみスケ" -"ジューリングするようにします。" - -msgid "The generated file looks something like this:" -msgstr "出力は以下のようになります。" - -msgid "" -"The genesis of this book was an in-person event, but now that the book is in " -"your hands, we want you to contribute to it. OpenStack documentation follows " -"the coding principles of iterative work, with bug logging, investigating, " -"and fixing." -msgstr "" -"この本の元は人が集まったイベントで作成されましたが、今やこの本はみなさんも貢" -"献できる状態になっています。 OpenStack のドキュメント作成は、バグ報告、調査、" -"修正を繰り返し行うというコーディングの基本原則に基いて行われています。" - -msgid "" -"The genesis of this book was an in-person event, but now that the book is in " -"your hands, we want you to contribute to it. OpenStack documentation follows " -"the coding principles of iterative work, with bug logging, investigating, " -"and fixing. We also store the source content on GitHub and invite " -"collaborators through the OpenStack Gerrit installation, which offers " -"reviews. For the O'Reilly edition of this book, we are using the company's " -"Atlas system, which also stores source content on GitHub and enables " -"collaboration among contributors." -msgstr "" -"この本の元は人が集まったイベントで作成されましたが、今やこの本はみなさんも貢" -"献できる状態になっています。 OpenStack のドキュメント作成は、バグ登録、調査、" -"修正を繰り返して行うというコーディングの基本原則に基いて行われています。我々" -"はこの本のソースコンテンツを GitHub にも置いており、レビューシステムである " -"OpenStack Gerrit 経由で協力をお待ちしています。この本の O'Reilly 版では、我々" -"は O'Reilly の Atlas システムを使用していますが、ソースコンテンツは GitHub に" -"も格納され、コントリビュータ間での共同作業ができるようになっています。" - -msgid "" -"The good news: OpenStack has unprecedented transparency when it comes to " -"providing information about what's coming up. The bad news: each release " -"moves very quickly. The purpose of this appendix is to highlight some of the " -"useful pages to track, and take an educated guess at what is coming up in " -"the next release and perhaps further afield.Kiloupcoming release ofOpenStack communityworking with roadmapsrelease cycle" -msgstr "" -"良いお知らせ: OpenStack は、次に行われることに関する情報を提供する際に、前例" -"がないくらいにオープンです。悪いお知らせ: 各リリースが非常に迅速に行われま" -"す。この付録の目的は、参照しておく価値のあるページをいくつか紹介すること、次" -"のリリースやその先に起こることを根拠を持って推測することです。Kiloupcoming release ofOpenStack " -"communityworking with roadmapsrelease cycle" - -msgid "The horizon dashboard web application" -msgstr "horizon dashboard Web アプリケーション" - -msgid "" -"The initial implementation of OpenStack Compute had its own authentication " -"system and used the term project. When authentication " -"moved into the OpenStack Identity (keystone) project, it used the term " -"tenant to refer to a group of users. Because of this " -"legacy, some of the OpenStack tools refer to projects and some refer to " -"tenants." 
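To illustrate the "set the interval to zero to disable a task" behavior described above, here is a deliberately simplified stand-in for a periodic task loop; it is not nova's actual implementation.

```python
# Simplified stand-in for an OpenStack-style periodic task loop.
# A non-positive interval disables the task, mirroring the behavior
# described in the text.
import time

def run_periodic(task, interval_seconds, iterations=3):
    if interval_seconds <= 0:
        return  # task disabled
    for _ in range(iterations):
        task()
        time.sleep(interval_seconds)

run_periodic(lambda: print("reconciling state..."), interval_seconds=1)
run_periodic(lambda: print("never runs"), interval_seconds=0)
```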
-msgstr "" -"OpenStack Compute の初期実装は独自の認証システムを持ち、プロジェクト" -"という用語を使用していました。認証が OpenStack Identity (Keystone) " -"プロジェクトに移行したとき、ユーザーのグループを参照するためにテナン" -"トという用語が使用されました。このような経緯のため、いくつかの " -"OpenStack ツールはプロジェクトを使用し、いくつかはテナントを使用します。" - -msgid "The instance finally gives up." -msgstr "最終的に、インスタンスはIPアドレス取得を諦める。" - -msgid "" -"The instance generates a packet and places it on the virtual NIC inside the " -"instance, such as eth0." -msgstr "" -"インスタンスはパケットを生成し、インスタンス内の仮想NIC、例えば eth0にそれを" -"渡します。" - -msgid "" -"The instance generates a packet and places it on the virtual Network " -"Interface Card (NIC) inside the instance, such as eth0." -msgstr "" -"インスタンスはパケットを生成し、インスタンス内の仮想NIC、例えば " -"eth0にそれを渡します。" - -msgid "" -"The instances table carries most of the information related to both running " -"and deleted instances. It has a bewildering array of fields; for an " -"exhaustive list, look at the database. These are the most useful fields for " -"operators looking to form queries:" -msgstr "" -"インスタンスのテーブルは、実行中および削除済みの両方のインスタンスに関連する" -"情報のほとんどを保持しています。データベースで完全なリストを見ると、このテー" -"ブルには目が回るほどたくさんのフィールドがあることがわかります。以下に、クエ" -"リーを行おうとしている運用者にとって非常に有用なフィールドを挙げます。" - -msgid "" -"The inverse operation is called secgroup-delete-rule, " -"using the same format. Whole security groups can be removed with " -"secgroup-delete." -msgstr "" -"逆の操作が secgroup-delete-rule です。secgroup-delete-" -"rule のコマンドラインは同じ形式です。セキュリティグループ全体を " -"secgroup-delete を用いて削除できます。" - -msgid "" -"The inverse operation is called security-group-rule-delete, specifying security-group-rule ID. Whole security groups can be " -"removed with security-group-delete." -msgstr "" -"逆の操作が security-group-rule-delete です。security-" -"group-rule ID を指定します。セキュリティグループ全体を security-" -"group-delete を使用して削除できます。" - -msgid "The keystone service" -msgstr "keystone サービス" - -msgid "The load shot up to 8 right before I received the alert" -msgstr "私が警告を受け取る直前、負荷率は8に急増した。" - -msgid "" -"The main advantage of this option is that it scales to external storage when " -"you require additional storage." -msgstr "" -"このオプションの主な利点は、追加ストレージが必要な場合、外部ストレージにス" -"ケールアウトできる点です。" - -msgid "The main downsides to this approach are:" -msgstr "この方法の主なマイナス面は以下の点です。" - -msgid "" -"The main reason to use GFO rather than regular swift is if you also want to " -"support a distributed file system, either to support shared storage live " -"migration or to provide it as a separate service to your end users. If you " -"want to manage your object and file storage within a single system, you " -"should consider GFO." -msgstr "" -"通常の swift ではなく GFO を使用するのは、主に、分散ファイルシステムのサポー" -"トや、共有ストレージのライブマイグレーションのサポートを提供したり、エンド" -"ユーザーに個別サービスとして提供したりするためです。単一システムでオブジェク" -"トとファイルストレージを管理する場合は、GFO の使用を検討してください。" - -msgid "" -"The maximum port number in the range that is matched by the security group " -"rule. The port_range_min attribute constrains the " -"port_range_max attribute. If the protocol is ICMP or " -"ICMPv6, this value must be an ICMP or ICMPv6 type, respectively." -msgstr "" -"セキュリティグループルールに一致する、ポート番号の範囲の最大値。" -"port_range_min 属性が port_range_max 属" -"性を制限します。プロトコルが ICMP または ICMPv6 の場合、この値はそれぞれ " -"ICMP コードまたは ICMPv6 コードでなければいけません。" - -msgid "" -"The most important step is the pre-upgrade testing. If you are upgrading " -"immediately after release of a new version, undiscovered bugs might hinder " -"your progress. Some deployers prefer to wait until the first point release " -"is announced. 
However, if you have a significant deployment, you might " -"follow the development and testing of the release to ensure that bugs for " -"your use cases are fixed.upgradingpre-upgrade testing" -msgstr "" -"すべて中で最も大切なステップは事前のアップグレードテストです。新しいバージョ" -"ンのリリース後すぐにアップグレードする場合、未発見のバグによってアップグレー" -"ドがうまくいかないこともあるでしょう。管理者によっては、最初のアップデート版" -"が出るまで待つことを選ぶ場合もあります。しかしながら、重要な環境の場合には、" -"リリース版の開発やテストに参加することで、あなたのユースケースでのバグを確実" -"に修正することもできるでしょう。upgradingpre-upgrade testing" - -msgid "" -"The network consists of two switches, one for the management or private " -"traffic, and one that covers public access, including floating IPs. To " -"support this, the cloud controller and the compute nodes have two network " -"cards. The OpenStack Block Storage and NFS storage servers only need to " -"access the private network and therefore only need one network card, but " -"multiple cards run in a bonded configuration are recommended if possible. " -"Floating IP access is direct to the Internet, whereas Flat IP access goes " -"through a NAT. To envision the network traffic, use this diagram:" -msgstr "" -"ネットワークは、2 つのスイッチで構成され、1 つは管理/プライベートトラフィック" -"用、もう 1 つは Floating IP を含むパブリックアクセスが対象です。この構成に対" -"応するために、クラウドコントローラーおよびコンピュートノードで NIC を 2 枚を" -"装備します。OpenStack Block Storage および NFS ストレージサーバーは、プライ" -"ベートネットワークにのみアクセスする必要があるので、必要な NIC は 1 枚です" -"が、可能な場合には複数の NIC をボンディング構成で動作させることを推奨します。" -"Floating IP のアクセスは、インターネットに直結ですが、Flat IP のアクセスは " -"NAT 経由となります。ネットワークトラフィックの構想には、以下の図を利用してく" -"ださい。" - -msgid "" -"The network contains all the management devices for all hardware in the " -"environment (for example, by including Dell iDrac7 devices for the hardware " -"nodes, and management interfaces for network switches). The network is " -"accessed by internal staff only when diagnosing or recovering a hardware " -"issue." -msgstr "" -"ネットワークには、環境内の全ハードウェア用の管理デバイスがすべて含まれます " -"(例: ハードウェアノード用の Dell iDrac7 デバイスやネットワークスイッチ用の管" -"理インターフェースの追加による) 。ネットワークは、ハードウェア問題の診断また" -"はリカバリを実行する場合にのみ内部スタッフがアクセスします。" - -msgid "" -"The networking chapter of the OpenStack Cloud Administrator Guide shows a variety of " -"networking scenarios and their connection paths. The purpose of this section " -"is to give you the tools to troubleshoot the various components involved " -"however they are plumbed together in your environment." -msgstr "" -"OpenStack Cloud " -"Administrator Guide のネットワークの章に、さまざまな種類のネットワーク" -"のシナリオや接続パスがあります。このセクションの目的は、どのようにお使いの環" -"境に一緒に関わっているかによらず、さまざまなコンポーネントをトラブルシュー" -"ティングするためのツールを提供します。" - -msgid "" -"The next best approach is to use a configuration-management tool, such as " -"Puppet, to automatically build a cloud controller. This should not take more " -"than 15 minutes if you have a spare server available. After the controller " -"rebuilds, restore any backups taken (see )." -msgstr "" -"次に最も優れているアプローチは、クラウドコントローラーを自動的に構築するため" -"に Puppet のような構成管理ツールを使用することです。利用可能な予備サーバーが" -"あれば、15 分もかかりません。コントローラーを再構築後、取得したすべてのバック" -"アップを復元します ( の章を参照してく" -"ださい)。" - -msgid "" -"The next step depends on whether the virtual network is configured to use " -"802.1q VLAN tags or GRE:" -msgstr "" -"次の手順は、仮想ネットワークが 802.1q VLAN タグや GRE を使用するよう設定して" -"いるかどうかに依存します。" - -msgid "" -"The nova API, scheduler, objectstore, cert, consoleauth, conductor, and " -"vncproxy services are run on all controller nodes, ensuring at least one " -"instance will be available in case of node failure. Compute is also behind " -"HAProxy, which detects when the software fails and routes requests around " -"the failing instance." 
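A small probe from outside HAProxy can complement the health checks described above. This sketch assumes the requests library, a host named controller, and the conventional default API ports (8774 for nova-api, 5000 for keystone, 9292 for glance); adjust for your deployment.

```python
# Probe OpenStack API endpoints from outside HAProxy. Host name and
# ports are assumptions (common defaults), not taken from the guide.
import requests

ENDPOINTS = {
    "nova-api": "http://controller:8774/",
    "keystone": "http://controller:5000/",
    "glance": "http://controller:9292/",
}

for name, url in ENDPOINTS.items():
    try:
        status = requests.get(url, timeout=5).status_code
        print("%-10s %s -> HTTP %d" % (name, url, status))
    except requests.RequestException as exc:
        print("%-10s %s -> unreachable (%s)" % (name, url, exc))
```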
-msgstr "" -"nova API、scheduler、objectstore、cert、consoleauth、conductor、および " -"vncproxy のサービスは、全コントローラーノードで実行され、ノードに障害が発生し" -"た場合には少なくとも 1 インスタンスが利用可能となるようにします。Compute " -"は、ソフトウェアの障害を検出して、障害の発生したインスタンスを迂回するように" -"要求をルーティングする HAProxy の背後に配置されます。" - -msgid "The nova scheduler service" -msgstr "nova スケジューラーサービス" - -msgid "The nova services" -msgstr "nova サービス" - -msgid "The number of nova-api requests each hour" -msgstr "1時間あたりの nova-api リクエスト数" - -msgid "The number of Object Storage requests each hour" -msgstr "1時間あたりの Object Storage リクエスト数" - -msgid "" -"The number of cores that the CPU has also affects the decision. It's common " -"for current CPUs to have up to 12 cores. Additionally, if an Intel CPU " -"supports hyperthreading, those 12 cores are doubled to 24 cores. If you " -"purchase a server that supports multiple CPUs, the number of cores is " -"further multiplied.coreshyperthreadingmultithreading" -msgstr "" -"CPU のコア数も選択に影響します。現在の CPU では最大 12 コアあるのが一般的で" -"す。さらに、Intel CPU がハイパースレッディングをサポートしている場合、12 コア" -"は 2 倍の 24 コアになります。複数の CPU をサポートするサーバーを購入する場" -"合、コア数はさらに倍になっていきます。" -"コアハイパース" -"レッディングマル" -"チスレッド" - -msgid "The number of instances on each compute node" -msgstr "各コンピュートノード上のインスタンス数" - -msgid "" -"The number of virtual machines (VMs) you expect to run, ((overcommit " -"fraction " -msgstr "実行する必要のある仮想マシン数。((overcommit fraction " - -msgid "The number of volumes in use" -msgstr "使用中のボリューム数" - -msgid "" -"The official OpenStack Object Store implementation. It is a mature " -"technology that has been used for several years in production by Rackspace " -"as the technology behind Rackspace Cloud Files. As it is highly scalable, it " -"is well-suited to managing petabytes of storage. OpenStack Object Storage's " -"advantages are better integration " -"with OpenStack (integrates with OpenStack Identity, works with the OpenStack " -"dashboard interface) and better support for multiple data center deployment " -"through support of asynchronous eventual consistency replication." -msgstr "" -"公式の OpenStack Object Store 実装。Rackspace Cloud Filesのベースとなる技術と" -"して、RackSpace により実稼動環境で数年間使用された成熟テクノロジーです。拡張" -"性が高いため、ペタバイトレベルのストレージを管理するのに非常に適しています。" -"OpenStack Object Storage の利点は OpenStack (OpenStack Identity と統合し、" -"OpenStack Dashboard インターフェースと連携) と" -"統合でき、非同期のイベントを整合性を保ちながら複製できるため、複数の" -"データセンターのデプロイメントへのサポートも向上されています。" - -msgid "" -"The only thing that the Image service does not store in a database is the " -"image itself. The Image service database has two main tables:databasesImage serviceImage servicedatabase tables" -msgstr "" -"Image service がデータベースに保存しない唯一のものは、イメージ自体です。" -"Image service のデータベースは、主要なテーブルが 2 つあります。databasesImage serviceImage servicedatabase tables" - -msgid "The operating system and version where you've identified the bug" -msgstr "あなたがバグを確認したオペレーティングシステムとそのバージョン" - -msgid "" -"The order you should upgrade services, and any changes from the general " -"upgrade process is described below:" -msgstr "" -"サービスをアップグレードすべき順番、一般的なアップグレード手順との違いは、以" -"下に示す通りです。" - -msgid "" -"The other directories are titled instance-xxxxxxxx. These " -"directories correspond to instances running on that compute node. The files " -"inside are related to one of the files in the _base directory. " -"They're essentially differential-based files containing only the changes " -"made from the original _base directory." 
-msgstr "" -"もう一つのディレクトリは instance-xxxxxxxx という名前です。これ" -"らのディレクトリはコンピュートノードにおいて実行中のインスタンスと対応しま" -"す。中にあるファイルは _base ディレクトリにあるファイルのどれか" -"と関連があります。これらは基本的に、元々の _base ディレクトリか" -"らの変更点のみ含む、差分ベースのファイルです。" - -msgid "The output looks like the following:" -msgstr "出力は以下のようになります。" - -msgid "The output looks something like the following:" -msgstr "出力は以下のようになります。" - -msgid "" -"The output of this command varies depending on the hypervisor because " -"hypervisors support different attributes.hypervisorscompute node diagnosis and The following demonstrates the difference between the " -"two most popular hypervisors. Here is example output when the hypervisor is " -"Xen: While the command should work with any hypervisor that " -"is controlled through libvirt (KVM, QEMU, or LXC), it has been tested only " -"with KVM. Here is the example output when the hypervisor is KVM:" -msgstr "" -"ハイパーバイザーによってサポートする属性が異なるため、コマンドの出力はハイ" -"パーバイザーによって異なります。ハイ" -"パーバイザーコンピュートノードの診断 以下の実例は、最もよく使用されている 2 つのハイパーバイザーの間で" -"出力がどのように異なるかを示しています。ハイパーバイザーが Xen の場合の例は以" -"下のようになります: このコマンドは、libvirt によって管理され" -"ている任意のハイパーバイザー (例: KVM、QEMU、LXC) で機能するはずですが、KVM " -"でのみテスト済みです。ハイパーバイザーが KVM の場合の例は以下のようになります" - -msgid "" -"The output shows that there are five compute nodes and one cloud controller. " -"You see a smiley face, such as :-), which indicates that the " -"services are up and running. If a service is no longer available, the " -":-) symbol changes to XXX. This is an indication " -"that you should troubleshoot why the service is down." -msgstr "" -"出力には、5 つのコンピュートノードと 1 つのクラウドコントローラーが表示されて" -"います。スマイリーフェイス :-) が見えます。これはサービスが稼働" -"中であることを示しています。サービスが利用できなくなると、:-) の" -"シンボルが XXX に変わります。これは、サービスが停止している理由" -"をトラブルシューティングする必要があることを示しています。" - -msgid "" -"The output shows three different dnsmasq processes. The dnsmasq process that " -"has the DHCP subnet range of 192.168.122.0 belongs to libvirt and can be " -"ignored. The other two dnsmasq processes belong to nova-network. The two processes are actually related—one is simply the parent " -"process of the other. The arguments of the dnsmasq processes should " -"correspond to the details you configured nova-network " -"with." -msgstr "" -"出力は 3 種類の dnsmasq プロセスを示しています。192.168.122.0 の DHCP サブ" -"ネット範囲を持つ dnsmasq プロセスが、libvirt に属していますが、無視できます。" -"他の 2 つのプロセスは実際に関連します。1 つは単純なもう一つの親プロセスです。" -"dnsmasq プロセスの引数は、nova-network に設定した詳細に対" -"応するでしょう。" - -msgid "" -"The packet is then received on the network node. Note that any traffic to " -"the l3-agent or dhcp-agent will be visible only within their network " -"namespace. Watching any interfaces outside those namespaces, even those that " -"carry the network traffic, will only show broadcast packets like Address " -"Resolution Protocols (ARPs), but unicast traffic to the router or DHCP " -"address will not be seen. See Dealing with Network Namespaces for detail on how to run commands " -"within these namespaces." -msgstr "" -"次に、パケットはネットワークノードで受信されます。L3 エージェントや DHCP エー" -"ジェントへのすべての通信は、それらのネットワーク名前空間の中のみで参照できま" -"す。それらの名前空間の外部にあるすべてのインターフェースを監視することによ" -"り、ネットワーク通信を転送している場合でも、ARP のようなブロードキャストパ" -"ケットのみが表示されます。しかし、ルーターや DHCP アドレスへのユニキャスト通" -"信は表示されません。これらの名前空間の中でコマンドを実行する方法の詳細は、" -"Dealing with Network " -"Namespacesを参照してください。" - -msgid "" -"The packet then makes it to the l3-agent. This is actually another TAP " -"device within the router's network namespace. Router namespaces are named in " -"the form qrouter-<router-uuid>. 
Running ip a within the namespace will show the TAP device name, qr-e6256f7d-31 " -"in this example:" -msgstr "" -"そして、パケットが L3 エージェントに到達します。これは実際には、ルーターの名" -"前空間の中にある別の TAP デバイスです。ルーター名前空間は、qrouter-<" -"router-uuid> という形式の名前です。名前空間の中で ip a を実行することにより、TAP デバイスの名前を表示します。この例では qr-" -"e6256f7d-31 です。" - -msgid "" -"The packet transfers to a Test Access Point (TAP) device on the compute " -"host, such as tap690466bc-92. You can find out what TAP is being used by " -"looking at the /etc/libvirt/qemu/instance-xxxxxxxx.xml " -"file." -msgstr "" -"そのパケットはコンピュートホストの Test Access Point (TAP)、例えば " -"tap690466bc-92 に転送されます。TAP の構成は、/etc/libvirt/qemu/" -"instance-xxxxxxxx.xml を見ることで把握できます。" - -msgid "" -"The packet transfers to the main NIC of the compute node. You can also see " -"this NIC in the brctl output, or you can find it by " -"referencing the flat_interface option in nova." -"conf." -msgstr "" -"パケットはコンピュートノードの物理NICに送られます。このNICはbrctlコマンドの出力から、もしくはnova.confの" -"flat_interfaceオプションから確認できます。" - -msgid "" -"The packet transfers to the virtual NIC of the compute host, such as, " -"vnet1. You can find out what vnet NIC is being used by " -"looking at the /etc/libvirt/qemu/instance-xxxxxxxx.xml " -"file." -msgstr "" -"そのパケットはコンピュートホストの仮想NIC、例えば vnet1に" -"転送されます。vnet NICの構成は、/etc/libvirt/qemu/instance-" -"xxxxxxxx.xml を見ることで把握できます。" - -msgid "" -"The pip utility is used to manage package installation from the PyPI archive " -"and is available in the python-pip package in most Linux distributions. Each " -"OpenStack project has its own client, so depending on which services your " -"site runs, install some or all of the followingneutronpython-neutronclientswiftpython-swiftclientcinderkeystoneglancepython-glanceclientnovapython-novaclient packages:" -msgstr "" -"pip ユーティリティは、PyPI アーカイブからのパッケージインストールの管理に使" -"用するツールで、大半の Linux ディストリビューションの python-pip パッケージに" -"含まれています。各 OpenStack プロジェクトにはそれぞれ独自のクライアントがあり" -"ます。サイトで実行するサービスに応じて、以下のパッケージの一部またはすべてを" -"インストールしてください。neutronpython-neutronclientswiftpython-swiftclientcinderkeystoneglancepython-glanceclientnovapython-novaclient" - -msgid "" -"The point we are trying to make here is that just because an option exists " -"doesn't mean that option is relevant to your driver choices. Normally, the " -"documentation notes which drivers the configuration applies to." -msgstr "" -"ここで言っておきたいことは、オプションが存在するからといって、そのオプション" -"があなたが選んだドライバーに関係するとは限らないということである。通常は、ド" -"キュメントには、その設定オプションが適用されるドライバーについての記載があり" -"ます。" - -msgid "" -"The policy engine reads entries from the policy.json file. The " -"actual location of this file might vary from distribution to distribution: " -"for nova, it is typically in /etc/nova/policy.json. You can " -"update entries while the system is running, and you do not have to restart " -"services. Currently, the only way to update such policies is to edit the " -"policy file." -msgstr "" -"ポリシーエンジンは policy.json ファイルから項目を読み込みます。" -"このファイルの実際の位置はディストリビューションにより異なります。一般的に " -"Nova 用の設定ファイルは /etc/nova/policy.json にあります。システ" -"ムの実行中に項目を更新でき、サービスを再起動する必要がありません。今のとこ" -"ろ、ポリシーファイルの編集がこのようなポリシーを更新する唯一の方法です。" - -msgid "" -"The preceding information was generated by using a custom script that can be " -"found on GitHub." -msgstr "" -"前の情報は GitHub にあるカスタムスクリプトを使用して" -"生成されました。" - -msgid "" -"The preceding output has been truncated to show only two services. You will " -"see one service entry for each service that your cloud provides. Note how " -"the endpoint domain can be different depending on the endpoint type. 
" -"Different endpoint domains per type are not required, but this can be done " -"for different reasons, such as endpoint privacy or network traffic " -"segregation." -msgstr "" -"上記の出力は、2 つのサービスのみを表示するようにカットされています。クラウド" -"が提供するサービスごとにサービスエントリーが 1 つ表示されているのがわかりま" -"す。エンドポイントタイプによってエンドポイントドメインが異なる場合がある点に" -"注意してください。タイプによってエンドポイントドメインを別にする必要はありま" -"せんが、エンドポイントのプライバシーやネットワークトラフィックの分離などの異" -"なる理由で分けることができます。" - -msgid "" -"The purpose of automatic configuration management is to establish and " -"maintain the consistency of a system without using human intervention. You " -"want to maintain consistency in your deployments so that you can have the " -"same cloud every time, repeatably. Proper use of automatic configuration-" -"management tools ensures that components of the cloud systems are in " -"particular states, in addition to simplifying deployment, and configuration " -"change propagation.automated " -"configurationprovisioning/deploymentautomated " -"configuration" -msgstr "" -"自動環境設定管理の目的は、人間の介在なしにシステムの整合性を確保、維持するこ" -"とにあります。毎回、同じクラウド環境を繰り返し作るために、デプロイメントにお" -"ける整合性を確保します。自動環境設定管理ツールを正しく利用することによって、" -"デプロイメントと環境設定の変更を伝搬する作業を簡素化するだけでなく、クラウド" -"システムのコンポーネントが必ず特定の状態にあるようにすることができます。" -"自動設定プロビジョニング/デプロイメ" -"ント自動設定" - -msgid "The purpose of the screen windows are as follows:" -msgstr "screen ウィンドウの目的は、以下のとおりです。" - -msgid "" -"The qemu-nbd device tries to export the instance disk's different partitions " -"as separate devices. For example, if vda is the disk and vda1 is the root " -"partition, qemu-nbd exports the device as /dev/nbd0 and " -"/dev/nbd0p1, respectively:" -msgstr "" -"qemu-nbd デバイスはインスタンスのディスクの個々のパーティションを別々のデバイ" -"スとしてエクスポートしようとします。たとえば、ディスクが vda で、ルートパー" -"ティションが vda1 の場合、qemu-nbd はそれぞれ /dev/nbd0 " -"と /dev/nbd0p1 としてデバイスをエクスポートします。" - -msgid "" -"The reference architecture consists of multiple compute nodes, a cloud " -"controller, an external NFS storage server for instance storage, and an " -"OpenStack Block Storage server for volume storage." -"legacy networking (nova)detailed description A network " -"time service (Network Time Protocol, or NTP) synchronizes time on all the " -"nodes. FlatDHCPManager in multi-host mode is used for the networking. A " -"logical diagram for this example architecture shows which services are " -"running on each node:" -msgstr "" -"この参照アーキテクチャーは、複数のコンピュートノード、クラウドコントローラー " -"1 台、インスタンスストレージ用の外部 NFS ストレージサーバー 1 台、" -"volume ストレージ用の OpenStack Block Storage サー" -"バー 1 台で構成されます。レガシーネッ" -"トワーク (nova)詳しい説明 ネット" -"ワークタイムサービス (Network Time Protocol / NTP) は全ノードの時刻を同期しま" -"す。ネットワークには、マルチホストモードの FlatDHCPManager を使用しています。" -"このアーキテクチャー例の論理図には、各ノードで実行されているサービスが示され" -"ています。" - -msgid "" -"The release, or milestone, or commit ID corresponding to the software that " -"you are running" -msgstr "実行しているソフトウェアのリリースやマイルストーン、コミット ID" - -msgid "" -"The remaining point on bandwidth is the public-facing portion. The " -"swift-proxy service is stateless, which means that you " -"can easily add more and use HTTP load-balancing methods to share bandwidth " -"and availability between them." -msgstr "" -"帯域幅に関する残りのポイントは、パブリック側の部分です。swift-" -"proxy サービスは、ステートレスです。ステートレスとは、HTTP 負荷分散" -"メソッドを簡単に追加、使用して帯域幅や可用性を共有できるという意味です。" - -msgid "" -"The roadmap for the next release as it is developed can be seen at Releases." -msgstr "" -"開発されている次のリリースのロードマップは Releases にあります。" - -msgid "" -"The scheduling services are responsible for determining the compute or " -"storage node where a virtual machine or block storage volume should be " -"created. 
The scheduling services receive creation requests for these "
-"resources from the message queue and then begin the process of determining "
-"the appropriate node where the resource should reside. This process is done "
-"by applying a series of user-configurable filters against the available "
-"collection of nodes."
-msgstr ""
-"スケジューリングサービスは仮想マシンやブロックストレージのボリュームがどのコ"
-"ンピュートあるいはストレージノードで生成されるかを決定する事に責任を持ちます。"
-"スケジューリングサービスはメッセージキューからそれらのリソースの生成リクエス"
-"トを受け、リソースの配置先として適切なノードを決定するプロセスを起動します。このプロセスは"
-"利用可能なノード群からユーザが設定したフィルタを適用する事で実行されます。"
-
-msgid ""
-"The simplest architecture you can build upon for Compute has a single cloud "
-"controller and multiple compute nodes. The simplest architecture for Object "
-"Storage has five nodes: one for identifying users and proxying requests to "
-"the API, then four for storage itself to provide enough replication for "
-"eventual consistency. This example architecture does not dictate a "
-"particular number of nodes, but shows the thinking and considerations that went into choosing this "
-"architecture, including the features offered."
-msgstr ""
-"Compute 用の基盤となる最もシンプルなアーキテクチャーは、単一のクラウドコント"
-"ローラーと複数のコンピュートノードで構成されます。Object Storage 用の最もシン"
-"プルなアーキテクチャーは、ユーザーを識別して API への要求をプロキシするノー"
-"ド 1 つと、最終的な整合性を確保するのに十分なレプリケーションを提供する、スト"
-"レージ自体のためのノード 4つを合わせた 5 つのノードで構成されます。このアーキ"
-"テクチャーの例では、特定のノード数は決まっていませんが、このアーキテクチャー"
-"を選択するにあたって考慮および検討した点 (どのような機能を提供するかな"
-"ど) をお分かりいただけます。"
-
-msgid ""
-"The simplest option to get started is to use one hard drive with two "
-"partitions:"
-msgstr ""
-"最もシンプルに使用を開始できるオプションは、1台のハードディスクを2つのパー"
-"ティションに分割することです。"
-
-msgid ""
-"The simplest place to start testing the next version of OpenStack is by "
-"setting up a new environment inside your own cloud. This might seem odd, "
-"especially the double virtualization used in running compute nodes. But it "
-"is a sure way to very quickly test your configuration."
-msgstr ""
-"OpenStack の次のバージョンをテストするための一番簡単な方法は、お使いのクラウ"
-"ドの中に新しい環境をセットアップすることです。これは奇妙に思えるかもしれませ"
-"ん。とくに、動作中のコンピュートノードで使用される二重仮想化です。しかし、お"
-"使いの設定を非常に手軽にテストする確実な方法です。"
-
-msgid ""
-"The simplest reasons for nodes to fail to launch are quota violations or the "
-"scheduler being unable to find a suitable compute node on which to run the "
-"instance. In these cases, the error is apparent when you run a nova "
-"show on the faulted instance:"
-msgstr ""
-"ノードが起動に失敗する最も簡単な理由は、クォータ違反、またはスケジューラーが"
-"インスタンスを実行するのに適したコンピュートノードを見つけられなかった場合で"
-"す。これらの場合、失敗したインスタンスに対して nova show を実行"
-"するとエラーが表示されます。"
-
-msgid ""
-"The simplest way to identify that this is the problem with your instance is "
-"to look at the console output of your instance. If DHCP failed, you can "
-"retrieve the console log by doing:"
-msgstr ""
-"もっともシンプルにこの問題を特定する方法は、インスタンス上のコンソール出力を"
-"確認することです。もしDHCPが正しく動いていなければ、下記のようにコンソールロ"
-"グを参照してください。"
-
-msgid ""
-"The size of the volume in gigabytes. It is safe to leave this blank and have "
-"the Compute Service infer the size." 
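-
-# A minimal sketch of the console-log retrieval described above; the
-# instance name "test-instance" is only a placeholder:
-#
-#   $ nova console-log test-instance
-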
-msgstr "" -"ボリュームのギガバイト単位の容量。このフィールドを空欄にして、Compute サービ" -"スに容量を推定させるのが安全です。" - -msgid "" -"The software stack is still Ubuntu 12.04 LTS, but now with OpenStack Havana " -"from the Ubuntu Cloud Archive. KVM is the hypervisor, deployed using FAI and Puppet for " -"configuration management. The FAI and Puppet combination is used lab-wide, " -"not only for OpenStack. There is a single cloud controller node, which also " -"acts as network controller, with the remainder of the server hardware " -"dedicated to compute nodes." -msgstr "" -"ソフトウェアスタックは Ubuntu 12.04 LTS と Ubuntu Cloud Archive からの " -"OpenStack Havana です。KVM がハイパーバイザで、FAI と Puppet を設定管理に使用してデプロイされていま" -"す。FAI と Puppet の組み合わせはOpenStack のみならず研究所全体で使用されてい" -"ます。単一のクラウドコントローラーノードがあり、これはネットワークコントロー" -"ラーとしても動作しますが、他のコンピュータ・ハードウェアはコンピュートノード" -"に使用されています。" - -msgid "" -"The source files are located in /usr/lib/python2.7/dist-packages/" -"nova." -msgstr "" -"ソースファイルは /usr/lib/python2.7/dist-packages/nova " -"にあります。" - -msgid "" -"The starting point for most is the core count of your cloud. By applying " -"some ratios, you can gather information about: You can use " -"these ratios to determine how much additional infrastructure you need to " -"support your cloud." -msgstr "" -"多くの場合、クラウドのコア数から始めます。比率を適用することで、 " -" に関する情報を取得できます。これらの比率を使用して、クラウド" -"のサポートに必要なインフラストラクチャーがどの程度必要か判断することができま" -"す。" - -msgid "The swift services" -msgstr "swift サービス" - -msgid "" -"The take away from this is if you observe an OpenStack process that appears " -"to \"stop\" for a while and then continue to process normally, you should " -"check that periodic tasks aren't the problem. One way to do this is to " -"disable the periodic tasks by setting their interval to zero. Additionally, " -"you can configure how often these periodic tasks run—in some cases, it might " -"make sense to run them at a different frequency from the default." -msgstr "" -"これから分かることは、OpenStack のプロセスが少しの間「停止」したように見え" -"て、それから通常通りの動作を継続するような状況を見つけた場合、周期的タスクが" -"問題になっていないかを確認すべきだということです。取りうる一つの方法は、間隔" -"を 0 に設定して周期的タスクを無効にすることです。また、周期的タスクの実行頻度" -"を設定することもできます。デフォルトとは異なる頻度で周期的タスクを実行する方" -"が意味がある場合もあります。" - -msgid "The team includes:" -msgstr "以下が執筆チームのメンバーです。" - -msgid "" -"The team will curate a set of vulnerability related issues in the issue " -"tracker. Some of these issues are private to the team and the affected " -"product leads, but once remediation is in place, all vulnerabilities are " -"public." -msgstr "" -"このチームはバグ追跡システムに登録された脆弱性に関連する問題の管理を行いま" -"す。問題の中にはこのチームおよび影響を受けるプロジェクトの責任者のみが参照で" -"きるものありますが、問題の修正が用意されると、全ての脆弱性は公開されます。" - -msgid "" -"The technology behind OpenStack consists of a series of interrelated " -"projects delivering various components for a cloud infrastructure solution. " -"Each service provides an open API so that all of these resources can be " -"managed through a dashboard that gives administrators control while " -"empowering users to provision resources through a web interface, a command-" -"line client, or software development kits that support the API. Many " -"OpenStack APIs are extensible, meaning you can keep compatibility with a " -"core set of calls while providing access to more resources and innovating " -"through API extensions. The OpenStack project is a global collaboration of " -"developers and cloud computing technologists. The project produces an open " -"standard cloud computing platform for both public and private clouds. 
By " -"focusing on ease of implementation, massive scalability, a variety of rich " -"features, and tremendous extensibility, the project aims to deliver a " -"practical and reliable cloud solution for all types of organizations." -msgstr "" -"OpenStack の背後にある技術は、クラウドインフラストラクチャーソリューション向" -"けのさまざまなコンポーネントを提供する、一連の相互に関連するプロジェクトから" -"構成されます。各サービスは、これらのリソースをすべてダッシュボードから管理で" -"きるよう、オープンな API を提供します。これは、権限を与えられたユーザーが、" -"API をサポートする Web インターフェース、コマンドラインクライアント、ソフト" -"ウェア開発キット (SDK) 経由でリソースを配備する管理者権限を与えます。多くの " -"OpenStack API は拡張できます。API 拡張経由でより多くのリソースにアクセスして" -"革新しながら、コアなコール群の互換性を保つことができます。OpenStack プロジェ" -"クトは、開発者とクラウドコンピューティング技術者のグローバルなコラボレーショ" -"ンです。プロジェクトは、パブリッククラウドおよびプライベートクラウド向けの" -"オープン標準なクラウドコンピューティングプラットフォームを生産します。実装の" -"しやすさ、大規模なスケーラビリティ、さまざまな豊富な機能、ものすごい拡張性に" -"注力することにより、プロジェクトはあらゆる種類の組織に対して実践的かつ信頼で" -"きるクラウドソリューションを提供することを目指しています。" - -msgid "" -"The third version of the Compute API was broadly discussed and worked on " -"during the Havana and Icehouse release cycles. Current discussions indicate " -"that the V2 API will remain for many releases, and the next iteration of the " -"API will be denoted v2.1 and have similar properties to the existing v2.0, " -"rather than an entirely new v3 API. This is a great time to evaluate all API " -"and provide comments while the next generation APIs are being defined. A new " -"working group was formed specifically to improve OpenStack APIs and " -"create design guidelines, which you are welcome to join. KiloCompute V3 API" -msgstr "" -"Compute API の 3 番目のバージョンが幅広く議論され、Havana と Icehouse のリ" -"リースサイクル期間中に取り組まれました。現在の議論は、V2 API が多くのリリース" -"のために残され、次の API の繰り返しは v2.1 と表示されます。これは、完全に新し" -"い v3 API ではなく、既存の v2.0 と同じようなプロパティーを持ちます。これは、" -"すべての API を評価するための良い機会です。次世代の API が定義されるまでにコ" -"メントを出します。新しいワーキンググループが特別に形成され、OpenStack API を改善し" -"て、設計のガイドラインを作成します。あなたの参加も歓迎されています。" -"KiloCompute V3 " -"API" - -msgid "" -"The tunnel bridge, br-tun, contains the patch-int " -"interface and gre-<N> interfaces for each peer it " -"connects to via GRE, one for each compute and network node in your cluster:" -msgstr "" -"トンネルブリッジ br-tun は、GRE 経由で接続するお互いの接続相手の" -"ために patch-int インターフェースと gre-<N> " -"インターフェース、クラスター内で各コンピュートノードとネットワークノードのた" -"めのものを含みます。" - -msgid "" -"The type of CPU in your compute node is a very important choice. First, " -"ensure that the CPU supports virtualization by way of VT-x for Intel chips and AMD-v for AMD chips." -"CPUs (central processing units)choosingcompute nodesCPU choice" -msgstr "" -"コンピュートノードの CPU タイプを選択することは非常に重要です。まず、Intel " -"チップには VT-x、AMD チップには AMD-v という風に、CPU が仮想化をサポートするようにします。CPU (central processing unit)選択コンピュートノードCPU の選択" - -msgid "The types of flavors in use" -msgstr "使用中のフレーバー" - -msgid "" -"The typical hardware recommended for use with OpenStack is the standard " -"value-for-money offerings that most hardware vendors stock. It should be " -"straightforward to divide your procurement into building blocks such as " -"\"compute,\" \"object storage,\" and \"cloud controller,\" and request as " -"many of these as you need. Alternatively, should you be unable to spend " -"more, if you have existing servers—provided they meet your performance " -"requirements and virtualization technology—they are quite likely to be able " -"to support OpenStack." 
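-
-# A quick way to verify the VT-x/AMD-v support discussed above (a generic
-# Linux check, not an OpenStack command; a result of 0 means the CPU does
-# not advertise hardware virtualization):
-#
-#   $ egrep -c '(vmx|svm)' /proc/cpuinfo
-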
-msgstr "" -"OpenStack での使用に推奨しているハードウェアは一般的に、多くのハードウェアベ" -"ンダーが提供している、コストパフォーマンスの高い標準オファリングです。調達す" -"べきものをコンピュート、オブジェクトストレージ、クラウドコントローラなどのビ" -"ルディングブロックに分類し、必要に応じ依頼していくと分かりやすいでしょう。ま" -"た、投資額をこれ以上かけられない場合、既存サーバーがありパフォーマンス要件や" -"仮想化技術に対応しているのであれば、高い確率で OpenStack をサポートしているは" -"ずです。" - -msgid "" -"The typical way is to trace the UUID associated with an instance across the " -"service logs." -msgstr "" -"一般的な方法はインスタンスのUUIDをキーにして、各サービスのログを追跡すること" -"です。" - -msgid "Then it all made sense… " -msgstr "やっと事の全容が判明した… " - -msgid "" -"Then on the actual compute node, create the following NRPE configuration:" -msgstr "" -"そして、対象のコンピュートノードにおいて、次のようなNRPE設定を作成します。" - -msgid "Then start the nova-compute service:" -msgstr "そして nova-compute サービスを起動します。" - -msgid "" -"Then, if you use SSH to log into your instance and try ping openstack." -"org, you should see something like:" -msgstr "" -"インスタンスに SSH ログインして、ping openstack.org を試すと、以" -"下のようなメッセージが確認できるでしょう。" - -msgid "" -"There are a number of optional items that can be specified. You should read " -"the rest of this section before trying to start an instance, but this is the " -"base command that later details are layered upon." -msgstr "" -"指定できる多くのオプション項目があります。インスタンスを起動しようとする前" -"に、このセクションを最後まで読んでください。しかし、これが今から説明する詳細" -"の基本となるコマンドです。" - -msgid "" -"There are also several *-manage command-line tools. These " -"are installed with the project's services on the cloud controller and do not " -"need to be installed*-manage command-" -"line toolscommand-line toolsadministrative separately:" -msgstr "" -"*-manage のコマンドラインツールも複数あります。これらは、" -"プロジェクトのサービスとともにクラウドコントローラーにインストールされるの" -"で、別途インストールする必要はありません。 *-manage コマンドラインツールコマンドラインツール管理系" - -msgid "" -"There are benefits to both modes. Single-node has the downside of a single " -"point of failure. If the cloud controller is not available, instances cannot " -"communicate on the network. This is not true with multi-host, but multi-host " -"requires that each compute node has a public IP address to communicate on " -"the Internet. If you are not able to obtain a significant block of public IP " -"addresses, multi-host might not be an option." -msgstr "" -"どちらのモードにもメリットがあります。シングルホストモードには、単一障害点と" -"いうマイナス面があります。クラウドコントローラーが利用できなくなると、インス" -"タンスはネットワークと通信できなくなります。マルチホストモードでは、この状況" -"にはなりませんが、各コンピュートノードはインターネットと通信するためのパブ" -"リックIPアドレスが必要となります。十分な大きさのパブリックIPアドレスブロック" -"を取得できない場合には、マルチホストモードは選択肢にならないかもしれません。" - -msgid "" -"There are currently two categories of quotas for Object Storage:account quotascontainersquota settingObject Storagequota setting" -msgstr "" -"現在、Object Storage に対する 2 種類のクォータがあります。account quotascontainersquota settingObject Storagequota setting" - -msgid "" -"There are currently two schedulers: nova-scheduler for " -"virtual machines and cinder-scheduler for block storage " -"volumes. Both schedulers are able to scale horizontally, so for high-" -"availability purposes, or for very large or high-schedule-frequency " -"installations, you should consider running multiple instances of each " -"scheduler. The schedulers all listen to the shared message queue, so no " -"special load balancing is required." -msgstr "" -"現在のところ2つのスケジューラがあります:仮想サーバを受け持つnova-" -"scheduler とブロックストレージボリュームを受け持つcinder-" -"scheduler です。それぞれのスケジューラは高可用性目的や、高負荷環境" -"での実装のために水平方向にスケーリングが可能で、それぞれのスケジューラに対し" -"て複数のインスタンスを起動すべきです。スケジューラはすべて共有されたメッセー" -"ジキューをリスンするので特別な負荷分散は必要ありません。" - -msgid "" -"There are other books on the OpenStack documentation website that can help you get the job done." 
-msgstr "" -"作業を完了するために役立つ、その他のガイドはOpenStack ドキュメント Web サイトにあります。" - -msgid "There are several advantages to this approach:" -msgstr "このアプローチには複数の利点があります。" - -msgid "" -"There are several avenues available for seeking assistance. The quickest way " -"is to help the community help you. Search the Q&A sites, mailing list " -"archives, and bug lists for issues similar to yours. If you can't find " -"anything, follow the directions for reporting bugs or use one of the " -"channels for support, which are listed below.mailing listsOpenStackdocumentationhelp, resources fortroubleshootinggetting helpOpenStack communitygetting help " -"from" -msgstr "" -"助けを探すために利用できる方法がいくつかあります。最も早い方法は、あなたを手" -"伝ってくれるコミュニティーを支援することです。あなたの問題に近いものについ" -"て、Q&A サイト、メーリングリストの過去ログ、バグ一覧を検索します。何も見" -"つけられなければ、バグを報告して指示に従うか、以下のリストにまとまっているサ" -"ポートチャンネルのどれかを使用します。mailing listsOpenStackdocumentationhelp, resources fortroubleshootinggetting helpOpenStack communitygetting help " -"from" - -msgid "" -"There are several good sources of information available that you can use to " -"track your OpenStack development desires.OpenStack communityworking with roadmapsinformation available" -msgstr "" -"OpenStack 環境の要望を把握するために使用できる、いくつかの良い情報源がありま" -"す。" - -msgid "" -"There are three clouds currently running at CERN, totaling about 4,700 " -"compute nodes, with approximately 120,000 cores. The CERN IT cloud aims to " -"expand to 300,000 cores by 2015." -msgstr "" -"CERN では現在 3 つのクラウドが稼働しており、合計で約 4,700 台のコンピュート" -"ノード、120,000 コアがあります。 CERN IT クラウドは 2015年までに 300,000 コ" -"アにまで拡張される予定です。" - -msgid "There are two main reasons why this is a good idea:" -msgstr "このアイデアが良いとされる理由は主に 2 点挙げられます。" - -msgid "" -"There are two main types of IP addresses for guest virtual machines: fixed " -"IPs and floating IPs. Fixed IPs are assigned to instances on boot, whereas " -"floating IP addresses can change their association between instances by " -"action of the user. Both types of IP addresses can be either public or " -"private, depending on your use case.IP addressespublic addressing optionsnetwork designpublic addressing options" -msgstr "" -"ゲストの仮想マシン用に主に2つのタイプのIPアドレス(固定IPアドレスとフローティ" -"ングIPアドレス)があります。固定IPアドレスはインスタンス起動時に割り当てら" -"れ、フローティングIPアドレスは、ユーザ操作によって割当が変更できます。どちら" -"のタイプのIPアドレスも用途に合わせてパブリックまたはプライベートにする事がで" -"きます。IPアドレスパブリックアドレスオプションネットワーク設計パブリックアドレスオプション" - -msgid "" -"There are two main types of instance-specific data: metadata and user data." -"metadatainstance " -"metadatainstancesinstance-specific data" -msgstr "" -"インスタンス固有のデータには 2 種類あります。メタデータとユーザーデータです。" -"metadatainstance " -"metadatainstancesinstance-specific data" - -msgid "" -"There are two types of monitoring: watching for problems and watching usage " -"trends. The former ensures that all services are up and running, creating a " -"functional cloud. The latter involves monitoring resource usage over time in " -"order to make informed decisions about potential bottlenecks and upgrades." -"cloud controllersprocess monitoring and" -msgstr "" -"二つの監視のタイプがあります。問題の監視と、利用傾向の監視です。前者は全ての" -"サービスが動作していることを保証するものであり、後者は時間に沿ったリソース利" -"用状況を監視することで、潜在的なボトルネックの発見とアップグレードのための情" -"報を得るものです。cloud controllersprocess monitoring and" - -msgid "" -"There are two ways to ensure stability with this directory. The first is to " -"make sure this directory is run on a RAID array. If a disk fails, the " -"directory is available. 
The second way is to use a tool such as rsync to " -"replicate the images to another server:" -msgstr "" -"このディレクトリの永続性を保証するために二つの方法があります。一つ目はRAIDア" -"レイ上にこのディレクトリを置くことで、ディスク障害時にもこのディレクトリが利" -"用できます。二つ目の方法はrsyncのようなツールを用いてイメージを他のサーバーに" -"複製することです。" - -msgid "" -"There is a configuration option in glance-api.conf that " -"limits the number of members allowed per image, called " -"image_member_quota, set to 128 by default. That setting is a " -"different quota from the storage quota.image quotas" -msgstr "" -"イメージごとに許可されるメンバーの数を制限する、glance-api.conf の設定オプションがあります。これは、image_member_quota であり、デフォルトで 128 です。その設定は、保存容量のクォータとは別の" -"クォータです。image quotas" - -msgid "" -"There is a lot of useful information in context, " -"request_spec, and filter_properties that you can " -"use to decide where to schedule the instance. To find out more about what " -"properties are available, you can insert the following log statements into " -"the schedule_run_instance method of the scheduler above:" -msgstr "" -" contextrequest_spec と " -"filter_propertiesには、どこにインスタンスをスケジュールするのか" -"決定するのに使える有用な情報が多数含まれています。どんなプロパティが利用可能" -"なのかを知るには、以下のログ出力文を上記の schedule_run_instance メソッドに挿入してください。" - -msgid "" -"There is a lot of useful information in env and conf that you can use to decide what to do with the request. To find out " -"more about what properties are available, you can insert the following log " -"statement into the __init__ method:" -msgstr "" -"envconf には、リクエストについて何をするのか判" -"断するのに使える有用な情報が多数含まれています。どんなプロパティが利用可能な" -"のかを知るには、以下のログ出力文を __init__ メソッドに挿入してく" -"ださい。" - -msgid "" -"There is nothing OpenStack-specific in being aware of the steps needed to " -"access block devices from within the instance operating system, potentially " -"formatting them for first use and being cautious when removing them. What is " -"specific is how to create new volumes and attach and detach them from " -"instances. These operations can all be done from the Volumes page of the dashboard or by using the cinder " -"command-line client." -msgstr "" -"ブロックデバイスにアクセスするために、インスタンスのオペレーティングシステム" -"において必要となる手順に、OpenStack 固有の事項はありません。初めて使用すると" -"きにフォーマットが必要になる、デバイスを取り外すときに注意する、などが考えら" -"れます。固有の事項は、新しいボリュームを作成し、それらをインスタンスに接続お" -"よび切断する方法です。これらの操作は、ダッシュボードの ボリューム ページからすべて実行できます。または、cinder コ" -"マンドラインクライアントを使用します。" - -msgid "" -"Therefore, if you eventually plan on distributing your storage cluster " -"across multiple data centers, if you need unified accounts for your users " -"for both compute and object storage, or if you want to control your object " -"storage with the OpenStack dashboard, you should consider OpenStack Object " -"Storage. More detail can be found about OpenStack Object Storage in the " -"section below.accounts" -msgstr "" -"そのため、最終的に、複数のデータセンターにまたがってストレージクラスターを分" -"散するように計画した場合、Compute およびオブジェクトストレージ用に統合アカウ" -"ントが必要な場合、または OpenStack Dashboard でオブジェクトストレージを制御す" -"る場合、OpenStack Object Storage を考慮してみてください。以下のセクションで " -"OpenStack Object Storage iについて詳しく記載されています。アカウント" - -msgid "" -"Therefore, the fastest way to get your feature request up for consideration " -"is to create an Etherpad with your ideas and propose a session to the design " -"summit. If the design summit has already passed, you may also create a " -"blueprint directly. Read this blog post about " -"how to work with blueprints the perspective of Victoria Martínez, a " -"developer intern." 
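-
-# One plausible form of the rsync replication mentioned earlier, assuming
-# the default glance image directory and an illustrative host "backup01":
-#
-#   $ sudo rsync -avz /var/lib/glance/images/ backup01:/var/lib/glance/images/
-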
-msgstr "" -"あなたの機能追加リクエストを新機能として検討してもらうのに一番早い方法は、ア" -"イデアを書いた Etherpad を作成し、デザインサミットのセッションを提案すること" -"です。デザインサミットが終わっている場合には、blueprint を直接作成することも" -"できます。 Victoria Martínez の blueprint での開" -"発の進め方についてのブログを是非読んでください。 OpenStack 開発者のイ" -"ンターンの視点で書かれた記事です。" - -msgid "" -"These actions effectively take the storage node out of the storage cluster." -msgstr "" -"これらの操作はストレージノードをストレージクラスターから効率的に外せます。" - -msgid "" -"These are configurable by admin users (the rights may also be delegated to " -"other users by redefining the access controls for compute_extension:" -"flavormanage in /etc/nova/policy.json on the nova-" -"api server). To get the list of available flavors on your system, run:" -"DAC (discretionary access control)flavoruser trainingflavors" -msgstr "" -"これらは管理ユーザーにより設定できます。nova-api サーバーにおい" -"て /etc/nova/policy.jsoncompute_extension:" -"flavormanage にあるアクセス制限を再定義することにより、この権限を他の" -"ユーザーに委譲することもできます。次のように、お使いのシステムで利用可能なフ" -"レーバーの一覧を取得できます。DAC " -"(discretionary access control)flavoruser trainingflavors" - -msgid "" -"These are shared legendary tales of image disappearances, VM massacres, and " -"crazy troubleshooting techniques that result in hard-learned lessons and " -"wisdom." -msgstr "" -"これらは、得がたい教訓や知見を得られ" -"た、イメージの消失、仮想マシンの大虐殺、クレイジーなトラブルシューティング技" -"術に関する伝説的な公開物語です。" - -msgid "" -"These changes have facilitated the first proper OpenStack upgrade guide, " -"found in , and will continue to improve " -"in the next release.Kiloupgrades in" -msgstr "" -"これらの変更は、 にある OpenStack アップグ" -"レードガイドを促進しました。次のリリースにおいて継続的に改善されつづけていま" -"す。Kiloupgrades " -"in" - -msgid "" -"These drivers work a little differently than a traditional \"block\" storage " -"driver. On an NFS or GlusterFS file system, a single file is created and " -"then mapped as a \"virtual\" volume into the instance. This mapping/" -"translation is similar to how OpenStack utilizes QEMU's file-based virtual " -"machines stored in /var/lib/nova/instances." -msgstr "" -"これらのドライバーは従来のブロックストレージドライバとは少々異なる動作をしま" -"す。NFSやGlusterFSでは1つのファイルが作成され、インスタンスに対して「仮想」ボ" -"リュームとしてマッピングされます。このマッピング/変換は/var/lib/nova/" -"instances 下に保存される、QEMUのファイルベースの仮想マシンの、" -"OpenStackによる扱い方と同様です。" - -msgid "" -"These nodes run services such as provisioning, configuration management, " -"monitoring, or GlusterFS management software. They are not required to " -"scale, although these machines are usually backed up." -msgstr "" -"これらのノードは、プロビジョニング、設定管理、モニタリング、GlusterFS 管理ソ" -"フトウェアなどのサービスを実行します。拡張の必要はありませんが、これらのマシ" -"ンは通常バックアップされます。" - -msgid "" -"These rules are all \"allow\" type rules, as the default is deny. The first " -"column is the IP protocol (one of icmp, tcp, or udp), and the second and " -"third columns specify the affected port range. The fourth column specifies " -"the IP range in CIDR format. This example shows the full port range for all " -"protocols allowed from all IPs." -msgstr "" -"標準で拒否されるので、これらのルールはすべて「許可」形式のルールです。1 番目" -"の項目は IP プロトコル (icmp, tcp, udp のどれか) です。2 番目と 3 番目の項目" -"は対象となるポート番号の範囲を指定します。4 番目の項目は CIDR 形式の IP アド" -"レスの範囲を指定します。この例では、すべてのプロトコルの全ポート番号につい" -"て、すべての IP からのトラフィックを許可しています。" - -msgid "" -"These rules are all \"allow\" type rules, as the default is deny. This " -"example shows the full port range for all protocols allowed from all IPs. 
" -"This section describes the most common security-group-rule parameters:" -msgstr "" -"デフォルトは拒否なので、これらのルールはすべて「許可」形式のルールです。この" -"例は、すべての IP から、すべてのプロトコルの範囲全体が許可されることを示して" -"います。このセクションは、最も一般的な security-group-rule のパラメーターを説" -"明します。" - -msgid "" -"These steps depend on your underlying distribution, but in general you " -"should be looking for \"purge\" commands in your package manager, like " -"aptitude purge ~c $package. Following this, you can look " -"for orphaned files in the directories referenced throughout this guide. To " -"uninstall the database properly, refer to the manual appropriate for the " -"product in use." -msgstr "" -"これらの手順はお使いのディストリビューションにより異なりますが、一般には " -"aptitude purge ~c $package のようなパッケージマネージャー" -"の「purge(完全削除)」コマンドを探すとよいでしょう。その後で、このガイドの中" -"に出てきたディレクトリにある不要なファイルを探します。データベースを適切にア" -"ンインストールする方法については、使用しているソフトウェアの適切なマニュアル" -"を参照して下さい。" - -msgid "" -"These tunnels use the regular routing tables on the host to route the " -"resulting GRE packet, so there is no requirement that GRE endpoints are all " -"on the same layer-2 network, unlike VLAN encapsulation." -msgstr "" -"これらのトンネルは、ホストにおいて通常のルーティングテーブルを使用して、でき" -"あがった GRE パケットを中継します。そのため、VLAN カプセル化と異なり、GRE エ" -"ンドポイントがすべて同じ L2 ネットワークにあるという要件は必要ありません。" - -msgid "" -"These two statements are produced by our middleware and show that the " -"request was sent from our DevStack instance and was allowed." -msgstr "" -"これらの2行は、このミドルウェアによって出力されており、リクエストが " -"DevStack インスタンスから送られており、許可されていることを示しています。" - -msgid "" -"These volumes are persistent: they can be detached from one instance and re-" -"attached to another, and the data remains intact. Block storage is " -"implemented in OpenStack by the OpenStack Block Storage (cinder) project, " -"which supports multiple back ends in the form of drivers. Your choice of a " -"storage back end must be supported by a Block Storage driver." -msgstr "" -"これらのボリュームは永続的であるため、インスタンスから切り離して、データをそ" -"のまま維持しつつ、別のインスタンスに再接続することができます。ブロックスト" -"レージは、ドライバー形式で複数のバックエンドをサポートする、OpenStack Block " -"Storage (Cinder) プロジェクトで OpenStack に実装されています。お使いのスト" -"レージバックエンドは、Block Storage ドライバーでサポートされている必要があり" -"ます。" - -msgid "They are:" -msgstr "次の3つの方法があります。" - -msgid "" -"Thinking it was just a one-off issue, I terminated the instance and launched " -"a new one. By then, the conference call ended and I was off to the data " -"center." -msgstr "" -"これは単なる1回限りの問題と思ったので、私はインスタンスを削除して、新しいイ" -"ンスタンスを起動した。その後電話会議は終了し、私はデータセンターを離れた。" - -msgid "" -"This allows for a single API server being used to control access to multiple " -"cloud installations. Introducing a second level of scheduling (the cell " -"selection), in addition to the regular nova-scheduler selection " -"of hosts, provides greater flexibility to control where virtual machines are " -"run." -msgstr "" -"これによって、複数のクラウドシステムに対するアクセスを、1つのAPIサーバで制御" -"することができます。通常のnova-schedulerによるホストの選択に加え" -"て、第二段階のスケジューリング(セルの選択)を導入することにより、仮想マシンを" -"実行する場所の制御の柔軟性が大きく向上します。" - -msgid "" -"This appendix contains a small selection of use cases from the community, " -"with more technical detail than usual. Further examples can be found on the " -"OpenStack website." -msgstr "" -"この付録ではコミュニティからを事例をいくつか紹介します。これらでは通常より多" -"くの技術的詳細情報が提供されています。他の事例は OpenStack ウェブサイト で探して下さい。" - -msgid "" -"This architecture's components have been selected for the following reasons:" -msgstr "" -"このアーキテクチャーコンポーネントは、以下のような理由で選択されています。" - -msgid "" -"This attribute value matches the specified IP prefix as the source IP " -"address of the IP packet." 
-msgstr "" -"この属性値は、IP パケットの送信元 IP アドレスとして、指定された IP プレフィッ" -"クスと一致します。" - -msgid "" -"This book is for those of you starting to run OpenStack clouds as well as " -"those of you who were handed an operational one and want to keep it running " -"well. Perhaps you're on a DevOps team, perhaps you are a system " -"administrator starting to dabble in the cloud, or maybe you want to get on " -"the OpenStack cloud team at your company. This book is for all of you." -msgstr "" -"この本は、OpenStack クラウドの運用を始めようとして人も、運用中の OpenStack ク" -"ラウドを引き継ぎ、うまく動かし続けようとしている人も対象にしています。おそら" -"く、あなたは devops チームの一員であったり、クラウドを始めたばかりのシステム" -"管理者なのでしょう。あなたの会社の OpenStack クラウドチームに入ろうとしている" -"のかもしれません。この本はそのような皆さん全員に向けたものです。" - -msgid "" -"This book is organized into two parts: the architecture decisions for " -"designing OpenStack clouds and the repeated operations for running OpenStack " -"clouds." -msgstr "" -"この本は 2つの部分から構成されています。1 つは OpenStack クラウド設計において" -"アーキテクチャーの決定に関してで、もう 1 つは稼働中の OpenStack クラウドで繰" -"り返し行う運用に関してです。" - -msgid "" -"This book provides information about designing and operating OpenStack " -"clouds." -msgstr "本書は OpenStack クラウドの設計および運用に関する情報を提供します。" - -msgid "" -"This bug report was the key to everything: KVM images lose " -"connectivity with bridged network (https://bugs.launchpad.net/ubuntu/" -"+source/qemu-kvm/+bug/997978)" -msgstr "" -"このバグ報告は、すべてに対して重要です: KVM" -"イメージがブリッジネットワークで接続を失う (https://bugs.launchpad." -"net/ubuntu/+source/qemu-kvm/+bug/997978)" - -msgid "" -"This can be useful when booting from volume because a new volume can be " -"provisioned very quickly. Ceph also supports keystone-based authentication " -"(as of version 0.56), so it can be a seamless swap in for the default " -"OpenStack swift implementation." -msgstr "" -"新規ボリュームは非常に早くプロビジョニングされるため、ボリュームからの起動に" -"便利です。また、Ceph は keystone ベースの認証 (バージョン 0.56 以降) もサポー" -"トするため、デフォルトの OpenStack swift 実装とシームレスに切り替えることがで" -"きます。" - -msgid "" -"This chapter describes only how to back up configuration files and databases " -"that the various OpenStack components need to run. This chapter does not " -"describe how to back up objects inside Object Storage or data contained " -"inside Block Storage. Generally these areas are left for users to back up on " -"their own." -msgstr "" -"この章では、OpenStackコンポーネントを動作させるのに必要な設定ファイルとデータ" -"ベースについてのバックアップ方法のみを説明します。オブジェクトストレージ内の" -"オブジェクトや、ブロックストレージ内のデータのバックアップについては説明して" -"いません。一般的にこれらの領域はユーザー自身でバックアップを行います。" - -msgid "" -"This chapter describes the compute nodes, which are dedicated to running " -"virtual machines. Some hardware choices come into play here, as well as " -"logging and networking descriptions." -msgstr "" -"この章は、仮想マシンを実行するためのコンピュートノードについて記載します。い" -"くつかのハードウェアの選択肢があります。ロギングとネットワークについても説明" -"しています。" - -msgid "" -"This chapter describes what you need to back up within OpenStack as well as " -"best practices for recovering backups." -msgstr "" -"この章は、OpenStack 中でバックアップする必要があるもの、バックアップのリカバ" -"リーのベストプラクティスについて説明します。" - -msgid "" -"This chapter discusses the growth of your cloud resources through scaling " -"and segregation considerations." -msgstr "" -"この章は、規模の拡張および分離における検討を通して、クラウドの拡大について議" -"論します。" - -msgid "" -"This chapter focuses on the second path for customizing OpenStack by " -"providing two examples for writing new features. The first example shows how " -"to modify Object Storage (swift) middleware to add a new feature, and the " -"second example provides a new scheduler feature for OpenStack Compute " -"(nova). To customize OpenStack this way you need a development environment. 
" -"The best way to get an environment up and running quickly is to run DevStack " -"within your cloud." -msgstr "" -"本章は、新しい機能を書くために 2 つの例を提供することにより、OpenStack をカス" -"タマイズするための 2 つ目のパスに焦点を当てます。1 番目の例は、新しい機能を追" -"加するために Object Storage (swift) ミドルウェアを変更する方法を示します。2 " -"番目の例は、OpenStack Compute (nova) に新しいスケジューラー機能を提供します。" -"このように OpenStack をカスタマイズするために、開発環境が必要になります。迅速" -"に環境を準備して動作させる最良の方法は、クラウド内で DevStack を実行すること" -"です。" - -msgid "" -"This chapter goes into the common failures that the authors have seen while " -"running clouds in production, including troubleshooting." -msgstr "" -"この章は、執筆者がこれまで本番環境を運用してきて、トラブルシューティングする" -"中で見てきた、一般的な障害について詳細に検討します。" - -msgid "" -"This chapter helps you set up your working environment and use it to take a " -"look around your cloud." -msgstr "" -"本章では、作業環境を設定し、クラウドの全体像を概観するのに役立つ内容を記載し" -"ます。" - -msgid "" -"This chapter is written to let you get your hands wrapped around your " -"OpenStack cloud through command-line tools and understanding what is already " -"set up in your cloud." -msgstr "" -"この章は、コマンドラインツールを用いてお使いの OpenStack クラウド全体を理解で" -"きるようにするために書かれました。また、お使いのクラウドにすでにセットアップ" -"されているものを理解することもできます。" - -msgid "" -"This chapter provides an example architecture using OpenStack Networking, " -"also known as the Neutron project, in a highly available environment." -msgstr "" -"本章には、OpenStack Networking (別称: Neutron プロジェクト) を高可用性環境で" -"使用するアーキテクチャーの例を記載します。" - -msgid "" -"This chapter provides upgrade information based on the architectures used in " -"this book." -msgstr "" -"この章は、本書で使用されるアーキテクチャーに基づいて、アップグレードに関する" -"情報を提供します。" - -msgid "" -"This chapter shows you how to use OpenStack cloud resources and how to train " -"your users." -msgstr "" -"この章は、OpenStack リソースの使用方法、ユーザーの教育方法について説明しま" -"す。" - -msgid "" -"This chapter shows you where OpenStack places logs and how to best read and " -"manage logs for monitoring purposes." -msgstr "" -"この章は、OpenStack のログ保存場所、監視目的にログを参照および管理する最良の" -"方法について説明します。" - -msgid "" -"This chapter walks through user-enabling processes that all admins must face " -"to manage users, give them quotas to parcel out resources, and so on." -msgstr "" -"この章は、すべての管理者がユーザーを管理し、リソースを小さくまとめるクォータ" -"を設定するために必ず直面する、ユーザー有効化のプロセスを詳細に説明します。" - -msgid "" -"This command creates a project named \"demo.\" Optionally, you can add a " -"description string by appending --description tenant-" -"description, which can be very useful. You can also " -"create a group in a disabled state by appending --disable to " -"the command. By default, projects are created in an enabled state." -msgstr "" -"このコマンドは、demo という名前のプロジェクトを作成します。オプションで、" -"--description tenant-description を追" -"加することで、説明の文字列を追加することができ、非常に便利です。また、コマン" -"ドに --disable を追加して、グループを無効な状態で作成することも" -"できます。デフォルトでは、有効化された状態でプロジェクトが作成されます。" - -msgid "" -"This command displays a list of how many instances a tenant has running and " -"some light usage statistics about the combined instances. This command is " -"useful for a quick overview of your cloud, but it doesn't really get into a " -"lot of details." -msgstr "" -"このコマンドはテナント上で実行されるインスタンスのリストと、インスタンス全体" -"の簡単な利用統計を表示します。クラウドの簡単な概要を得るのに便利なコマンドで" -"すが、より詳細な情報については表示しません。" - -msgid "" -"This creates a 10 GB volume named test-volume. To list " -"existing volumes and the instances they are connected to, if any:" -msgstr "" -"これは test-volume という名前の 10GB のボリュームを作成し" -"ます。既存のボリュームの一覧を表示するには以下のようにします。それらが接続さ" -"れているインスタンスがあれば、インスタンス情報も表示されます:" - -msgid "" -"This creates a key named , which you can associate with " -"instances. 
The file mykey.pem is the private key, which " -"should be saved to a secure location because it allows root access to " -"instances the key is associated with." -msgstr "" -"これにより、インスタンスと関連付けられる という名前の鍵が生" -"成されます。mykey.pem というファイルが秘密鍵です。これ" -"は、 鍵が関連付けられたインスタンスに root アクセスできるの" -"で、安全な場所に保存すべきです。" - -msgid "" -"This creates the empty security group. To make it do what we want, we need " -"to add some rules:" -msgstr "" -"これにより、空のセキュリティグループが作成されます。やりたいことを行うため" -"に、いくつかのルールを追加する必要があります。" - -msgid "" -"This deployment felt that the spare I/O on the Object Storage proxy server " -"was sufficient and that the Image Delivery portion of glance benefited from " -"being on physical hardware and having good connectivity to the Object " -"Storage back end it was using." -msgstr "" -"この構成ではオブジェクトストレージプロクシサーバのI/Oの空きが十分にあり、" -"glance のイメージ配信部分は物理ハードウェアとオブジェクトストレージのバック" -"エンドに使用している良好なネットワーク接続性の恩恵を十分に受ける事ができると" -"感じた。" - -msgid "" -"This deployment had an expensive hardware load balancer in its organization. " -"It ran multiple nova-api and swift-proxy servers " -"on different physical servers and used the load balancer to switch between " -"them." -msgstr "" -"この構成は、組織内に高価なハードウェアロードバランサーを持っていました。彼ら" -"は複数の nova-apiswift-proxy サーバーを異なる物" -"理サーバーで動作させ、その振り分けにロードバランサーを利用し増した。" - -msgid "" -"This deployment ran central services on a set of servers running KVM. A " -"dedicated VM was created for each service (nova-scheduler, rabbitmq, database, etc). This assisted the deployment with " -"scaling because administrators could tune the resources given to each " -"virtual machine based on the load it received (something that was not well " -"understood during installation)." -msgstr "" -"この構成では中央サーバをKVM上のサーバで実行しました。専用のVMをそれぞれのサー" -"ビスで用意しました(nova-scheduler, RabbitMQ, データベース" -"など)。この構成はクラウド管理者が(インストール時によく推測できないので)それぞ" -"れのサービスへの負荷のかかり具合によって各バーチャルマシンへのリソース割当を" -"変更する事ができる構成でした。" - -msgid "" -"This deployment used a central dedicated server to provide the databases for " -"all services. This approach simplified operations by isolating database " -"server updates and allowed for the simple creation of slave database servers " -"for failover." -msgstr "" -"この構成では、すべてのサービスに対するデータベースサービスを提供する専用サー" -"バを設置しました。これにより、データベースサーバーのアップデートを分離でき、" -"運用がシンプルになりました。また、フェイルオーバーのためのスレーブデータベー" -"スサーバーの設置が単純になりました。" - -msgid "" -"This does not save your password in plain text, which is a good thing. But " -"when you source or run the script, it prompts you for your password and then " -"stores your response in the environment variable OS_PASSWORD. " -"It is important to note that this does require interactivity. It is possible " -"to store a value directly in the script if you require a noninteractive " -"operation, but you then need to be extremely cautious with the security and " -"permissions of this file.passwordssecurity issuespasswords" -msgstr "" -"この場合には、パスワードがプレーンテキスト形式で保存されないのがこの方法の利" -"点となりますが、このスクリプトを元データとして使用したり、実行したりする際に" -"は、パスワードが要求され、その回答は環境変数 OS_PASSWORD に保存" -"されます。この操作は対話的に実行される必要がある点は、注意すべき重要なポイン" -"トです。 操作を非対話的に行う必要がある場合には、値をスクリプトに直接に保存す" -"ることも可能ですが、その場合にはこのファイルのセキュリティとアクセス権を極め" -"て慎重に管理する必要があります。, パス" -"ワードセキュリ" -"ティ上の問題パスワード" - -msgid "This element indicates a warning or caution." -msgstr "この要素は警告や注意を意味します。" - -msgid "This element signifies a general note." -msgstr "この要素は一般的な注記を意味します。" - -msgid "This element signifies a tip or suggestion." 
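-
-# A short sketch of the key-pair creation described above; the key name
-# "mykey" is chosen to match the mykey.pem file name used in the text:
-#
-#   $ nova keypair-add mykey > mykey.pem
-#   $ chmod 600 mykey.pem
-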
-msgstr "この要素はヒントや提案を意味します。" - -msgid "" -"This enables you to arrange OpenStack compute hosts into logical groups and " -"provides a form of physical isolation and redundancy from other availability " -"zones, such as by using a separate power supply or network equipment." -"availability zone" -msgstr "" -"アベイラビリティゾーンにより、OpenStack Compute ホストを論理グループにまとめ" -"て、(独立した電源系統やネットワーク装置を使うことなどで)他のアベイラビリ" -"ティゾーンとの物理的な分離や冗長性を実現できます。アベイラビリティゾーン" - -msgid "" -"This enables you to partition OpenStack Compute deployments into logical " -"groups for load balancing and instance distribution. You can use host " -"aggregates to further partition an availability zone. For example, you might " -"use host aggregates to partition an availability zone into groups of hosts " -"that either share common resources, such as storage and network, or have a " -"special property, such as trusted computing hardware.scalinghost aggregatehost aggregate" -msgstr "" -"これにより、OpenStack コンピュートデプロイメントを負荷分散やインスタンスディ" -"ストリビューション用の論理グループに分割することができます。ホストアグリゲー" -"トを使用して、アベイラビリティゾーンをさらに分割することができます。例えば、" -"ホストアグリゲートを使用してアベイラビリティゾーンをストレージやネットワーク" -"などの共通のリソースを共有するか、信頼済みのコンピューティングハードウェアな" -"どの特別なプロパティを持つホストグループに分割することができます。スケーリングホストアグリゲー" -"トホストアグリ" -"ゲート" - -msgid "" -"This example architecture does not use OpenStack Networking, because it does " -"not yet support multi-host networking and our organizations (university, " -"government) have access to a large range of publicly-accessible IPv4 " -"addresses.legacy networking (nova)vs. OpenStack Networking (neutron)" -msgstr "" -"このアーキテクチャー例では、OpenStack Networking は使用していません。neutron " -"はマルチホストネットワークをまだサポートしておらず、また対象となる組織 (大学/" -"政府機関) ではパブリックでアクセス可能な IPv4 アドレスを広範囲で利用できるの" -"が理由です。レガシーネットワーク " -"(nova)vs. OpenStack Networking (neutron)" - -msgid "" -"This example architecture has been selected based on the current default " -"feature set of OpenStack Havana, with an emphasis on " -"stability. We believe that many clouds that currently run OpenStack in " -"production have made similar choices.legacy networking (nova)rationale for " -"choice of" -msgstr "" -"このアーキテクチャーの例は、OpenStack Havana の現在の" -"デフォルト機能セットをベースに、安定性に重点を置いて選択しています。現在 " -"OpenStack を本番環境で実行しているクラウドの多くは、同様の選択をしているもの" -"と推定されます。レガシーネットワーク " -"(nova)選択の理由" - -msgid "" -"This example architecture has been selected based on the current default " -"feature set of OpenStack Havana, with an emphasis on high availability. This " -"architecture is currently being deployed in an internal Red Hat OpenStack " -"cloud and used to run hosted and shared services, which by their nature must " -"be highly available.OpenStack " -"Networking (neutron)rationale for choice of" -msgstr "" -"このアーキテクチャ例は、OpenStack Havana の現在のデフォルト機能セットをベース" -"に、高可用性に重点を置いて選択しています。このアーキテクチャーは、現在 Red " -"Hat の 企業内 OpenStack クラウドでデプロイされており、ホステッド共有サービス" -"の運用に使用されています。ホステッド共有サービスは、その性質上、高可用性が要" -"求されます。OpenStack Networking " -"(neutron)設定指針" - -msgid "This example assumes that /dev/sdb has failed." -msgstr "この例では、/dev/sdb が故障したと仮定します。" - -msgid "" -"This example configuration handles the nova service only. It first " -"configures rsyslog to act as a server that runs on port 514. Next, it " -"creates a series of logging templates. Logging templates control where " -"received logs are stored. Using the last example, a nova log from c01." 
-"example.com goes to the following locations:" -msgstr "" -"これはnovaサービスのみを扱っています。はじめに rsyslog を UDP 514番ポートで動" -"作するサーバーとして設定します。次に一連のログテンプレートを作成します。ログ" -"テンプレートは受け取ったログをどこに保管するかを指定します。最後の例を用いる" -"と、c01.example.comから送られるnovaのログは次の場所に保管されます。" - -msgid "" -"This example creates a security group that allows web traffic anywhere on " -"the Internet. We'll call this group global_http, which is " -"clear and reasonably concise, encapsulating what is allowed and from where. " -"From the command line, do:" -msgstr "" -"この例は、インターネットのどこからでも Web 通信を許可するセキュリティグループ" -"を作成します。このグループを global_http と呼ぶことにしま" -"す。許可されるものと許可されるところを要約した、明白で簡潔な名前になっていま" -"す。コマンドラインから以下のようにします。" - -msgid "" -"This example is for illustrative purposes only. It should not be used as a " -"container IP whitelist solution without further development and extensive " -"security testing.security issuesmiddleware example" -msgstr "" -"この例は実証目的のみのためにあります。さらなる作りこみと広範なセキュリティテ" -"ストなしにコンテナのIPホワイトリスト・ソリューションとして使用するべきではあ" -"りません。security issuesmiddleware example" - -msgid "" -"This example is for illustrative purposes only. It should not be used as a " -"scheduler for Compute without further development and testing.security " -"issuesscheduler example" -msgstr "" -"この例は実証目的のみのためにあります。さらなる作りこみとテストなしで、Compute のスケジューラーとして使用するべき" -"ではありません。security issuesscheduler example" - -msgid "" -"This example shows the HTTP requests from the client and the responses from " -"the endpoints, which can be helpful in creating custom tools written to the " -"OpenStack API." -msgstr "" -"この例は、クライアントからのHTTPリクエストとエンドポイントからのレスポンスを" -"表示しています。これはOpenStack APIを使ったカスタムツールを作る際に役立ちま" -"す。" - -msgid "" -"This external bridge also includes a physical network interface, eth2 in this example, which finally lands the packet on the external " -"network destined for an external router or destination." -msgstr "" -"この外部ブリッジも物理ネットワークインターフェースに含まれます。この例では " -"eth2 です。これは、最終的に外部ルーターや送信先に向けた外部ネッ" -"トワークのパケットを受け取ります。" - -msgid "" -"This formatting is used to support translation of logging messages into " -"different languages using the gettext internationalization library. You " -"don't need to do this for your own custom log messages. However, if you want " -"to contribute the code back to the OpenStack project that includes logging " -"statements, you must surround your log messages with underscores and " -"parentheses." -msgstr "" -"このフォーマットは、ログメッセージを異なる言語に翻訳するために gettext 国際化" -"ライブラリーを利用しているためです。カスタムログには必要ありませんが、もし、" -"OpenStackプロジェクトにログステートメントを含むコードを提供する場合は、アン" -"ダースコアと括弧でログメッセージを囲わなければなりません。" - -msgid "" -"This functionality is an important piece of the puzzle when it comes to live " -"upgrades and is conceptually similar to the existing API versioning that " -"allows OpenStack services of different versions to communicate without issue." -msgstr "" -"この機能は、ライブアップグレードするということになると、パズルの重要な部品に" -"なります。異なるバージョンの OpenStack サービスが問題なく通信できるようにする" -"ために、既存の API バージョンと概念的に同じになります。" - -msgid "" -"This guide assumes that you are familiar with a Linux distribution that " -"supports OpenStack, SQL databases, and virtualization. You must be " -"comfortable administering and configuring multiple Linux machines for " -"networking. You must install and maintain an SQL database and occasionally " -"run queries against it." 
-msgstr "" -"このガイドは、OpenStack をサポートする Linux ディストリビューション、SQL デー" -"タベースや仮想化に関してよく知っていることを前提にしています。複数台の Linux " -"マシンのネットワーク設定・管理にも慣れている必要があります。SQL データベース" -"のインストールと管理を行い、場合によってはデータベースに対してクエリーを実行" -"することもあります。" - -msgid "" -"This guide is for OpenStack operators and does not seek to be an exhaustive " -"reference for users, but as an operator, you should have a basic " -"understanding of how to use the cloud facilities. This chapter looks at " -"OpenStack from a basic user perspective, which helps you understand your " -"users' needs and determine, when you get a trouble ticket, whether it is a " -"user issue or a service issue. The main concepts covered are images, " -"flavors, security groups, block storage, shared file system storage, and " -"instances." -msgstr "" -"このガイドは OpenStack の運用者向けです。ユーザー向けの膨大なリファレンスを目" -"指すものではありません。しかし運用者として、クラウド設備を使用する方法につい" -"て基本的な理解を持つべきです。本章は、基本的なユーザーの観点から OpenStack を" -"見ていきます。ユーザーが必要とすることを理解する手助けになります。また、トラ" -"ブルのチケットを受け取ったときに、ユーザーの問題またはサービスの問題のどちら" -"かを判断する手助けになります。取り扱っている主な概念はイメージ、フレーバー、" -"セキュリティグループ、ブロックストレージ、共有ファイルシステムストレージおよ" -"びインスタンスです。" - -msgid "" -"This guide targets OpenStack administrators seeking to deploy and manage " -"OpenStack Networking (neutron)." -msgstr "" -"このガイドは、OpenStack Networking (neutron) を導入して管理する方法を探してい" -"る、OpenStack 管理者を対象にしています。" - -msgid "" -"This guide uses the term project, unless an example shows " -"interaction with a tool that uses the term tenant." -msgstr "" -"このガイドはプロジェクトという用語を使用します。" -"テナントという用語を使用するツールとやりとりする例もあります。" - -msgid "This has several downsides:" -msgstr "この方法には次のようなマイナス点があります。" - -msgid "" -"This infrastructure includes systems to automatically install the operating " -"system's initial configuration and later coordinate the configuration of all " -"services automatically and centrally, which reduces both manual effort and " -"the chance for error. Examples include Ansible, CFEngine, Chef, Puppet, and " -"Salt. You can even use OpenStack to deploy OpenStack, named TripleO " -"(OpenStack On OpenStack).PuppetChef" -msgstr "" -"このインフラストラクチャーには、オペレーティングシステムの初期設定を自動にイ" -"ンストールするシステムや、全サーバーを自動的かつ一元的に連携、設定するシステ" -"ムが含まれており、手作業やエラーの発生する可能性を減らします。例えば、" -"Ansible、CFEngine、Chef、Puppet、Salt などのシステムです。OpenStack を使用し" -"て、OpenStack をデプロイすることも可能です。これは、TripleO (OpenStack 上の " -"OpenStack) という愛称で呼ばれています。PuppetChef" - -msgid "" -"This instructs rsyslog to send all logs to the IP listed. In this example, " -"the IP points to the cloud controller." -msgstr "" -"これは、rsyslogに全てのログを指定したIPアドレスに送るように命令しています。こ" -"の例では、IPアドレスはクラウドコントローラーを指しています。" - -msgid "" -"This is a closed network that is not publicly routable and is simply used as " -"a private, internal network for traffic between virtual machines in " -"OpenStack, and between the virtual machines and the network nodes that " -"provide l3 routes out to the public network (and floating IPs for " -"connections back in to the VMs). Because this is a closed network, we are " -"using a different address space to the others to clearly define the " -"separation. Only Compute and OpenStack Networking nodes need to be connected " -"to this network." 
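-
-# A minimal sketch of the host-aggregate partitioning described earlier;
-# the aggregate name, zone, and host are illustrative:
-#
-#   $ nova aggregate-create fast-storage az1
-#   $ nova aggregate-add-host 1 c01.example.com   (1 = the ID returned above)
-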
-msgstr "" -"これは、パブリックにはルーティングできない閉じたネットワークで、単に " -"OpenStack 内の仮想マシン間のトラフィックや、パブリックネットワークへの外向き" -"の L3 ルート (および仮想マシンに戻る接続のための Floating IP) を提供するネッ" -"トワークノードと仮想マシンと間 のトラフィックのためのプライベートの内部ネット" -"ワークに使用されます。このネットワークは閉じているため、他とは異なるアドレス" -"空間を使用して、その区別を明確に定義しています。このネットワークに接続する必" -"要があるのは、Compute ノードと OpenStack Networking ノードのみです。" - -msgid "" -"This is an ongoing and hot topic in networking circles especially with the " -"raise of virtualization and virtual switches." -msgstr "" -"これはネットワーク業界で進行中で話題のトピックである(特に仮想マシンと仮想ス" -"イッチで発生する)。" - -msgid "" -"This is the first half of the equation. To get flavor types that are " -"guaranteed a particular ratio, you must set the extra_specs in the flavor type to the key-value pair you want to match in the " -"aggregate. For example, if you define extra_specscpu_allocation_ratio to \"1.0\", then " -"instances of that type will run in aggregates only where the metadata key " -"cpu_allocation_ratio is also defined as \"1.0.\" In " -"practice, it is better to define an additional key-value pair in the " -"aggregate metadata to match on rather than match directly on " -"cpu_allocation_ratio or " -"core_allocation_ratio. This allows better " -"abstraction. For example, by defining a key overcommit and setting a value of \"high,\" \"medium,\" or \"low,\" you " -"could then tune the numeric allocation ratios in the aggregates without also " -"needing to change all flavor types relating to them." -msgstr "" -"これは前半部分です。特定の比率を保証するフレーバー種別を取得するには、フレー" -"バー種別の extra_specs をアグリゲートでマッチする key-" -"value ペアに設定する必要があります。たとえば、extra_specscpu_allocation_ratio を 1.0 に定義すると、そ" -"の種別のインスタンスは、メタデータキー cpu_allocation_ratio も 1.0 と定義されているアグリゲートのみで実行されます。実際は、 " -"cpu_allocation_ratio または " -"core_allocation_ratio で直接マッチさせるのではなく、" -"マッチするアグリゲートメタデータに追加の key-value ペアを定義すると良いでしょ" -"う。これにより抽象化が改善されます。たとえば、overcommit キーを定義して、高、中、低の値を設定することで、関連するフレーバー" -"種別をすべて変更する必要なしに、アグリゲートの割合比を調節することができま" -"す。" - -msgid "This is useful, as logs from c02.example.com go to:" -msgstr "c02.example.comから送られたログはこちらに保管されます。" - -msgid "" -"This leaves you with an important point of decision when designing your " -"cloud. OpenStack Networking is robust enough to use with a small number of " -"limitations (performance issues in some scenarios, only basic high " -"availability of layer 3 systems) and provides many more features than " -"nova-network. However, if you do not have the more " -"complex use cases that can benefit from fuller software-defined networking " -"capabilities, or are uncomfortable with the new concepts introduced, " -"nova-network may continue to be a viable option for the " -"next 12 months." -msgstr "" -"これは、クラウドを設計するときに、重要な判断ポイントを残します。OpenStack " -"Networking は、いくつかのシナリオにおける性能問題、L3 システムの基本的な高可" -"用性のみなど、少しの制限を持ちますが、十分使用できる堅牢さを持ちます。" -"nova-network よりも多くの機能を提供します。しかしながら、" -"より完全な SDN の機能から利益を得る、より複雑なユースケースを持たない場合、ま" -"たは新しく導入された概念になじめない場合、nova-network は" -"次の 12 か月間の主要な選択肢であり続けるかもしれません。" - -msgid "" -"This list of open source file-level shared storage solutions is not " -"exhaustive; other open source solutions exist (MooseFS). Your organization " -"may already have deployed a file-level shared storage solution that you can " -"use." -msgstr "" -"オープンソースのファイルレベルの共有ストレージソリューションに関する一覧は完" -"全ではありません。その他のオープンソースソリューションも存在します " -"(MooseFS)。ユーザーの所属組織では、すでにユーザーが使用できるように、ファイル" -"レベルの共有ストレージソリューションがデプロイされている可能性があります。" - -msgid "" -"This list will change between releases, so please refer to your " -"configuration guide for up-to-date information." 
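-
-# A sketch of the "overcommit" key matching described above, assuming the
-# AggregateInstanceExtraSpecsFilter is enabled, aggregate ID 1 exists, and
-# the flavor name is illustrative:
-#
-#   $ nova aggregate-set-metadata 1 overcommit=low
-#   $ nova flavor-key m1.small.low set overcommit=low
-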
-msgstr "" -"上記のリストはリリース間で変更されることもあります。最新の情報については設定" -"ガイドを参照してください。" - -msgid "" -"This may be a dnsmasq and/or nova-network related issue. " -"(For the preceding example, the problem happened to be that dnsmasq did not " -"have any more IP addresses to give away because there were no more fixed IPs " -"available in the OpenStack Compute database.)" -msgstr "" -"これはdnsmasqの、もしくはdnsmasqとnova-network両方の問題で" -"す。(例えば上記では、OpenStack Compute データベース上に利用可能な固定IPがな" -"く、dnsmasqがIPアドレスを払い出せない問題が発生しています)" - -msgid "This might print the error and cause of the problem." -msgstr "" -"これにより、エラーと問題の原因が表示されるかもしれません。 " - -msgid "This network is a combination of: " -msgstr "このネットワークは、以下の要素の組み合わせです: " - -msgid "" -"This network is connected to the controller nodes so users can access the " -"OpenStack interfaces, and connected to the network nodes to provide VMs with " -"publicly routable traffic functionality. The network is also connected to " -"the utility machines so that any utility services that need to be made " -"public (such as system monitoring) can be accessed." -msgstr "" -"このネットワークは、ユーザーが OpenStack インターフェースにアクセスできるよう" -"するためにコントローラーノードに接続され、パブリックでルーティング可能なトラ" -"フィックの機能を仮想マシンに提供するためにネットワークノードに接続されます。" -"また、このネットワークは、公開する必要のある任意のユーティリティサービス (シ" -"ステムの監視など) にアクセスできるようにするために、ユーティリティマシンにも" -"接続されます。" - -msgid "" -"This network is used for OpenStack management functions and traffic, " -"including services needed for the provisioning of physical nodes " -"(pxe, tftp, kickstart), traffic between various OpenStack node types using OpenStack APIs " -"and messages (for example, nova-compute talking to " -"keystone or cinder-volume talking to " -"nova-api), and all traffic for storage data to the " -"storage layer underneath by the Gluster protocol. All physical nodes have at " -"least one network interface (typically eth0) in this " -"network. This network is only accessible from other VLANs on port 22 (for " -"ssh access to manage machines)." -msgstr "" -"このネットワークは、OpenStack の管理機能およびトラフィックに使用されます。こ" -"れには、物理ノードのプロビジョニングに必要なサービス (pxetftpkickstart)、OpenStack " -"API およびメッセージを使用する様々な OpenStack ノードタイプの間のトラフィッ" -"ク (例: nova-compute から keystone への" -"通信や cinder-volume から nova-api へ" -"の通信など)、 Gluster プロトコルによる配下のストレージ層へのストレージデータ" -"の全トラフィックなどが含まれます。全物理ノードで、少なくとも 1 つのネットワー" -"クインターフェース (通常は eth0) がこのネットワークに設定" -"されます。他の VLAN からのこのネットワークへのアクセスは、ポート 22 でのみで" -"可能です (マシンを管理するための ssh アクセス)。" - -msgid "" -"This output shows that an instance named was created from " -"an Ubuntu 12.04 image using a flavor of m1.small and is " -"hosted on the compute node c02.example.com." -msgstr "" -"この出力は、 という名前のインスタンスが Ubuntu 12.04 イメージ" -"から m1.small のフレーバーで作成され、コンピュートノード " -"c02.example.com でホストされていることを示しています。" - -msgid "" -"This output shows that two networks are configured, each network containing " -"255 IPs (a /24 subnet). The first network has been assigned to a certain " -"project, while the second network is still open for assignment. You can " -"assign this network manually; otherwise, it is automatically assigned when a " -"project launches its first instance." 
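-
-# The network listing interpreted above can be produced with, for example:
-#
-#   $ nova-manage network list
-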
-msgstr "" -"この出力は、2 つのネットワークが設定されており、各ネットワークには 255 の IP " -"アドレス (/24 サブネットが 1 つ) が含まれていることを示しています。1 番目の" -"ネットワークは、特定のプロジェクトに割り当て済みですが、2 番目のネットワーク" -"はまだ割り当てができる状態です。このネットワークは手動で割り当てることができ" -"ます。手動での割り当てを行わなかった場合には、プロジェクトで最初のインスタン" -"スが起動されたときに自動で割り当てられます。" - -msgid "" -"This particular example architecture has been upgraded from Grizzly to " -"Havana and tested in production environments where many public IP addresses " -"are available for assignment to multiple instances. You can find a second " -"example architecture that uses OpenStack Networking (neutron) after this " -"section. Each example offers high availability, meaning that if a particular " -"node goes down, another node with the same configuration can take over the " -"tasks so that the services continue to be available.HavanaGrizzly" -msgstr "" -"この特定のアーキテクチャー例は、Grizzly から Havana にアップグレードされてお" -"り、複数のインスタンスに割り当てるためのパブリック IP アドレスが多数利用可能" -"な本番環境でテスト済みです。このセクションの次には、OpenStack Networking " -"(neutron) を使用する第 2 のアーキテクチャー例を紹介しています。各例では高可用" -"性を提供しています。これは、特定のノードが停止した場合には同じ設定の別のノー" -"ドがタスクを引き継いでサービスを引き続き提供できることを意味します。" -"HavanaGrizzly" - -msgid "" -"This past Valentine's Day, I received an alert that a compute node was no " -"longer available in the cloudmeaning, showed this particular node with a " -"status of XXX." -msgstr "" -"この前のバレンタインデーに、クラウド中にあるコンピュートノードが最早動いてい" -"ないとの警告を受け取った。つまり、出力で、この特定のノードの状態が XXX になっ" -"ていた。" - -msgid "" -"This project continues to improve and you may consider using it for " -"greenfield deployments, though " -"according to the latest user survey results it remains to see widespread " -"uptake." -msgstr "" -"このプロジェクトは改善を続けています。greenfield の導入のために使用することを検討する可能性があります。最新" -"のユーザー調査の結果によると、大幅な理解が得られたままです。" - -msgid "" -"This request took over two minutes to process, but executed quickly on " -"another co-existing Grizzly deployment using the same hardware and system " -"configuration." -msgstr "" -"このリクエストは、処理に 2 分以上かかりました。しかし、同じハードウェアとシス" -"テム設定を使用している、他の一緒に動いている Grizzly 環境は迅速に実行されまし" -"た。" - -msgid "" -"This script dumps the entire MySQL database and deletes any backups older " -"than seven days." -msgstr "" -"このスクリプトは MySQL データベース全体をダンプし、7日間より古いバックアップ" -"を削除します。" - -msgid "" -"This script is specific to a certain OpenStack installation and must be " -"modified to fit your environment. However, the logic should easily be " -"transferable." -msgstr "" -"このスクリプトは特定のOpenStackインストール環境向けなので、自身の環境に適用す" -"る際には変更しなくてはいけませんが、ロジックは簡単に変更できるでしょう。" - -msgid "" -"This section covers specific examples of configuration options you might " -"consider tuning. It is by no means an exhaustive list." -msgstr "" -"この節では、調整を検討した方がよい設定オプションの個別の具体例を扱います。決" -"して完全なリストではありません。" - -msgid "" -"This section describes the process to upgrade a basic OpenStack deployment " -"based on the basic two-node architecture in the OpenStack " -"Installation Guide. All nodes must run a supported " -"distribution of Linux with a recent kernel and the current release packages." 
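-
-# One possible client-side rsyslog rule matching the forwarding described
-# earlier (192.0.2.20 stands in for the cloud controller's IP; @ sends via
-# UDP to the default port 514):
-#
-#   *.* @192.0.2.20
-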
-msgstr "" -"このセクションは、OpenStack インストールガイドにある、基本的な 2 ノードアーキテクチャーを参照しています。すべてのノー" -"ドでは、サポートする Linux ディストリビューションで、最新のカーネルとカレント" -"のリリースパッケージが実行されている必要があります。" - -msgid "" -"This section discusses which files and directories should be backed up " -"regularly, organized by service.file " -"systemsbackup/recovery ofbackup/recoveryfile systems" -msgstr "" -"このセクションは、サービスにより整理される、定期的にバックアップすべきファイ" -"ルやディレクトリーについて議論します。file systemsbackup/recovery ofbackup/recoveryfile systems" - -msgid "" -"This section gives you a breakdown of the different nodes that make up the " -"OpenStack environment. A node is a physical machine that is provisioned with " -"an operating system, and running a defined software stack on top of it. " -" provides node descriptions and " -"specifications.OpenStack Networking " -"(neutron)detailed description of" -msgstr "" -"本セクションでは、OpenStack 環境を構成する異なるノードの内訳を記載します。 " -"ノードとは、オペレーティングシステムがプロビジョニングされた物理マシンで、そ" -"の上に定義済みソフトウェアスタックを実行します。 には、ノードの説明と仕様を記載しています。OpenStack Networking (neutron)詳細" -"" - -msgid "" -"This section lists several of the most important Use Cases that demonstrate " -"the main functions and abilities of Shared File Systems service: " -"" -msgstr "" -"このセクションは、主要な Shared File Systems サービスの機能を説明するために、" -"いくつかの重要なユースケースを示します。 " - -msgid "" -"This section provides a high-level overview of the differences among the " -"different commodity storage back end technologies. Depending on your cloud " -"user's needs, you can implement one or many of these technologies in " -"different combinations:storagecommodity storage" -msgstr "" -"このセクションでは、さまざまな商用ストレージバックエンドテクノロジーにおける" -"相違点をカンタンにまとめます。クラウドユーザーのニーズに合わせて、1 つまたは" -"多数のテクノロジーを組み合わせて実装することができます。storagecommodity storage" - -msgid "" -"This section provides guidance for rolling back to a previous release of " -"OpenStack. All distributions follow a similar procedure.rollbacksprocess forupgradingrolling back failures" -msgstr "" -"このセクションは、前のリリースの OpenStack にロールバックするためのガイドを提" -"供します。すべてのディストリビューションは、同様の手順に従います。rollbacksprocess forupgradingrolling back failures" - -msgid "" -"This section was intended as a brief introduction to some of the most useful " -"of many OpenStack commands. For an exhaustive list, please refer to the " -"Admin User " -"Guide, and for additional hints and tips, see the Cloud Admin Guide. " -"We hope your users remain happy and recognize your hard work! (For more hard " -"work, turn the page to the next chapter, where we discuss the system-facing " -"operations: maintenance, failures and debugging.)" -msgstr "" -"このセクションは、多くの OpenStack コマンドの中から、いくつかの最も有用なもの" -"を簡単に説明することを意図していました。膨大なリストは、管理者ユーザーガイドを参照してください。さらなるヒントやコツは、Cloud Admin Guide を参照してく" -"ださい。ユーザーが幸せなままであり、あなたのハードワークに気づくことを願いま" -"す。(さらなるハードワークは、次の章にページを進んでください。システムが直面" -"する運用、メンテナンス、障害、デバッグについて議論しています。)" - -msgid "" -"This stage is about checking that a bug is real and assessing its impact. " -"Some of these steps require bug supervisor rights (usually limited to core " -"teams). If the bug lacks information to properly reproduce or assess the " -"importance of the bug, the bug is set to:" -msgstr "" -"このステップでは、バグが実際に起こるかのチェックやその重要度の判定が行われま" -"す。これらのステップを行うには、バグ管理者権限が必要なものがあります (通常、" -"バグ管理権限はコア開発者チームだけが持っています)。バグ報告に、バグを再現した" -"りバグの重要度を判定したりするのに必要な情報が不足している場合、バグは" - -msgid "" -"This step completes the rollback procedure. You should remove the upgrade " -"release repository and run to prevent accidental upgrades " -"until you solve whatever issue caused you to roll back your environment." 
-msgstr "" -"この手順は、ロールバックの手順を完了します。アップグレードするリリースのリポ" -"ジトリーを作成し、 を実行して、環境をロールバックすることに" -"なった問題を解決するまで、偶然アップグレードすることを防ぎます。" - -msgid "" -"This tells us the currently installed version of the package, newest " -"candidate version, and all versions along with the repository that contains " -"each version. Look for the appropriate Grizzly version—" -"1:2013.1.4-0ubuntu1~cloud0 in this case. The process of " -"manually picking through this list of packages is rather tedious and prone " -"to errors. You should consider using the following script to help with this " -"process:" -msgstr "" -"これにより、パッケージの現在インストールされているバージョン、候補となる最新" -"バージョン、各バージョンを含むリポジトリーにある全バージョンを把握できます。" -"適切な Grizzly バージョン、この場合 1:2013.1.4-0ubuntu1~cloud0 " -"を探します。このパッケージ一覧から手動で探し当てる手順は、むしろ退屈で間違え" -"やすいです。この手順を支援するために、以下のスクリプトを使用することを検討す" -"べきです。" - -msgid "" -"This user data can be put in a file on your local system and then passed in " -"at instance creation with the flag --user-data <user-data-file>. For example:" -msgstr "" -"このユーザーデータは、ローカルマシンのファイルに保存され、--user-data " -"<user-data-file> フラグを用いてインスタンスの生成時に渡されま" -"す。たとえば:" - -msgid "This user was the only user on the node (new node)" -msgstr "このユーザはそのノード(新しいノード)上の唯一のユーザだった。" - -msgid "" -"Thoroughly review the release notes to learn about new, updated, and " -"deprecated features. Find incompatibilities between versions." -msgstr "" -"全体的にリ" -"リースノートを確認して、新機能、更新された機能、非推奨の機能について調" -"べてください。バージョン間の非互換を確認してください。" - -msgid "" -"Though many successfully use the various python-*client code as an effective " -"SDK for interacting with OpenStack, consistency between the projects and " -"documentation availability waxes and wanes. To combat this, an effort to " -"improve the experience has started. Cross-project development efforts " -"in OpenStack have a checkered history, such as the unified client project having several false starts. However, the early signs for the SDK " -"project are promising, and we expect to see results during the Juno cycle." -"" -msgstr "" -"OpenStack と通信するための効率的な SDK として、さまざまな python-*client コー" -"ドを使用してとても成功していますが、プロジェクト間の整合性とドキュメントの入" -"手可能性は満ち欠けがあります。これを取り除くために、エクスペリエンスを改善" -"するための取り組みが開始されました。OpenStack におけるクロスプロジェク" -"トの取り組みは、いくつかの失敗から始まった統一クライアントのプロジェクトな" -"ど、波乱の歴史があります。しかしながら、SDK プロジェクトの初期の目安が約束さ" -"れ、Juno サイクル中に結果を見ることを期待しています。" - -msgid "" -"Through nova-network or neutron, " -"OpenStack Compute automatically manages iptables, including forwarding " -"packets to and from instances on a compute node, forwarding floating IP " -"traffic, and managing security group rules. In addition to managing the " -"rules, comments (if supported) will be inserted in the rules to help " -"indicate the purpose of the rule. iptablestroubleshootingiptables" -msgstr "" -"nova-networkneutron に関わらず、" -"OpenStack Compute は自動的に iptables を管理します。コンピュートノードにある" -"インスタンスとのパケット転送、Floating IP 通信の転送、セキュリティグループ" -"ルールの管理など。ルールの管理に加えて、(サポートされる場合) コメントがルール" -"に挿入され、ルールの目的を理解しやすくします。iptablestroubleshootingiptables" - -msgid "" -"Tim Bell from CERN gave us feedback on the outline before we started and " -"reviewed it mid-week." -msgstr "" -"CERNの Tim Bell は、私たちが作業を開始する前に、その概要についてフィードバッ" -"クを与えてくれて、週の半ばにはレビューをしてくれました。" - -msgid "" -"Time synchronization is a critical element to ensure continued operation of " -"OpenStack components. 
Correct time is necessary to avoid errors in instance "
-"scheduling, replication of objects in the object store, and even matching "
-"log timestamps for debugging.networksNetwork Time Protocol (NTP)"
-msgstr ""
-"時刻同期はOpenStackコンポーネントを継続運用する上で非常に重要な要素です。正確"
-"な時刻は、インスタンスのスケジューリングやオブジェクトストア内のオブジェクト"
-"の複製におけるエラーを回避し、さらにはデバッグ時にログのタイムスタンプを突き"
-"合わせるために必要です。ネッ"
-"トワークネットワークタイムプロトコル (NTP)"
-
-msgid ""
-"To access the instance's disk (/var/lib/nova/instances/instance-"
-"xxxxxx/disk), use the following steps:"
-msgstr ""
-"インスタンスのディスク (/var/lib/nova/instances/instance-"
-"xxxxxx/disk) にアクセスするには、以下の"
-"手順を実行します。"
-
-msgid "To add a DEBUG logging statement, you would do:"
-msgstr "DEBUGログステートメントを追加するには次のようにします。"
-
-msgid ""
-"To add a project through the command line, you must use the OpenStack "
-"command line client."
-msgstr ""
-"コマンドラインからプロジェクトを追加する場合、OpenStack コマンドラインクライ"
-"アントを使用する必要があります。"
-
-msgid ""
-"To add logging statements, the following line should be near the top of the "
-"file. For most files, these should already be there:"
-msgstr ""
-"ログステートメントを追加するには、次の行をファイルの先頭に置きます。ほとんど"
-"のファイルでは、これらは既に存在します。"
-
-msgid ""
-"To add new volumes, you need only a name and a volume size in gigabytes. "
-"Either put these into the create volume web form or use "
-"the command line:"
-msgstr ""
-"新しいボリュームを追加する際に必要なのは、名前とギガバイト単位のボリューム容"
-"量だけです。これらを ボリュームの作成 Web フォームに記入"
-"します。または、コマンドラインを使用します:"
-
-msgid "To apply or update account quotas on a project:"
-msgstr "プロジェクトのアカウントのクォータを適用または更新します。"
-
-msgid ""
-"To associate a key with an instance on boot, add --key_name mykey to your command line. For example:"
-msgstr ""
-"起動時にインスタンスに鍵を関連付けるには、コマンドラインに --"
-"key_name mykey を追加します。例えば:"
-
-msgid ""
-"To associate or disassociate a floating IP with a server from the command "
-"line, use the following commands:"
-msgstr ""
-"以下のコマンドを使用して、コマンドラインからサーバーに Floating IP アドレスを"
-"関連付けまたは関連付け解除します。"
-
-msgid ""
-"To avoid this situation, create a highly available cloud controller cluster. "
-"This is outside the scope of this document, but you can read more in the "
-"OpenStack "
-"High Availability Guide."
-msgstr ""
-"この状況を避けるために、高可用なクラウドコントローラークラスターを作成しま"
-"す。このことは、このドキュメントの範囲外ですが、OpenStack High Availability Guide に情報があります。"
-
-msgid ""
-"To begin, configure all OpenStack components to log to syslog in addition to "
-"their standard log file location. Also configure each component to log to a "
-"different syslog facility. This makes it easier to split the logs into "
-"individual components on the central server:rsyslog"
-msgstr ""
-"まず始めに、全ての OpenStack コンポーネントのログを、標準のログファイルの場所"
-"に加えて syslog にも出力するように設定します。また、各コンポーネントが異なる "
-"syslog ファシリティになるように設定します。これによりログサーバー上で、個々の"
-"コンポーネントのログを分離しやすくなります。rsyslog"
-
-msgid ""
-"To boot normally from an image and attach block storage, map to a device "
-"other than vda. You can find instructions for launching an instance and "
-"attaching a volume to the instance and for copying the image to the attached "
-"volume in the OpenStack End User Guide."
-msgstr ""
-"通常通りイメージから起動し、ブロックストレージを接続するために、vda 以外のデ"
-"バイスを対応付けます。OpenStack エンドユーザーガイド"
-"に、インスタンスを起動して、それにボリュームを接続する方法、接続されたボ"
-"リュームにイメージをコピーする方法が説明されています。"
-
-msgid ""
-"To capture packets from the patch-tun internal interface on "
-"integration bridge, br-int:"
-msgstr ""
-"統合ブリッジ br-int の内部インターフェース patch-tun からのパケットをキャプチャーする方法。"
-
-msgid ""
-"To create a development environment, you can use DevStack. 
DevStack is " -"essentially a collection of shell scripts and configuration files that " -"builds an OpenStack development environment for you. You use it to create " -"such an environment for developing a new feature.customizationdevelopment environment " -"creation fordevelopment environments, creatingDevStackdevelopment environment creation" -msgstr "" -"開発環境を作成するために、DevStack を使用できます。DevStack は本質的に、" -"OpenStack の開発環境を構築する、シェルスクリプトと設定ファイルの塊です。新し" -"い機能を開発するために、そのような環境を構築するために使用します。customizationdevelopment " -"environment creation fordevelopment environments, creatingDevStackdevelopment environment creation" - -msgid "To create a project through the OpenStack dashboard:" -msgstr "OpenStack Dashboard でプロジェクトを作成します。" - -msgid "" -"To create a scheduler, you must inherit from the class nova.scheduler." -"driver.Scheduler. Of the five methods that you can override, you " -"must override the two methods marked with an asterisk " -"(*) below:" -msgstr "" -"スケジューラーを作成するには、nova.scheduler.driver.Scheduler ク" -"ラスを継承しなければなりません。オーバーライド可能な5つのメソッドのうち、以" -"下のアスタリスク (*) で示される2つのメソッドをオーバーライドしなけ" -"ればなりません。" - -msgid "To create a user, you need the following information:" -msgstr "ユーザーを作成するには、以下の情報が必要です。" - -msgid "" -"To create security group rules for a cluster of instances, use RemoteGroups." -msgstr "" -"インスタンスのクラスター向けにセキュリティグループのルールを作成するために、" -"リモートグループ を使用します。" - -msgid "" -"To create security group rules for a cluster of instances, you want to use " -"SourceGroups." -msgstr "" -"インスタンスのクラスター向けにセキュリティグループのルールを作成するために、" -"SourceGroups を使用したいかもしれませ" -"ん。" - -msgid "To create the middleware and plug it in through Paste configuration:" -msgstr "ミドルウェアを作成して Paste の環境設定を通して組み込むためには:" - -msgid "To create the scheduler and plug it in through configuration:" -msgstr "スケジューラーを作成して、設定を通して組み込むためには:" - -msgid "" -"To deal with the \"dirty\" buffer issue, we recommend using the sync command " -"before snapshotting:" -msgstr "" -"「ダーティー」バッファーの問題を解決するために、スナップショットの前に sync " -"コマンドを使用することを推奨します。" - -msgid "" -"To delete an image,imagesdeleting just execute:" -msgstr "" -"イメージを削除する場合、imagesdeleting 次のようにします。" - -msgid "" -"To delete instances from the dashboard, select the Terminate " -"instance action next to the instance on the Instances page. From the command line, do this:" -msgstr "" -"ダッシュボードからインスタンスを削除するには、インスタンスページにおいてインスタンスの隣にあるインスタンスの終了アクションを選択します。または、コマンドラインから次のとおり実行しま" -"す:" - -msgid "" -"To demonstrate customizing OpenStack, we'll create an example of a Compute " -"scheduler that randomly places an instance on a subset of hosts, depending " -"on the originating IP address of the request and the prefix of the hostname. " -"Such an example could be useful when you have a group of users on a subnet " -"and you want all of their instances to start within some subset of your " -"hosts." -msgstr "" -"OpenStack のカスタマイズをデモするために、リクエストの送信元IPアドレスとホス" -"ト名のプレフィックスに基づいてインスタンスを一部のホストにランダムに配置する" -"ような Compute のスケジューラーの例を作成します。この例は、1つのユーザのグ" -"ループが1つのサブネットにおり、インスタンスをホスト群の中の一部のサブネット" -"で起動したい場合に有用です。" - -msgid "" -"To deploy your storage by using only commodity hardware, you can use a " -"number of open-source packages, as shown in ." -msgstr "" -"で記載されているように、コモディティハー" -"ドウェアを使用してストレージをデプロイする場合、オープンソースのパッケージを" -"使用することができます。" - -msgid "" -"To design, deploy, and configure OpenStack, administrators must understand " -"the logical architecture. 
A diagram can help you envision all the integrated "
-"services within OpenStack and how they interact with each other.modules, types ofOpenStackmodule types in"
-msgstr ""
-"OpenStack の設計、デプロイ、および構成を行うにあたって、管理者は論理アーキテ"
-"クチャーを理解する必要があります。OpenStack 内で統合される全サービスおよび"
-"それらがどのように相互作用するかについての構想を立てるには、図が役立ちます。"
-"モジュール、タイプOpenStackモジュールタイプ"
-
-msgid ""
-"To determine the potential features going into future releases, or to look "
-"at features implemented previously, take a look at the existing blueprints "
-"such as OpenStack "
-"Compute (nova) Blueprints, OpenStack Identity (keystone) Blueprints, "
-"and release notes."
-msgstr ""
-"将来のリリースで検討されている機能を確認したり、過去に実装された機能を見るに"
-"は、OpenStack "
-"Compute (nova) Blueprints, OpenStack Identity (keystone) Blueprints など"
-"の既存の Blueprint やリリースノートを見て下さい。"
-
-msgid ""
-"To disable DEBUG-level logging, edit /etc/nova/nova.conf as follows:"
-msgstr ""
-"DEBUG レベルのロギングを無効にするには、/etc/nova/nova.conf を以下のように編集します。"
-
-msgid ""
-"To discover how API requests should be structured, read the OpenStack API Reference. To chew through the responses using jq, see the jq Manual."
-msgstr ""
-"API 要求の構成方法については、OpenStack API Reference を参照してください。jq を使っ"
-"て応答を解析する方法は jq Manual を参照してください。"
-
-msgid ""
-"To discover which internal VLAN tag is in use for a GRE tunnel by using the "
-"ovs-ofctl command:"
-msgstr ""
-"ovs-ofctl コマンドを使用することにより、GRE トンネル向けに"
-"使用されている内部 VLAN タグを検索します。"
-
-msgid ""
-"To discover which internal VLAN tag is in use for a given external VLAN by "
-"using the ovs-ofctl command:"
-msgstr ""
-"ovs-ofctl コマンドを使用することにより、外部 VLAN 向けに使"
-"用されている内部 VLAN タグを検索します。"
-
-msgid ""
-"To do this, generate a list of instance UUIDs that are hosted on the failed "
-"node by running the following query on the nova database:"
-msgstr ""
-"これを実行するために、nova データベースにおいて以下のクエリーを実行することに"
-"より、故障したノードにおいてホストされているインスタンスの UUID の一覧を生成"
-"します。"
-
-msgid ""
-"To effectively disable the libvirt live snapshotting, until the problem is "
-"resolved, add the below setting to nova.conf."
-msgstr ""
-"問題が解決するまで、libvirt のライブスナップショットを事実上無効化するため"
-"に、以下の設定を nova.conf に追加します。"
-
-msgid ""
-"To enable nova to send notifications, add the following to "
-"nova.conf:"
-msgstr ""
-"nova で通知の送信を有効化するには次の行を nova.conf に追加します。"
-
-msgid ""
-"To enable this feature, edit the /etc/glance/glance-api.conf file, and under the [DEFAULT] section, add:"
-msgstr ""
-"この機能を有効にするには、/etc/glance/glance-api.conf "
-"ファイルを編集して [DEFAULT] セクションに以下を追加します。"
-
-msgid ""
-"To ensure that important services have written their contents to disk (such "
-"as databases), we recommend that you read the documentation for those "
-"applications to determine what commands to issue to have them sync their "
-"contents to disk. If you are unsure how to do this, the safest approach is "
-"to simply stop these running services normally."
-msgstr "" -"(データベースのような) 重要なサービスがコンテンツをディスクに書き込んだことを" -"保証するために、それらのアプリケーションのドキュメントを読んで、コンテンツを" -"ディスクに同期させるためにどのコマンドを発行する必要があるかを調べることを推" -"奨します。ディスクに同期させるための方法がはっきり分からない場合、最も安全な" -"方法は単にこれらの実行中のサービスを通常通り停止することです。" - -msgid "" -"To examine the secondary or ephemeral disk, use an alternate mount point if " -"you want both primary and secondary drives mounted at the same time:" -msgstr "" -"セカンダリディスクや一時ディスクを調査する際に、プライマリディスクとセカンダ" -"リディスクを同時にマウントしたければ、別のマウントポイントを使用してくださ" -"い。" - -msgid "To find out whether any floating IPs are available in your cloud, run:" -msgstr "" -"クラウドに利用可能な Floating IP アドレスがあるかどうかを確認するには、以下の" -"コマンドを実行します。" - -msgid "" -"To freeze the volume in preparation for snapshotting, you would do the " -"following, as root, inside the instance:" -msgstr "" -"スナップショットの準備においてボリュームをフリーズするには、インスタンスの中" -"で root として次のとおり実行します:" - -msgid "" -"To install (or upgrade) a package from the PyPI archive with pip, command-line toolsinstallingas root:" -msgstr "" -"pip を使用して PyPI アーカイブからパッケージをインストール (またはアップグ" -"レード) するには、コマンドラインツール" -"インストールroot として以下のコ" -"マンドを実行します。" - -msgid "" -"To launch an instance, you need to select an image, a flavor, and a name. " -"The name needn't be unique, but your life will be simpler if it is because " -"many tools will use the name in place of the UUID so long as the name is " -"unique. You can start an instance from the dashboard from the " -"Launch Instance button on the Instances page or by selecting the Launch Instance " -"action next to an image or snapshot on the Images page." -"instancesstarting" -msgstr "" -"インスタンスを起動するには、イメージ、フレーバーおよび名前を選択する必要があ" -"ります。名前は一意である必要がありませんが、名前が一意である限りは、多くの" -"ツールが UUID の代わりに名前を使用できるので、シンプルにできます。インスタン" -"スの起動は、ダッシュボードにおいて、インスタンスページに" -"ある起動ボタン、またはイメージとスナップ" -"ショットページにあるイメージまたはスナップショットの隣にある" -"起動アクションから実行できます。instancesstarting" - -msgid "" -"To list the bridges on a system, use ovs-vsctl list-br. " -"This example shows a compute node that has an internal bridge and a tunnel " -"bridge. VLAN networks are trunked through the eth1 network " -"interface:" -msgstr "" -"ovs-vsctl list-br を使用して、システムにあるブリッジを一覧" -"表示します。この例は、内部ブリッジと統合ブリッジを持つコンピュートノードを表" -"します。VLAN ネットワークが eth1 ネットワークインターフェース経" -"由でトランクされます。" - -msgid "" -"To make working with subsequent requests easier, store the token in an " -"environment variable:" -msgstr "次の要求での作業をより簡単に行うには、環境変数にトークンを保管します。" - -msgid "" -"To obtain snapshots of a Windows VM these commands can be scripted in " -"sequence: flush the filesystems, freeze the filesystems, snapshot the " -"filesystems, then unfreeze the filesystems. As with scripting similar " -"workflows against Linux VMs, care must be used when writing such a script to " -"ensure error handling is thorough and filesystems will not be left in a " -"frozen state." 
-msgstr "" -"Windows 仮想マシンのスナップショットを取得する場合、以下のコマンドを連続して" -"実行するスクリプト化できます。ファイルシステムをフラッシュする、ファイルシス" -"テムをフリーズする、ファイルシステムのスナップショットを取得する、ファイルシ" -"ステムをフリーズ解除する。Linux 仮想マシンのワークフローと同じように、そのよ" -"うなスクリプトを書くときに、注意して使用すべきです。エラー処理を徹底して、" -"ファイルシステムがフリーズ状態のままにならないようにします。" - -msgid "To perform the rollback" -msgstr "ロールバック方法" - -msgid "To perform this action from command line, run the following command:" -msgstr "" -"このアクションをコマンドラインから実行するには、以下のコマンドを実行します" - -msgid "" -"To plug this middleware into the swift Paste pipeline, you edit one " -"configuration file, /etc/swift/proxy-server.conf:" -msgstr "" -"このミドルウェアを swift Paste のパイプラインに組み込むには、設定ファイル " -"/etc/swift/proxy-server.conf を編集します。" - -msgid "" -"To plug this scheduler into nova, edit one configuration file, /" -"etc/nova/nova.conf:" -msgstr "" -"このスケジューラーを nova に追加するために、設定ファイル /etc/nova/" -"nova.conf を編集します。" - -msgid "" -"To prevent system capacities from being exhausted without notification, you " -"can set up quotas. Quotas are " -"operational limits. For example, the number of gigabytes allowed per tenant " -"can be controlled to ensure that a single tenant cannot consume all of the " -"disk space. Quotas are currently enforced at the tenant (or project) level, " -"rather than the user level.quotasuser managementquotas" -msgstr "" -"システムの容量が通知なしに完全に消費されないように、クォータ を設定することができます。クォータとは、運用上" -"の制限値です。たとえば、各テナントに許容される容量 (GB) を制御して、単一のテ" -"ナントで全ディスク容量すべてが消費されないようにします。現在、ユーザーレベル" -"ではなく、テナント (またはプロジェクト) レベルで、クォータを有効にすることが" -"できます。" -"クォータユーザー" -"管理クォータ" - -msgid "To provide users with a persistent storage mechanism" -msgstr "ユーザに永続的ストレージの仕組みを提供する" - -msgid "" -"To put the EC2 credentials into your environment, source the ec2rc.sh file." -msgstr "" -"EC2 認証情報を環境に適用するには、ec2rc.sh ファイルを元データと" -"します。" - -msgid "To remove the package:" -msgstr "パッケージを削除するには、" - -msgid "" -"To review the documentation before it's published, go to the OpenStack " -"Gerrit server at  and " -"search for project:openstack/openstack-" -"manuals or project:openstack/api-site." -msgstr "" -"公開される前にドキュメントのレビューを行うには、OpenStack Gerrit サーバー " -" に行って、project:openstack/openstack-manuals や project:openstack/api-site を検索して下さい。" - -msgid "To run DevStack on an instance in your OpenStack cloud:" -msgstr "" -"お使いの OpenStack クラウド上のインスタンスで DevStack を動作させるために:" - -msgid "To schedule a group of hosts with common features." 
-msgstr "共通の機能を持ったホストのグループに対してスケジューリングしたい場合" - -msgid "" -"To see a list of projects that have been added to the cloud,projectsobtaining list of " -"currentuser " -"managementlisting usersworking environmentusers and projects run:" -msgstr "" -"クラウドに追加されたプロジェクトの一覧を確認するには、プロジェクト現在の一覧の取得ユーザー管理ユーザーの一覧表示作業環境users and projects以下のコマンドを実行します:" - -msgid "" -"To see a list of running instances,instanceslist of runningworking environmentrunning instances run:" -msgstr "" -"実行中のインスタンスを確認するには、instances実行中の一覧作業環境実行中のインスタンス以下のコマンド" -"を実行します:" - -msgid "To see a list of users, run:" -msgstr "ユーザーのリストを見るためには、" - -msgid "" -"To see whether you are using namespaces, run ip netns:" -msgstr "" -"ip netns を実行して、名前空間を使用しているかどうかを確認" -"します。" - -msgid "" -"To see which bridge the packet will use, run the command: " -msgstr "" -"下記コマンドを実行することで、パケットがどのブリッジを使うか確認できます。" -"" - -msgid "" -"To see which fixed IP networks are configured in your cloud, you can use the " -"nova command-line client to get the IP ranges:networksinspection ofworking " -"environmentnetwork inspection" -msgstr "" -"クラウドでどの Fixed IP ネットワークが設定されているかを確認するには、 " -"nova コマンドラインクライアントを使用して IP アドレスの範" -"囲を取得することができます:ネットワー" -"ク検査作業環境ネットワークの検査" - -msgid "" -"To set a configuration option to zero, include a line such as " -"image_cache_manager_interval=0 in your nova." -"conf file." -msgstr "" -"設定オプションを 0 に設定するには、nova.conf に " -"image_cache_manager_interval=0 のような行を入れてくださ" -"い。" - -msgid "To set up the test environment, you can use one of several methods:" -msgstr "テスト環境をセットアップする場合、いくつかの方法を使用できます。" - -msgid "To share an image or snapshot with another project, do the following:" -msgstr "" -"以下のように、イメージやスナップショットを他のプロジェクトと共有します。" - -msgid "" -"To suit the cloud paradigm, OpenStack itself is designed to be horizontally " -"scalable. Rather than switching to larger servers, you procure more servers " -"and simply install identically configured services. Ideally, you scale out " -"and load balance among groups of functionally identical services (for " -"example, compute nodes or nova-api nodes), that " -"communicate on a message bus." -msgstr "" -"クラウドのパラダイムに合わせるため、OpenStack は水平的にスケーリングできるよ" -"うに設計されています。容量の大きいサーバーに切り替えるのではなく、サーバーを" -"多く調達して同じように設定したサービスをインストールするだけです。理想として" -"は、メッセージバスを通信する、機能的に同じサービス (例: コンピュートノードや " -"nova-api ノード) グループでスケールアウト、負荷分散しま" -"す。" - -msgid "" -"To take advantage of either container quotas or account quotas, your Object " -"Storage proxy server must have container_quotas or " -"account_quotas (or both) added to the [pipeline:main] pipeline. Each quota type also requires its own section in the " -"proxy-server.conf file:" -msgstr "" -"コンテナーのクォータやアカウントのクォータの利点を得るために、Object Storage " -"のプロキシーサーバーが container_quotas や " -"account_quotas (または両方) を [pipeline:main] パイプラインに追加するする必要があります。各クォータの種類は、" -"proxy-server.conf ファイルにそれ自身のセクションも必要と" -"します。" - -msgid "" -"To take the first path, you can modify the OpenStack code directly. Learn " -"how " -"to contribute, follow the code review workflow, make your changes, " -"and contribute them back to the upstream OpenStack project. This path is " -"recommended if the feature you need requires deep integration with an " -"existing project. The community is always open to contributions and welcomes " -"new functionality that follows the feature-development guidelines. 
This path " -"still requires you to use DevStack for testing your feature additions, so " -"this chapter walks you through the DevStack environment.OpenStack communitycustomization " -"and" -msgstr "" -"まず最初に、貢献方法を学び、Code Review Workflow に従って、あ" -"なたの修正をアップストリームの OpenStack プロジェクトへコントリビュートしてく" -"ださい。もし、あなたが必要な機能が既存のプロジェクトと密にインテグレーション" -"する必要がある場合、これが推奨される選択肢です。コミュニティは、いつでも貢献" -"に対して開かれていますし、機能開発ガイドラインに従う新機能を歓迎します。" -"OpenStack communitycustomization and" - -msgid "" -"To this day, the issue (https://bugs.launchpad.net/nova/+bug/832507) " -"doesn't have a permanent resolution, but we look forward to the discussion " -"at the next summit." -msgstr "" -"今日に至るまで、この問題 (https://bugs.launchpad.net/nova/+bug/832507) " -"には完全な解決策がないが、我々は次回のサミットの議論に期待している。" - -msgid "" -"To understand the difference between user data and metadata, realize that " -"user data is created before an instance is started. User data is accessible " -"from within the instance when it is running. User data can be used to store " -"configuration, a script, or anything the tenant wants." -msgstr "" -"ユーザーデータとメタデータの違いを理解するために、インスタンスが起動する前" -"に、ユーザーデータが作成されることに気づいてください。ユーザーデータは、イン" -"スタンスの実行時に、インスタンスの中からアクセスできます。設定、スクリプト、" -"テナントが必要とするものを保存するために使用できます。" - -msgid "" -"To understand the possibilities that OpenStack offers, it's best to start " -"with basic architecture that has been tested in production environments. We " -"offer two examples with basic pivots on the base operating system (Ubuntu " -"and Red Hat Enterprise Linux) and the networking architecture. There are " -"other differences between these two examples and this guide provides reasons " -"for each choice made." -msgstr "" -"OpenStack が提供する可能性を理解するには、確実に信頼できる、本番環境での検証" -"済みの基本アーキテクチャーから開始するのが最善の方法です。本ガイドでは、ベー" -"スオペレーティングシステム (Ubuntu および Red Hat Enterprise Linux) 上に基本" -"ピボットとネットワークアーキテクチャを備えた基本アーキテクチャーの例を 2 つ紹" -"介しています。このガイドは、それぞれの選択理由を提供します。" - -msgid "To update Block Storage quotas for a tenant (project)" -msgstr "プロジェクトの Block Storage クォータの更新方法" - -msgid "" -"To update a default value for a new tenant, update the property in the " -"/etc/cinder/cinder.conf file." -msgstr "" -"新規テナントのクォータのデフォルト値を更新するには、/etc/cinder/" -"cinder.conf ファイルの対応する項目を更新します。" - -msgid "" -"To update a service on each node, you generally modify one or more " -"configuration files, stop the service, synchronize the database schema, and " -"start the service. Some services require different steps. We recommend " -"verifying operation of each service before proceeding to the next service." -msgstr "" -"各ノードにおいてサービスをアップグレードする場合、一般的には 1 つ以上の設定" -"ファイルの変更、サービスの停止、データベーススキーマの同期、サービスの起動を" -"行います。いくつかのサービスは、違う手順を必要とします。次のサービスに進む前" -"に、各サービスの動作を検証することを推奨します。" - -msgid "To update quota values for a tenant (project)" -msgstr "テナント (プロジェクト) のクォータ値の更新" - -msgid "" -"To verify the quota, run the swift stat command again:" -msgstr "" -"再び swift stat コマンドを実行して、クォータを検証します。" - -msgid "To view Block Storage quotas for a tenant (project)" -msgstr "プロジェクトの Block Storage クォータの表示方法" - -msgid "To view a flavor's access list, do the following:" -msgstr "以下のように、フレーバーのアクセスリストを表示します。" - -msgid "" -"To view a list of options for the quota-update command, " -"run:" -msgstr "" -"以下を実行して、quota-update コマンドのオプションリストを" -"表示します。" - -msgid "To view account quotas placed on a project:" -msgstr "プロジェクトのアカウントのクォータを表示します。" - -msgid "To view all tenants, run: " -msgstr "" -"全てのテナントを表示するには、以下のコマンドを実行します。" - -msgid "" -"To view and update Object Storage quotas, use the swift command " -"provided by the python-swiftclient package. 
Any user included "
-"in the project can view the quotas placed on their project. To update Object "
-"Storage quotas on a project, you must have the role of ResellerAdmin in the "
-"project that the quota is being applied to."
-msgstr ""
-"Object Storage クォータを表示および更新するためには、python-"
-"swiftclient パッケージにより提供される swift コマンドを使"
-"用します。プロジェクト内のユーザーは誰でも、そのプロジェクトに設定されている"
-"クォータを表示できます。プロジェクトの Object Storage クォータを更新する場"
-"合、クォータを適用するプロジェクトにおいて ResellerAdmin ロールを持つ必要があ"
-"ります。"
-
-msgid "To view and update default Block Storage quota values"
-msgstr "Block Storage のデフォルトのクォータ値の表示と更新"
-
-msgid "To view and update default quota values"
-msgstr "デフォルトのクォータ値の表示と更新"
-
-msgid "To view quota values for a tenant (project)"
-msgstr "テナント (プロジェクト) のクォータ値の表示"
-
-msgid "To view the details of the \"open\" security group:"
-msgstr "「open」セキュリティグループの詳細を表示する方法:"
-
-msgid ""
-"To write a good bug report, the following process is essential. First, "
-"search for the bug to make sure there is no bug already filed for the same "
-"issue. If you find one, be sure to click on \"This bug affects X people. "
-"Does this bug affect you?\" If you can't find the issue, then enter the "
-"details of your report. It should at least include:"
-msgstr ""
-"よいバグ報告を書くには、次に述べる手順を踏むことが不可欠です。まず、バグを検"
-"索し、同じ問題に関してすでに登録されているバグがないことを確認します。見つ"
-"かった場合は、\"This bug affects X people. Does this bug affect you?\" (この"
-"バグは X 人に影響があります。あなたもこのバグの影響を受けますか?) をクリック"
-"するようにして下さい。同じ問題が見つからなかった場合は、バグ報告で詳細を入力"
-"して下さい。少なくとも以下の情報を含めるべきです。"
-
-msgid ""
-"Today, OpenStack clouds explicitly support three types of persistent "
-"storage: object storage, block storage, and file system storage. swiftObject Storage APIpersistent "
-"storageobjectspersistent storage ofObject StorageObject "
-"Storage APIstorageobject storageshared file system storageshared file systems service"
-msgstr ""
-"今日、OpenStack はオブジェクトストレージブ"
-"ロックストレージファイルシステムストレージ"
-"という 3 種類の永続ストレージを明示的にサポートします。swiftObject Storage APIpersistent "
-"storageobjectspersistent storage ofObject StorageObject "
-"Storage APIstorageobject storageshared file system storageshared file systems service"
-
-msgid "Tom Fifield"
-msgstr "Tom Fifield"
-
-msgid "Total Cloud Controller Failure"
-msgstr "クラウドコントローラー全体の故障"
-
-msgid "Total Compute Node Failure"
-msgstr "コンピュートノード全体の故障"
-
-msgid "Toward a Python SDK"
-msgstr "Python SDK に向けて"
-
-msgid "Tracing Instance Requests"
-msgstr "インスタンスリクエストの追跡"
-
-msgid ""
-"Tracks current information about users and instances, for example, in a "
-"database, typically one database instance managed per service"
-msgstr ""
-"ユーザーやインスタンスに関する現在の情報を、例えばデータベースで追跡します。"
-"通常、サービスごとに 1 つのデータベースインスタンスが管理されます。"
-
-msgid ""
-"Traffic among object/account/container servers and between these and the "
-"proxy server's internal interface uses this private network.containerscontainer serversobjectsobject serversaccount server"
-msgstr ""
-"オブジェクト/アカウント/コンテナサービスの間とこれらとプロクシサーバーのイン"
-"ターフェイス間のトラフィックはこのプライベートネットワークを利用します。"
-"コンテナコンテナ"
-"サーバーオブ"
-"ジェクトオブジェクトサーバーアカウントサーバー"
-
-msgid "Traffic route for ping packet"
-msgstr "ping パケットの通信ルート"
-
-msgid "Trending"
-msgstr "トレンド"
-
-msgid ""
-"Trending can give you great insight into how your cloud is performing day to "
-"day. 
You can learn, for example, if a busy day was simply a rare occurrence " -"or if you should start adding new compute nodes.monitoringtrendinglogging/monitoringtrendingmonitoring cloud " -"performance withlogging/monitoringtrending" -msgstr "" -"傾向は、あなたのクラウドが日々どのように動作しているかについて、素晴らしい洞" -"察を与えられます。例えば、忙しい日が単純にほとんど発生していないかどうか、新" -"しいコンピュートノードを追加しはじめるべきかどうかを学習できます。monitoringtrendinglogging/monitoringtrendingmonitoring cloud " -"performance withlogging/monitoringtrending" - -msgid "" -"Trending takes a slightly different approach than alerting. While alerting " -"is interested in a binary result (whether a check succeeds or fails), " -"trending records the current state of something at a certain point in time. " -"Once enough points in time have been recorded, you can see how the value has " -"changed over time.trendingvs. alertsbinarybinary results in trending" -msgstr "" -"トレンドはアラートとは全く異なったアプローチです。アラートは0か1かの結果" -"(チェックが成功するか失敗するか)に注目しているのに対して、トレンドはある時点" -"での状態を定期的に記録します。十分な量が記録されれば、時系列でどのように値が" -"変化するかを確認できます。trendingvs. alertsbinarybinary results in trending" - -msgid "Troubleshooting Open vSwitch" -msgstr "Open vSwitch のトラブルシューティング" - -msgid "" -"Try executing the nova reboot command again. You should see an " -"error message about why the instance was not able to boot" -msgstr "" -"再び nova reboot コマンドを実行してみてください。インスタンスが" -"なぜブートできないかについて、エラーメッセージを確認すべきです。" - -msgid "Type" -msgstr "種別" - -msgid "" -"Typical use is to only create administrative users in a single project, by " -"convention the admin project, which is created by default during cloud " -"setup. If your administrative users also use the cloud to launch and manage " -"instances, it is strongly recommended that you use separate user accounts " -"for administrative access and normal operations and that they be in distinct " -"projects.accounts" -msgstr "" -"一般的な使用法は、一つのプロジェクトだけに管理ユーザーを所属させることです。" -"慣例により、\"admin\" プロジェクトがクラウド環境のセットアップ中に標準で作成" -"されます。管理ユーザーもクラウドを使用してインスタンスの起動、管理を行う場合" -"には、管理アクセスと一般アクセス用に別々のユーザーアカウントを使用し、それら" -"のユーザーを別々のプロジェクトにすることを強く推奨します。accounts" - -msgid "" -"Typically, default values are changed because a tenant requires more than " -"the OpenStack default of 10 volumes per tenant, or more than the OpenStack " -"default of 1 TB of disk space on a compute node." -msgstr "" -"テナントには、10 個を超える Block Storage ボリュームまたはコンピュートノード" -"で 1 TB 以上が必要であるため、通常クラウドのオペレーターはデフォルト値を変更" -"します。" - -msgid "UNIX and Linux Systems Administration Handbook" -msgstr "UNIX and Linux Systems Administration Handbook" - -msgid "" -"Ubuntu 12.04 LTS or Red Hat Enterprise Linux 6.5, including derivatives such " -"as CentOS and Scientific Linux" -msgstr "" -"Ubuntu 12.04 LTS または Red Hat Enterprise Linux 6.5 (CentOS および " -"Scientific Linux などの派生物を含む)" - -msgid "" -"Ubuntu 12.04 installs RabbitMQ version 2.7.1, which uses port 55672. " -"RabbitMQ versions 3.0 and above use port 15672 instead. You can check which " -"version of RabbitMQ you have running on your local Ubuntu machine by doing:" -msgstr "" -"Ubuntu 12.04はRabiitMQのバージョン2.7.1を55672番ポートを使うようにインストー" -"ルします。RabbitMQバージョン3.0以降では15672が利用されます。Ubuntuマシン上で" -"どのバージョンのRabbitMQが実行されているかは次のように確認できます。" - -msgid "" -"Ubuntu uses rsyslog as the default logging service. Since it is natively " -"able to send logs to a remote location, you don't have to install anything " -"extra to enable this feature, just modify the configuration file. In doing " -"this, consider running your logging over a management network or using an " -"encrypted VPN to avoid interception." 
-msgstr "" -"Ubuntuはrsyslog をデフォルトのロギングサービスとして利用します。rsyslog はリ" -"モートにログを送信する機能を持っているので、何かを追加でインストールする必要" -"はなく、設定ファイルを変更するだけです。リモート転送を実施する際は、盗聴を防" -"ぐためにログが自身の管理ネットワーク上を通る、もしくは暗号化VPNを利用すること" -"を考慮する必要があります。" - -msgid "Under Development" -msgstr "開発中" - -msgid "Under Identity tab, click Projects." -msgstr "" -"ユーザー管理タブの プロジェクト をクリックします。" - -msgid "" -"Underlying the use of the command-line tools is the OpenStack API, which is " -"a RESTful API that runs over HTTP. There may be cases where you want to " -"interact with the API directly or need to use it because of a suspected bug " -"in one of the CLI tools. The best way to do this is to use a combination " -"of cURL and another tool, " -"such as jq, to " -"parse the JSON from the responses.authentication tokenscURL" -msgstr "" -"コマンドラインツールの使用の根底にあるのは、HTTP を介して実行する RESTful " -"API である OpenStack API です。API と直接対話を行いたい場合や、CLI ツールにバ" -"グがあることが疑われるために使用する必要がある場合があります。この場合の最善" -"の対処方法は、 cURL と " -"jq などの他のツール" -"を組み合わせて使用し、その応答から JSON を解析することです。認証トークンcURL" - -msgid "" -"Unfortunately, sometimes the error is not apparent from the log files. In " -"this case, switch tactics and use a different command; maybe run the service " -"directly on the command line. For example, if the glance-api " -"service refuses to start and stay running, try launching the daemon from the " -"command line:daemonsrunning on CLICommand-line interface (CLI)" -msgstr "" -"残念ながら、ときどきエラーがログファイルに表れない場合があります。このような" -"場合、作戦を変更し、違うコマンドを使用します。おそらくコマンドラインにおいて" -"直接サービスを実行することです。たとえば、glance-api サービスが" -"起動しなかったり、実行状態にとどまらない場合は、コマンドラインからデーモンを" -"起動してみます:daemonsrunning on CLICommand-line interface (CLI)" - -msgid "" -"Unfortunately, this command does not tell you various details about the " -"running instances, such as what " -"compute node the instance is running on, what flavor the instance is, and so " -"on. You can use the following command to view details about individual " -"instances:config drive" -msgstr "" -"残念ながら、このコマンドは、インスタンスを実行しているコンピュートノードやイ" -"ンスタンスのフレーバーなどのような、実行中のイ" -"ンスタンスについての多様な情報は提供しません。個別のインスタンスにつ" -"いての詳しい情報を確認するには以下のコマンドを使用してください:コンフィグドライブ" - -msgid "" -"Unfortunately, this story has an open ending... we're still looking into why " -"the CentOS image was sending out spanning tree packets. Further, we're " -"researching a proper way on how to mitigate this from happening. It's a " -"bigger issue than one might think. While it's extremely important for " -"switches to prevent spanning tree loops, it's very problematic to have an " -"entire compute node be cut from the network when this happens. If a compute " -"node is hosting 100 instances and one of them sends a spanning tree packet, " -"that instance has effectively DDOS'd the other 99 instances." -msgstr "" -"不幸にも、この話にはエンディングがない…我々は、なぜ CentOS イメージがスパニン" -"グツリーパケットを送信し始める原因をいまだ探している。更に、我々は障害時にス" -"パニングツリーを軽減する正しい方法を調査している。これは誰かが思うより大きな" -"問題だ。スパニングツリーループを防ぐことはスイッチにとって非常に重要である" -"が、スパニングツリーが起こった際に、コンピュートノード全体がネットワークから" -"切り離されることも大きな問題である。コンピュートノードが 100 インスタンスをホ" -"スティングしていて、そのうち1つがスパニングツリーパケットを送信した場合、そ" -"のインスタンスは事実上他の 99 インスタンスを DDoS(サービス不能攻撃)したこと" -"になる。" - -msgid "Uninstalling" -msgstr "アンインストール" - -msgid "Unique ID (integer or UUID) for the flavor." -msgstr "フレーバー向けの一意な ID (整数や UUID)。" - -msgid "" -"Unlike having a single API endpoint, regions have a separate API endpoint " -"per installation, allowing for a more discrete separation. Users wanting to " -"run instances across sites have to explicitly select a region. However, the " -"additional complexity of a running a new service is not required." 
-msgstr "" -"単独の API エンドポイントを持つ場合と異なり、リージョンは、クラウドごとに別々" -"のAPIエンドポイントを持ち、より細かい分離を実現できます。複数の拠点にまたがっ" -"てインスタンスを実行するユーザーは、明示的にリージョンを指定しなければなりま" -"せん。しかし、新規サービスを実行するなど、複雑化しなくて済みます。" - -msgid "" -"Unlike the CLI tools mentioned above, the *-manage tools must " -"be run from the cloud controller, as root, because they need read access to " -"the config files such as /etc/nova/nova.conf and to make " -"queries directly against the database rather than against the OpenStack " -"API endpoints.API (application programming interface)API endpointendpointsAPI endpoint" -msgstr "" -"前述の CLI ツールとは異なり、*-manage ツールは、/etc/nova/" -"nova.conf などの設定ファイルへの読み取りアクセスが必要で、かつ " -"OpenStack に対してではなくデータベースに対して直接クエリーを実行しなければな" -"らないため、クラウドコントローラーから root として実行する必要があります。" -"API エンドポイント." -"API (Application Programming " -"Interface)API エンドポイントエンドポイントAPI エンドポイント" - -msgid "Unmount the device after inspecting." -msgstr "検査後にディスクをアンマウントします。" - -msgid "Update Share" -msgstr "共有の更新" - -msgid "Update a default value for a new tenant, as follows:" -msgstr "" -"新規テナントに対するクォータのデフォルト値を更新するには、以下のようにしま" -"す。" - -msgid "Update a particular quota value, as follows:" -msgstr "指定したクォータ値を更新します。" - -msgid "" -"Update all .ini files to match passwords and pipelines " -"as required for the OpenStack release in your environment." -msgstr "" -"すべての .ini ファイルを更新して、お使いの環境で " -"OpenStack リリース向けに必要となるパスワードおよびパイプラインと一致させま" -"す。" - -msgid "Update services" -msgstr "サービスの更新" - -msgid "Update the repository database." -msgstr "リポジトリーデータベースを更新します。" - -msgid "Update the virtual machine's operating system: " -msgstr "仮想マシンのオペレーティングシステムを更新します: " - -msgid "Upgrade Levels" -msgstr "アップグレードレベル" - -msgid "Upgrade OpenStack." -msgstr "OpenStack をアップグレードします。" - -msgid "" -"Upgrade levels are a feature added to OpenStack Compute since the Grizzly " -"release to provide version locking on the RPC (Message Queue) communications " -"between the various Compute services." -msgstr "" -"アップグレードレベルは、OpenStack Compute の Grizzly リリースで追加された機能" -"です。これは、さまざまな Compute サービス間の RPC (メッセージキュー) 通信にお" -"いてバージョンを固定できます。" - -msgid "Upgrade packages on each node" -msgstr "各ノードにおけるパッケージのアップグレード" - -msgid "Upgrade planning" -msgstr "アップグレードの計画" - -msgid "Upgrade process" -msgstr "アップグレード手順" - -msgid "Upgrades" -msgstr "アップグレード" - -msgid "" -"Upgrades involve complex operations and can fail. Before attempting any " -"upgrade, you should make a full database backup of your production data. As " -"of Kilo, database downgrades are not supported, and the only method " -"available to get back to a prior database version will be to restore from " -"backup." -msgstr "" -"アップグレードは、複雑な処理が関連して、失敗する可能性があります。何かしらの" -"アップグレードを実行する前に、本番データの完全バックアップを取得すべきです。" -"Kilo 時点では、データベースのダウングレードはサポートされません。以前のバー" -"ジョンのデータベースに戻す唯一の方法は、バックアップからリストアすることで" -"す。" - -msgid "Upload Certificate in DER format to Castellan" -msgstr "証明書を DER 形式で Castellan にアップロードします" - -msgid "Upload Image to Image service, with Signature Metadata" -msgstr "" -"署名のメタデータを付けて、イメージをイメージサービスにアップロードします" - -msgid "Upstream OpenStack" -msgstr "OpenStack コミュニティー" - -msgid "" -"Use openstack-nova-network on RHEL/CentOS/Fedora but " -"nova-network on Ubuntu/Debian." -msgstr "" -"RHEL/CentOS/Fedora の場合は openstack-nova-network を使用" -"しますが、Ubuntu/Debian の場合は nova-network を使用しま" -"す。" - -msgid "Use Cases" -msgstr "事例" - -msgid "Use a public cloud" -msgstr "パブリッククラウドの利用" - -msgid "Use an external load balancer." 
-msgstr "外部ロードバランサーの使用" - -msgid "" -"Use local storage on the node for the virtual machines so that no VM " -"migration or instance recovery at node failure is possible." -msgstr "" -"ノードの障害発生時に仮想マシンの移行やインスタンスのリカバリができないよう" -"に、仮想マシンにノード上のローカルストレージを使用します。" - -msgid "Use of the network can decrease performance." -msgstr "ネットワークを使用するため、性能低下が起こる可能性があります。" - -msgid "" -"Use ping to quickly find where a failure exists in the network path. In an " -"instance, first see whether you can ping an external host, such as google." -"com. If you can, then there shouldn't be a network problem at all." -msgstr "" -"ネットワーク経路のどこに障害があるかを素早く見つけるには、pingを使います。ま" -"ずあなたがインスタンス上で、google.comのような外部ホストにpingできるのであれ" -"ば、ネットワークの問題はないでしょう。" - -msgid "Use private key to create a signature of the image" -msgstr "秘密鍵を使用したイメージ署名の作成" - -msgid "Use security services" -msgstr "セキュリティサービスを指定する。" - -msgid "" -"Use the command to install specific versions of each " -"package by specifying <package-name>=<version>. The " -"script in the previous step conveniently created a list of " -"package=version pairs for you:" -msgstr "" -" コマンドに <package-name>=<version> を指定して、各パッケージの特定のバージョンをインストールします。前の手" -"順にあるスクリプトは、利便性のために package=version のペアの一" -"覧を作成しました。" - -msgid "Use this command to register an existing key with OpenStack:" -msgstr "このコマンドを使用して、既存の鍵を OpenStack に登録します。" - -msgid "" -"Use this example priority list to ensure that user-affected services are " -"restored as soon as possible, but not before a stable environment is in " -"place. Of course, despite being listed as a single-line item, each step " -"requires significant work. For example, just after starting the database, " -"you should check its integrity, or, after starting the nova services, you " -"should verify that the hypervisor matches the database and fix any mismatches." -msgstr "" -"この例にある優先度一覧を使用すると、きちんと安定した状態になる前であっても、" -"できる限り早くユーザーに影響するサービスを復旧させることができます。もちろ" -"ん、1 行の項目として一覧化されていますが、各ステップは多大な作業が必要です。" -"たとえば、データベースを開始した後、その完全性を確認すべきです。また、nova " -"サービスを開始した後、ハイパーバイザーがデータベースに一致しているかを確認" -"し、不一致があれば修正すべきです。" - -msgid "Use when you need" -msgstr "用途" - -msgid "Use your own cloud" -msgstr "自身のクラウドの使用" - -msgid "" -"Used for program listings, as well as within paragraphs to refer to program " -"elements such as variable or function names, databases, data types, " -"environment variables, statements, and keywords." -msgstr "" -"プログラム一覧に使用されます。文中でも、変数名、関数名、データベース、データ" -"形式、環境変数、宣言文、キーワードなどのプログラム要素を参照するために使用さ" -"れます。" - -msgid "Used to…" -msgstr "使用目的" - -msgid "User Management" -msgstr "ユーザー管理" - -msgid "User dashboard" -msgstr "ユーザーダッシュボード" - -msgid "User quotas" -msgstr "ユーザークォータ" - -msgid "User specification in initial request" -msgstr "初回要求のユーザー仕様" - -msgid "User virtual machines" -msgstr "ユーザーの仮想マシン" - -msgid "User-Facing Operations" -msgstr "ユーザーによる運用" - -msgid "Username" -msgstr "ユーザー名" - -msgid "" -"Username and email address are self-explanatory, though your site may have " -"local conventions you should observe. The primary project is simply the " -"first project the user is associated with and must exist prior to creating " -"the user. 
Role is almost always going to be \"member.\" Out of the box, "
-"OpenStack comes with two roles defined:"
-msgstr ""
-"ユーザー名と電子メールアドレスは見たとおりです。あなたのサイトは従うべき独自"
-"ルールがあるかもしれません。主プロジェクトは単にユーザーが割り当てられる最初"
-"のプロジェクトです。ユーザーを作成する前に存在している必要があります。役割は"
-"多くの場合ずっと \"メンバー\" のままになります。標準の状態で、OpenStack は次"
-"の 2 つの役割が定義されています。"
-
-msgid "Users Who Disrupt Other Users"
-msgstr "他のユーザーに悪影響を与えるユーザー"
-
-msgid "Users and Projects"
-msgstr "ユーザーとプロジェクト"
-
-msgid ""
-"Users and groups are managed through Active Directory and imported into the "
-"Identity service using LDAP. CLIs are available for nova and Euca2ools to do "
-"this."
-msgstr ""
-"ユーザとグループは Active Directory で管理され、LDAP を使用して Identity にイ"
-"ンポートされます。これを行うために、nova と euca2ools の CLI が利用できます。"
-
-msgid ""
-"Users being able to retrieve console logs from running instances is a boon "
-"for support—many times they can figure out what's going on inside their "
-"instance and fix what's going on without bothering you. Unfortunately, "
-"sometimes overzealous logging of failures can cause problems of its own."
-msgstr ""
-"稼働中のインスタンスからユーザーがコンソールログを取得できることは、サポート"
-"にとって大きな助けになる。多くの場合、ユーザーは自分のインスタンスの中で何が"
-"起こっているかを自分で把握し、あなたを煩わせることなく問題を修正できる。不幸"
-"なことに、失敗を過剰に記録することが、それ自体の問題を引き起こすことがある。"
-
-msgid ""
-"Users must be associated with at least one project, though they may belong "
-"to many. Therefore, you should add at least one project before adding users."
-"user managementadding projects"
-msgstr ""
-"ユーザーは、多数のプロジェクトに所属することは可能ですが、最低でも 1 つのプロ"
-"ジェクトと関連付ける必要があります。そのため、ユーザー追加の前にプロジェクト"
-"を 1 つ追加しておく必要があります。"
-"ユーザー管理プロジェクトの追加"
-
-msgid ""
-"Users on your cloud can disrupt other users, sometimes intentionally and "
-"maliciously and other times by accident. Understanding the situation allows "
-"you to make a better decision on how to handle the disruption.user managementhandling "
-"disruptive users"
-msgstr ""
-"クラウドのユーザーは他のユーザーに悪影響を与える場合があります。意図的に悪意"
-"を持って行われる場合もあれば、偶然起こる場合もあります。状況を理解することに"
-"より、このような混乱に対処する方法について、よりよい判断をできるようになりま"
-"す。user managementhandling disruptive users"
-
-msgid ""
-"Users will indicate different needs for their cloud use cases. Some may need "
-"fast access to many objects that do not change often, or want to set a time-"
-"to-live (TTL) value on a file. Others may access only storage that is "
-"mounted with the file system itself, but want it to be replicated instantly "
-"when starting a new instance. For other systems, ephemeral storage—storage "
-"that is released when a VM attached to it is shut down— is the preferred "
-"way. When you select storage back ends, storagechoosing back endsstorage back "
-"endback end "
-"interactionsstoreask the "
-"following questions on behalf of your users:"
-msgstr "" -"管理ユーザーとして nova show を使用すると、インスタンスがスケ" -"ジュールされたコンピュートノードが hostId として表示されます。イ" -"ンスタンスがスケジュール中に失敗していれば、この項目が空白です。" - -msgid "Using Instance-Specific Data" -msgstr "インスタンス固有データの使用" - -msgid "Using OpenStack" -msgstr "OpenStack の使い方" - -msgid "" -"Using OpenStack Compute cells, the NeCTAR Research Cloud spans eight sites " -"with approximately 4,000 cores per site." -msgstr "" -"OpenStack Compute セルを使用して、NeCTAR Research Cloud は8サイトに及び、1" -"サイトあたり約4,000コアがあります。" - -msgid "" -"Using a virtual local area network offers broadcast control, security, and " -"physical layer transparency. If needed, use VXLAN to extend your address " -"space." -msgstr "" -"仮想ローカルエリアネットワークを使用してブロードキャスト制御、セキュリティ、" -"物理レイヤーの透過性を提供します。必要な場合には、VXLAN を使用してアドレス空" -"間を拡張します。" - -msgid "Using cURL for further inspection" -msgstr "cURL を使用したさらなる検査" - -msgid "Using the OpenStack Dashboard for Administration" -msgstr "管理目的での OpenStack Dashboard の使用" - -msgid "" -"Using the command-line interface, you can manage quotas for the OpenStack " -"Compute service and the Block Storage service." -msgstr "" -"コマンドラインインターフェースを使って、OpenStack Compute と Block Storage の" -"クォータを管理できます。" - -msgid "" -"Using this functionality, ideally one would lock the RPC version to the " -"OpenStack version being upgraded from on nova-" -"compute nodes, to ensure that, for example X+1 version " -"nova-compute processes will " -"continue to work with X version nova-" -"conductor processes while the upgrade completes. Once the " -"upgrade of nova-compute processes " -"is complete, the operator can move onto upgrading nova-conductor and remove the version locking for " -"nova-compute in nova." -"conf." -msgstr "" -"この機能を使用することにより、理想的には、nova-" -"compute においてアップグレードされる OpenStack の RPC バージョン" -"を固定します。例えば、X+1 バージョンの nova-" -"compute プロセスが、アップグレード完了まで、X バージョンの " -"nova-conductor プロセスと一緒に動" -"作しつづけることを保証するためです。nova-" -"compute プロセスのアップグレードが完了すると、運用者は " -"nova-conductor のアップグレードに" -"進み、nova.conf において nova-compute のバージョンロックを削除できます。" - -msgid "Utility" -msgstr "ユーティリティ" - -msgid "" -"Utility nodes are used by internal administration staff only to provide a " -"number of basic system administration functions needed to get the " -"environment up and running and to maintain the hardware, OS, and software on " -"which it runs." -msgstr "" -"ユーティリティノードは、環境を稼働させ、その環境を実行しているハードウェア/" -"OS/ソフトウェアを維持管理するのに必要な多数の基本的なシステム管理機能を提供す" -"るために、内部管理スタッフのみが使用します。" - -msgid "VC" -msgstr "VC" - -msgid "VCPUs" -msgstr "仮想 CPU" - -msgid "VLAN" -msgstr "VLAN" - -msgid "VLAN Configuration Within OpenStack VMs" -msgstr "OpenStack VM内のVLAN設定" - -msgid "" -"VLAN configuration can be as simple or as complicated as desired. The use of " -"VLANs has the benefit of allowing each project its own subnet and broadcast " -"segregation from other projects. To allow OpenStack to efficiently use " -"VLANs, you must allocate a VLAN range (one for each project) and turn each " -"compute node switch port into a trunk port.networksVLANVLAN networknetwork designnetwork topologyVLAN with OpenStack " -"VMs" -msgstr "" -"VLAN設定は要求によっては複雑であったり単純であったりする事ができます。VLANを" -"使用すると互いのプロジェクトに専用のサブネットを提供でき、ブロードキャストド" -"メインを分割するという利点が得られます。効果的にVLANを使用できるようにするに" -"は、VLANの範囲を(それぞれのプロジェクトに1つずつ)割り当て、各コンピュートノー" -"ドのポートをトランクポートに変更する必要があります。ネットワークVLANVLANネットワークネットワーク設計ネットワークトポロジーOpenStack VMと" -"VLAN" - -msgid "" -"VLAN tags are translated between the external tag defined in the network " -"settings, and internal tags in several places. 
On the br-int, " -"incoming packets from the int-br-eth1 are translated from " -"external tags to internal tags. Other translations also happen on the other " -"bridges and will be discussed in those sections." -msgstr "" -"VLAN タグは、ネットワーク設定において定義された外部タグといくつかの場所にある" -"内部タグの間で変換されます。br-int において、int-br-eth1 からの受信パケットは、外部タグから内部タグへと変換されます。他の変換が" -"他のブリッジにおいても発生します。これらのセクションで議論されます。" - -msgid "" -"VLAN-based networks are received as tagged packets on a physical network " -"interface, eth1 in this example. Just as on the compute node, " -"this interface is a member of the br-eth1 bridge." -msgstr "" -"VLAN ベースのネットワークは、この例にある物理ネットワークインターフェース " -"eth1 においてタグ付きパケットとして受信されます。コンピュート" -"ノードでは、このインターフェースが br-eth1 ブリッジのメンバーで" -"す。" - -msgid "" -"VLAN-based networks exit the integration bridge via veth interface int-" -"br-eth1 and arrive on the bridge br-eth1 on the other " -"member of the veth pair phy-br-eth1. Packets on this interface " -"arrive with internal VLAN tags and are translated to external tags in the " -"reverse of the process described above:" -msgstr "" -"VLAN ベースのネットワークは、仮想インターフェース int-br-eth1 経" -"由で統合ブリッジを抜けて、仮想イーサネットペア phy-br-eth1 の他" -"のメンバーにあるブリッジ br-eth1 に届きます。このインターフェー" -"スのパケットは、内部 VLAN タグとともに届き、上で説明したプロセスの逆順におい" -"て外部タグに変換されます。" - -msgid "VM is terminated" -msgstr "VM終了まで" - -msgid "VM traffic network" -msgstr "仮想マシントラフィックネットワーク" - -msgid "VMware ESX/ESXi" -msgstr "VMware ESX/ESXi" - -msgid "Value" -msgstr "値" - -msgid "" -"Verify proper operation of your environment. Then, notify your users that " -"their cloud is operating normally again." -msgstr "" -"お使いの環境が正常に動作することを検証します。そして、クラウドが再び通常どお" -"り動作していることをユーザーに知らせます。" - -msgid "Verify your alert mechanisms are still working." -msgstr "アラート機能が動作していることを確認します。" - -msgid "View and update Block Storage quotas for a tenant (project)" -msgstr "" -"Block Storage サービスのテナント (プロジェクト) の クォータの表示と更新" - -msgid "View and update compute quotas for a tenant (project)" -msgstr "テナント (プロジェクト) のコンピュートクォータの表示/更新" - -msgid "View quotas for the tenant, as follows:" -msgstr "特定のテナントのクォータを表示するには以下のようにします。" - -msgid "View usage of share resources" -msgstr "共有リソースの使用状況" - -msgid "Virtual Machine Image Guide" -msgstr "仮想マシンイメージガイド" - -msgid "Virtual cores" -msgstr "仮想コア数" - -msgid "" -"Virtual hardware templates are called \"flavors\" in OpenStack, defining " -"sizes for RAM, disk, number of cores, and so on. The default install " -"provides five flavors." -msgstr "" -"仮想ハードウェアのテンプレートは、OpenStack において「フレーバー」と呼ばれま" -"す。メモリー、ディスク、CPU コア数などを定義します。デフォルトインストールで" -"は、5 つのフレーバーが存在します。" - -msgid "Virtual machine memory in megabytes." -msgstr "メガバイト単位の仮想マシンメモリー。" - -msgid "" -"Virtual root disk size in gigabytes. This is an ephemeral disk the base " -"image is copied into. You don't use it when you boot from a persistent " -"volume. The \"0\" size is a special case that uses the native base image " -"size as the size of the ephemeral root volume." -msgstr "" -"ギガバイト単位の仮想ルートディスク容量。これはベースイメージがコピーされる一" -"時ディスクです。永続的なボリュームからブートするとき、これは使用されません。" -"「0」という容量は特別な値で、一時ルートボリュームの容量としてベースイメージの" -"ネイティブ容量をそのまま使用することを意味します。" - -msgid "Virtualization" -msgstr "仮想化" - -msgid "Visualizing OpenStack Networking Service Traffic in the Cloud" -msgstr "クラウド上の OpenStack Networking サービス通信の仮想化" - -msgid "Visualizing nova-network Traffic in the Cloud" -msgstr "クラウド上の nova-network 通信の仮想化" - -msgid "VlanManager" -msgstr "VlanManager" - -msgid "" -"VlanManager is used extensively for network management. All servers have two " -"bonded 10GbE NICs that are connected to two redundant switches. 
DAIR is set "
-"up to use single-node networking where the cloud controller is the gateway "
-"for all instances on all compute nodes. Internal OpenStack traffic (for "
-"example, storage traffic) does not go through the cloud controller."
-msgstr ""
-"ネットワーク管理は VlanManager が広範囲に使用されています。全てのサーバーは2"
-"つの冗長化(bonding)された 10GbE NIC があり、2つの独立したスイッチに接続さ"
-"れています。DAIR はクラウドコントローラーが全コンピュートノード上の全インスタ"
-"ンス用のゲートウェイとなる、単一ノードのネットワーキングを使用する設定がされ"
-"ています。内部の OpenStack 通信(例:ストレージ通信)はクラウドコントローラー"
-"を経由していません。"
-
-msgid "Volumes"
-msgstr "ボリューム"
-
-msgid "Vulnerability management"
-msgstr "脆弱性管理"
-
-msgid "Vulnerability tracking"
-msgstr "脆弱性追跡"
-
-msgid "Wash, rinse, and repeat until you find the core cause of the problem."
-msgstr "問題の根本となる原因を見つけるまで、洗い出し、精査し、繰り返します。"
-
-msgid "Watch the network"
-msgstr "ネットワークの監視"
-
-msgid "We also had some excellent input from outside of the room:"
-msgstr "私たちは部屋の外から、いくつかの素晴らしいインプットを得ました。"
-
-msgid ""
-"We approximate the older nova-network multi-host HA setup "
-"by using \"provider VLAN networks\" that connect instances directly to "
-"existing publicly addressable networks and use existing physical routers as "
-"their default gateway. This means that if our network controller goes down, "
-"running instances still have their network available, and no single Linux "
-"host becomes a traffic bottleneck. We are able to do this because we have a "
-"sufficient supply of IPv4 addresses to cover all of our instances and thus "
-"don't need NAT and don't use floating IP addresses. We provide a single "
-"generic public network to all projects and additional existing VLANs on a "
-"project-by-project basis as needed. Individual projects are also allowed to "
-"create their own private GRE based networks."
-msgstr ""
-"インスタンスが既存のパブリックにアクセスできるネットワークに直接接続され、デ"
-"フォルトゲートウェイとして既存の物理ルーターを使用する、プロバイダー VLAN "
-"ネットワークを使用した、より古い nova-network のマルチホス"
-"ト HA セットアップに近づいています。このことは、ネットワークコントローラーが"
-"停止した場合に、実行中のインスタンスがネットワークを利用可能であり続けるこ"
-"と、単独の Linux ホストが通信のボトルネックにならないことを意味します。すべて"
-"のインスタンスの IPv4 アドレスを十分に提供でき、NAT が必要なく、Floating IP "
-"アドレスを使用しないので、これを実行できます。単独の汎用的なパブリックネット"
-"ワークをすべてのプロジェクトに提供し、必要に応じてプロジェクト単位に追加で既"
-"存の VLAN を提供します。個々のプロジェクトは、自身のプライベートな GRE ネット"
-"ワークを作成することもできます。"
-
-msgid ""
-"We called them and asked them to stop for a while, and they were happy to "
-"abandon the horribly broken VM. After that, we started monitoring the size "
-"of console logs."
-msgstr ""
-"我々はそのユーザーに連絡し、しばらく止めてもらえないかと頼んだ。ユーザーは、"
-"ひどく壊れた VM を喜んで放棄してくれた。その後、我々はコンソールログのサイズ"
-"を監視するようになった。"
-
-msgid ""
-"We chose the SQL back end for Identity over others, "
-"such as LDAP. This back end is simple to install and is robust. The authors "
-"acknowledge that many installations want to bind with existing directory "
-"services and caution careful understanding of the array of options available."
-msgstr ""
-"Identity には SQL バックエンド を他のバックエンド (例: "
-"LDAP など) よりも優先して選択しました。このバックエンドは、インストールが簡単"
-"な上、頑強です。本ガイドの執筆者は、多くのインストールで既存のディレクトリ"
-"サービスをバインディングする必要があることを認識しており、利用可能な数々の選択肢 に記載の内容を慎重に理"
-"解するように警告しています。"
-
-msgid ""
-"We consider live migration an integral part of the operations of the cloud. "
-"This feature provides the ability to seamlessly move instances from one "
-"physical host to another, a necessity for performing upgrades that require "
-"reboots of the compute hosts, but only works well with shared storage."
-"storagelive " -"migrationmigrationlive migrationcompute nodeslive migration" -msgstr "" -"ライブマイグレーションは、クラウドの運用に不可欠であると考えられます。この機" -"能により、物理ホストから別の物理ホストに、インスタンスをシームレスに移動し、" -"コンピュートホストの再起動を必要とするアップグレードができるようになります。" -"しかし、ライブマイグレーションは共有ストレージのみで正常に機能します。" -"ストレージライブ" -"マイグレーションマイグレーションライブマイグレーションコンピュートノードライブマイグレー" -"ション" - -msgid "" -"We couldn't have pulled it off without so much supportive help and " -"encouragement." -msgstr "" -"私たちは、これらの多大な協力的な援助と励まし無しには、これを成し遂げることは" -"できなかったでしょう。" - -msgid "" -"We decided to have run on this instance and see if we could " -"catch it in action again. Sure enough, we did." -msgstr "" -"我々は、このインスタンス上で を実行して、動作中に再びこの現象" -"を捉えられるか見てみることにした。案の定、捉えることができた。" - -msgid "" -"We don't recommend ZFS unless you have previous experience with deploying " -"it, since the ZFS back end for Block Storage requires a Solaris-based " -"operating system, and we assume that your experience is primarily with Linux-" -"based systems." -msgstr "" -"本書は、Linux ベースシステムを主に使用するユーザーを想定しており、Block " -"Storage の ZFS バックエンドには Solaris ベースのオペレーティングシステムが必" -"要であるため、ZFS でのデプロイ経験がない場合は、ZFS は推奨していません。" - -msgid "" -"We hope that you now have some considerations in mind and questions to ask " -"your future cloud users about their storage use cases. As you can see, your " -"storage decisions will also influence your network design for performance " -"and security needs. Continue with us to make more informed decisions about " -"your OpenStack cloud design." -msgstr "" -"今後のクラウドユーザーにストレージのユースケースに関する質問事項および、懸念" -"点など理解いただけたかと思います。ストレージの決定はパフォーマンスやセキュリ" -"ティのニーズにあったネットワーク設計をする際に影響を与えます。OpenStack クラ" -"ウド 設計 について理解したうえで意思" -"決定が行えるように、本書を読み進めてください。" - -msgid "" -"We hope you have enjoyed this quick tour of your working environment, " -"including how to interact with your cloud and extract useful information. " -"From here, you can use the Admin User Guide as your " -"reference for all of the command-line functionality in your cloud." -msgstr "" -"クラウドとの対話や有用な情報の抽出の方法など、作業環境の概観を確認する手順を" -"簡単にご紹介しました。役立てていただければ幸いです。ここで説明した内容よりも" -"さらに詳しい情報は、クラウドの全コマンドライン機能についての参考資料として" -"管理ユーザーガイドを参照してください。" - -msgid "" -"We initially deployed on Ubuntu 12.04 with the Essex release of OpenStack " -"using FlatDHCP multi-host networking." -msgstr "" -"最初は、Ubuntu 12.04 に OpenStack Essex を導入しました。FlatDHCP マルチホスト" -"ネットワークを使用しています。" - -msgid "" -"We provide two ways to report issues to the OpenStack Vulnerability " -"Management Team, depending on how sensitive the issue is:" -msgstr "" -"OpenStack 脆弱性管理チームに問題を報告する方法は 2 種類用意されており、問題の" -"重要度に応じて使い分けて下さい。" - -msgid "" -"We reached out for help. A networking engineer suggested it was an MTU " -"issue. Great! MTU! Something to go on! What's MTU and why would it cause a " -"problem?" -msgstr "" -"我々は助けを求めた。ネットワークエンジニアは、これは MTU の問題ではないかとい" -"うのだ。素晴らしい!MTU! 事態は動き始めた! MTU とは何で、何故それが問題になる" -"のだろうか?" - -msgid "" -"We recommend that you choose one of the following multiple disk options:" -msgstr "以下に挙げる複数のディスクの選択肢から選ぶことを推奨します。" - -msgid "" -"We recommend that you do not use the default Ubuntu OpenStack install " -"packages and instead use the Ubuntu Cloud Archive. The Cloud Archive is " -"a package repository supported by Canonical that allows you to upgrade to " -"future OpenStack releases while remaining on Ubuntu 12.04." 
-msgstr "" -"デフォルトの Ubuntu OpenStack インストールパッケージは使用せずに、Ubuntu Cloud " -"Archive を使用することをお勧めします。Cloud Archive は、Canonical がサ" -"ポートするパッケージリポジトリです。これにより、Ubuntu 12.04 を維持した状態で" -"将来の OpenStack リリースにアップグレードすることができます。" - -msgid "" -"We recommend that you subscribe to the general list and the operator list, " -"although you must set up filters to manage the volume for the general list. " -"You'll also find links to the mailing list archives on the mailing list wiki " -"page, where you can search through the discussions." -msgstr "" -"一般向けリストと運用者向けリストを購読するのがお薦めです。一般向けリストの流" -"量に対応するにはフィルタを設定する必要があるとは思いますが。 Wiki のメーリン" -"グリストのページにはメーリングリストのアーカイブへのリンクがあり、そこで議論" -"の過程を検索することができます。" - -msgid "" -"We recommend that you use a fast NIC, such as 10 GB. You can also choose to " -"use two 10 GB NICs and bond them together. While you might not be able to " -"get a full bonded 20 GB speed, different transmission streams use different " -"NICs. For example, if the cloud controller transfers two images, each image " -"uses a different NIC and gets a full 10 GB of bandwidth.bandwidthdesign considerations " -"for" -msgstr "" -"10GbE のような高速な NIC を使うことを推奨します。また、10GbE NIC を2枚使って " -"ボンディングすることもできます。束ねられた 20Gbps の速度をフルに使うことはで" -"きないかもしれませんが、異なる送信ストリームは異なる NIC を使います。例えば、" -"クラウドコントローラーが2つのイメージを送信する場合、それぞれのイメージが別" -"の NIC を使い、10Gbps の帯域をフルに使うことができます。bandwidthdesign considerations " -"for" - -msgid "" -"We recommend that you use the same hardware for new compute and block " -"storage nodes. At the very least, ensure that the CPUs are similar in the " -"compute nodes to not break live migration." -msgstr "" -"新しいコンピュートノードとブロックストレージノードには、同じハードウェアを使" -"用することを推奨します。最低限、ライブマイグレーションが失敗しないように、コ" -"ンピュートノードでは CPU は同様のものにしてください。" - -msgid "" -"We recommend using a combination of the OpenStack command-line interface " -"(CLI) tools and the OpenStack dashboard for administration. Some users with " -"a background in other cloud technologies may be using the EC2 Compatibility " -"API, which uses naming conventions somewhat different from the native API. " -"We highlight those differences.working environmentcommand-line tools" -msgstr "" -"管理には、OpenStack コマンドラインインターフェース (CLI) ツールと OpenStack " -"Dashboard を組み合わせて使用することをお勧めします。他のクラウドテクノロジー" -"の使用経験のある一部のユーザーは、EC2 互換 API を使用している可能性がありま" -"す。この API は、ネイティブの API とは若干異なる命名規則を採用しています。こ" -"の相違点について以下に説明します。作業" -"環境コマンドラインツール" - -msgid "" -"We reviewed both sets of logs. The one thing that stood out the most was " -"DHCP. At the time, OpenStack, by default, set DHCP leases for one minute " -"(it's now two minutes). This means that every instance contacts the cloud " -"controller (DHCP server) to renew its fixed IP. For some reason, this " -"instance could not renew its IP. We correlated the instance's logs with the " -"logs on the cloud controller and put together a conversation:" -msgstr "" -"我々はログのセットを両方見直した。頻発したログの1つは DHCP だった。当時、" -"OpenStack はデフォルトでは DHCP リース期間を 1分に設定していた (現在は 2分)。" -"これは、各インスタンスが固定 IP アドレスを更新するためにクラウドコントロー" -"ラー(DHCP サーバー)に接続することを意味する。幾つかの理由で、このインスタン" -"スはその IP アドレスを更新できなかった。インスタンスのログとクラウドコント" -"ローラー上のログを突き合わせ、並べてやりとりにしてみた。" - -msgid "" -"We strongly suggest that you install the command-line clients from the Python Package Index " -"(PyPI) instead of from the distribution packages. The clients are under " -"heavy development, and it is very likely at any given time that the version " -"of the packages distributed by your operating-system vendor are out of date." 
-"command-line toolsPython Package Index (PyPI)pip utilityPython Package Index " -"(PyPI)" -msgstr "" -"コマンドラインクライアントは、ディストリビューションのパッケージからではな" -"く、Python Package Index (PyPI) からインストールすることを強く推奨します。これは、クライアント" -"の開発が活発に行われており、オペレーティングシステムのベンダーにより配布され" -"たパッケージのバージョンが任意の時点で無効になってしまう可能性が高いためで" -"す。コマンドラインツールPython Package Index (PyPI)pip ユーティリティPython Package " -"Index (PyPI)" - -msgid "" -"We use the Puppet Labs OpenStack modules to configure Compute, Image " -"service, Identity, and dashboard. Puppet is used widely for instance " -"configuration, and Foreman is used as a GUI for reporting and instance " -"provisioning." -msgstr "" -"我々は Compute、Image service、Identity、dashboard の設定に Puppet Labs の" -"OpenStack モジュールを使用しています。Puppet は、インスタンスの設定に幅広く使" -"用されます。Foreman は、レポートおよびインスタンスの配備の GUI として使用され" -"ます。" - -msgid "" -"We want to acknowledge our excellent host Rackers at Rackspace in Austin:" -msgstr "" -"私たちは、オースチンの Rackspace での素晴らしいホスト Rackersに感謝したい:" - -msgid "" -"We wrote furiously from our own experiences and bounced ideas between each " -"other. At regular intervals we reviewed the shape and organization of the " -"book and further molded it, leading to what you see today." -msgstr "" -"私たちは一心不乱に自分たちの経験に基づき執筆を行い、互いに意見をぶつけ合いま" -"した。一定の間隔で、本の現在の状況や構成をレビューし、本を作り上げていき、今" -"皆さんが見ているものができあがりました。" - -msgid "" -"We wrote this book because we have deployed and maintained OpenStack clouds " -"for at least a year and we wanted to share this knowledge with others. After " -"months of being the point people for an OpenStack cloud, we also wanted to " -"have a document to hand to our system administrators so that they'd know how " -"to operate the cloud on a daily basis—both reactively and pro-actively. We " -"wanted to provide more detailed technical information about the decisions " -"that deployers make along the way." -msgstr "" -"私たちは少なくとも1年以上 OpenStack クラウドを構築し運用してきました。そこで" -"得られた知識を多くの人と共有するために、この本を書きました。 OpenStack クラウ" -"ドの責任者として数ヶ月がたつと、そのドキュメントを渡しておけば、システム管理" -"者に日々のクラウドの運用をどのように行なえばよいかが分かるようなドキュメント" -"が欲しくなりました。また、クラウドを構築する際に選択したやり方のより詳細な技" -"術情報を共有したいと思いました。" - -msgid "" -"We wrote this book in a book sprint, which is a facilitated, rapid " -"development production method for books. For more information, see the BookSprints site. Your " -"authors cobbled this book together in five days during February 2013, fueled " -"by caffeine and the best takeout food that Austin, Texas, could offer." -msgstr "" -"私たちはこの本を Book Sprint で執筆しました。 Book Sprint は短い期間で本を建" -"設的に作成できるメソッドです。詳しい情報は、Book Sprint のサイト を参照して下さい。著者らは2013" -"年2月の5日間でこの本をまとめあげました。カフェインと、テキサス州オースティン" -"の素晴らしいテイクアウトの食事は力になりました。" - -msgid "We wrote this book to help you:" -msgstr "" -"次のような場面であなたの助けとなるように、この本を書きました。" - -msgid "" -"We'll discuss the three main approaches to instance storage in the next few " -"sections." -msgstr "以降の数セクションでは、 3 つの主要アプローチについて説明します。" - -msgid "" -"We've seen deployments with all, and recommend that you choose the one you " -"are most familiar with operating. If you are not familiar with any of these, " -"choose NFS, as it is the easiest to set up and there is extensive community " -"knowledge about it." -msgstr "" -"すべてのファイルシステムを使用したデプロイメントに触れ、運用になれているもの" -"を選択するように推奨しました。いずれのファイルシステムにも馴染みがない場合" -"は、設定が簡単で、コミュニティのナレッジベースが幅広く存在するため、NFS を選" -"択するようにしてください。" - -msgid "Weaknesses" -msgstr "短所" - -msgid "Weekly" -msgstr "週次" - -msgid "" -"What is the platter count I can achieve? Do more spindles result in better I/" -"O despite network access?" -msgstr "" -"実現可能な容量は?ネットワークアクセスでも、より多くのディスクがより良い I/O " -"性能に繋がるか?" 
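# A minimal sketch of the PyPI-based client installation recommended in the
# entry above. The per-project package names below are assumptions based on
# the classic per-service clients; install only the ones your cloud needs:
#
#   pip install python-novaclient python-glanceclient \
#               python-cinderclient python-keystoneclient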
- -msgid "What is the platter count you can achieve?" -msgstr "実現したいプラッター数(ディスク容量)はどれくらいか?" - -msgid "What to Back Up" -msgstr "バックアップ対象" - -msgid "" -"When adding a new security group, you should pick a descriptive but brief " -"name. This name shows up in brief descriptions of the instances that use it " -"where the longer description field often does not. Seeing that an instance " -"is using security group http is much easier to understand " -"than bobs_group or secgrp1." -msgstr "" -"新しいセキュリティグループを追加するとき、内容を表す簡潔な名前をつけるべきで" -"す。この名前はインスタンスの簡単な説明など、より長い説明フィールドが使用され" -"ないところで使用されます。インスタンスがセキュリティグループ http を使っているのを見れば、bobs_group や " -"secgrp1 よりはずっと理解しやすいことでしょう。" - -msgid "" -"When an instance fails to behave properly, you will often have to trace " -"activity associated with that instance across the log files of various " -"nova-* services and across both the cloud controller and " -"compute nodes.instancestracing instance requestslogging/monitoringtracing instance requests" -msgstr "" -"インスタンスが正しく動作していない場合、インスタンスに関連したログを調べる必" -"要があります。これらのログは複数のnova-*サービスが出力しており、" -"クラウドコントローラーとコンピュートノードの両方に存在します。logging/monitoringreading " -"log messages" - -msgid "" -"When booting a server, you can also add arbitrary metadata so that you can " -"more easily identify it among other running instances. Use the --meta option with a key-value pair, where you can make up the string for " -"both the key and the value. For example, you could add a description and " -"also the creator of the server:" -msgstr "" -"サーバーを起動するとき、他の実行中のインスタンスと区別しやすくするために、メ" -"タデータを追加することもできます。--meta オプションをキーバ" -"リューペアとともに使用します。ここで、キーとバリューの両方の文字列を指定する" -"ことができます。たとえば、説明とサーバーの作成者を追加できます。" - -msgid "" -"When debugging DNS issues, start by making sure that the host where the " -"dnsmasq process for that instance runs is able to correctly resolve. If the " -"host cannot resolve, then the instances won't be able to either." -msgstr "" -"DNS問題のデバッグをするとき、そのインスタンスのdnsmasqが動いているホストが、" -"名前解決できるかを確認することから始めます。もしホストができないのであれば、" -"インスタンスも同様でしょう。" - -msgid "" -"When designing your cluster, you must consider durability and availability. " -"Understand that the predominant source of these is the spread and placement " -"of your data, rather than the reliability of the hardware. Consider the " -"default value of the number of replicas, which is three. This means that " -"before an object is marked as having been written, at least two copies exist—" -"in case a single server fails to write, the third copy may or may not yet " -"exist when the write operation initially returns. Altering this number " -"increases the robustness of your data, but reduces the amount of storage you " -"have available. Next, look at the placement of your servers. Consider " -"spreading them widely throughout your data center's network and power-" -"failure zones. Is a zone a rack, a server, or a disk?" -msgstr "" -"クラスターの設計時には、耐久性と可用性を考慮する必要があります。耐久性や可用" -"性は主に、ハードウェアの信頼性に頼るのではなく、データを分散して設置すること" -"で確保されることを理解してください。レプリカの数のデフォルト値は 3 である点も" -"考慮します。つまり、オブジェクトが書き込みされたとマークされる前でも、少なく" -"ともコピーが 2 つ存在します。1 つのサーバーが書き込みに失敗しても、書き込み操" -"作が最初に返された時点で、3 番めのコピーが存在する可能性があります。この数字" -"を変更することで、データの堅牢性を高めることができますが、利用可能なストレー" -"ジ数が減少します。次に、サーバーの設置について見ていきます。データセンターの" -"ネットワークや停電ゾーン全体に設定するようにします。ゾーンには、ラック、サー" -"バー、ディスクのいずれを使用していますか?" - -msgid "" -"When operating an OpenStack cloud, you may discover that your users can be " -"quite demanding. If OpenStack doesn't do what your users need, it may be up " -"to you to fulfill those requirements. 
This chapter provided you with some " -"options for customization and gave you the tools you need to get started." -msgstr "" -"OpenStack クラウドの運用時、ユーザーが非常に要望している可能性があることに気" -"が付くかもしれません。OpenStack がユーザーの必要とするものを実施していない場" -"合、それらの要求を満たすことをあなたに任せるかもしれません。本章は、いくつか" -"のカスタマイズの選択肢を提供し、始めるために必要となるツールを提供します。" - -msgid "" -"When the fsfreeze -f command is issued, all ongoing " -"transactions in the file system are allowed to complete, new write system " -"calls are halted, and other calls that modify the file system are halted. " -"Most importantly, all dirty data, metadata, and log information are written " -"to disk." -msgstr "" -"fsfreeze -f コマンドが発行された場合、ファイルシステム内で" -"進行中のすべてのトランザクションが完了することが認められます。新規書き込みの" -"システムコールは停止されます。そして、ファイルシステムを変更する他のコールは" -"停止されます。最も重要なこととしては、すべてのダーティーデータ、メタデータ、" -"およびログ情報がディスクに書き込まれることです。" - -msgid "" -"When the fix is ready, the developer proposes a change and gets the change " -"reviewed." -msgstr "修正が用意できたら、開発者は変更を提案し、レビューを受けます。" - -msgid "" -"When the fix makes it into a milestone or release branch, it automatically " -"moves to:" -msgstr "" -"修正がマイルストーンやリリースブランチに取り込まれると、バグの状態は自動的に" -"以下のようになります。" - -msgid "" -"When the node is able to rejoin the cluster, just add it back to the ring. " -"The exact syntax you use to add a node to your swift cluster with " -"swift-ring-builder heavily depends on the original options used " -"when you originally created your cluster. Please refer back to those " -"commands." -msgstr "" -"ノードがクラスターに参加できるようになったら、ただリングに再度追加するだけで" -"す。swift-ring-builder を使用して swift クラスターにノードを追加" -"するための構文は、元々クラスターを作成したときに使用した元々のオプションに強" -"く依存します。作成時に使用したコマンドをもう一度見てください。" - -msgid "" -"When the snapshot is done, you can thaw the file system with the following " -"command, as root, inside of the instance:" -msgstr "" -"スナップショットの作成が終わったら、インスタンスの中で root として以下のコマ" -"ンドを用いて、ファイルシステムをフリーズ解除できます。" - -msgid "" -"When the stack script is done, you can open the screen session it started to " -"view all of the running OpenStack services: " -msgstr "" -"stack スクリプトが完了すると、起動した screen セッションを開いて、すべての動" -"作中の OpenStack サービスを表示できます。" - -msgid "" -"When this node fully booted, I ran through the same scenario of seeing what " -"instances were running so I could turn them back on. There were a total of " -"four. Three booted and one gave an error. It was the same error as before: " -"unable to find the backing disk. Seriously, what?" -msgstr "" -"そのノードが完全に起動した際、インスタンスが起動した時に何が起こるのかを見る" -"ため、私は同じシナリオを実行して、インスタンスを復旧した。インスタンスは全部" -"で4つあった。3つは起動し、1つはエラーになった。このエラーは以前のエラーと" -"同じだった。「unable to find the backing disk.」マジ、何で?" - -msgid "" -"When users provision resources, they can specify from which availability " -"zone they want their instance to be built. This allows cloud consumers to " -"ensure that their application resources are spread across disparate machines " -"to achieve high availability in the event of hardware failure." -msgstr "" -"リソースのプロビジョニングの際には、インスタンスを作成するアベイラビリティ" -"ゾーンを指定することができます。これによって、クラウドの利用者は、アプリケー" -"ションのリソースが異なるマシンに分散して配置され、ハードウェア故障が発生した" -"場合でも高可用性を達成することができます。" - -msgid "" -"When viewing the server information, you can see the metadata included on " -"the metadata line:" -msgstr "" -"サーバーの情報を表示するとき、metadata 行に含まれるメタデータを参照できます:" - -msgid "" -"When you create a deployment plan, focus on a few vital areas because they " -"are very hard to modify post deployment. 
The next two sections talk about " -"configurations for:" -msgstr "" -"デプロイメントプランを作成する場合、デプロイメント後の修正は困難であるため、" -"いくつかの重要な分野にフォーカスを当ててください。次の 2 章で以下の設定内容に" -"ついて説明していきます。" - -msgid "When you do this, the bug is created with:" -msgstr "バグ報告をすると、バグは次のステータスで作成されます。" - -msgid "" -"When you join the screen session that stack.sh starts with " -"screen -r stack, you are greeted with many screen windows:" -msgstr "" -"stack.shscreen -r stack で作成したセッションに " -"join すると、多数の screen ウィンドウが見えます。" - -msgid "" -"When you join the screen session that stack.sh starts with " -"screen -r stack, you see a screen for each service running, " -"which can be a few or several, depending on how many services you configured " -"DevStack to run." -msgstr "" -"stack.shscreen -r stack で作成したセッションに " -"join すると、動作中の各サービスのスクリーンを参照できます。これは、DevStack " -"が実行するよう設定したサービスの数に依存して、いくつかあるでしょう。" - -msgid "" -"When you reboot a compute node, first verify that it booted successfully. " -"This includes ensuring that the nova-compute service is running:" -"rebootcompute " -"nodemaintenance/debuggingcompute node reboot" -msgstr "" -"コンピュートノードを再起動した場合、まず正常に起動していることを検証します。" -"これには、nova-compute サービスの動作を確認することが含まれま" -"す。rebootcompute nodemaintenance/debuggingcompute node " -"reboot" - -msgid "" -"When you run any of the following operations, the services appear in their " -"own internal availability zone (CONF.internal_service_availability_zone): " -"The internal availability zone is hidden in euca-describe-" -"availability_zones (nonverbose)." -msgstr "" -"以下の操作のいずれかを実行する場合、サービスは独自の内部アベイラビリティゾー" -"ン(CONF.internal_service_availability_zone) に表示されます。" -"内部のアベイラビリティゾーンは、euca-describe-availability_zones " -"(nonverbose) に隠し設定されています。" - -msgid "" -"When you want to offer users different regions to provide legal " -"considerations for data storage, redundancy across earthquake fault lines, " -"or for low-latency API calls, you segregate your cloud. Use one of the " -"following OpenStack methods to segregate your cloud: cells, regions, availability zones, or host aggregates.segregation methodsscalingcloud segregation" -msgstr "" -"法的に考慮したデータストレージ、耐震ラインでの冗長性、低遅延の API コールを提" -"供する様々なリージョンを提供するには、クラウドを分割します。セルリージョンアベイラビリティゾーン" -"ホストアグリゲート のいずれかの OpenStack " -"メソッドを使用して、クラウドを分割します。分割メソッドスケーリングクラウド分割" - -msgid "Where Are the Logs?" -msgstr "ログはどこにあるのか?" - -msgid "" -"Where do you even begin troubleshooting something like this? An instance " -"that just randomly locks up when a command is issued. Is it the image? " -"Nopeit happens on all images. Is it the compute node? Nopeall nodes. Is the " -"instance locked up? No! New SSH connections work just fine!" -msgstr "" -"どこかであなたはこのような障害調査を行ったことがあるだろうか?インスタンスは" -"コマンドを打つ度に全くランダムにロックアップしてしまう。元になったイメージの" -"問題か?No-全てのイメージで同じ問題が発生する。コンピュートノードの問題か?" -"No-全てのノードで発生する。インスタンスはロックアップしたのか?No!新しいSSH" -"接続は問題なく機能する!" - -msgid "" -"Where floating IPs are configured in a deployment, each project will have a " -"limited number of floating IPs controlled by a quota. However, these need to " -"be allocated to the project from the central pool prior to their use—usually " -"by the administrator of the project. To allocate a floating IP to a project, " -"use the Allocate IP To Project button on the " -"Floating IPs tab of the Access & " -"Security page of the dashboard. 
The command line can also be used:" -"address poolIP addressesfloatinguser trainingfloating IPs" -msgstr "" -"Floating IP はクラウド全体で設定されますが、各プロジェクトはクォータにより " -"Floating IP 数を制限されているでしょう。使用する前に中央プールからプロジェク" -"トに確保する必要があります。一般的に、プロジェクトの管理者により行われます。" -"ダッシュボードのアクセスとセキュリティページの" -"Floating IPタブのFloating IP の確保ボタンを使用して、Floating IP をプロジェクトに確保します。コマンド" -"ラインを使用することもできます。address poolIP addressesfloatinguser trainingfloating IPs" - -msgid "" -"Whereas traditional applications required larger hardware to scale " -"(\"vertical scaling\"), cloud-based applications typically request more, " -"discrete hardware (\"horizontal scaling\"). If your cloud is successful, " -"eventually you must add resources to meet the increasing demand.scalingvertical vs. " -"horizontal" -msgstr "" -"従来のアプリケーションでは、スケーリングするには、より大きいハードウェアが必" -"要でした (垂直スケーリング) が、クラウドベースのアプリケーションは通常、別の" -"ハードウェアを必要とします(水平スケーリング)。クラウドを正常に設定できた場" -"合、需要が増すとそれに合うようにリソースを追加する必要がでてきます。" -"スケーリング垂直 " -"vs 水平" - -msgid "" -"Whether access to a specific resource might be granted or not according to " -"the permissions configured for the resource (currently available only for " -"the network resource). The actual authorization policies enforced in an " -"OpenStack service vary from deployment to deployment." -msgstr "" -"リソースに対して設定されたパーミッションに基づいて、特性のリソースに対するア" -"クセスを許可するかを決定する (今のところネットワークリソースでのみ利用可能)。" -"OpenStack により強制される実際の認可ポリシーは、導入の仕方により異なります。" - -msgid "" -"Whether you should enable Hyper-Threading on your CPUs depends upon your use " -"case. For example, disabling Hyper-Threading can be beneficial in intense " -"computing environments. We recommend that you do performance testing with " -"your local workload with both Hyper-Threading on and off to determine what " -"is more appropriate in your case.CPUs " -"(central processing units)enabling hyperthreading on" -msgstr "" -"CPU のハイパースレッディングを有効にするかどうかは、それぞれのユースケースに" -"より変わってきます。例えば、ハイパースレッディングを無効にすると、負荷の高い" -"コンピューティング環境で有用です。ハイパースレッディングがオン、オフの両方の" -"状態でローカルのワークロードを使用してパフォーマンスのテストを実施し、各ケー" -"スでいずれが適しているか決定するように推奨しています。CPU (central processing unit)ハイ" -"パースレッディングをオンにする" - -msgid "Which one results in the best cost-performance scenario I'm aiming for?" -msgstr "どちらが自分の意図した最高のコストパフォーマンスシナリオを実現するか?" - -msgid "" -"Which one results in the best cost-performance scenario you're aiming for?" -msgstr "何があなたが目指すコストパフォーマンスのシナリオはどれか?" 
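# A hedged example of the --meta usage described in the boot entry above;
# the image name, flavor, and metadata values are illustrative only:
#
#   nova boot --image ubuntu-12.04 --flavor m1.small \
#     --meta description="test server" --meta creator=alice test-server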
- -msgid "" -"While OpenStack is composed of many components and moving parts, backing up " -"the critical data is quite simple.backup/recoveryitems included" -msgstr "" -"OpenStackは多くのコンポーネントから構成され、注意を払うべき箇所もたくさんあり" -"ますが、大事なデータのバックアップは非常に単純です。backup/recoveryitems included" - -msgid "" -"While bouncing this idea around in our heads, I was randomly typing commands " -"on the compute node: " -msgstr "" -"このアイデアが我々の頭を駆け巡る間、私はコンピュートノード上でコマンドをラン" -"ダムに叩いていた。 " - -msgid "" -"While instance information is stored in a number of database tables, the " -"table you most likely need to look at in relation to user instances is the " -"instances table.instancesdatabase informationdatabasesinstance " -"information inuser traininginstances" -msgstr "" -"インスタンスの情報が数多くのデータベースのテーブルに保存されますが、ユーザー" -"のインスタンスに関連して参照する必要がありそうなテーブルは、instances テーブ" -"ルです。instancesdatabase informationdatabasesinstance " -"information inuser traininginstances" - -msgid "" -"While it is important for an operator to be familiar with the steps involved " -"in deploying OpenStack, we also strongly encourage you to evaluate " -"configuration-management tools, such as Puppet or " -"Chef, which can help automate this deployment process." -"ChefPuppet" -msgstr "" -"オペレーターは OpenStack のデプロイに必要な手順に精通していることは重要です" -"が、一方で、 PuppetChefと" -"いった構成管理ツールの検証評価をすることを強くお勧めします。これらはデプロイ" -"手順を自動化する手助けとなります。ChefPuppet" - -msgid "" -"While monitoring system resources, I noticed a significant increase in " -"memory consumption while the EC2 API processed this request. I thought it " -"wasn't handling memory properlypossibly not releasing memory. If the API " -"received several of these requests, memory consumption quickly grew until " -"the system ran out of RAM and began using swap. Each node has 48 GB of RAM " -"and the \"nova-api\" process would consume all of it within minutes. Once " -"this happened, the entire system would become unusably slow until I " -"restarted the nova-api service." -msgstr "" -"システムリソースを監視しているうちに、EC2 API がこのリクエストを処理している" -"間、メモリー消費量が非常に増えていることに気が付きました。これは、メモリが開" -"放されず、正常に処理されていないと気づきました。API がこれらのいくつかのリク" -"エストを受け取ると、システムがメモリー不足になり、スワップを使い始めるまで、" -"メモリー消費がすぐに大きくなります。各ノードは 48GB メモリーを持ち、\"nova-" -"api\" プロセスが数分以内にそれらをすべて消費します。これが発生すると、nova-" -"api サービスを再起動するまで、システム全体が使えなくなるほど遅くなります。" - -msgid "" -"While our example contains all central services in a single location, it is " -"possible and indeed often a good idea to separate services onto different " -"physical servers. is a list of " -"deployment scenarios we've seen and their justifications.provisioning/deploymentdeployment " -"scenariosservicesseparation ofseparation of servicesdesign " -"considerationsseparation of services" -msgstr "" -"この例ではすべての中心的なサービスが1つの場所にありますが、サービスを分割して" -"それぞれ別の物理サーバに配置する事は可能であり、本当に良いアイデアです。" -" は構築のシナリオと設計の理由です。" -"プロビジョニング/構築構築シナリオサービス分離サービスの分離設計上の考慮事項サービスの分離" - -msgid "" -"While several resources already exist to help with deploying and installing " -"OpenStack, it's very important to make sure that you have your deployment " -"planned out ahead of time. This guide presumes that you have at least set " -"aside a rack for the OpenStack cloud but also offers suggestions for when " -"and what to scale." 
-msgstr "" -"OpenStack のデプロイやインストールの助けとなるリソースがすでに複数存在してい" -"る場合でも、時間に余裕を持ってデプロイメントの計画を立てることは非常に重" -"要です。本書は、OpenStack クラウド用のラックを少なくとも 1 つ用意しているとの" -"前提で、何をどのタイミングでスケールするかの提案をしていきます。" - -msgid "" -"While the cloud can be run without the OpenStack Dashboard, we consider it to be indispensable, not just for user interaction " -"with the cloud, but also as a tool for operators. Additionally, the " -"dashboard's use of Django makes it a flexible framework for extension." -msgstr "" -"クラウドは OpenStack Dashboard がなくても稼働させること" -"は可能ですが、クラウドとの対話だけでなく運用担当者のツールとしても不可欠な要" -"素と判断しました。また、ダッシュボードに採用されている Django により、" -"拡張機能 のための柔軟なフレームワーク" -"となります。" - -msgid "" -"While this book doesn't describe installation, we do recommend automation " -"for deployment and configuration, discussed in this chapter." -msgstr "" -"本書はインストールについて説明していませんが、本章に記載されているように、配" -"備と設定を自動化することを強く推奨します。" - -msgid "" -"While version 3 of the Identity API is available, the client tools do not " -"yet implement those calls, and most OpenStack clouds are still implementing " -"Identity API v2.0.IdentityIdentity service API" -msgstr "" -"Identity API バージョン 3 が利用できますが、クライアントツールにはこれらの呼" -"び出しがまだ実装されておらず、多くの OpenStack クラウドは Identity API v2.0 " -"を実装しています。Identity serviceIdentity service API" - -msgid "" -"While we'd always recommend using your automated deployment system to " -"reinstall systems from scratch, sometimes you do need to remove OpenStack " -"from a system the hard way. Here's how:uninstall operationmaintenance/debugginguninstalling" -msgstr "" -"我々は常に、自動配備システムを使って、まっさらの状態からシステムを再インス" -"トールすることを勧めていますが、時として OpenStack を地道にシステムから削除し" -"なければならない場合もあるでしょう。その場合には以下の手順となります。" -"uninstall operationmaintenance/debugginguninstalling" - -msgid "" -"While you might end up with unused partitions, such as partition 1 in disk " -"three and four of this example, this option allows for maximum utilization " -"of disk space. I/O performance might be an issue as a result of all disks " -"being used for all tasks." -msgstr "" -"この例では、ディスク 3 と 4 のパーティション 1 のように未使用のパーティション" -"が残る可能性もありますが、このオプションにより、ディスク領域の使用状況を最大" -"化することができます。すべてのディスクがすべてのタスクで利用されるため、I/O " -"のパフォーマンスが問題になる可能性があります。" - -msgid "Who This Book Is For" -msgstr "この本の対象読者" - -msgid "" -"Who uses it: DAIR is an integrated virtual environment that leverages the " -"CANARIE network to develop and test new information communication technology " -"(ICT) and other digital technologies. 
It combines such digital " -"infrastructure as advanced networking and cloud computing and storage to " -"create an environment for developing and testing innovative ICT " -"applications, protocols, and services; performing at-scale experimentation " -"for deployment; and facilitating a faster time to market.DAIRuse casesDAIROpenStack communityuse casesDAIR" -msgstr "" -"利用者:DAIR は新しい情報通信技術(ICT)と他のデジタル技術を開発・評価するた" -"めの CANARIE ネットワークを活用した統合仮想環境です。このシステムは、先進的な" -"ネットワーク、クラウドコンピューティング、ストレージといったデジタルインフラ" -"から構成されており、革新的な ICT アプリケーション、プロトコル、サービス、の開" -"発・評価環境の作成、デプロイのスケールに関する実験の実施、市場へのより早期の" -"投入促進を目的としています。DAIRuse casesDAIROpenStack communityuse casesDAIR" - -msgid "" -"Who uses it: researchers at CERN (European Organization for Nuclear " -"Research) conducting high-energy physics research.CERN (European Organization for Nuclear Research)use casesCERNOpenStack communityuse casesCERN" -msgstr "" -"利用者: 高エネルギー物理学の研究を指揮している CERN (European Organization " -"for Nuclear Research) の研究者。CERN " -"(European Organization for Nuclear Research)use casesCERNOpenStack communityuse casesCERN" - -msgid "" -"Who uses it: researchers from the Australian publicly funded research " -"sector. Use is across a wide variety of disciplines, with the purpose of " -"instances ranging from running simple web servers to using hundreds of cores " -"for high-throughput computing.NeCTAR " -"Research Clouduse casesNeCTAROpenStack communityuse casesNeCTAR" -msgstr "" -"利用者:オーストラリアの公的資金による研究部門からの研究者。用途は、シンプル" -"な Web サーバー用のインスタンスから高スループットコンピューティング用の数百の" -"コア使用まで、多種多様な専門分野に渡ります。NeCTAR Research Clouduse casesNeCTAROpenStack communityuse casesNeCTAR" - -msgid "" -"Who uses it: researchers from the MIT Computer Science and Artificial " -"Intelligence Lab.CSAIL (Computer " -"Science and Artificial Intelligence Lab)MIT CSAIL (Computer Science and Artificial " -"Intelligence Lab)use casesMIT CSAILOpenStack communityuse casesMIT CSAIL" -msgstr "" -"利用者:MIT Computer Science and Artificial Intelligence Lab からの研究者。" -"CSAIL (Computer Science and " -"Artificial Intelligence Lab)MIT CSAIL (Computer Science and Artificial Intelligence Lab)use casesMIT CSAILOpenStack communityuse casesMIT CSAIL" - -msgid "Why and How We Wrote This Book" -msgstr "この本をなぜ書いたか?どうやって書いたか?" - -msgid "Why not use OpenStack Networking?" -msgstr "OpenStack Networking を使用しない理由" - -msgid "Why use multi-host networking?" -msgstr "マルチホストネットワークを使用する理由" - -msgid "" -"Windows XP and later releases include a Volume Shadow Copy Service (VSS) " -"which provides a framework so that compliant applications can be " -"consistently backed up on a live filesystem. To use this framework, a VSS " -"requestor is run that signals to the VSS service that a consistent backup is " -"needed. The VSS service notifies compliant applications (called VSS writers) " -"to quiesce their data activity. The VSS service then tells the copy provider " -"to create a snapshot. Once the snapshot has been made, the VSS service " -"unfreezes VSS writers and normal I/O activity resumes." 
-msgstr "" -"Windows XP 以降は、従順なアプリケーションが動作中のファイルシステムで整合性の" -"あるバックアップを取得できるようにするフレームワークを提供する Volume Shadow " -"Copy Service (VSS) が含まれます。このフレームワークを使用するために、VSS リク" -"エスターが、整合性バックアップを必要とすることを VSS サービスに対してシグナル" -"を発行します。VSS サービスは、従順なアプリケーション (VSS ライターと言いま" -"す) に通知して、これらのデータ処理を休止します。そして、VSS サービスがコピー" -"プロバイダーにスナップショットを作成するよう指示します。スナップショットが作" -"成されると、VSS サービスが VSS ライターをフリーズ解除して、通常の I/O アク" -"ティビティーが再開されます。" - -msgid "" -"With nova-network, the nova database table contains a few " -"tables with networking information:databasesnova-network troubleshootingtroubleshootingnova-network database" -msgstr "" -"nova-network を用いると、nova データベーステーブルは、いく" -"つかのネットワーク情報を持つテーブルがあります。databasesnova-network troubleshootingtroubleshootingnova-network database" - -msgid "" -"With file-level storage, users access stored data using the operating " -"system's file system interface. Most users, if they have used a network " -"storage solution before, have encountered this form of networked storage. In " -"the Unix world, the most common form of this is NFS. In the Windows world, " -"the most common form is called CIFS (previously, SMB).migrationlive migrationstoragefile-level" -msgstr "" -"ファイルレベルのストレージでは、ユーザーは、オペレーティングシステムのファイ" -"ルシステムインターフェースを使用して保存したデータにアクセスします。ネット" -"ワークストレージソリューションの使用経験のあるユーザーの多くは、この形式の" -"ネットワークストレージを使用したことがあります。Unix の分野では、ネットワーク" -"ストレージで最も一般的なものが NFS で、Windows では CIFS (旧称 SMB) です。" -"migrationライブマイグレーションstorageファイルレベル" - -msgid "" -"With object storage, users access binary objects through a REST API. You may " -"be familiar with Amazon S3, which is a well-known example of an object " -"storage system. Object storage is implemented in OpenStack by the OpenStack " -"Object Storage (swift) project. If your intended users need to archive or " -"manage large datasets, you want to provide them with object storage. In " -"addition, OpenStack can store your virtual machine (VM) images inside of an object storage system, as an " -"alternative to storing the images on a file system.binarybinary objects" -msgstr "" -"オブジェクトストレージでは、REST API を使用してバイナリオブジェクトにアクセス" -"します。オブジェクトストレージで有名な例として、Amazon S3 は知られています。" -"オブジェクトストレージは、OpenStack Object Storage (swift) プロジェクトによ" -"り、OpenStack に実装されています。ユーザーが、大規模なデータセットをアーカイ" -"ブまたは管理する場合オブジェクトストレージを提供します。さらに OpenStack で" -"は、ファイルシステムにイメージを格納する代わりに、仮想マシン (VM) のイメージを、オブジェクトストレージシステム" -"の中に格納できます。バイナリバイナリオブジェクト" - -msgid "" -"With our upgrade to Grizzly in August 2013, we moved to OpenStack " -"Networking, neutron (quantum at the time). Compute nodes have two-gigabit " -"network interfaces and a separate management card for IPMI management. One " -"network interface is used for node-to-node communications. The other is used " -"as a trunk port for OpenStack managed VLANs. The controller node uses two " -"bonded 10g network interfaces for its public IP communications. Big pipes " -"are used here because images are served over this port, and it is also used " -"to connect to iSCSI storage, back-ending the image storage and database. The " -"controller node also has a gigabit interface that is used in trunk mode for " -"OpenStack managed VLAN traffic. This port handles traffic to the dhcp-agent " -"and metadata-proxy." 
-msgstr "" -"2013 年 8 月に Grizzly へとアップグレードしたときに、OpenStack Networking に" -"移行しました。コンピュートノードは、2 個の GbE NIC を持ち、IPMI 管理専用のマ" -"ネジメントカードを持ちます。1 つの NIC は、ノード間通信のために使用されます。" -"もう 1 つは、OpenStack が管理する VLAN のトランクポートとして使用されます。コ" -"ントローラーノードは、パブリック IP 通信のために、ボンドした 2 つの 10 GbE " -"NICを持ちます。イメージがこのポート経由で使用されるため、ビッグパイプがここで" -"使用されます。また、イメージストレージとデータベースのバックエンドとなる " -"iSCSI ストレージに接続するためにも使用されます。コントローラーノードは、" -"OpenStack が管理する VLAN 通信のためにトランクモードで使用される GbE NIC も持" -"ちます。このポートは、DHCP エージェントとメタデータプロキシーへの通信も処理し" -"ます。" - -msgid "" -"With so many considerations and options available, our hope is to provide a " -"few clearly-marked and tested paths for your OpenStack exploration. If " -"you're looking for additional ideas, check out , the OpenStack " -"Installation Guides, or the OpenStack User Stories page." -msgstr "" -"数多くの考慮事項およびオプションがあるため、本セクションでは、OpenStack をお" -"試しいただくための明確に記載した検証済みの方法を提供するように構成していま" -"す。本セクションに記載した以外の情報をお探しの場合には、OpenStack " -"Installation Guides、または OpenStack User Stories page をご確認ください。" - -msgid "" -"With the exception of Object Storage, upgrading from one version of " -"OpenStack to another can take a great deal of effort. This chapter provides " -"some guidance on the operational aspects that you should consider for " -"performing an upgrade for a basic architecture." -msgstr "" -"Object Storage 以外は、OpenStack のあるバージョンから別のバージョンにアップグ" -"レードすることは非常に難しいことです。本章は運用観点でいくつかのガイドライン" -"を提供します。これは、基本的なアーキテクチャー向けにアップグレードを実行する" -"際の考慮すべきことです。" - -msgid "" -"With the introduction of the full software-defined networking stack provided " -"by OpenStack Networking (neutron) in the Folsom release, development effort " -"on the initial networking code that remains part of the Compute component " -"has gradually lessened. While many still use nova-network " -"in production, there has been a long-term plan to remove the code in favor " -"of the more flexible and full-featured OpenStack Networking.novadeprecation of" -msgstr "" -"Folsom リリースにおいて OpenStack Networking (neutron) により提供された完全" -"な SDN スタックの導入により、Compute のコンポーネントの一部に残っている、初期" -"のネットワークのコードにおける開発の努力が徐々に少なくなってきました。まだた" -"くさん本番環境で nova-network を使用していますが、より柔軟" -"で完全な機能を持つ OpenStack Networking に移行して、そのコードを削除する長期" -"的な計画がありました。novadeprecation of" - -msgid "" -"With these two tables, you now have a good overview of what servers and " -"services make up your cloud." -msgstr "" -"これら2つの表で、どのサーバーとサービスがあなたのクラウドを構成しているのか、" -"概要を知ることができました。" - -msgid "" -"With this information in hand, we were sure that the problem had to do with " -"DHCP. We thought that for some reason, the instance wasn't getting a new IP " -"address and with no IP, it shut itself off from the network." -msgstr "" -"この情報により、我々は問題が DHCP 実行に起因するものと確信した。何らかの理由" -"でインスタンスが新しいIPアドレスを取得できず、その結果IPアドレスがなくなり、" -"インスタンスは自分自身をネットワークから切り離した、と考えた。" - -msgid "" -"With this option, you can assign different partitions to different RAID " -"arrays. You can allocate partition 1 of disk one and two to the /boot partition mirror. You can make partition 2 of all disks the root " -"partition mirror. You can use partition 3 of all disks for a cinder-" -"volumes LVM partition running on a RAID 10 array." 
-msgstr "" -"このオプションでは、パーティションごとに異なる RAID アレイにおくことができま" -"す。例えば、ディスク 1 とディスク 2 のパーティション 1 を /boot " -"パーティションのミラーとして、すべてのディスクのパーティション 2 をルートパー" -"ティションのミラーとして、すべてのディスクのパーティション 3 を RAID10 アレイ" -"の上の cinder-volumes の LVM パーティションとして割り当てること" -"ができます。" - -msgid "Within a VM" -msgstr "VM内" - -msgid "" -"Within this scope, you must complete these steps to successfully roll back " -"your environment:" -msgstr "" -"この範囲内で、これらの手順を完了して、正常に環境をロールバックする必要があり" -"ます。" - -msgid "" -"Without upgrade levels, an X+1 version Compute service can receive and " -"understand X version RPC messages, but it can only send out X+1 version RPC " -"messages. For example, if a nova-conductor process has been upgraded to X+1 version, then the conductor " -"service will be able to understand messages from X version nova-compute processes, but those compute services " -"will not be able to understand messages sent by the conductor service." -msgstr "" -"アップグレードレベルに関係なく、X+1 のバージョンの Compute サービスが X バー" -"ジョンの RPC メッセージを受信して理解できますが、X+1 のバージョンの RPC メッ" -"セージのみを送信できます。例えば、nova-" -"conductor プロセスが X+1 へとアップグレードされている場合、コン" -"ダクターサービスは、X バージョンの nova-" -"compute プロセスからのメッセージを理解できるようになります。しか" -"し、それらのコンピュートサービスは、コンダクターサービスにより送信されたメッ" -"セージを理解できません。" - -msgid "" -"Working directly with the database and SQL queries can provide you with " -"custom lists and reports of images. Technically, you can update properties " -"about images through the database, although this is not generally " -"recommended." -msgstr "" -"データベースと SQL クエリーを直接使うことで、イメージの独自のリストやレポート" -"を得ることができます。一般には、推奨されませんが、技術的にはデータベース経由" -"でイメージのプロパティを更新できます。" - -msgid "" -"Working from the physical interface inwards, we can see the chain of ports " -"and bridges. First, the bridge eth1-br, which contains the " -"physical network interface eth1 and the virtual interface " -"phy-eth1-br:" -msgstr "" -"物理インターフェースより内側に取り組むと、ポートとブリッジのチェインを確認で" -"きます。まず、物理インターフェース eth1 と仮想インター" -"フェース phy-eth1-br を含むブリッジ eth1-br です。" - -msgid "Working with Hardware" -msgstr "ハードウェアの取り扱い" - -msgid "Working with Roadmaps" -msgstr "ロードマップの取り扱い" - -msgid "Works with all guest operating systems." -msgstr "どんなゲストOSでも動きます。" - -msgid "" -"Write out \"dirty\" buffers to disk, similar to the Linux sync operation." -msgstr "" -"「ダーティー」バッファーをディスクに書き出します。Linux の sync 処理と似ています。" - -msgid "Xen" -msgstr "Xen" - -msgid "" -"You must mount the file system before you run the " -"fsfreeze command." -msgstr "" -"fsfreeze コマンドを実行する前に、ファイルシステ" -"ムをマウントする必要があります。" - -msgid "" -"You are prompted for a project name and an optional, but recommended, " -"description. Select the checkbox at the bottom of the form to enable this " -"project. By default, it is enabled, as shown in ." 
-msgstr "" -"プロジェクト名および任意の説明 (推奨) が要求されます。フォームの一番下の" -"チェックボックスを選択してこのプロジェクトを有効にします。のように、デフォルトでは、有効になっています。" - -msgid "You can also restore backed-up nova directories:" -msgstr "バックアップされた nova ディレクトリーもリストアできます。" - -msgid "" -"You can also specify block deviceblock device mapping at instance boot time " -"through the nova command-line client with this option set:" -msgstr "" -"nova コマンドラインクライアントに以下のようにオプションを付けて、インスタンス" -"の起動時にブロックデバイスblock " -"deviceのマッピングを指定することもできます。" - -msgid "" -"You can also use the Identity service (keystone) to see what services are " -"available in your cloud as well as what endpoints have been configured for " -"the services.Identitydisplaying services and endpoints with" -msgstr "" -"また、Identity (keystone) を使用してクラウドで利用可能なサービスと、サービス" -"用に設定済みのエンドポイントを確認することもできます。Identity serviceサービスおよびエン" -"ドポイントの表示" - -msgid "" -"You can configure some services, such as nova-api and " -"glance-api, to use multiple processes by changing a flag in " -"their configuration file—allowing them to share work between multiple cores " -"on the one machine." -msgstr "" -"nova-apiglance-api などのサービスは、環境設定" -"ファイルのフラグを変更することによって複数プロセスで処理させるように設定でき" -"ます。これによって 1 台のサーバー上にある複数のコアの間で処理を共有できるよう" -"になります。" - -msgid "" -"You can create a list of instances that are hosted on the compute node by " -"performing the following command:instancesmaintenance/debuggingmaintenance/debugginginstances" -msgstr "" -"以下のコマンドを実行して、コンピュートノードにホストしているインスタンスの一" -"覧を作成できます。instancesmaintenance/debuggingmaintenance/debugginginstances" - -msgid "" -"You can create automated alerts for critical processes by using Nagios and " -"NRPE. For example, to ensure that the nova-compute process is " -"running on compute nodes, create an alert on your Nagios server that looks " -"like this:" -msgstr "" -"NagiosとNRPEを使って、クリティカルなプロセスの自動化されたアラートを作成する" -"ことが可能です。nova-compute プロセスがコンピュートノードで動作" -"していることを保証するために、Nagiosサーバー上で次のようなアラートを作成しま" -"す。" - -msgid "" -"You can determine the package versions available for reversion by using the " -" command. If you removed the Grizzly repositories, you must " -"first reinstall them and run :" -msgstr "" -" コマンドを使用して、バージョンを戻すために利用できるパッケー" -"ジのバージョンを確認できます。Grizzly のリポジトリーを削除した場合、それらを" -"再インストールして、 を実行する必要があります。" - -msgid "" -"You can easily automate this process by creating a cron job that runs the " -"following script once per day:" -msgstr "" -"以下のようなcronジョブを一日に一度実行することで、簡単に自動化することも出来" -"ます。" - -msgid "" -"You can extend this reference architecture aslegacy networking (nova)optional " -"extensions follows:" -msgstr "" -"この参照アーキテクチャーは、レガシー" -"ネットワーク (nova)オプションの拡張機能以下のように拡張することが可能です:" - -msgid "" -"You can facilitate the horizontal expansion of your cloud by adding nodes. " -"Adding compute nodes is straightforward—they are easily picked up by the " -"existing installation. However, you must consider some important points when " -"you design your cluster to be highly available.compute nodesaddinghigh availabilityconfiguration " -"optionshigh availabilitycloud controller nodesaddingscalingadding cloud controller nodes" -msgstr "" -"ノードを追加することで、垂直に拡張するのが容易になります。コンピュートノード" -"の追加は単純で、既存のインストール環境から簡単にピックアップすることができま" -"す。しかし、高可用性のクラスターを設計するには、重要なポイントを考慮する必要" -"があります。コンピュートノード追加高可用性設定オプション高可用性クラウドコントローラーノード" -"追加スケーリングクラウドコントローラーノードの追" -"加" - -msgid "" -"You can find a matrix of the functionality provided by all of the supported " -"Block Storage drivers on the OpenStack wiki." 
-msgstr "" -"OpenStack wiki で、サポートされている全" -"ブロックストレージドライバーが提供する機能一覧を確認いただけます。" - -msgid "" -"You can find all of the documentation at the DevStack website." -msgstr "" -"すべてのドキュメントは DevStack の Web サイトにあります。" - -msgid "" -"You can find the full list of security-oriented teams you can join at Security Teams. The vulnerability management process is fully documented at Vulnerability Management." -msgstr "" -"あなたが参加できるセキュリティー関連のチームのリストは セキュリティーチーム にあります。脆弱性管理プロセスはすべてドキュメントにまとめられており、" -"" -"脆弱性管理 で参照できます。" - -msgid "" -"You can find the version of the Compute installation by using the " -"nova-managecommand: " -msgstr "" -"インストールされている Compute のバージョンを確認するには、nova-" -"managecommand を使用するこ" -"とができます: " - -msgid "" -"You can follow a similar pattern in other projects that use the Python Paste " -"framework. Simply create a middleware module and plug it in through " -"configuration. The middleware runs in sequence as part of that project's " -"pipeline and can call out to other services as necessary. No project core " -"code is touched. Look for a pipeline value in the project's " -"conf or ini configuration files in /etc/<" -"project> to identify projects that use Paste." -msgstr "" -"Python Paste フレームワークを使う他のすべてのプロジェクトで、類似のパターンに" -"従うことができます。単純にミドルウェアモジュールを作成し、環境定義によって組" -"み込んでください。そのミドルウェアはプロジェクトのパイプラインの一部として順" -"番に実行され、必要に応じて他のサービスを呼び出します。プロジェクトのコア・" -"コードは一切修正しません。Paste を使っているプロジェクトを確認するには、" -"/etc/<project> に格納されている、プロジェクトの " -"conf または ini 環境定義ファイルの中で " -"pipeline 変数を探してください。" - -msgid "" -"You can follow the progress being made on IPV6 support by watching the neutron IPv6 Subteam at work.LibertyIPv6 supportIPv6, enabling support forconfiguration " -"optionsIPv6 support" -msgstr "" -"neutron IPv6 Subteam at work を確認して、進行状況を確認し続" -"けられます。LibertyIPv6 supportIPv6, enabling support forconfiguration optionsIPv6 support" - -msgid "" -"You can modify this example script on each node to handle different services." -msgstr "" -"さまざまなサービスを処理するために、各ノードでこのサンプルスクリプトを修正で" -"きます。" - -msgid "" -"You can now access the contents of /mnt, which correspond to " -"the first partition of the instance's disk." -msgstr "" -"これで /mnt の中身にアクセスできます。これは、インスタンスのディ" -"スクの 1 番目のパーティションに対応します。" - -msgid "You can now get the related floating IP entry:" -msgstr "関連する Floating IP のエントリーが見つかります。" - -msgid "" -"You can now optionally migrate the instances back to their original compute " -"node." -msgstr "" -"インスタンスを元のコンピュートノードにマイグレーションすることもできます。" - -msgid "" -"You can obtain extra information about virtual machines that are running—" -"their CPU usage, the memory, the disk I/O or network I/O—per instance, by " -"running the nova diagnostics command withcompute nodesdiagnosingcommand-line " -"toolscompute node diagnostics a " -"server ID:" -msgstr "" -"実行中の仮想マシンの CPU 使用状況、メモリー、ディスク I/O、ネットワーク I/O " -"などの追加情報を取得するには、nova diagnostics コマンドに" -"コンピュートノード診断コマンドラインツールコンピュートノードの診断" -"サーバー ID を指定して実行します:" - -msgid "" -"You can obtain further statistics by looking for the number of successful " -"requests:" -msgstr "成功したリクエストを検索することで、更なる情報を取得できます。" - -msgid "You can optionally also deallocate the IP from the user's pool:" -msgstr "また、ユーザプールからIPを開放することもできます。" - -msgid "" -"You can perform a couple of tricks with the database to either more quickly " -"retrieve information or fix a data inconsistency error—for example, an " -"instance was terminated, but the status was not updated in the database. " -"These tricks are discussed throughout this book." 
-msgstr "" -"より迅速に情報を取得したり、データ不整合のエラーを修正したりするために、デー" -"タベースでいくつかの小技を実行できます。たとえば、インスタンスが終了していた" -"が、データベースの状態が更新されていなかった、という状況です。こうした小技が" -"このドキュメント全体を通して議論されています。" - -msgid "" -"You can read a small selection of use cases from the OpenStack community " -"with some technical details and further resources." -msgstr "" -"OpenStack コミュニティーのユースケースをいくつか参照できます。少しの技術的な" -"詳細と参考資料もあります。" - -msgid "" -"You can restrict a project's image storage by total number of bytes. " -"Currently, this quota is applied cloud-wide, so if you were to set an Image " -"quota limit of 5 GB, then all projects in your cloud will be able to store " -"only 5 GB of images and snapshots.Image servicequota setting" -msgstr "" -"プロジェクトのイメージ保存容量を合計バイト数で制限できます。現在、このクォー" -"タはクラウド全体に適用されます。そのため、イメージのクォータを 5 GB に設定す" -"る場合、クラウドの全プロジェクトが、5 GB 以内のイメージやスナップショットのみ" -"を保存できます。Image servicequota setting" - -msgid "" -"You can safely ignore the state of virbr0, which is a " -"default bridge created by libvirt and not used by OpenStack." -msgstr "" -"virbr0 の状態は無視することができます。なぜならそれは" -"libvirtが作成するデフォルトのブリッジで、OpenStackからは使われないからです。" - -msgid "" -"You can save resources by looking at the best fit for the hardware you have " -"in place already. You might have some high-density storage hardware " -"available. You could format and repurpose those servers for OpenStack Object " -"Storage. All of these considerations and input from users help you build " -"your use case and your deployment plan." -msgstr "" -"すでに設置済みのハードウェアに最適な方法で使用されていることをチェックするこ" -"とで、リソースを節約することができます。高濃度のストレージハードウェアがある" -"とします。このハードウェアをフォーマットして、OpenStack Object Storage 用に" -"サーバーの用途を変更することができます。ユーザーからのこのような検討やイン" -"プットすべてをベースにすることで、ユースケースやデプロイメントプランの作成が" -"容易になります。" - -msgid "" -"You can save time by understanding the use cases for the cloud you want to " -"create. Use cases for OpenStack are varied. Some include object storage " -"only; others require preconfigured compute resources to speed development-" -"environment set up; and others need fast provisioning of compute resources " -"that are already secured per tenant with private networks. Your users may " -"have need for highly redundant servers to make sure their legacy " -"applications continue to run. Perhaps a goal would be to architect these " -"legacy applications so that they run on multiple instances in a cloudy, " -"fault-tolerant way, but not make it a goal to add to those clusters over " -"time. Your users may indicate that they need scaling considerations because " -"of heavy Windows server use.provisioning/deploymenttips for" -msgstr "" -"作成するクラウドのユースケースを理解することで時間を節約することあできます。" -"OpenStack のユースケースはさまざまで、オブジェクトストレージのみのもの、開発" -"環境設定を加速するために事前設定されたコンピュートリソースが必要なもの、プラ" -"イベートネットワークでテナントごとにセキュリティが確保されたコンピュートリ" -"ソースの迅速にプロビジョニングするものもあります。ユーザーは、レガシーアプリ" -"ケーションが継続して実行されるように、非常に冗長化されたサーバーが必要な場合" -"もあります。おそらく、時間をかけてこれらのクラスターを追加するのが目的ではな" -"く、クラウドの耐障害性を確保したかたちで、複数のインスタンス上で実行するため" -"に、レガシーのアプリケーションを構築するのが目的の場合もあります。ユーザーに" -"よっては、負荷の高い Windows サーバーを使用するため、スケーリングを考慮する必" -"要があると指定する場合もあるでしょう。" -"プロビジョニング/デプロイメントtips for" - -msgid "You can scale to any number of spindles." 
-msgstr "スピンドル数を何個にでもスケールすることができます。" - -msgid "" -"You can use availability zones, host aggregates, or both to partition a nova " -"deployment.scalingavailability zones" -msgstr "" -"アベイラビリティゾーン、ホストアグリゲート、または両方を使用して、nova デプロ" -"イメントを分割することができます。ス" -"ケーリングアベイラビリティゾーン" - -msgid "" -"You could ask, \"Do I even need to build a cloud?\" If you want to start " -"using a compute or storage service by just swiping your credit card, you can " -"go to eNovance, HP, Rackspace, or other organizations to start using their " -"public OpenStack clouds. Using their OpenStack cloud resources is similar to " -"accessing the publicly available Amazon Web Services Elastic Compute Cloud " -"(EC2) or Simple Storage Solution (S3)." -msgstr "" -"「まだクラウドを構築する必要がありますか?」と質問したことでしょう。クレジット" -"カードを使うだけで、コンピュートサービスやストレージサービスを使いはじめたい" -"場合、eNovance、HP、Rackspace などのパブリック OpenStack クラウドを使うことが" -"できます。それらの OpenStack クラウドのリソースを使うことは、パブリックにアク" -"セスできる Amazon Web Services Elastic Compute Cloud (EC2) や Simple Storage " -"Solution (S3) にアクセスすることと同じです。" - -msgid "" -"You define the availability zone in which a specified compute host resides " -"locally on each server. An availability zone is commonly used to identify a " -"set of servers that have a common attribute. For instance, if some of the " -"racks in your data center are on a separate power source, you can put " -"servers in those racks in their own availability zone. Availability zones " -"can also help separate different classes of hardware." -msgstr "" -"指定したコンピュートホストがローカルでサーバー毎に所属するアベイラビリティ" -"ゾーンを定義します。アベイラビリティゾーンは一般的に、共通の属性を持つサー" -"バーを識別するために使用されます。例えば、データセンターのラックの一部が別の" -"電源を仕様している場合、このラックのサーバーを独自のアベイラビリティゾーンに" -"入れることができます。アベイラビリティゾーンは、異なるハードウェアクラスを分" -"割することもできます。" - -msgid "" -"You have an individual log file for each compute node as well as an " -"aggregated log that contains nova logs from all nodes." -msgstr "" -"全てのノードからのnovaのログを含む集約されたログだけでなく、個々のコンピュー" -"トノードのログも持つことになります。" - -msgid "" -"You may find that you can automate the partitioning itself. For example, MIT " -"uses Fully Automatic " -"Installation (FAI) to do the initial PXE-based partition and then " -"install using a combination of min/max and percentage-based partitioning." -"Fully Automatic Installation (FAI)" -msgstr "" -"パーティショニング自体を自動化可能であることが分かります。例えば、MIT は " -"Fully Automatic Installation " -"(FAI) を使用して、初期の PXE ベースのパーティション分割を行い、min/" -"max およびパーセントベースのパーティショニングを組み合わせてインストールして" -"いきます。Fully Automatic " -"Installation (FAI)" - -msgid "" -"You may need to explicitly install the ipset package if " -"your distribution does not install it as a dependency." -msgstr "" -"ipset パッケージが、お使いのディストリビューションにおい" -"て、依存関係でインストールされていない場合、それを明示的にインストールする必" -"要があるかもしれません。" - -msgid "" -"You may notice that all the existing logging messages are preceded by an " -"underscore and surrounded by parentheses, for example:" -msgstr "" -"以下に例を示しますが、全てのログメッセージはアンダースコアで始まり、括弧で括" -"られていることに気づいたでしょうか?" - -msgid "You might also see a message such as this:" -msgstr "このようなメッセージも確認できるかもしれません。" - -msgid "" -"You must also consider key hardware specifications for the performance of " -"user VMs, as well as budget and performance needs, including storage " -"performance (spindles/core), memory availability (RAM/core), network " -"bandwidthbandwidthhardware specifications and (Gbps/" -"core), and overall CPU performance (CPU/core)." 
-msgstr "" -"また、仮想マシンのパフォーマンス、ストレージの性能 (スピンドル/コア)、メモ" -"リーの空き容量 (RAM/コア)、ネットワークの帯域幅など、予算や性能のニーズに関し" -"て、主なハードウェアの仕様を考慮する必要もあります。帯域幅ハードウェアの仕様 (Gbps/コア)、全体的な CPU のパフォーマンス (CPU/コア)" - -msgid "" -"You must choose an operating system that can run on all of the physical " -"nodes. This example architecture is based on Red Hat Enterprise Linux, which " -"offers reliability, long-term support, certified testing, and is hardened. " -"Enterprise customers, now moving into OpenStack usage, typically require " -"these advantages." -msgstr "" -"全物理ノードで実行可能なオペレーティングシステムを選択する必要があります。こ" -"のアーキテクチャー例は、信頼性、長期的なサポート、認定テストが提供され、セ" -"キュリティ強化されている Red Hat Enterprise Linux をベースにしています。現" -"在、OpenStack の採用に向けて移行中の企業顧客には、通常このような利点が要求さ" -"れます。" - -msgid "" -"You must choose whether you want to support the Amazon EC2 compatibility " -"APIs, or just the OpenStack APIs. One issue you might encounter when running " -"both APIs is an inconsistent experience when referring to images and " -"instances." -msgstr "" -"Amazon EC2 互換 API をサポートしたいか、OpenStack API だけなのか、選択しなけ" -"ればなりません。両方の API を運用する場合、イメージとインスタンスを参照する際" -"の見え方が違うことが一つの論点になります。" - -msgid "" -"You must complete the following configurations on the server's hard drives:" -msgstr "" -"サーバーのハードディスクに対して、以下の環境設定を完了させなければなりませ" -"ん。" - -msgid "" -"You must first choose the operating system that runs on all of the physical " -"nodes. While OpenStack is supported on several distributions of Linux, we " -"used Ubuntu 12.04 LTS (Long Term Support), which is " -"used by the majority of the development community, has feature completeness " -"compared with other distributions and has clear future support plans." -msgstr "" -"まず最初に、物理ノード上で実行するオペレーティングシステムを選択する必要があ" -"ります。OpenStack は複数の Linux ディストリビューションでサポートされています" -"が、開発コミュニティの大半で使用されている Ubuntu 12.04 LTS (Long " -"Term Support) を使用しました。このディストリビューションは、他の" -"ディストリビューションと比較した場合、機能の完全性が高く、将来のサポートプラ" -"ンが明確に立てられています。" - -msgid "" -"You must have the appropriate credentials if you want to use the command-" -"line tools to make queries against your OpenStack cloud. By far, the easiest " -"way to obtain authentication credentials to use with " -"command-line clients is to use the OpenStack dashboard. Select " -"Project, click the Project tab, and click Access & Security " -"on the Compute category. On the " -"Access & Security page, click the " -"API Access tab to display two buttons, " -"Download OpenStack RC File and Download EC2 " -"Credentials, which let you generate files that you can source in " -"your shell to populate the environment variables the command-line tools " -"require to know where your service endpoints and your authentication " -"information are. The user you logged in to the dashboard dictates the " -"filename for the openrc file, such as demo-openrc.sh. 
" -"When logged in as admin, the file is named admin-openrc.sh.credentialsauthenticationcommand-line toolsgetting credentials" -msgstr "" -"コマンドラインツールを使用して OpenStack クラウドに対してクエリーを実行するに" -"は、適切な認証情報が必要です。コマンドラインクライアントで使用する" -"認証のクレデンシャルを取得する最も簡単な方法は、OpenStack ダッ" -"シュボードを使用する方法です。プロジェクト を選択" -"し、プロジェクトタブをクリックし、コ" -"ンピュートカテゴリーにあるアクセスとセキュリティ をクリックします。アクセスとセキュリティページにおいて、API アクセス タブをク" -"リックして、OpenStack RC ファイルのダウンロード と " -"EC2 認証情報のダウンロード の 2 つのボタンを表示します。" -"これらのボタンにより、コマンドラインツールがサービスエンドポイントと認証情報" -"の場所を知るのに必要な環境変数を読み込むために、シェルで元データとして使用す" -"ることのできるファイルを生成することができます。ダッシュボードにログインした" -"ユーザーによって、openrc ファイルのファイル名が決定します (例: " -"demo-openrc.sh)。admin としてログインした場合には、ファ" -"イル名は admin-openrc.sh となります。認証情報認証コマンドラインツール認証情報の取得" - -msgid "" -"You must have the matching private key to access instances associated with " -"this key." -msgstr "" -"この鍵と関連付けられたインスタンスにアクセスするために、対応する秘密鍵を持つ" -"必要があります。" - -msgid "" -"You must remove the image after each test. Even better, test whether you can " -"successfully delete an image from the Image Service." -msgstr "" -"毎回テスト後にイメージを削除する必要があります。イメージサービスからイメージ" -"が削除できるかのテストにしてしまえば、さらによいです。" - -msgid "" -"You must select the appropriate CPU and RAM allocation ratio for your " -"particular use case." -msgstr "" -"あなた自身のユースケースに合わせて、適切な CPU と RAM の割当比を選択しなけれ" -"ばなりません。" - -msgid "You need to size the controller with a core per service." -msgstr "" -"サービスごとに1コア割り当ててコントローラーをサイジングする必要があります。" - -msgid "" -"You should be doing sanity checks on the interfaces using command such as " -"ip a and brctl show to ensure that the interfaces " -"are actually up and configured the way that you think that they are." -msgstr "" -"また、ip abrctl show などのコマンドを使って、イ" -"ンターフェイスが実際にUPしているか、あなたが考えたとおりに設定されているか、" -"正当性を検査をすべきです。" - -msgid "" -"You should load balance user-facing services such as dashboard, nova-" -"api, or the Object Storage proxy. Use any standard HTTP load-" -"balancing method (DNS round robin, hardware load balancer, or software such " -"as Pound or HAProxy). One caveat with dashboard is the VNC proxy, which uses " -"the WebSocket protocol—something that an L7 load balancer might struggle " -"with. See also Horizon session storage." -msgstr "" -"ダッシュボード、nova-api、Object Storage プロキシなどのユーザー" -"が使用するサービスの負荷分散をする必要があります。標準の HTTP 負荷分散メソッ" -"ド (DNS ラウンドロビン、ハードウェアロードバランサー、Pound または HAProxy な" -"どのソフトウェア) を使用して下さい。ダッシュボードに関しては、L7 ロードバラン" -"サーで大変困難となる可能性があるため、VNC プロキシは注意が必要です。Horizon セッ" -"ションストレージも参照してください。" - -msgid "You should see a message about /dev/sdb." -msgstr "/dev/sdb に関するメッセージを確認したほうがいいです。" - -msgid "You should see a result similar to the following:" -msgstr "以下のような結果を確認できます:" - -msgid "" -"You should verify that you have the requisite backups to restore. Rolling " -"back upgrades is a tricky process because distributions tend to put much " -"more effort into testing upgrades than downgrades. Broken downgrades take " -"significantly more effort to troubleshoot and, resolve than broken upgrades. " -"Only you can weigh the risks of trying to push a failed upgrade forward " -"versus rolling it back. Generally, consider rolling back as the very last " -"option." -msgstr "" -"リストアするために必要なバックアップがあることを確認すべきです。ディストリ" -"ビューションは、ダウングレードよりもアップグレードをテストすることにかなりの" -"労力をかける傾向があるため、ローリングバックアップグレードは扱いにくいプロセ" -"スです。失敗したダウングレードは、失敗したアップグレードよりトラブルシュー" -"ティングと解決に非常により多くの労力を必要とします。失敗したアップグレードを" -"前に進め続けるリスク、ロールバックするリスクを比較して重み付けすることだけが" -"できます。一般的に、かなり最後の選択肢としてロールバックを検討してください。" - -msgid "" -"You want to keep an eye on the areas improving within OpenStack. 
The best " -"way to \"watch\" roadmaps for each project is to look at the blueprints that " -"are being approved for work on milestone releases. You can also learn from " -"PTL webinars that follow the OpenStack summits twice a year.OpenStack communityworking with roadmapsaspects to " -"watch" -msgstr "" -"OpenStack の中で改善されている領域を注目しつづけたいでしょう。各プロジェクト" -"のロードマップを「ウォッチ」する最善の方法は、今のマイルストーンリリースにお" -"いて取り組むために承認されたブループリントを確認することです。1 年に 2 回開催" -"されている OpenStack サミットの PTL によるウェビナーからも知ることができま" -"す。OpenStack " -"communityworking with roadmapsaspects to watch" - -msgid "" -"Your OpenStack cloud networking needs to fit into your existing networks " -"while also enabling the best design for your users and administrators, and " -"this chapter gives you in-depth information about networking decisions." -msgstr "" -"OpenStack クラウドのネットワークは、既存のネットワークに適合する必要がありま" -"す。また、ユーザーや管理者にとって最良の設定をできる必要もあります。この章" -"は、ネットワークの判断に関する深い情報を提供します。" - -msgid "" -"Your credentials are a combination of username, password, and tenant " -"(project). You can extract these values from the openrc.sh " -"discussed above. The token allows you to interact with your other service " -"endpoints without needing to reauthenticate for every request. Tokens are " -"typically good for 24 hours, and when the token expires, you are alerted " -"with a 401 (Unauthorized) response and you can request another token.catalog" -msgstr "" -"認証情報はユーザー名、パスワード、テナント (プロジェクト) の組み合わせです。" -"これらの値は、前述の openrc.sh から抽出することができます。トー" -"クンにより、要求ごとに再認証する必要なく他のエンドポイントとの対話を行うこと" -"ができます。トークンは通常 24 時間有効です。期限が切れると、401 " -"(Unauthorized) の応答で警告され、トークン をもう 1 つ要求することができます。カタログ" - -msgid "" -"Your first port of call should be the official OpenStack documentation, " -"found on . You can get " -"questions answered on ." -msgstr "" -"最初に確認すべき場所は OpenStack の公式ドキュメントです。 にあります。また、 で質問と回答を参照することができます。" - -msgid "ZFS" -msgstr "ZFS" - -msgid "ZFSZFS" -msgstr "ZFSZFS" - -msgid "admin" -msgstr "admin" - -msgid "and the following log statement into the __call__ method:" -msgstr "" -"そして以下のログ出力分を __call__ メソッドに挿入してください。" - -msgid "bandwidth_poll_interval" -msgstr "bandwidth_poll_interval" - -msgid "base_image_ref" -msgstr "base_image_ref" - -msgid "cinder-*" -msgstr "cinder-*" - -msgid "cinder-api" -msgstr "cinder-api" - -msgid "cinder-manage" -msgstr "cinder-manage" - -msgid "cinder-scheduler" -msgstr "cinder-scheduler" - -msgid "cinder-volume" -msgstr "cinder-volume" - -msgid "cores" -msgstr "cores" - -msgid "created_at" -msgstr "created_at" - -msgid "delete-on-terminate" -msgstr "delete-on-terminate" - -msgid "deleted_at" -msgstr "deleted_at" - -msgid "dev-name" -msgstr "dev-name" - -msgid "direction" -msgstr "方向" - -msgid "ethertype" -msgstr "ethertype" - -msgid "euca-describe-availability-zones verbose" -msgstr "euca-describe-availability-zones verbose" - -msgid "extra_specs" -msgstr "extra_specs" - -msgid "file" -msgstr "file" - -msgid "fixed-ips" -msgstr "fixed-ips" - -msgid "fixed_ips" -msgstr "fixed_ips" - -msgid "floating-ips" -msgstr "floating-ips" - -msgid "floating_ips" -msgstr "floating_ips" - -msgid "gigabytes" -msgstr "gigabytes" - -msgid "glance-*" -msgstr "glance-*" - -msgid "glance-manage" -msgstr "glance-manage" - -msgid "" -"grep -hE \"connection ?=\" /etc/nova/nova.conf /etc/glance/glance-*.conf /" -"etc/cinder/cinder.conf /etc/keystone/keystone.conf" -msgstr "" -"grep -hE \"connection ?=\" /etc/nova/nova.conf /etc/glance/glance-*.conf /" -"etc/cinder/cinder.conf /etc/keystone/keystone.conf" - -msgid "group_hosts" -msgstr "group_hosts" - -msgid "guest-file-flush" 
-msgstr "guest-file-flush" - -msgid "guest-fsfreeze" -msgstr "guest-fsfreeze" - -msgid "guest-fsfreeze-thaw" -msgstr "guest-fsfreeze-thaw" - -msgid "heal_instance_info_cache_interval" -msgstr "heal_instance_info_cache_interval" - -msgid "horizon" -msgstr "horizon" - -msgid "host_state_interval" -msgstr "host_state_interval" - -msgid "hosts_up" -msgstr "hosts_up" - -msgid "id" -msgstr "id" - -msgid "image_cache_manager_interval" -msgstr "image_cache_manager_interval" - -msgid "image_location" -msgstr "image_location" - -msgid "image_properties" -msgstr "image_properties" - -msgid "image_type" -msgstr "image_type" - -msgid "images" -msgstr "images" - -msgid "img_signature uses the signature called signature_64" -msgstr "img_signature は signature_64 という署名を使用します" - -msgid "" -"img_signature_certificate_uuid uses the value from cert_uuid in section 5 " -"above" -msgstr "" -"img_signature_certificate_uuid は、上のセクション 5 にある cert_uuid の値を使" -"用します" - -msgid "img_signature_hash_method matches 'SHA-256' in section 2 above" -msgstr "" -"img_signature_hash_method は、上のセクション 2 にある SHA-256 と一致します" - -msgid "img_signature_key_type matches 'RSA-PSS' in section 2 above" -msgstr "" -"img_signature_key_type は、上のセクション 2 にある RSA-PSS と一致します" - -msgid "injected-file-content-bytes" -msgstr "injected-file-content-bytes" - -msgid "injected-file-path-bytes" -msgstr "injected-file-path-bytes" - -msgid "injected-files" -msgstr "injected-files" - -msgid "instance_delete_interval" -msgstr "instance_delete_interval" - -msgid "instance_uuid" -msgstr "instance_uuid" - -msgid "instances" -msgstr "instances" - -msgid "ip_scheduler.py" -msgstr "ip_scheduler.py" - -msgid "ip_whitelist.py" -msgstr "ip_whitelist.py" - -msgid "iptables" -msgstr "iptables" - -msgid "key" -msgstr "key" - -msgid "key*" -msgstr "key*" - -msgid "key-pairs" -msgstr "key-pairs" - -msgid "keystone-*" -msgstr "keystone-*" - -msgid "keystone-manage" -msgstr "keystone-manage" - -msgid "launched_at" -msgstr "launched_at" - -msgid "libvirt" -msgstr "libvirt" - -msgid "local.conf" -msgstr "local.conf" - -msgid "m1.large" -msgstr "m1.large" - -msgid "m1.medium" -msgstr "m1.medium" - -msgid "m1.small" -msgstr "m1.small" - -msgid "m1.tiny" -msgstr "m1.tiny" - -msgid "m1.xlarge" -msgstr "m1.xlarge" - -msgid "member" -msgstr "member" - -msgid "metadata-items" -msgstr "metadata-items" - -msgid "misc (swift, dnsmasq)" -msgstr "その他 (swift, dnsmasq)" - -msgid "multi-host*" -msgstr "マルチホスト*" - -msgid "my.instance.ip.address" -msgstr "my.instance.ip.address" - -msgid "n-sch" -msgstr "n-sch" - -msgid "n-{name}" -msgstr "n-{name}" - -msgid "neutron-*" -msgstr "neutron-*" - -msgid "nova host-list (os-hosts)" -msgstr "nova host-list (os-hosts)" - -msgid "nova-*" -msgstr "nova-*" - -msgid "nova-api" -msgstr "nova-api" - -msgid "nova-manage" -msgstr "nova-manage" - -msgid "nova-network" -msgstr "nova-network" - -msgid "" -"or the Command-Line Interface Reference ." 
-msgstr "" -"または Command-Line Interface Reference 。" - -msgid "port_range_max" -msgstr "port_range_max" - -msgid "port_range_min" -msgstr "port_range_min" - -msgid "protocol" -msgstr "プロトコル" - -msgid "python-cinderclient (cinder CLI)" -msgstr "python-cinderclient (cinder CLI)" - -msgid "python-glanceclient (glance CLI)" -msgstr "python-glanceclient (glance CLI)" - -msgid "python-keystoneclient (keystone CLI)" -msgstr "python-keystoneclient (keystone CLI)" - -msgid "python-neutronclient (neutron CLI)" -msgstr "python-neutronclient (neutron CLI)" - -msgid "python-novaclient (nova CLI)" -msgstr "python-novaclient (nova CLI)" - -msgid "python-swiftclient (swift CLI)" -msgstr "python-swiftclient (swift CLI)" - -msgid "quotaName" -msgstr "quotaName" - -msgid "quotaValue" -msgstr "quotaValue" - -msgid "ram" -msgstr "ram" - -msgid "reclaim_instance_interval" -msgstr "reclaim_instance_interval" - -msgid "remote_ip_prefix" -msgstr "remote_ip_prefix" - -msgid "rsyslog Client Configuration" -msgstr "rsyslog クライアント設定" - -msgid "rsyslog Server Configuration" -msgstr "rsyslog サーバー設定" - -msgid "s-{name}" -msgstr "s-{name}" - -msgid "scheduled_at" -msgstr "scheduled_at" - -msgid "security-group-rules" -msgstr "security-group-rules" - -msgid "security-groups" -msgstr "security-groups" - -msgid "shell" -msgstr "shell" - -msgid "shelved_offload_time" -msgstr "shelved_offload_time" - -msgid "shelved_poll_interval" -msgstr "shelved_poll_interval" - -msgid "size (GB)" -msgstr "size (GB)" - -msgid "snapshot" -msgstr "スナップショット" - -msgid "snapshots" -msgstr "snapshots" - -msgid "sync_power_state_interval" -msgstr "sync_power_state_interval" - -msgid "tcpdump" -msgstr "tcpdump" - -msgid "tenantID" -msgstr "tenantID" - -msgid "tenantName" -msgstr "tenantName" - -msgid "terminated_at" -msgstr "terminated_at" - -#. Put one translator per line, in the form of NAME , YEAR1, YEAR2 -msgid "translator-credits" -msgstr "" -"Akihiro MOTOKI , 2013\n" -"Akira Yoshiyama , 2013\n" -"Masanori Itoh , 2013\n" -"masayukig , 2013\n" -"*はたらくpokotan* <>, 2013\n" -"Tsutomu TAKEKAWA , 2013\n" -"doki701 , 2013\n" -"Tomoyuki KATO , 2012-2013\n" -"tmak , 2013" - -msgid "type" -msgstr "type" - -msgid "update_service_capabilities" -msgstr "update_service_capabilities" - -msgid "updated_at" -msgstr "updated_at" - -msgid "username" -msgstr "ユーザー名" - -msgid "value" -msgstr "value" - -msgid "" -"vlan20 is the VLAN that the data center gave us for outgoing Internet " -"access. It's a correct VLAN and is also attached to bond0." -msgstr "" -"vlan20 はデータセンターが外向けのインターネットアクセス用に我々に付与した " -"VLAN である。これは正しい VLAN で bond0 にアタッチされている。" - -msgid "volume_usage_poll_interval" -msgstr "volume_usage_poll_interval" - -msgid "volumes" -msgstr "volumes" - -msgid "where nova is the database you want to back up." -msgstr "ここで nova はバックアップ対象のデータベースです。" - -msgid "" -"will keep the RPC version locked across the specified services to the RPC " -"version used in X+1. As all instances of a particular service are upgraded " -"to the newer version, the corresponding line can be removed from " -"nova.conf." 
-msgstr "" -"指定したサービスをまたがり、RPC バージョンを X+1 の RPC バージョンに固定しま" -"す。特定のサービスのすべてのインスタンスがより新しいバージョンにアップグレー" -"ドされた後、対応する行を nova.conf から削除できます。" - -msgid " " -msgstr " " - -msgid "“A Tutorial and Primer”" -msgstr "“A Tutorial and Primer”" - -msgid "“CERN Cloud Infrastructure User Guide”" -msgstr "“CERN Cloud Infrastructure User Guide”" - -msgid "“OpenStack in Production: A tale of 3 OpenStack Clouds”" -msgstr "“OpenStack in Production: A tale of 3 OpenStack Clouds”" - -msgid "“Review of CERN Data Centre Infrastructure”" -msgstr "“Review of CERN Data Centre Infrastructure”" - -msgid "" -"“The Cloud” has been described as a volatile environment where servers can " -"be created and terminated at will. While this may be true, it does not mean " -"that your servers must be volatile. Ensuring that your cloud's hardware is " -"stable and configured correctly means that your cloud environment remains up " -"and running. Basically, put effort into creating a stable hardware " -"environment so that you can host a cloud that users may treat as unstable " -"and volatile.serversavoiding volatility inhardwarescalability " -"planningscalinghardware procurement" -msgstr "" -"「クラウド」とは、サーバーを自由に作成、終了でき、不安定な環境と説明されてき" -"ました。これは真実ですが、サーバー自体が不安定なわけではありません。クラウド" -"のハードウェアは、安定して正しく設定されているようにすることで、クラウド環境" -"は稼動状態を保ちます。基本的に、安定したハードウェア環境を構築するように努め" -"ることで、ユーザーが不安定かつ変化しやすいものとして処理を行う可能性のあるク" -"ラウドをホストすることができます。servers不安定性の回避ハードウェア拡張性プランニングスケーリングハードウェア調達" -"" - -msgid "“The NIST Definition of Cloud Computing”" -msgstr "“The NIST Definition of Cloud Computing”" diff --git a/doc/openstack-ops/locale/openstack-ops.pot b/doc/openstack-ops/locale/openstack-ops.pot deleted file mode 100644 index ed6aebcb..00000000 --- a/doc/openstack-ops/locale/openstack-ops.pot +++ /dev/null @@ -1,11266 +0,0 @@ -msgid "" -msgstr "" -"Project-Id-Version: PACKAGE VERSION\n" -"POT-Creation-Date: 2016-03-19 06:33+0000\n" -"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" -"Last-Translator: FULL NAME \n" -"Language-Team: LANGUAGE \n" -"MIME-Version: 1.0\n" -"Content-Type: text/plain; charset=UTF-8\n" -"Content-Transfer-Encoding: 8bit\n" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:10(title) -msgid "Upgrades" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:12(para) -msgid "With the exception of Object Storage, upgrading from one version of OpenStack to another can take a great deal of effort. This chapter provides some guidance on the operational aspects that you should consider for performing an upgrade for a basic architecture." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:19(title) -msgid "Pre-upgrade considerations" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:22(title) -msgid "Upgrade planning" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:25(para) -msgid "Thoroughly review the release notes to learn about new, updated, and deprecated features. Find incompatibilities between versions." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:31(para) -msgid "Consider the impact of an upgrade to users. The upgrade process interrupts management of your environment including the dashboard. If you properly prepare for the upgrade, existing instances, networking, and storage should continue to operate. However, instances might experience intermittent network interruptions." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:38(para) -msgid "Consider the approach to upgrading your environment. You can perform an upgrade with operational instances, but this is a dangerous approach. 
You might consider using live migration to temporarily relocate instances to other compute nodes while performing upgrades. However, you must ensure database consistency throughout the process; otherwise your environment might become unstable. Also, don't forget to provide sufficient notice to your users, including giving them plenty of time to perform their own backups." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:48(para) -msgid "Consider adopting structure and options from the service configuration files and merging them with existing configuration files. The OpenStack Configuration Reference contains new, updated, and deprecated options for most services." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:57(para) -msgid "Like all major system upgrades, your upgrade could fail for one or more reasons. You should prepare for this situation by having the ability to roll back your environment to the previous release, including databases, configuration files, and packages. We provide an example process for rolling back your environment in .upgradingprocess overviewrollbackspreparing forupgradingpreparation for" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:74(para) -msgid "Develop an upgrade procedure and assess it thoroughly by using a test environment similar to your production environment." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:81(title) -msgid "Pre-upgrade testing environment" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:82(para) -msgid "The most important step is the pre-upgrade testing. If you are upgrading immediately after release of a new version, undiscovered bugs might hinder your progress. Some deployers prefer to wait until the first point release is announced. However, if you have a significant deployment, you might follow the development and testing of the release to ensure that bugs for your use cases are fixed.upgradingpre-upgrade testing" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:93(para) -msgid "Each OpenStack cloud is different even if you have a near-identical architecture to the one described in this guide. As a result, you must still test upgrades between versions in your environment using an approximate clone of your environment." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:98(para) -msgid "However, that is not to say that it needs to be the same size or use identical hardware as the production environment. It is important to consider the hardware and scale of the cloud that you are upgrading. The following tips can help you minimise the cost:upgradingcontrolling cost of" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:109(term) -msgid "Use your own cloud" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:111(para) -msgid "The simplest place to start testing the next version of OpenStack is by setting up a new environment inside your own cloud. This might seem odd, especially given the double virtualization involved in running compute nodes. But it is a sure way to very quickly test your configuration." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:119(term) -msgid "Use a public cloud" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:122(para) -msgid "Consider using a public cloud to test the scalability limits of your cloud controller configuration. 
Most public clouds bill by the hour, which means it can be inexpensive to perform even a test with many nodes.cloud controllersscalability and" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:133(term) -msgid "Make another storage endpoint on the same system" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:136(para) -msgid "If you use an external storage plug-in or shared file system with your cloud, you can test whether it works by creating a second share or endpoint. This allows you to test the system before entrusting the new version with your storage." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:145(term) -msgid "Watch the network" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:148(para) -msgid "Even at smaller-scale testing, look for excess network packets to determine whether something is going horribly wrong in inter-component communication." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:155(para) -msgid "To set up the test environment, you can use one of several methods:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:162(para) -msgid "Do a full manual install by using the OpenStack Installation Guide for your platform. Review the final configuration files and installed packages." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:169(para) -msgid "Create a clone of your automated configuration infrastructure with changed package repository URLs." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:171(para) -msgid "Alter the configuration until it works." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:174(para) -msgid "Either approach is valid. Use the approach that matches your experience." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:176(para) -msgid "An upgrade pre-testing system is excellent for getting the configuration to work. However, it is important to note that the historical use of the system and differences in user interaction can affect the success of upgrades." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:180(para) -msgid "If possible, we highly recommend that you dump your production database tables and test the upgrade in your development environment using this data. Several MySQL bugs have been uncovered during database migrations because of slight table differences between a fresh installation and tables that migrated from one version to another. This can have an impact on large, real datasets, which you do not want to encounter during a production outage." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:188(para) -msgid "Artificial scale testing can go only so far. After your cloud is upgraded, you must pay careful attention to the performance aspects of your cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:196(title) -msgid "Upgrade Levels" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:197(para) -msgid "Upgrade levels are a feature available in OpenStack Compute since the Grizzly release that provides version locking on the RPC (Message Queue) communications between the various Compute services." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:201(para) -msgid "This functionality is an important piece of the puzzle when it comes to live upgrades and is conceptually similar to the existing API versioning that allows OpenStack services of different versions to communicate without issue." 
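The pre-upgrade testing strings above recommend dumping your production database tables and testing the upgrade against that data in a development environment. A minimal sketch, assuming MySQL and the nova database (database names, credentials, and hosts are placeholders):

    # On the production database server: take a consistent dump without
    # locking tables for the duration of the copy.
    mysqldump --single-transaction -u root -p nova > nova-prod-dump.sql
    # On the test environment's database server: load the copy, then run
    # the new release's schema migration against real data.
    mysql -u root -p -e 'CREATE DATABASE nova;'
    mysql -u root -p nova < nova-prod-dump.sql
    nova-manage db sync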
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:205(para) -msgid "Without upgrade levels, an X+1 version Compute service can receive and understand X version RPC messages, but it can only send out X+1 version RPC messages. For example, if a nova-conductor process has been upgraded to X+1 version, then the conductor service will be able to understand messages from X version nova-compute processes, but those compute services will not be able to understand messages sent by the conductor service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:214(para) -msgid "During an upgrade, operators can add configuration options to nova.conf which lock the version of RPC messages and allow live upgrading of the services without interruption caused by version mismatch. The configuration options allow the specification of RPC version numbers if desired, but release name alias are also supported. For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:224(para) -msgid "will keep the RPC version locked across the specified services to the RPC version used in X+1. As all instances of a particular service are upgraded to the newer version, the corresponding line can be removed from nova.conf." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:228(para) -msgid "Using this functionality, ideally one would lock the RPC version to the OpenStack version being upgraded from on nova-compute nodes, to ensure that, for example X+1 version nova-compute processes will continue to work with X version nova-conductor processes while the upgrade completes. Once the upgrade of nova-compute processes is complete, the operator can move onto upgrading nova-conductor and remove the version locking for nova-compute in nova.conf." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:246(title) -msgid "Upgrade process" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:248(para) -msgid "This section describes the process to upgrade a basic OpenStack deployment based on the basic two-node architecture in the OpenStack Installation Guide. All nodes must run a supported distribution of Linux with a recent kernel and the current release packages." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:255(title) -msgid "Prerequisites" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:258(para) -msgid "Perform some cleaning of the environment prior to starting the upgrade process to ensure a consistent state. For example, instances not fully purged from the system after deletion might cause indeterminate behavior." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:264(para) -msgid "For environments using the OpenStack Networking service (neutron), verify the release version of the database. For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:272(title) -msgid "Perform a backup" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:275(para) -msgid "Save the configuration files on all nodes. For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:283(para) -msgid "You can modify this example script on each node to handle different services." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:288(para) -msgid "Make a full database backup of your production data. As of Kilo, database downgrades are not supported, and the only method available to get back to a prior database version will be to restore from backup." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:294(para) -msgid "Consider updating your SQL server configuration as described in the OpenStack Installation Guide." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:303(title) -msgid "Manage repositories" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:305(para) -msgid "On all nodes:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:307(para) -msgid "Remove the repository for the previous release packages." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:310(para) -msgid "Add the repository for the new release packages." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:313(para) -msgid "Update the repository database." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:318(title) -msgid "Upgrade packages on each node" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:319(para) -msgid "Depending on your specific configuration, upgrading all packages might restart or break services supplemental to your OpenStack environment. For example, if you use the TGT iSCSI framework for Block Storage volumes and the upgrade includes new packages for it, the package manager might restart the TGT iSCSI services and impact connectivity to volumes." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:325(para) -msgid "If the package manager prompts you to update configuration files, reject the changes. The package manager appends a suffix to newer versions of configuration files. Consider reviewing and adopting content from these files." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:330(para) -msgid "You may need to explicitly install the ipset package if your distribution does not install it as a dependency." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:335(title) -msgid "Update services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:336(para) -msgid "To update a service on each node, you generally modify one or more configuration files, stop the service, synchronize the database schema, and start the service. Some services require different steps. We recommend verifying operation of each service before proceeding to the next service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:341(para) -msgid "The order you should upgrade services, and any changes from the general upgrade process is described below:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:344(title) ./doc/openstack-ops/section_arch_example-neutron.xml:532(title) -msgid "Controller node" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:346(para) -msgid "OpenStack Identity - Clear any expired tokens before synchronizing the database." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:350(para) ./doc/openstack-ops/ch_ops_maintenance.xml:1369(title) -msgid "OpenStack Image service" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:353(para) -msgid "OpenStack Compute, including networking components." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:357(para) ./doc/openstack-ops/section_arch_example-neutron.xml:94(para) ./doc/openstack-ops/section_arch_example-neutron.xml:198(term) -msgid "OpenStack Networking" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:360(para) -msgid "OpenStack Block Storage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:363(para) -msgid "OpenStack dashboard - In typical environments, updating the dashboard only requires restarting the Apache HTTP service." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:368(para) -msgid "OpenStack Orchestration" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:371(para) -msgid "OpenStack Telemetry - In typical environments, updating the Telemetry service only requires restarting the service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:376(para) -msgid "OpenStack Compute - Edit the configuration file and restart the service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:380(para) ./doc/openstack-ops/ch_ops_upgrades.xml:394(para) -msgid "OpenStack Networking - Edit the configuration file and restart the service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:385(title) ./doc/openstack-ops/ch_ops_log_monitor.xml:109(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:117(para) -msgid "Compute nodes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:387(para) -msgid "OpenStack Block Storage - Updating the Block Storage service only requires restarting the service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:392(title) -msgid "Storage nodes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:400(title) -msgid "Final steps" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:401(para) -msgid "On all distributions, you must perform some final tasks to complete the upgrade process.upgradingfinal steps" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:407(para) -msgid "Decrease DHCP timeouts by modifying /etc/nova/nova.conf on the compute nodes back to the original value for your environment." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:410(para) -msgid "Update all .ini files to match passwords and pipelines as required for the OpenStack release in your environment." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:413(para) -msgid "After migration, users see different results from and . To ensure users see the same images in the list commands, edit the /etc/glance/policy.json and /etc/nova/policy.json files to contain \"context_is_admin\": \"role:admin\", which limits access to private images for projects." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:420(para) -msgid "Verify proper operation of your environment. Then, notify your users that their cloud is operating normally again." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:427(title) -msgid "Rolling back a failed upgrade" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:429(para) -msgid "Upgrades involve complex operations and can fail. Before attempting any upgrade, you should make a full database backup of your production data. As of Kilo, database downgrades are not supported, and the only method available to get back to a prior database version will be to restore from backup." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:435(para) -msgid "This section provides guidance for rolling back to a previous release of OpenStack. All distributions follow a similar procedure.rollbacksprocess forupgradingrolling back failures" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:445(para) -msgid "A common scenario is to take down production management services in preparation for an upgrade, completed part of the upgrade process, and discovered one or more problems not encountered during testing. As a consequence, you must roll back your environment to the original \"known good\" state. 
In this scenario, you also made sure that you did not make any state changes after attempting the upgrade process: no new instances, networks, storage volumes, and so on. Any of these new resources will be in a frozen state after the databases are restored from backup." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:454(para) -msgid "Within this scope, you must complete these steps to successfully roll back your environment:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:458(para) -msgid "Roll back configuration files." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:462(para) -msgid "Restore databases from backup." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:466(para) -msgid "Roll back packages." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:470(para) -msgid "You should verify that you have the requisite backups to restore. Rolling back upgrades is a tricky process because distributions tend to put much more effort into testing upgrades than downgrades. Broken downgrades take significantly more effort to troubleshoot and resolve than broken upgrades. Only you can weigh the risks of trying to push a failed upgrade forward versus rolling it back. Generally, consider rolling back as the very last option." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:479(para) -msgid "The following steps described for Ubuntu have worked in at least one production environment, but they might not work for all environments." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:484(title) -msgid "To perform the rollback" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:487(para) -msgid "Stop all OpenStack services." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:490(para) -msgid "Copy the contents of the configuration backup directories that you created during the upgrade process back to the /etc/<service> directory." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:495(para) -msgid "Restore databases from the RELEASE_NAME-db-backup.sql backup file that you created with the command during the upgrade process:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:500(replaceable) -msgid "RELEASE_NAME" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:504(para) -msgid "Downgrade OpenStack packages." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:507(para) -msgid "Downgrading packages is by far the most complicated step; it is highly dependent on the distribution and the overall administration of the system." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:514(para) -msgid "Determine which OpenStack packages are installed on your system. Use the command. Filter for OpenStack packages, filter again to omit packages explicitly marked in the deinstall state, and save the final output to a file. For example, the following command covers a controller node with keystone, glance, nova, neutron, and cinder:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:562(para) -msgid "Depending on the type of server, the contents and order of your package list might vary from this example." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:569(para) -msgid "You can determine the package versions available for reversion by using the command. 
If you removed the Grizzly repositories, you must first reinstall them and run :" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:600(para) -msgid "This tells us the currently installed version of the package, newest candidate version, and all versions along with the repository that contains each version. Look for the appropriate Grizzly version—1:2013.1.4-0ubuntu1~cloud0 in this case. The process of manually picking through this list of packages is rather tedious and prone to errors. You should consider using the following script to help with this process:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:652(para) -msgid "If you decide to continue this step manually, don't forget to change neutron to quantum where applicable." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:659(para) -msgid "Use the command to install specific versions of each package by specifying <package-name>=<version>. The script in the previous step conveniently created a list of package=version pairs for you:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upgrades.xml:668(para) -msgid "This step completes the rollback procedure. You should remove the upgrade release repository and run to prevent accidental upgrades until you solve whatever issue caused you to roll back your environment." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/app_roadmaps.xml:45(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_ac01.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:10(title) -msgid "Working with Roadmaps" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:12(para) -msgid "The good news: OpenStack has unprecedented transparency when it comes to providing information about what's coming up. The bad news: each release moves very quickly. The purpose of this appendix is to highlight some of the useful pages to track, and take an educated guess at what is coming up in the next release and perhaps further afield.Kiloupcoming release ofOpenStack communityworking with roadmapsrelease cycle" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:28(para) -msgid "OpenStack follows a six month release cycle, typically releasing in April/May and October/November each year. At the start of each cycle, the community gathers in a single location for a design summit. At the summit, the features for the coming releases are discussed, prioritized, and planned. shows an example release cycle, with dates showing milestone releases, code freeze, and string freeze dates, along with an example of when the summit occurs. Milestones are interim releases within the cycle that are available as packages for download and testing. Code freeze is putting a stop to adding new features to the release. String freeze is putting a stop to changing any strings within the source code." 
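The rollback strings above elide the package-management commands themselves. On Ubuntu, which the text names explicitly, the inspection and pinned downgrade might look like the following sketch; the dpkg/apt commands are assumptions standing in for the elided ones, and the version string is the Grizzly example quoted in the text:

    # List installed OpenStack packages, omitting lines in the deinstall
    # ("rc") state, and save the result to a file.
    dpkg -l | grep -E 'keystone|glance|nova|neutron|cinder' | grep -v '^rc' > packages.txt
    # Show the installed and candidate versions each repository offers.
    apt-cache policy nova-common
    # Reinstall a specific version as <package-name>=<version>, for example
    # the Grizzly version named in the text:
    apt-get install nova-common=1:2013.1.4-0ubuntu1~cloud0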
-msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:41(title) -msgid "Release cycle diagram" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:51(title) -msgid "Information Available to You" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:53(para) -msgid "There are several good sources of information available that you can use to track your OpenStack development desires.OpenStack communityworking with roadmapsinformation available" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:63(para) -msgid "Release notes are maintained on the OpenStack wiki, and also shown here:" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:69(th) -msgid "Series" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:71(th) -msgid "Status" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:73(th) -msgid "Releases" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:75(th) -msgid "Date" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:81(para) -msgid "Liberty" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:84(link) -msgid "Under Development" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:87(para) -msgid "2015.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:89(para) -msgid "Oct, 2015" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:93(para) -msgid "Kilo" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:96(link) -msgid "Current stable release, security-supported" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:100(link) -msgid "2015.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:102(para) -msgid "Apr 30, 2015" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:105(para) -msgid "Juno" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:107(link) -msgid "Security-supported" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:111(link) -msgid "2014.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:113(para) -msgid "Oct 16, 2014" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:117(para) -msgid "Icehouse" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:119(link) ./doc/openstack-ops/app_roadmaps.xml:151(para) ./doc/openstack-ops/app_roadmaps.xml:197(para) ./doc/openstack-ops/app_roadmaps.xml:246(para) ./doc/openstack-ops/app_roadmaps.xml:289(para) -msgid "End-of-life" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:122(link) -msgid "2014.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:124(para) -msgid "Apr 17, 2014" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:129(link) -msgid "2014.1.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:131(para) -msgid "Jun 9, 2014" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:136(link) -msgid "2014.1.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:138(para) -msgid "Aug 8, 2014" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:143(link) -msgid "2014.1.3" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:145(para) -msgid "Oct 2, 2014" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:149(para) ./doc/openstack-ops/section_arch_example-neutron.xml:57(para) ./doc/openstack-ops/section_arch_example-nova.xml:81(para) -msgid "Havana" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:153(link) -msgid "2013.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:156(para) ./doc/openstack-ops/app_roadmaps.xml:202(para) -msgid "Apr 4, 2013" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:162(link) ./doc/openstack-ops/app_roadmaps.xml:190(link) -msgid "2013.2.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:164(para) 
./doc/openstack-ops/app_roadmaps.xml:192(para) -msgid "Dec 16, 2013" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:169(link) -msgid "2013.2.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:171(para) -msgid "Feb 13, 2014" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:176(link) -msgid "2013.2.3" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:178(para) -msgid "Apr 3, 2014" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:183(link) -msgid "2013.2.4" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:185(para) -msgid "Sep 22, 2014" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:195(para) -msgid "Grizzly" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:199(link) -msgid "2013.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:208(link) -msgid "2013.1.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:210(para) -msgid "May 9, 2013" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:216(link) -msgid "2013.1.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:218(para) -msgid "Jun 6, 2013" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:224(link) -msgid "2013.1.3" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:226(para) -msgid "Aug 8, 2013" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:232(link) -msgid "2013.1.4" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:234(para) -msgid "Oct 17, 2013" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:239(link) -msgid "2013.1.5" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:241(para) -msgid "Mar 20, 2015" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:244(para) -msgid "Folsom" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:248(link) -msgid "2012.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:251(para) -msgid "Sep 27, 2012" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:257(link) -msgid "2012.2.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:259(para) -msgid "Nov 29, 2012" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:265(link) -msgid "2012.2.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:267(para) -msgid "Dec 13, 2012" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:273(link) -msgid "2012.2.3" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:275(para) -msgid "Jan 31, 2013" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:281(link) -msgid "2012.2.4" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:283(para) -msgid "Apr 11, 2013" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:287(para) -msgid "Essex" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:291(link) -msgid "2012.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:294(para) -msgid "Apr 5, 2012" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:300(link) -msgid "2012.1.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:302(para) -msgid "Jun 22, 2012" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:308(link) -msgid "2012.1.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:310(para) -msgid "Aug 10, 2012" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:316(link) -msgid "2012.1.3" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:318(para) -msgid "Oct 12, 2012" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:322(para) -msgid "Diablo" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:324(para) ./doc/openstack-ops/app_roadmaps.xml:343(para) ./doc/openstack-ops/app_roadmaps.xml:354(para) ./doc/openstack-ops/app_roadmaps.xml:365(para) -msgid "Deprecated" -msgstr "" - 
-#: ./doc/openstack-ops/app_roadmaps.xml:326(link) -msgid "2011.3" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:329(para) -msgid "Sep 22, 2011" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:335(link) -msgid "2011.3.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:337(para) -msgid "Jan 19, 2012" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:341(para) -msgid "Cactus" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:345(link) -msgid "2011.2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:348(para) -msgid "Apr 15, 2011" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:352(para) -msgid "Bexar" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:356(link) -msgid "2011.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:359(para) -msgid "Feb 3, 2011" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:363(para) -msgid "Austin" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:367(link) -msgid "2010.1" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:370(para) -msgid "Oct 21, 2010" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:375(para) -msgid "Here are some other resources:" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:379(link) -msgid "A breakdown of current features under development, with their target milestone" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:385(link) -msgid "A list of all features, including those not yet under development" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:390(link) -msgid "Rough-draft design discussions (\"etherpads\") from the last design summit" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:395(link) -msgid "List of individual code changes under review" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:402(title) -msgid "Influencing the Roadmap" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:404(para) -msgid "OpenStack truly welcomes your ideas (and contributions) and highly values feedback from real-world users of the software. By learning a little about the process that drives feature development, you can participate and perhaps get the additions you desire.OpenStack communityworking with roadmapsinfluencing" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:416(para) -msgid "Feature requests typically start their life in Etherpad, a collaborative editing tool, which is used to take coordinating notes at a design summit session specific to the feature. This then leads to the creation of a blueprint on the Launchpad site for the particular project, which is used to describe the feature more formally. Blueprints are then approved by project team members, and development can begin." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:423(para) -msgid "Therefore, the fastest way to get your feature request up for consideration is to create an Etherpad with your ideas and propose a session to the design summit. If the design summit has already passed, you may also create a blueprint directly. Read this blog post about how to work with blueprints from the perspective of Victoria Martínez, a developer intern." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:431(para) -msgid "The roadmap for the next release as it is developed can be seen at Releases." 
-msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:434(para) -msgid "To determine the potential features going in to future releases, or to look at features implemented previously, take a look at the existing blueprints such as OpenStack Compute (nova) Blueprints, OpenStack Identity (keystone) Blueprints, and release notes." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:441(para) -msgid "Aside from the direct-to-blueprint pathway, there is another very well-regarded mechanism to influence the development roadmap: the user survey. Found at , it allows you to provide details of your deployments and needs, anonymously by default. Each cycle, the user committee analyzes the results and produces a report, including providing specific information to the technical committee and project team leads." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:452(title) -msgid "Aspects to Watch" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:454(para) -msgid "You want to keep an eye on the areas improving within OpenStack. The best way to \"watch\" roadmaps for each project is to look at the blueprints that are being approved for work on milestone releases. You can also learn from PTL webinars that follow the OpenStack summits twice a year.OpenStack communityworking with roadmapsaspects to watch" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:467(title) -msgid "Driver Quality Improvements" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:469(para) -msgid "A major quality push has occurred across drivers and plug-ins in Block Storage, Compute, and Networking. Particularly, developers of Compute and Networking drivers that require proprietary or hardware products are now required to provide an automated external testing system for use during the development process." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:477(title) -msgid "Easier Upgrades" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:479(para) -msgid "One of the most requested features since OpenStack began (for components other than Object Storage, which tends to \"just work\"): easier upgrades. In all recent releases internal messaging communication is versioned, meaning services can theoretically drop back to backward-compatible behavior. This allows you to run later versions of some components, while keeping older versions of others." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:486(para) -msgid "In addition, database migrations are now tested with the Turbo Hipster tool. This tool tests database migration performance on copies of real-world user databases." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:490(para) -msgid "These changes have facilitated the first proper OpenStack upgrade guide, found in , and will continue to improve in the next release.Kiloupgrades in" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:500(title) -msgid "Deprecation of Nova Network" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:502(para) -msgid "With the introduction of the full software-defined networking stack provided by OpenStack Networking (neutron) in the Folsom release, development effort on the initial networking code that remains part of the Compute component has gradually lessened. 
While many still use nova-network in production, there has been a long-term plan to remove the code in favor of the more flexible and full-featured OpenStack Networking.novadeprecation of" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:514(para) -msgid "An attempt was made to deprecate nova-network during the Havana release, which was aborted due to the lack of equivalent functionality (such as the FlatDHCP multi-host high-availability mode mentioned in this guide), lack of a migration path between versions, insufficient testing, and the simplicity nova-network offers for the more straightforward use cases it traditionally supported. Though significant effort has been made to address these concerns, nova-network was not deprecated in the Juno release. In addition, to a limited degree, patches to nova-network have again begun to be accepted, such as adding a per-network settings feature and SR-IOV support in Juno.Junonova network deprecation" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:531(para) -msgid "This leaves you with an important point of decision when designing your cloud. OpenStack Networking is robust enough to use with a small number of limitations (performance issues in some scenarios, only basic high availability of layer 3 systems) and provides many more features than nova-network. However, if you do not have the more complex use cases that can benefit from fuller software-defined networking capabilities, or are uncomfortable with the new concepts introduced, nova-network may continue to be a viable option for the next 12 months." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:541(para) -msgid "Similarly, if you have an existing cloud and are looking to upgrade from nova-network to OpenStack Networking, you should have the option to delay the upgrade for this period of time. However, each release of OpenStack brings significant new innovation, and regardless of your use of networking methodology, it is likely best to begin planning for an upgrade within a reasonable timeframe of each release." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:549(para) -msgid "As mentioned, there's currently no way to cleanly migrate from nova-network to neutron. We recommend that you keep in mind what that migration process might involve, in preparation for when a proper migration path is released." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:557(title) -msgid "Distributed Virtual Router" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:559(para) -msgid "One of the long-time complaints surrounding OpenStack Networking was the lack of high availability for the layer 3 components. The Juno release introduced Distributed Virtual Router (DVR), which aims to solve this problem." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:563(para) -msgid "Early indications are that it does do this well for a base set of scenarios, such as using the ML2 plug-in with Open vSwitch, one flat external network and VXLAN tenant networks. However, it does appear that there are problems with the use of VLANs, IPv6, Floating IPs, high north-south traffic scenarios and large numbers of compute nodes. It is expected that these will improve significantly with the next release, but bug reports on specific issues are highly desirable." 
-msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:574(title) -msgid "Replacement of Open vSwitch Plug-in with Modular Layer 2" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:577(para) -msgid "The Modular Layer 2 plug-in is a framework allowing OpenStack Networking to simultaneously utilize the variety of layer-2 networking technologies found in complex real-world data centers. It currently works with the existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents and is intended to replace and deprecate the monolithic plug-ins associated with those L2 agents." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:586(title) -msgid "New API Versions" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:588(para) -msgid "The third version of the Compute API was broadly discussed and worked on during the Havana and Icehouse release cycles. Current discussions indicate that the V2 API will remain for many releases, and the next iteration of the API will be denoted v2.1 and have similar properties to the existing v2.0, rather than an entirely new v3 API. This is a great time to evaluate all API and provide comments while the next generation APIs are being defined. A new working group was formed specifically to improve OpenStack APIs and create design guidelines, which you are welcome to join. KiloCompute V3 API" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:605(title) -msgid "OpenStack on OpenStack (TripleO)" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:607(para) -msgid "This project continues to improve and you may consider using it for greenfield deployments, though according to the latest user survey results it remains to see widespread uptake." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:614(title) -msgid "Data processing service for OpenStack (sahara)" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:616(para) -msgid "A much-requested answer to big data problems, a dedicated team has been making solid progress on a Hadoop-as-a-Service project." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:621(title) -msgid "Bare metal Deployment (ironic)" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:623(para) -msgid "The bare-metal deployment has been widely lauded, and development continues. The Juno release brought the OpenStack Bare metal drive into the Compute project, and it was aimed to deprecate the existing bare-metal driver in Kilo. If you are a current user of the bare metal driver, a particular blueprint to follow is Deprecate the bare metal driver.KiloCompute bare-metal deployment" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:637(title) -msgid "Database as a Service (trove)" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:639(para) -msgid "The OpenStack community has had a database-as-a-service tool in development for some time, and we saw the first integrated release of it in Icehouse. From its release it was able to deploy database servers out of the box in a highly available way, initially supporting only MySQL. Juno introduced support for Mongo (including clustering), PostgreSQL and Couchbase, in addition to replication functionality for MySQL. In Kilo, more advanced clustering capability was delivered, in addition to better integration with other OpenStack components such as Networking. 
Junodatabase-as-a-service tool" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:655(title) -msgid "Message Service (zaqar)" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:657(para) -msgid "A service to provide queues of messages and notifications was released." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:661(title) -msgid "DNS service (designate)" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:663(para) -msgid "A long-requested service that provides the ability to manipulate DNS entries associated with OpenStack resources has gathered a following. The designate project was also released." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:668(title) -msgid "Scheduler Improvements" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:670(para) -msgid "Both Compute and Block Storage rely on schedulers to determine where to place virtual machines or volumes. In Havana, the Compute scheduler underwent significant improvement, while in Icehouse it was the scheduler in Block Storage that received a boost. Further down the track, an effort started this cycle to create a holistic scheduler covering both may come to fruition. Some of the work that was done in Kilo can be found under the Gantt project.Kiloscheduler improvements" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:684(title) -msgid "Block Storage Improvements" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:686(para) -msgid "Block Storage is considered a stable project, with wide uptake and a long track record of quality drivers. The team has discussed many areas of work at the summits, including better error reporting, automated discovery, and thin provisioning features." -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:694(title) -msgid "Toward a Python SDK" -msgstr "" - -#: ./doc/openstack-ops/app_roadmaps.xml:696(para) -msgid "Though many successfully use the various python-*client code as an effective SDK for interacting with OpenStack, consistency between the projects and the availability of documentation wax and wane. To combat this, an effort to improve the experience has started. Cross-project development efforts in OpenStack have a checkered history, such as the unified client project having several false starts. However, the early signs for the SDK project are promising, and we expect to see results during the Juno cycle." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/ch_ops_projects_users.xml:110(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0901.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/ch_ops_projects_users.xml:860(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0902.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:12(title) -msgid "Managing Projects and Users" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:14(para) -msgid "An OpenStack cloud does not have much value without users. This chapter covers topics that relate to managing users, projects, and quotas. 
This chapter describes users and projects as defined by version 2 of the OpenStack Identity API."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:20(para)
-msgid "While version 3 of the Identity API is available, the client tools do not yet implement those calls, and most OpenStack clouds are still implementing Identity API v2.0."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:30(title)
-msgid "Projects or Tenants?"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:32(para)
-msgid "In OpenStack user interfaces and documentation, a group of users is referred to as a project or tenant. These terms are interchangeable."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:49(para)
-msgid "The initial implementation of OpenStack Compute had its own authentication system and used the term project. When authentication moved into the OpenStack Identity (keystone) project, it used the term tenant to refer to a group of users. Because of this legacy, some of the OpenStack tools refer to projects and some refer to tenants."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:58(para)
-msgid "This guide uses the term project, unless an example shows interaction with a tool that uses the term tenant."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:65(title)
-msgid "Managing Projects"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:67(para)
-msgid "Users must be associated with at least one project, though they may belong to many. Therefore, you should add at least one project before adding users."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:76(title)
-msgid "Adding Projects"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:78(para)
-msgid "To create a project through the OpenStack dashboard:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:82(para)
-msgid "Log in as an administrative user."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:86(para)
-msgid "Select the Identity tab in the left navigation bar."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:91(para)
-msgid "Under the Identity tab, click Projects."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:96(para)
-msgid "Click the Create Project button."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:100(para)
-msgid "You are prompted for a project name and an optional, but recommended, description. Select the checkbox at the bottom of the form to enable this project. By default, it is enabled, as shown in ."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:106(title)
-msgid "Dashboard's Create Project form"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:115(para)
-msgid "It is also possible to add project members and adjust the project quotas. We'll discuss those actions later, but in practice, it can be quite convenient to deal with all these operations at one time."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:119(para)
-msgid "To add a project through the command line, you must use the OpenStack command line client."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:124(para)
-msgid "This command creates a project named \"demo.\" Optionally, you can add a description string by appending --description tenant-description, which can be very useful.
You can also create the project in a disabled state by appending --disable to the command, as sketched below. By default, projects are created in an enabled state."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:134(title)
-msgid "Quotas"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:136(para)
-msgid "To prevent system capacities from being exhausted without notification, you can set up quotas. Quotas are operational limits. For example, the number of gigabytes allowed per tenant can be controlled to ensure that a single tenant cannot consume all of the disk space. Quotas are currently enforced at the tenant (or project) level, rather than the user level."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:151(para)
-msgid "Because without sensible quotas a single tenant could use up all the available resources, default quotas are shipped with OpenStack. You should pay attention to which quota settings make sense for your hardware capabilities."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:157(para)
-msgid "Using the command-line interface, you can manage quotas for the OpenStack Compute service and the Block Storage service."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:160(para)
-msgid "Typically, default values are changed because a tenant requires more than the OpenStack default of 10 volumes per tenant, or more than the OpenStack default of 1 TB of disk space on a compute node."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:165(para)
-msgid "To view all tenants, run:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:179(title)
-msgid "Set Image Quotas"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:181(para)
-msgid "You can restrict a project's image storage by total number of bytes. Currently, this quota is applied cloud-wide, so if you were to set an Image quota limit of 5 GB, then all projects in your cloud will be able to store only 5 GB of images and snapshots."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:191(para)
-msgid "To enable this feature, edit the /etc/glance/glance-api.conf file, and under the [DEFAULT] section, add:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:197(para)
-msgid "For example, to restrict a project's image storage to 5 GB, do this:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:203(para)
-msgid "There is a configuration option in glance-api.conf that limits the number of members allowed per image, called image_member_quota, set to 128 by default. That setting is a different quota from the storage quota."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:214(title)
-msgid "Set Compute Service Quotas"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:216(para)
-msgid "As an administrative user, you can update the Compute service quotas for an existing tenant, as well as update the quota defaults for a new tenant. See ."
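As a hedged illustration of the project-creation command and the Image service quota described above: the project names match the prose, while the option name user_storage_quota and the byte value for 5 GB are assumptions for the Image service of this era (Identity v2-only deployments used the equivalent keystone tenant-create command).

    $ openstack project create demo --description "Demo project"
    $ openstack project create demo2 --disable    # create a project in a disabled state

    # /etc/glance/glance-api.conf
    [DEFAULT]
    # Cloud-wide limit on image and snapshot storage, in bytes (5 GB)
    user_storage_quota = 5368709120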
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:225(caption) -msgid "Compute quota descriptions" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:235(th) -msgid "Quota" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:237(th) ./doc/openstack-ops/ch_ops_projects_users.xml:607(th) ./doc/openstack-ops/ch_ops_user_facing.xml:411(emphasis) ./doc/openstack-ops/section_arch_example-neutron.xml:260(th) -msgid "Description" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:239(th) ./doc/openstack-ops/ch_ops_projects_users.xml:605(th) -msgid "Property name" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:245(para) -msgid "Fixed IPs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:247(para) -msgid "Number of fixed IP addresses allowed per tenant. This number must be equal to or greater than the number of allowed instances." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:251(systemitem) -msgid "fixed-ips" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:255(para) ./doc/openstack-ops/ch_ops_user_facing.xml:2187(title) -msgid "Floating IPs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:257(para) -msgid "Number of floating IP addresses allowed per tenant." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:260(systemitem) -msgid "floating-ips" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:264(para) -msgid "Injected file content bytes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:266(para) -msgid "Number of content bytes allowed per injected file." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:269(systemitem) -msgid "injected-file-content-bytes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:274(para) -msgid "Injected file path bytes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:276(para) -msgid "Number of bytes allowed per injected file path." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:279(systemitem) -msgid "injected-file-path-bytes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:284(para) -msgid "Injected files" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:286(para) -msgid "Number of injected files allowed per tenant." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:288(systemitem) -msgid "injected-files" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:292(para) ./doc/openstack-ops/ch_ops_user_facing.xml:1899(title) ./doc/openstack-ops/ch_ops_maintenance.xml:267(title) -msgid "Instances" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:294(para) -msgid "Number of instances allowed per tenant." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:296(systemitem) ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:821(literal) -msgid "instances" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:300(para) -msgid "Key pairs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:302(para) -msgid "Number of key pairs allowed per user." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:304(systemitem) -msgid "key-pairs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:308(para) -msgid "Metadata items" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:310(para) -msgid "Number of metadata items allowed per instance." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:313(systemitem) -msgid "metadata-items" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:317(para) -msgid "RAM" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:319(para) -msgid "Megabytes of instance RAM allowed per tenant." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:322(systemitem) -msgid "ram" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:326(para) -msgid "Security group rules" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:328(para) -msgid "Number of rules per security group." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:330(systemitem) -msgid "security-group-rules" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:335(para) -msgid "Security groups" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:337(para) -msgid "Number of security groups per tenant." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:339(systemitem) -msgid "security-groups" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:343(para) ./doc/openstack-ops/ch_ops_user_facing.xml:462(para) -msgid "VCPUs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:345(para) -msgid "Number of instance cores allowed per tenant." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:347(systemitem) -msgid "cores" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:353(title) -msgid "View and update compute quotas for a tenant (project)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:355(para) -msgid "As an administrative user, you can use the nova quota-* commands, which are provided by the python-novaclient package, to view and update tenant quotas." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:361(title) -msgid "To view and update default quota values" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:364(para) ./doc/openstack-ops/ch_ops_projects_users.xml:650(para) -msgid "List all default quotas for all tenants, as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:368(para) ./doc/openstack-ops/ch_ops_projects_users.xml:394(para) ./doc/openstack-ops/ch_ops_projects_users.xml:417(para) ./doc/openstack-ops/ch_ops_projects_users.xml:453(para) ./doc/openstack-ops/ch_ops_projects_users.xml:654(para) ./doc/openstack-ops/ch_ops_projects_users.xml:681(para) ./doc/openstack-ops/ch_ops_user_facing.xml:252(para) -msgid "For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:390(para) -msgid "Update a default value for a new tenant, as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:392(replaceable) ./doc/openstack-ops/ch_ops_customize.xml:803(code) -msgid "key" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:392(replaceable) -msgid "value" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:403(title) -msgid "To view quota values for a tenant (project)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:406(para) ./doc/openstack-ops/ch_ops_projects_users.xml:698(para) -msgid "Place the tenant ID in a variable:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:408(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:445(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:679(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:700(replaceable) -msgid "tenantName" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:412(para) -msgid "List the currently set quota values for a tenant, as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:440(title) -msgid "To update quota values for a tenant (project)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:443(para) -msgid "Obtain the tenant ID, as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:449(para) ./doc/openstack-ops/ch_ops_projects_users.xml:704(para) -msgid "Update a particular quota value, as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:451(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:706(replaceable) -msgid "quotaName" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:451(replaceable) -msgid "quotaValue" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:451(replaceable) ./doc/openstack-ops/ch_ops_projects_users.xml:706(replaceable) -msgid "tenantID" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:475(para) -msgid "To view a list of options for the quota-update command, run:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:486(title) -msgid "Set Object Storage Quotas" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:488(para) -msgid "There are currently two categories of quotas for Object Storage:account quotascontainersquota settingObject Storagequota setting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:503(term) -msgid "Container quotas" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:506(para) -msgid "Limit the total size (in bytes) or number of objects that can be stored in a single container." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:512(term) -msgid "Account quotas" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:515(para) -msgid "Limit the total size (in bytes) that a user has available in the Object Storage service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:522(para) -msgid "To take advantage of either container quotas or account quotas, your Object Storage proxy server must have container_quotas or account_quotas (or both) added to the [pipeline:main] pipeline. Each quota type also requires its own section in the proxy-server.conf file:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:539(para) -msgid "To view and update Object Storage quotas, use the swift command provided by the python-swiftclient package. Any user included in the project can view the quotas placed on their project. To update Object Storage quotas on a project, you must have the role of ResellerAdmin in the project that the quota is being applied to." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:548(para) -msgid "To view account quotas placed on a project:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:561(para) -msgid "To apply or update account quotas on a project:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:566(para) -msgid "For example, to place a 5 GB quota on an account:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:571(para) -msgid "To verify the quota, run the swift stat command again:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:587(title) -msgid "Set Block Storage Quotas" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:589(para) -msgid "As an administrative user, you can update the Block Storage service quotas for a tenant, as well as update the quota defaults for a new tenant. See .Block Storage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:597(caption) -msgid "Block Storage quota descriptions" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:613(para) -msgid "gigabytes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:615(para) -msgid "Number of volume gigabytes allowed per tenant" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:620(para) -msgid "snapshots" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:622(para) -msgid "Number of Block Storage snapshots allowed per tenant." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:627(para) -msgid "volumes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:629(para) -msgid "Number of Block Storage volumes allowed per tenant" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:638(title) -msgid "View and update Block Storage quotas for a tenant (project)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:641(para) -msgid "As an administrative user, you can use the cinder quota-* commands, which are provided by the python-cinderclient package, to view and update tenant quotas." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:647(title) -msgid "To view and update default Block Storage quota values" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:667(para) -msgid "To update a default value for a new tenant, update the property in the /etc/cinder/cinder.conf file." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:674(title) -msgid "To view Block Storage quotas for a tenant (project)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:677(para) -msgid "View quotas for the tenant, as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:695(title) -msgid "To update Block Storage quotas for a tenant (project)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:706(replaceable) -msgid "NewValue" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:708(para) -msgid "For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:727(title) -msgid "User Management" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:729(para) -msgid "The command-line tools for managing users are inconvenient to use directly. They require issuing multiple commands to complete a single task, and they use UUIDs rather than symbolic names for many items. In practice, humans typically do not use these tools directly. Fortunately, the OpenStack dashboard provides a reasonable interface to this. In addition, many sites write custom tools for local needs to enforce local policies and provide levels of self-service to users that aren't currently available with packaged tools.user managementcreating new users" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:744(title) -msgid "Creating New Users" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:746(para) -msgid "To create a user, you need the following information:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:750(para) -msgid "Username" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:754(para) -msgid "Email address" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:758(para) -msgid "Password" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:762(para) -msgid "Primary project" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:766(para) -msgid "Role" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:770(para) -msgid "Enabled" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:774(para) -msgid "Username and email address are self-explanatory, though your site may have local conventions you should observe. The primary project is simply the first project the user is associated with and must exist prior to creating the user. Role is almost always going to be \"member.\" Out of the box, OpenStack comes with two roles defined:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:785(term) -msgid "member" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:788(para) -msgid "A typical user" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:793(term) -msgid "admin" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:796(para) -msgid "An administrative super user, which has full permissions across all projects and should be used with great care" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:802(para) -msgid "It is possible to define other roles, but doing so is uncommon." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:805(para) -msgid "Once you've gathered this information, creating the user in the dashboard is just another web form similar to what we've seen before and can be found by clicking the Users link in the Identity navigation bar and then clicking the Create User button at the top right." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:810(para) -msgid "Modifying users is also done from this Users page. If you have a large number of users, this page can get quite crowded. The Filter search box at the top of the page can be used to limit the users listing. A form very similar to the user creation dialog can be pulled up by selecting Edit from the actions dropdown menu at the end of the line for the user you are modifying." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:819(title) -msgid "Associating Users with Projects" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:821(para) -msgid "Many sites run with users being associated with only one project. This is a more conservative and simpler choice both for administration and for users. Administratively, if a user reports a problem with an instance or quota, it is obvious which project this relates to. Users needn't worry about what project they are acting in if they are only in one project. However, note that, by default, any user can affect the resources of any other user within their project. It is also possible to associate users with multiple projects if that makes sense for your organization.Project Members tabuser managementassociating users with projects" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:837(para) -msgid "Associating existing users with an additional project or removing them from an older project is done from the Projects page of the dashboard by selecting Modify Users from the Actions column, as shown in ." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:842(para) -msgid "From this view, you can do a number of useful things, as well as a few dangerous ones." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:845(para) -msgid "The first column of this form, named All Users, includes a list of all the users in your cloud who are not already associated with this project. The second column shows all the users who are. These lists can be quite long, but they can be limited by typing a substring of the username you are looking for in the filter field at the top of the column." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:852(para) -msgid "From here, click the + icon to add users to the project. Click the - to remove them." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:856(title) -msgid "Edit Project Members tab" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:865(para) -msgid "The dangerous possibility comes with the ability to change member roles. This is the dropdown list below the username in the Project Members list. In virtually all cases, this value should be set to Member. This example purposefully shows an administrative user where this value is admin." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:872(para) -msgid "The admin is global, not per project, so granting a user the admin role in any project gives the user administrative rights across the whole cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_projects_users.xml:877(para) -msgid "Typical use is to only create administrative users in a single project, by convention the admin project, which is created by default during cloud setup. 
If your administrative users also use the cloud to launch and manage instances, it is strongly recommended that you use separate user accounts for administrative access and normal operations and that they be in distinct projects."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:887(title)
-msgid "Customizing Authorization"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:889(para)
-msgid "The default authorization settings allow only administrative users to create resources on behalf of a different project. OpenStack handles two kinds of authorization policies:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:900(term)
-msgid "Operation based"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:903(para)
-msgid "Policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:910(term)
-msgid "Resource based"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:913(para)
-msgid "Whether access to a specific resource is granted is determined by the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in an OpenStack service vary from deployment to deployment."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:922(para)
-msgid "The policy engine reads entries from the policy.json file. The actual location of this file might vary from distribution to distribution: for nova, it is typically in /etc/nova/policy.json. You can update entries while the system is running, and you do not have to restart services. Currently, the only way to update such policies is to edit the policy file."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:929(para)
-msgid "The OpenStack service's policy engine matches a policy directly against the requested operation. A rule is an element of a policy that is evaluated. For instance, in a compute:create: [[\"rule:admin_or_owner\"]] statement, the policy is compute:create, and the rule is admin_or_owner."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:935(para)
-msgid "Policies are triggered by an OpenStack policy engine whenever one of them matches an OpenStack API operation or a specific attribute being used in a given operation. For instance, the engine tests the compute:create policy every time a user sends a POST /v2/{tenant_id}/servers request to the OpenStack Compute API server. Policies can also be related to specific API extensions. For instance, if a user needs an extension like compute_extension:rescue, the attributes defined by the provider extensions trigger the rule test for that operation."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:945(para)
-msgid "An authorization policy can be composed of one or more rules. If more than one rule is specified, the policy evaluates successfully if any of the rules evaluates successfully; if an API operation matches multiple policies, then all the policies must evaluate successfully. Authorization rules are also recursive: once a rule is matched, it can resolve to another rule, until a terminal rule is reached.
These are the rules defined:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:955(term)
-msgid "Role-based rules"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:958(para)
-msgid "Evaluate successfully if the user submitting the request has the specified role. For instance, \"role:admin\" is successful if the user submitting the request is an administrator."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:966(term)
-msgid "Field-based rules"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:969(para)
-msgid "Evaluate successfully if a field of the resource specified in the current request matches a specific value. For instance, \"field:networks:shared=True\" is successful if the attribute shared of the network resource is set to true."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:978(term)
-msgid "Generic rules"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:981(para)
-msgid "Compare an attribute in the resource with an attribute extracted from the user's security credentials and evaluate successfully if the comparison is successful. For instance, \"tenant_id:%(tenant_id)s\" is successful if the tenant identifier in the resource is equal to the tenant identifier of the user submitting the request."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:991(para)
-msgid "Here are snippets of the default nova policy.json file:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1021(para)
-msgid "Shows a rule that evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (tenant identifier is equal)."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1027(para)
-msgid "Shows the default policy, which is always evaluated if an API operation does not match any of the policies in policy.json."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1033(para)
-msgid "Shows a policy restricting the ability to manipulate flavors to administrators using the Admin API only."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1041(para)
-msgid "Some operations should be restricted to administrators only. Therefore, as a further example, let us consider how this sample policy file could be modified in a scenario where we enable users to create their own flavors (a sketch follows below):"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1050(title)
-msgid "Users Who Disrupt Other Users"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1052(para)
-msgid "Users on your cloud can disrupt other users, sometimes intentionally and maliciously and other times by accident. Understanding the situation allows you to make a better decision on how to handle the disruption."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1061(para)
-msgid "For example, a group of users have instances that are utilizing a large amount of compute resources for very compute-intensive tasks. This is driving the load up on compute nodes and affecting other users. In this situation, review your user use cases. You may find that high compute scenarios are common, and should then plan for proper segregation in your cloud, such as host aggregation or regions."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1068(para)
-msgid "Another example is a user consuming a very large amount of bandwidth.
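A hedged reconstruction of the policy.json snippets under discussion, using the old list-based rule syntax quoted above; the exact contents of the default file vary by release.

    {
        "context_is_admin": [["role:admin"]],
        "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]],

        "default": [["rule:admin_or_owner"]],

        "compute:create": [["rule:admin_or_owner"]],

        "compute_extension:flavormanage": [["rule:admin_api"]]
    }

To let ordinary users create their own flavors, the flavormanage entry could be relaxed to [] (an empty rule list always evaluates successfully); this is an assumption consistent with the scenario described, not the guide's verbatim listing.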
Again, the key is to understand what the user is doing. If she naturally needs a high amount of bandwidth, you might have to limit her transmission rate so as not to affect other users, or move her to an area with more bandwidth available. On the other hand, maybe her instance has been hacked and is part of a botnet launching DDOS attacks. Resolution of this issue is the same as if any other server on your network had been hacked. Contact the user and give her time to respond. If she doesn't respond, shut down the instance."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1082(para)
-msgid "A final example is a user who hammers cloud resources repeatedly. Contact the user and learn what he is trying to do. Maybe he doesn't understand that what he's doing is inappropriate, or maybe there is an issue with the resource he is trying to access that is causing his requests to queue or lag."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1091(title) ./doc/openstack-ops/ch_ops_log_monitor.xml:1045(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:281(title) ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1289(title) ./doc/openstack-ops/ch_ops_lay_of_land.xml:789(title)
-msgid "Summary"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_projects_users.xml:1093(para)
-msgid "One key element of systems administration that is often overlooked is that end users are the reason systems administrators exist. Don't go the BOFH route and terminate every user who causes an alert to go off. Work with users to understand what they're trying to accomplish and see how your environment can better assist them in achieving their goals. Meet your users' needs by organizing your users into projects, applying policies, managing quotas, and working with them."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:12(title)
-msgid "Advanced Configuration"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:14(para)
-msgid "OpenStack is intended to work well across a variety of installation flavors, from very small private clouds to large public clouds. To achieve this, the developers add configuration options to their code that allow the behavior of the various components to be tweaked depending on your needs. Unfortunately, it is not possible to cover all possible deployments with the default configuration values."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:29(para)
-msgid "At the time of writing, OpenStack has more than 3,000 configuration options. You can see them documented at the OpenStack configuration reference guide. This chapter cannot hope to document all of these, but we do try to introduce the important concepts so that you know where to go digging for more information."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:37(title)
-msgid "Differences Between Various Drivers"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:39(para)
-msgid "Many OpenStack projects implement a driver layer, and each of these drivers will implement its own configuration options. For example, in OpenStack Compute (nova), there are various hypervisor drivers implemented—libvirt, xenserver, hyper-v, and vmware, for example.
Not all of these hypervisor drivers have the same features, and each has different tuning requirements."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:55(para)
-msgid "The currently implemented hypervisors are listed on the OpenStack documentation website. You can see a matrix of the various features in OpenStack Compute (nova) hypervisor drivers on the OpenStack wiki at the Hypervisor support matrix page."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:63(para)
-msgid "The point we are trying to make here is that just because an option exists doesn't mean that option is relevant to your driver choices. Normally, the documentation notes which drivers the configuration applies to."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:70(title)
-msgid "Implementing Periodic Tasks"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:72(para)
-msgid "Another common concept across various OpenStack projects is that of periodic tasks. Periodic tasks are much like cron jobs on traditional Unix systems, but they are run inside an OpenStack process. For example, when OpenStack Compute (nova) needs to work out what images it can remove from its local cache, it runs a periodic task to do this."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:85(para)
-msgid "Periodic tasks are important to understand because of limitations in the threading model that OpenStack uses. OpenStack uses cooperative threading in Python, which means that if something long and complicated is running, it will block other tasks inside that process from running unless it voluntarily yields execution to another cooperative thread."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:94(para)
-msgid "A tangible example of this is the nova-compute process. In order to manage the image cache with libvirt, nova-compute has a periodic process that scans the contents of the image cache. Part of this scan is calculating a checksum for each of the images and making sure that checksum matches what nova-compute expects it to be. However, images can be very large, and these checksums can take a long time to generate. At one point, before it was reported as a bug and fixed, nova-compute would block on this task and stop responding to RPC requests. This was visible to users as failure of operations such as spawning or deleting instances."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:106(para)
-msgid "The takeaway from this is that if you observe an OpenStack process that appears to \"stop\" for a while and then continue to process normally, you should check that periodic tasks aren't the problem. One way to do this is to disable the periodic tasks by setting their interval to zero. Additionally, you can configure how often these periodic tasks run—in some cases, it might make sense to run them at a different frequency from the default."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:114(para)
-msgid "The frequency is defined separately for each periodic task. Therefore, to disable every periodic task in OpenStack Compute (nova), you would need to set a number of configuration options to zero.
The current list of configuration options you would need to set to zero is:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:121(literal)
-msgid "bandwidth_poll_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:125(literal)
-msgid "sync_power_state_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:129(literal)
-msgid "heal_instance_info_cache_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:133(literal)
-msgid "host_state_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:137(literal)
-msgid "image_cache_manager_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:141(literal)
-msgid "reclaim_instance_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:145(literal)
-msgid "volume_usage_poll_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:149(literal)
-msgid "shelved_poll_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:153(literal)
-msgid "shelved_offload_time"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:157(literal)
-msgid "instance_delete_interval"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:161(para)
-msgid "To set a configuration option to zero, include a line such as image_cache_manager_interval=0 in your nova.conf file, as sketched below."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:165(para)
-msgid "This list will change between releases, so please refer to your configuration guide for up-to-date information."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:170(title)
-msgid "Specific Configuration Topics"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:172(para)
-msgid "This section covers specific examples of configuration options you might consider tuning. It is by no means an exhaustive list."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:176(title)
-msgid "Security Configuration for Compute, Networking, and Storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:179(para)
-msgid "The OpenStack Security Guide provides a deep dive into securing an OpenStack cloud, including SSL/TLS, key management, PKI and certificate management, data transport and privacy concerns, and compliance."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:195(title)
-msgid "High Availability"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:197(para)
-msgid "The OpenStack High Availability Guide offers suggestions for elimination of a single point of failure that could cause system downtime.
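A minimal nova.conf sketch of the preceding advice; the option names are as listed above, with zero disabling the task as the text describes.

    # /etc/nova/nova.conf
    [DEFAULT]
    # Disable the image cache manager periodic task
    image_cache_manager_interval = 0
    # Any of the other intervals listed above can be disabled the same way:
    # bandwidth_poll_interval = 0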
While it is not a completely prescriptive document, it offers methods and techniques for avoiding downtime and data loss."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:212(title)
-msgid "Enabling IPv6 Support"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:214(para)
-msgid "You can follow the progress being made on IPv6 support by watching the neutron IPv6 Subteam at work."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:228(para)
-msgid "By modifying your configuration setup, you can set up IPv6 when using nova-network for networking, and a tested setup is documented for FlatDHCP and a multi-host configuration. The key is to make nova-network think a radvd command ran successfully. The entire configuration is detailed in a Cybera blog post, “An IPv6 enabled cloud”."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:238(title)
-msgid "Geographical Considerations for Object Storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_advanced_configuration.xml:240(para)
-msgid "Support for global clustering of object storage servers is available for all supported releases. You would implement these global clusters to ensure replication across geographic areas in case of a natural disaster and also to ensure that users can write or access their objects more quickly based on the closest data center. You configure a default region with one zone for each cluster, but be sure your network (WAN) can handle the additional request and response load between zones as you add more zones and build a ring that handles more zones. Refer to Geographically Distributed Clusters in the documentation for additional information."
-msgstr ""
-
-#. When image changes, this message will be marked fuzzy or untranslated for you.
-#. It doesn't matter what you translate it to: it's not used at all.
-#: ./doc/openstack-ops/ch_arch_storage.xml:593(None) ./doc/openstack-ops/ch_arch_storage.xml:610(None) ./doc/openstack-ops/ch_arch_storage.xml:623(None) ./doc/openstack-ops/ch_arch_storage.xml:630(None) ./doc/openstack-ops/ch_arch_storage.xml:643(None) ./doc/openstack-ops/ch_arch_storage.xml:650(None) ./doc/openstack-ops/ch_arch_storage.xml:657(None) ./doc/openstack-ops/ch_arch_storage.xml:670(None) ./doc/openstack-ops/ch_arch_storage.xml:677(None) ./doc/openstack-ops/ch_arch_storage.xml:690(None) ./doc/openstack-ops/ch_arch_storage.xml:701(None) ./doc/openstack-ops/ch_arch_storage.xml:707(None)
-msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/Check_mark_23x20_02.png'; md5=THIS FILE DOESN'T EXIST"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:12(title)
-msgid "Storage Decisions"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:14(para)
-msgid "Storage is found in many parts of the OpenStack stack, and the differing types can cause confusion to even experienced cloud engineers. This section focuses on persistent storage options you can configure with your cloud. It's important to understand the distinction between ephemeral storage and persistent storage."
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:22(title) -msgid "Ephemeral Storage" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:24(para) -msgid "If you deploy only the OpenStack Compute Service (nova), your users do not have access to any form of persistent storage by default. The disks associated with VMs are \"ephemeral,\" meaning that (from the user's point of view) they effectively disappear when a virtual machine is terminated.storageephemeral" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:36(title) -msgid "Persistent Storage" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:38(para) -msgid "Persistent storage means that the storage resource outlives any other resource and is always available, regardless of the state of a running instance." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:42(para) -msgid "Today, OpenStack clouds explicitly support three types of persistent storage: object storage, block storage, and file system storage. swiftObject Storage APIpersistent storageobjectspersistent storage ofObject StorageObject Storage APIstorageobject storageshared file system storageshared file systems service" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:75(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:211(title) -msgid "Object Storage" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:77(para) -msgid "With object storage, users access binary objects through a REST API. You may be familiar with Amazon S3, which is a well-known example of an object storage system. Object storage is implemented in OpenStack by the OpenStack Object Storage (swift) project. If your intended users need to archive or manage large datasets, you want to provide them with object storage. In addition, OpenStack can store your virtual machine (VM) images inside of an object storage system, as an alternative to storing the images on a file system.binarybinary objects" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:91(para) -msgid "OpenStack Object Storage provides a highly scalable, highly available storage solution by relaxing some of the constraints of traditional file systems. In designing and procuring for such a cluster, it is important to understand some key concepts about its operation. Essentially, this type of storage is built on the idea that all storage hardware fails, at every level, at some point. Infrequently encountered failures that would hamstring other storage systems, such as issues taking down RAID cards or entire servers, are handled gracefully with OpenStack Object Storage.scalingObject Storage and" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:105(para) -msgid "A good document describing the Object Storage architecture is found within the developer documentation—read this first. Once you understand the architecture, you should know what a proxy server does and how zones work. However, some important points are often missed at first glance." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:112(para) -msgid "When designing your cluster, you must consider durability and availability. Understand that the predominant source of these is the spread and placement of your data, rather than the reliability of the hardware. Consider the default value of the number of replicas, which is three. 
This means that before an object is marked as having been written, at least two copies exist—in case a single server fails to write, the third copy may or may not yet exist when the write operation initially returns. Altering this number increases the robustness of your data, but reduces the amount of storage you have available. Next, look at the placement of your servers. Consider spreading them widely throughout your data center's network and power-failure zones. Is a zone a rack, a server, or a disk?"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:139(para)
-msgid "Among object, container, and account servers"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:145(para)
-msgid "Between those servers and the proxies"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:149(para)
-msgid "Between the proxies and your users"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:125(para)
-msgid "Object Storage's network patterns might seem unfamiliar at first. Consider these main traffic flows:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:154(para)
-msgid "Object Storage is very \"chatty\" among servers hosting data—even a small cluster does megabytes/second of traffic, which is predominantly, “Do you have the object?”/“Yes I have the object!” Of course, if the answer to the aforementioned question is negative or the request times out, replication of the object begins."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:160(para)
-msgid "Consider the scenario where an entire server fails and 24 TB of data needs to be transferred \"immediately\" to remain at three copies—this can put significant load on the network."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:166(para)
-msgid "Another fact that's often forgotten is that when a new file is being uploaded, the proxy server must write out as many streams as there are replicas—giving a multiple of network traffic. For a three-replica cluster, 10 Gbps in means 30 Gbps out. Combining this with the previous high bandwidth demands of replication is what results in the recommendation that your private network be of significantly higher bandwidth than your public need be. Oh, and OpenStack Object Storage communicates internally with unencrypted, unauthenticated rsync for performance—you do want the private network to be private."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:182(para)
-msgid "The remaining point on bandwidth is the public-facing portion. The swift-proxy service is stateless, which means that you can easily add more and use HTTP load-balancing methods to share bandwidth and availability between them."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:188(para)
-msgid "More proxies means more bandwidth, if your storage can keep up."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:193(title) ./doc/openstack-ops/ch_ops_backup_recovery.xml:196(title) ./doc/openstack-ops/ch_ops_user_facing.xml:958(title)
-msgid "Block Storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:195(para)
-msgid "Block storage (sometimes referred to as volume storage) provides users with access to block-storage devices.
Users interact with block storage by attaching volumes to their running VM instances."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:208(para)
-msgid "These volumes are persistent: they can be detached from one instance and re-attached to another, and the data remains intact. Block storage is implemented in OpenStack by the OpenStack Block Storage (cinder) project, which supports multiple back ends in the form of drivers. Your choice of a storage back end must be supported by a Block Storage driver."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:215(para)
-msgid "Most block storage drivers allow the instance to have direct access to the underlying storage hardware's block device. This helps increase the overall read/write IO. However, support for utilizing files as volumes is also well established, with full support for NFS, GlusterFS and others."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:221(para)
-msgid "These drivers work a little differently than a traditional \"block\" storage driver. On an NFS or GlusterFS file system, a single file is created and then mapped as a \"virtual\" volume into the instance. This mapping/translation is similar to how OpenStack utilizes QEMU's file-based virtual machines stored in /var/lib/nova/instances."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:230(title) ./doc/openstack-ops/ch_ops_user_facing.xml:1060(title)
-msgid "Shared File Systems Service"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:242(para)
-msgid "Create a share specifying its size, shared file system protocol, and visibility level."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:247(para)
-msgid "Create a share on either a share server or standalone, depending on the selected back-end mode, with or without using a share network."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:254(para)
-msgid "Specify access rules and security services for existing shares."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:258(para)
-msgid "Combine several shares in groups to preserve data consistency inside the groups for safe group operations."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:262(para)
-msgid "Create a snapshot of a selected share or a share group for storing the existing shares consistently or creating new shares from that snapshot in a consistent way."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:267(para)
-msgid "Create a share from a snapshot."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:270(para)
-msgid "Set rate limits and quotas for specific shares and snapshots."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:273(para)
-msgid "View usage of share resources."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:276(para)
-msgid "Remove shares."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:283(para)
-msgid "Mounted to any number of client machines."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:286(para)
-msgid "Detached from one instance and attached to another without data loss. During this process the data are safe unless the Shared File Systems service itself is changed or removed."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:231(para)
-msgid "The Shared File Systems service provides a set of services for management of Shared File Systems in a multi-tenant cloud environment.
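As a hedged sketch of the share operations listed above using the manila CLI; the share names, sizes, and the CIDR are assumptions, and the snapshot ID placeholder must be filled in from your own deployment.

    $ manila create NFS 1 --name myshare                                # 1 GB NFS share
    $ manila access-allow myshare ip 10.254.0.0/24                      # add an access rule
    $ manila snapshot-create myshare --name mysnapshot                  # snapshot an existing share
    $ manila create NFS 1 --snapshot-id <snapshot-id> --name myshare2   # new share from snapshot
    $ manila delete myshare2                                            # remove a share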
Users interact with the Shared File Systems service by mounting remote file systems on their instances and then using those systems for file storage and exchange. The Shared File Systems service provides you with shares. A share is a remote, mountable file system. A share can be mounted on and accessed from several hosts by several users at a time. With shares, a user can also: Like Block Storage, the Shared File Systems service is persistent. It can be: Shares are provided by the Shared File Systems service. In OpenStack, the Shared File Systems service is implemented by the Shared File System (manila) project, which supports multiple back ends in the form of drivers. The Shared File Systems service can be configured to provision shares from one or more back ends. Share servers are mostly virtual machines that export file shares via different protocols such as NFS, CIFS, GlusterFS, or HDFS."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:303(title)
-msgid "OpenStack Storage Concepts"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:305(para)
-msgid " explains the different storage concepts provided by OpenStack."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:315(caption)
-msgid "OpenStack storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:331(th)
-msgid "Ephemeral storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:333(th)
-msgid "Block storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:335(th) ./doc/openstack-ops/section_arch_example-nova.xml:163(para)
-msgid "Object storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:337(th)
-msgid "Shared File System storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:343(para)
-msgid "Used to…"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:345(para)
-msgid "Run operating system and scratch space"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:347(para)
-msgid "Add additional persistent storage to a virtual machine (VM)"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:350(para)
-msgid "Store data, including VM images"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:352(para)
-msgid "Add additional persistent storage to a virtual machine"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:356(para)
-msgid "Accessed through…"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:358(para)
-msgid "A file system"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:360(para)
-msgid "A block device that can be partitioned, formatted, and mounted (such as /dev/vdc)"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:363(para)
-msgid "The REST API"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:365(para)
-msgid "A Shared File Systems service share (either manila managed or an external one registered in manila) that can be partitioned, formatted and mounted (such as /dev/vdc)"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:371(para)
-msgid "Accessible from…"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:373(para) ./doc/openstack-ops/ch_arch_storage.xml:375(para) ./doc/openstack-ops/ch_arch_storage.xml:379(para)
-msgid "Within a VM"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:377(para)
-msgid "Anywhere"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:383(para)
-msgid "Managed by…"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_storage.xml:385(para)
-msgid "OpenStack Compute (nova)"
-msgstr ""
-
-#:
./doc/openstack-ops/ch_arch_storage.xml:387(para) -msgid "OpenStack Block Storage (cinder)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:389(para) ./doc/openstack-ops/ch_arch_storage.xml:779(term) ./doc/openstack-ops/section_arch_example-nova.xml:165(para) -msgid "OpenStack Object Storage (swift)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:391(para) -msgid "OpenStack Shared File System Storage (manila)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:395(para) -msgid "Persists until…" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:397(para) -msgid "VM is terminated" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:399(para) ./doc/openstack-ops/ch_arch_storage.xml:401(para) ./doc/openstack-ops/ch_arch_storage.xml:403(para) -msgid "Deleted by user" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:407(para) -msgid "Sizing determined by…" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:409(para) -msgid "Administrator configuration of size settings, known as flavors" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:412(para) ./doc/openstack-ops/ch_arch_storage.xml:420(para) -msgid "User specification in initial request" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:414(para) -msgid "Amount of available physical storage" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:425(para) -msgid "Requests for extension" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:430(para) -msgid "Available user-level quotas" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:435(para) -msgid "Limitations applied by the administrator" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:445(para) -msgid "Encryption set by…" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:447(para) -msgid "Parameter in nova.conf" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:449(para) -msgid "Admin establishing an encrypted volume type, then user selecting an encrypted volume" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:453(para) -msgid "Not yet available" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:455(para) -msgid "Shared File Systems service does not apply any additional encryption above what the share’s back-end storage provides" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:460(para) -msgid "Example of typical usage…" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:462(para) -msgid "10 GB first disk, 30 GB second disk" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:464(para) -msgid "1 TB disk" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:466(para) -msgid "10s of TBs of dataset storage" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:468(para) -msgid "Depends on the size of the back-end storage specified when the share is created. In the case of thin provisioning, it can be a partial space reservation (for more details, see the Capabilities and Extra-Specs specification)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:476(title) -msgid "File-level Storage (for Live Migration)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:478(para) -msgid "With file-level storage, users access stored data using the operating system's file system interface. Most users, if they have used a network storage solution before, have encountered this form of networked storage. In the Unix world, the most common form of this is NFS. 
In the Windows world, the most common form is called CIFS (previously, SMB)." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:493(para) -msgid "OpenStack clouds do not present file-level storage to end users. However, it is important to consider file-level storage for storing instances under /var/lib/nova/instances when designing your cloud, since you must have a shared file system if you want to support live migration." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:502(title) -msgid "Choosing Storage Back Ends" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:504(para) -msgid "Users will indicate different needs for their cloud use cases. Some may need fast access to many objects that do not change often, or want to set a time-to-live (TTL) value on a file. Others may access only storage that is mounted with the file system itself, but want it to be replicated instantly when starting a new instance. For other systems, ephemeral storage, which is released when a VM attached to it is shut down, is the preferred way. When you select storage back ends, ask the following questions on behalf of your users:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:525(para) -msgid "Do my users need block storage?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:529(para) -msgid "Do my users need object storage?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:533(para) -msgid "Do I need to support live migration?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:537(para) -msgid "Should my persistent storage drives be contained in my compute nodes, or should I use external storage?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:542(para) -msgid "What is the platter count I can achieve? Do more spindles result in better I/O despite network access?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:547(para) -msgid "Which one results in the best cost-performance scenario I'm aiming for?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:552(para) -msgid "How do I manage the storage operationally?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:556(para) -msgid "How redundant and distributed is the storage? What happens if a storage node fails? To what extent can it mitigate my data-loss disaster scenarios?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:562(para) -msgid "To deploy your storage by using only commodity hardware, you can use a number of open-source packages, as shown in ." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:567(caption) -msgid "Persistent file-based storage support" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:571(th) ./doc/openstack-ops/ch_arch_storage.xml:599(para) ./doc/openstack-ops/ch_arch_storage.xml:605(para) ./doc/openstack-ops/ch_arch_storage.xml:614(para) ./doc/openstack-ops/ch_arch_storage.xml:685(para) ./doc/openstack-ops/ch_arch_storage.xml:694(para) -msgid " " -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:573(th) -msgid "Object" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:575(th) -msgid "Block" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:578(para) -msgid "This list of open source file-level shared storage solutions is not exhaustive; other open source solutions exist (for example, MooseFS). Your organization may already have deployed a file-level shared storage solution that you can use." 
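Since live migration depends on the shared-file-system requirement noted above, a quick host-side check can save debugging time later. The following is a minimal sketch, not from the guide: it reads /proc/mounts and reports whether /var/lib/nova/instances sits on a file system type usually backed by shared storage; the SHARED_FS_TYPES set is an assumption to adapt to your deployment.

```python
# Minimal sketch: verify /var/lib/nova/instances is on a shared file
# system (a prerequisite for live migration). SHARED_FS_TYPES is an
# assumption; adjust it for your deployment.
SHARED_FS_TYPES = {"nfs", "nfs4", "glusterfs", "cifs", "ceph"}

def instances_fs_type(path="/var/lib/nova/instances"):
    """Return the fs type of the longest mount point containing path."""
    best_mount, best_type = "", ""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _dev, mountpoint, fstype = line.split()[:3]
            if path.startswith(mountpoint) and len(mountpoint) >= len(best_mount):
                best_mount, best_type = mountpoint, fstype
    return best_type

if __name__ == "__main__":
    fstype = instances_fs_type()
    print(fstype, "shared" if fstype in SHARED_FS_TYPES else "NOT shared")
```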
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:577(th) -msgid "File-level" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:588(para) -msgid "Swift" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:603(para) -msgid "LVM" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:618(para) -msgid "Ceph" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:634(para) -msgid "Experimental" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:638(para) -msgid "Gluster" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:663(para) -msgid "NFS" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:683(para) -msgid "ZFS" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:697(para) -msgid "Sheepdog" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:716(title) -msgid "Storage Driver Support" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:718(para) -msgid "In addition to the open source technologies, there are a number of proprietary solutions that are officially supported by OpenStack Block Storage. They are offered by the following vendors:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:728(para) -msgid "IBM (Storwize family/SVC, XIV)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:732(para) -msgid "NetApp" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:736(para) -msgid "Nexenta" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:740(para) -msgid "SolidFire" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:744(para) -msgid "You can find a matrix of the functionality provided by all of the supported Block Storage drivers on the OpenStack wiki." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:750(para) -msgid "Also, you need to decide whether you want to support object storage in your cloud. The two common use cases for providing object storage in a compute cloud are:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:756(para) -msgid "To provide users with a persistent storage mechanism" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:760(para) -msgid "As a scalable, reliable data store for virtual machine images" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:766(title) -msgid "Commodity Storage Back-end Technologies" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:768(para) -msgid "This section provides a high-level overview of the differences among commodity storage back-end technologies. Depending on your cloud users' needs, you can implement one or many of these technologies in different combinations:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:782(para) -msgid "The official OpenStack Object Store implementation. It is a mature technology that has been used for several years in production by Rackspace as the technology behind Rackspace Cloud Files. As it is highly scalable, it is well-suited to managing petabytes of storage. OpenStack Object Storage's advantages are better integration with OpenStack (integrates with OpenStack Identity, works with the OpenStack dashboard interface) and better support for multiple data center deployment through support of asynchronous eventual consistency replication." 
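The comparison table earlier notes that object storage is accessed through the REST API; as a concrete illustration, here is a hedged sketch using python-swiftclient against a swift endpoint. The auth URL, credentials, and container and object names are placeholders, not values from the guide.

```python
# Hedged sketch of object storage access via swift's REST API using
# python-swiftclient. Endpoint, credentials, and names are placeholders.
from swiftclient.client import Connection

conn = Connection(
    authurl="http://controller:5000/v2.0",  # placeholder Identity endpoint
    user="demo", key="secret", tenant_name="demo", auth_version="2",
)

conn.put_container("backups")                      # create a container
conn.put_object("backups", "hello.txt", contents=b"hello object storage")
headers, body = conn.get_object("backups", "hello.txt")
print(body)                                        # b'hello object storage'
```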
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:793(para) -msgid "Therefore, if you eventually plan on distributing your storage cluster across multiple data centers, if you need unified accounts for your users for both compute and object storage, or if you want to control your object storage with the OpenStack dashboard, you should consider OpenStack Object Storage. More detail about OpenStack Object Storage can be found in the section below." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:806(term) -msgid "Ceph" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:811(para) -msgid "A scalable storage solution that replicates data across commodity storage nodes. Ceph was originally developed by one of the founders of DreamHost and is currently used in production there." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:816(para) -msgid "Ceph was designed to expose different types of storage interfaces to the end user: it supports object storage, block storage, and file-system interfaces, although the file-system interface is not yet considered production-ready. Ceph supports the same API as swift for object storage and can be used as a back end for cinder block storage as well as back-end storage for glance images. Ceph supports \"thin provisioning,\" implemented using copy-on-write." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:825(para) -msgid "This can be useful when booting from volume because a new volume can be provisioned very quickly. Ceph also supports keystone-based authentication (as of version 0.56), so it can be a seamless swap-in for the default OpenStack swift implementation." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:831(para) -msgid "Ceph's advantages are that it gives the administrator more fine-grained control over data distribution and replication strategies, enables you to consolidate your object and block storage, enables very fast provisioning of boot-from-volume instances using thin provisioning, and supports a distributed file-system interface, though this interface is not yet recommended for use in production deployment by the Ceph project." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:841(para) -msgid "If you want to manage your object and block storage within a single system, or if you want to support fast boot-from-volume, you should consider Ceph." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:848(term) -msgid "Gluster" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:853(para) -msgid "A distributed, shared file system. As of Gluster version 3.3, you can use Gluster to consolidate your object storage and file storage into one unified file and object storage solution, which is called Gluster For OpenStack (GFO). GFO uses a customized version of swift that enables Gluster to be used as the back-end storage." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:860(para) -msgid "The main reason to use GFO rather than regular swift is if you also want to support a distributed file system, either to support shared-storage live migration or to provide it as a separate service to your end users. If you want to manage your object and file storage within a single system, you should consider GFO." 
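To make the boot-from-volume remark above concrete, this hedged sketch uses python-cinderclient to create a bootable volume from a glance image; on a copy-on-write back end such as Ceph RBD, this provisioning is nearly instant. The Keystone endpoint, credentials, and IMAGE_UUID are placeholders.

```python
# Hedged sketch: create a bootable volume from an image with
# python-cinderclient. On a copy-on-write back end such as Ceph RBD,
# the clone is nearly instant. Session details and IDs are placeholders.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from cinderclient import client

auth = v3.Password(auth_url="http://controller:5000/v3",  # placeholder
                   username="demo", password="secret",
                   project_name="demo",
                   user_domain_id="default", project_domain_id="default")
cinder = client.Client("3", session=session.Session(auth=auth))

volume = cinder.volumes.create(
    size=10,                    # GB
    imageRef="IMAGE_UUID",      # placeholder glance image ID
    name="bootable-root",
)
print(volume.id, volume.status)
```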
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:870(term) -msgid "LVM (Logical Volume Manager)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:875(para) -msgid "The Logical Volume Manager is a Linux-based system that provides an abstraction layer on top of physical disks to expose logical volumes to the operating system. The LVM back end implements block storage as LVM logical volumes." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:880(para) -msgid "On each host that will house block storage, an administrator must initially create a volume group dedicated to Block Storage volumes. Blocks are created from LVM logical volumes." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:885(para) -msgid "LVM does not provide any replication. Typically, administrators configure RAID on nodes that use LVM as block storage to protect against failures of individual hard drives. However, RAID does not protect against a failure of the entire host." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:895(term) -msgid "ZFS" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:900(para) -msgid "The Solaris iSCSI driver for OpenStack Block Storage implements blocks as ZFS entities. ZFS is a file system that also has the functionality of a volume manager. This is unlike on a Linux system, where there is a separation of volume manager (LVM) and file system (such as ext3, ext4, xfs, and btrfs). ZFS has a number of advantages over ext4, including improved data-integrity checking." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:908(para) -msgid "The ZFS back end for OpenStack Block Storage supports only Solaris-based systems, such as Illumos. While there is a Linux port of ZFS, it is not included in any of the standard Linux distributions, and it has not been tested with OpenStack Block Storage. As with LVM, ZFS does not provide replication across hosts on its own; you need to add a replication solution on top of ZFS if your cloud needs to be able to handle storage-node failures." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:917(para) -msgid "We don't recommend ZFS unless you have previous experience with deploying it, since the ZFS back end for Block Storage requires a Solaris-based operating system, and we assume that your experience is primarily with Linux-based systems." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:925(term) -msgid "Sheepdog" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:930(para) -msgid "Sheepdog is a userspace distributed storage system. Sheepdog scales to several hundred nodes, and has powerful virtual disk management features like snapshots, cloning, rollback, and thin provisioning." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:934(para) -msgid "It is essentially an object storage system that manages disks and aggregates their space and performance linearly at hyperscale on commodity hardware. On top of its object store, Sheepdog provides an elastic volume service and an HTTP service. Sheepdog makes no assumptions about the kernel version and works well with xattr-supported file systems." 
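As a companion to the LVM description above ("an administrator must initially create a volume group dedicated to Block Storage volumes"), here is a minimal sketch of that preparation step; the device path and the conventional cinder-volumes volume group name are assumptions to adjust for your hosts.

```python
# Minimal sketch: prepare an LVM volume group for the cinder LVM
# driver. Device path and volume group name are placeholders; the
# driver then creates one logical volume per Block Storage volume.
import subprocess

def create_cinder_vg(device="/dev/sdb", vg_name="cinder-volumes"):
    # Initialize the disk as an LVM physical volume, then build the
    # volume group the LVM driver is configured to allocate from.
    subprocess.run(["pvcreate", device], check=True)
    subprocess.run(["vgcreate", vg_name, device], check=True)

if __name__ == "__main__":
    create_cinder_vg()
```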
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:946(title) ./doc/openstack-ops/ch_arch_network_design.xml:526(title) ./doc/openstack-ops/ch_arch_compute_nodes.xml:610(title) ./doc/openstack-ops/ch_arch_provision.xml:367(title) ./doc/openstack-ops/ch_ops_customize.xml:1158(title) -msgid "Conclusion" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_storage.xml:948(para) -msgid "We hope that you now have some considerations in mind and questions to ask your future cloud users about their storage use cases. As you can see, your storage decisions will also influence your network design for performance and security needs. Continue with us to make more informed decisions about your OpenStack cloud design." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:12(title) -msgid "Network Design" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:14(para) -msgid "OpenStack provides a rich networking environment, and this chapter details the requirements and options to consider when designing your cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:27(para) -msgid "If this is the first time you are deploying a cloud infrastructure in your organization, your first conversations after reading this section should be with your networking team. Network usage in a running cloud is vastly different from traditional network deployments and has the potential to be disruptive at both a connectivity and a policy level." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:39(para) -msgid "For example, you must plan the number of IP addresses that you need for both your guest instances and your management infrastructure. Additionally, you must research and discuss cloud network connectivity through proxy servers and firewalls." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:44(para) -msgid "In this chapter, we'll give some examples of network implementations to consider and provide information about some of the network layouts that OpenStack uses. Finally, we have some brief notes on the networking services that are essential for stable operation." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:50(title) -msgid "Management Network" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:52(para) -msgid "A management network (a separate network for use by your cloud operators) typically consists of a separate switch and separate NICs (network interface cards), and is a recommended option. This segregation prevents system administration and monitoring of system access from being disrupted by traffic generated by guests." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:67(para) -msgid "Consider creating other private networks for communication between internal components of OpenStack, such as the message queue and OpenStack Compute. Using a virtual local area network (VLAN) works well for these scenarios because it provides a method for creating multiple virtual networks on a physical network." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:75(title) -msgid "Public Addressing Options" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:77(para) -msgid "There are two main types of IP addresses for guest virtual machines: fixed IPs and floating IPs. 
Fixed IPs are assigned to instances on boot, whereas floating IP addresses can be reassociated between instances by action of the user. Both types of IP addresses can be either public or private, depending on your use case." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:91(para) -msgid "Fixed IP addresses are required, whereas it is possible to run OpenStack without floating IPs. One of the most common use cases for floating IPs is to provide public IP addresses to a private cloud, where there are a limited number of IP addresses available. Another is for a public cloud user to have a \"static\" IP address that can be reassigned when an instance is upgraded or moved." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:104(para) -msgid "Fixed IP addresses can be private for private clouds, or public for public clouds. When an instance terminates, its fixed IP is lost. It is worth noting that newer users of cloud computing may find their ephemeral nature frustrating." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:117(title) -msgid "IP Address Planning" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:119(para) -msgid "An OpenStack installation can potentially have many subnets (ranges of IP addresses) and different types of services in each. An IP address plan can assist with a shared understanding of network partition purposes and scalability. Control services can have public and private IP addresses, and as noted above, there are a couple of options for an instance's public addresses." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:134(para) -msgid "An IP address plan might be broken down into the following sections:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:143(term) -msgid "Subnet router" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:146(para) -msgid "Packets leaving the subnet go via this address, which could be a dedicated router or a nova-network service." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:153(term) -msgid "Control services public interfaces" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:156(para) -msgid "Public access to swift-proxy, nova-api, glance-api, and horizon comes to these addresses, which could be on one side of a load balancer or pointing at individual machines." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:164(term) -msgid "Object Storage cluster internal communications" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:167(para) -msgid "Traffic among object/account/container servers and between these and the proxy server's internal interface uses this private network." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:184(term) -msgid "Compute and storage communications" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:187(para) -msgid "If ephemeral or block storage is external to the compute node, this network is used." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:193(term) -msgid "Out-of-band remote management" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:196(para) -msgid "If a dedicated remote access controller chip is included in servers, often these are on a separate network." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:202(term) -msgid "In-band remote management" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:205(para) -msgid "Often, an extra interface (such as a 1 Gbps port) on compute or storage nodes is used for system administrators or monitoring tools to access the host instead of going through the public interface." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:213(term) -msgid "Spare space for future growth" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:216(para) -msgid "Adding more public-facing control services or guest instance IPs should always be part of your plan." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:222(para) -msgid "For example, take a deployment that has both OpenStack Compute and Object Storage, with private ranges 172.22.42.0/24 and 172.22.87.0/26 available. One way to segregate the space might be as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:245(para) -msgid "A similar approach can be taken with public IP addresses, taking note that large, flat ranges are preferred for use with guest instance IPs. Take into account that for some OpenStack networking options, a public IP address in the range of a guest instance public IP address is assigned to the nova-compute host." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:253(title) -msgid "Network Topology" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:255(para) -msgid "OpenStack Compute with nova-network provides predefined network deployment models, each with its own strengths and weaknesses. The selection of a network manager changes your network topology, so the choice should be made carefully. You also have a choice between the tried-and-true legacy nova-network settings or the neutron project for OpenStack Networking. Both offer networking for launched instances with different implementations and requirements." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:278(para) -msgid "For OpenStack Networking with the neutron project, typical configurations are documented with the idea that any setup you can configure with real hardware you can re-create with a software-defined equivalent. Each tenant can contain typical network elements such as routers, and services such as DHCP." 
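One way to sanity-check the example segregation above (172.22.42.0/24 and 172.22.87.0/26) is with Python's ipaddress module; the particular split into /26 sections and the labels below are illustrative assumptions, not the guide's prescribed layout.

```python
# Illustrative sketch: carve the example ranges into per-purpose
# subnets. The /26 split and the labels are assumptions.
import ipaddress

compute_range = ipaddress.ip_network("172.22.42.0/24")
object_range = ipaddress.ip_network("172.22.87.0/26")

labels = ["subnet router + control services", "guest instance fixed IPs",
          "compute/storage communications", "spare for future growth"]
for label, subnet in zip(labels, compute_range.subnets(new_prefix=26)):
    print(f"{subnet}  ({subnet.num_addresses - 2} usable)  {label}")

print(f"{object_range}  object storage internal communications")
```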
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:284(para) -msgid " describes the networking deployment options for both legacy nova-network options and an equivalent neutron configuration." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:294(caption) -msgid "Networking deployment options" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:306(th) -msgid "Network deployment model" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:308(th) -msgid "Strengths" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:310(th) -msgid "Weaknesses" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:312(th) -msgid "Neutron equivalent" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:318(para) -msgid "Flat" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:320(para) -msgid "Extremely simple topology." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:320(para) -msgid "No DHCP overhead." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:323(para) -msgid "Requires file injection into the instance to configure network interfaces." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:326(td) -msgid "Configure a single bridge as the integration bridge (br-int) and connect it to a physical network interface with the Modular Layer 2 (ML2) plug-in, which uses Open vSwitch by default." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:332(para) ./doc/openstack-ops/section_arch_example-nova.xml:128(para) -msgid "FlatDHCP" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:334(para) -msgid "Relatively simple to deploy." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:334(para) -msgid "Standard networking." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:335(para) -msgid "Works with all guest operating systems." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:338(para) ./doc/openstack-ops/ch_arch_network_design.xml:350(para) -msgid "Requires its own DHCP broadcast domain." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:340(td) -msgid "Configure DHCP agents and routing agents. Network Address Translation (NAT) is performed outside of compute nodes, typically on one or more network nodes." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:346(para) -msgid "VlanManager" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:348(para) -msgid "Each tenant is isolated to its own VLANs." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:350(para) ./doc/openstack-ops/ch_arch_network_design.xml:372(para) -msgid "More complex to set up." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:351(para) -msgid "Requires many VLANs to be trunked onto a single port." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:352(para) -msgid "Standard VLAN number limitation." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:353(para) -msgid "Switches must support 802.1q VLAN tagging." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:356(para) -msgid "Isolated tenant networks implement some form of isolation of layer 2 traffic between distinct networks. VLAN tagging is a key concept, where traffic is “tagged” with an ordinal identifier for the VLAN. Isolated network implementations may or may not include additional services like DHCP, NAT, and routing." 
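The "standard VLAN number limitation" listed as a VlanManager weakness follows directly from the 802.1q header: the VLAN ID is a 12-bit field, so only about 4,094 tenant networks fit in one tagging domain. A small illustrative sketch, not from the guide:

```python
# Sketch of the 802.1q tag control information (TCI) field, showing
# why VLAN IDs are limited to 1-4094 (a 12-bit field; 0 and 4095 are
# reserved). For illustration only.
import struct

def vlan_tci(vlan_id: int, priority: int = 0) -> bytes:
    if not 1 <= vlan_id <= 4094:
        raise ValueError("802.1q VLAN IDs occupy 12 bits: 1-4094")
    # TCI = 3-bit priority | 1-bit drop-eligible indicator | 12-bit VID
    return struct.pack("!H", (priority << 13) | vlan_id)

print(vlan_tci(200).hex())  # '00c8' for VLAN 200
```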
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:364(para) -msgid "FlatDHCP Multi-host with high availability (HA)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:367(para) -msgid "Networking failure is isolated to the VMs running on the affected hypervisor." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:368(para) -msgid "DHCP traffic can be isolated within an individual host." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:369(para) -msgid "Network traffic is distributed to the compute nodes." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:372(para) -msgid "Compute nodes typically need IP addresses accessible by external networks." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:374(para) -msgid "Options must be carefully configured for live migration to work with networking services." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:377(para) -msgid "Configure neutron with multiple DHCP and layer-3 agents. Network nodes are not able to fail over to each other, so the controller runs networking services, such as DHCP. Compute nodes run the ML2 plug-in with support for agents such as Open vSwitch or Linux Bridge." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:386(para) -msgid "Both nova-network and neutron services provide similar capabilities, such as VLAN between VMs. You also can provide multiple NICs on VMs with either service. Further discussion follows." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:392(title) -msgid "VLAN Configuration Within OpenStack VMs" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:394(para) -msgid "VLAN configuration can be as simple or as complicated as desired. The use of VLANs has the benefit of allowing each project its own subnet and broadcast segregation from other projects. To allow OpenStack to efficiently use VLANs, you must allocate a VLAN range (one for each project) and turn each compute node switch port into a trunk port." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:413(para) -msgid "For example, if you estimate that your cloud must support a maximum of 100 projects, pick a free VLAN range that your network infrastructure is currently not using (such as VLAN 200–299). You must configure OpenStack with this range and also configure your switch ports to allow VLAN traffic from that range." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:421(title) -msgid "Multi-NIC Provisioning" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:423(para) -msgid "OpenStack Networking with neutron and OpenStack Compute with nova-network have the ability to assign multiple NICs to instances. For nova-network this can be done on a per-request basis, with each additional NIC using up an entire subnet or VLAN, reducing the total number of supported projects." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:440(title) -msgid "Multi-Host and Single-Host Networking" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:442(para) -msgid "The nova-network service has the ability to operate in a multi-host or single-host mode. Multi-host is when each compute node runs a copy of nova-network and the instances on that compute node use the compute node as a gateway to the Internet. 
The compute nodes also host the floating IPs and security groups for instances on that node. Single-host is when a central server (for example, the cloud controller) runs the nova-network service. All compute nodes forward traffic from the instances to the cloud controller. The cloud controller then forwards traffic to the Internet. The cloud controller hosts the floating IPs and security groups for all instances on all compute nodes in the cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:469(para) -msgid "There are benefits to both modes. Single-host has the downside of a single point of failure. If the cloud controller is not available, instances cannot communicate on the network. This is not true with multi-host, but multi-host requires that each compute node has a public IP address to communicate on the Internet. If you are not able to obtain a significant block of public IP addresses, multi-host might not be an option." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:480(title) -msgid "Services for Networking" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:482(para) -msgid "OpenStack, like any network application, has a number of standard considerations to apply, such as NTP and DNS." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:490(title) -msgid "NTP" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:492(para) -msgid "Time synchronization is a critical element to ensure continued operation of OpenStack components. Correct time is necessary to avoid errors in instance scheduling, replication of objects in the object store, and even matching log timestamps for debugging." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:502(para) -msgid "All servers running OpenStack components should be able to access an appropriate NTP server. You may decide to set up one locally or use the public pools available from the Network Time Protocol project." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:510(title) -msgid "DNS" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:512(para) -msgid "OpenStack does not currently provide DNS services, aside from the dnsmasq daemon, which resides on nova-network hosts. You could consider providing a dynamic DNS service to allow instances to update a DNS entry with new IP addresses. You can also consider making a generic forward and reverse DNS mapping for instances' IP addresses, such as vm-203-0-113-123.example.com." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_network_design.xml:528(para) -msgid "Armed with your IP address layout and numbers and knowledge about the topologies and services you can use, it's now time to prepare the network for your installation. Be sure to also check out the OpenStack Security Guide for tips on securing your network. We wish you a good relationship with your networking team!" -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:19(title) -msgid "Acknowledgments" -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:20(para) -msgid "The OpenStack Foundation supported the creation of this book with plane tickets to Austin, lodging (including one adventurous evening without power after a windstorm), and delicious food. 
For about USD $10,000, we could collaborate intensively for a week in the same room at the Rackspace Austin office. The authors are all members of the OpenStack Foundation, which you can join. Go to the Foundation web site at http://openstack.org/join." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:28(para) -msgid "We want to acknowledge our excellent host Rackers at Rackspace in Austin:" -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:32(para) -msgid "Emma Richards of Rackspace Guest Relations took excellent care of our lunch orders and even set aside a pile of sticky notes that had fallen off the walls." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:37(para) -msgid "Betsy Hagemeier, a Fanatical Executive Assistant, took care of a room reshuffle and helped us settle in for the week." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:41(para) -msgid "The Real Estate team at Rackspace in Austin, also known as \"The Victors,\" were super responsive." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:45(para) -msgid "Adam Powell in Racker IT supplied us with bandwidth each day and second monitors for those of us needing more screens." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:49(para) -msgid "On Wednesday night we had a fun happy hour with the Austin OpenStack Meetup group and Racker Katie Schmidt took great care of our group." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:54(para) -msgid "We also had some excellent input from outside of the room:" -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:57(para) -msgid "Tim Bell from CERN gave us feedback on the outline before we started and reviewed it mid-week." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:61(para) -msgid "Sébastien Han has written excellent blogs and generously gave his permission for re-use." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:65(para) -msgid "Oisin Feeley read it, made some edits, and provided emailed feedback right when we asked." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:69(para) -msgid "Inside the book sprint room with us each day was our book sprint facilitator Adam Hyde. Without his tireless support and encouragement, we would have thought a book of this scope was impossible in five days. Adam has proven the book sprint method effectively again and again. He creates both tools and faith in collaborative authoring at www.booksprints.net." -msgstr "" - -#: ./doc/openstack-ops/acknowledgements.xml:77(para) -msgid "We couldn't have pulled it off without so much supportive help and encouragement." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/preface_ops.xml:591(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_00in01.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:12(title) -msgid "Preface" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:14(para) -msgid "OpenStack is an open source platform that lets you build an Infrastructure as a Service (IaaS) cloud that runs on commodity hardware." 
-msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:19(title) -msgid "Introduction to OpenStack" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:21(para) -msgid "OpenStack believes in open source, open design, and open development, all in an open community that encourages participation by anyone. The long-term vision for OpenStack is to produce a ubiquitous open source cloud computing platform that meets the needs of public and private cloud providers regardless of size. OpenStack services control large pools of compute, storage, and networking resources throughout a data center." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:30(para) -msgid "The technology behind OpenStack consists of a series of interrelated projects delivering various components for a cloud infrastructure solution. Each service provides an open API so that all of these resources can be managed through a dashboard that gives administrators control while empowering users to provision resources through a web interface, a command-line client, or software development kits that support the API. Many OpenStack APIs are extensible, meaning you can keep compatibility with a core set of calls while providing access to more resources and innovating through API extensions. The OpenStack project is a global collaboration of developers and cloud computing technologists. The project produces an open standard cloud computing platform for both public and private clouds. By focusing on ease of implementation, massive scalability, a variety of rich features, and tremendous extensibility, the project aims to deliver a practical and reliable cloud solution for all types of organizations." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:50(title) -msgid "Getting Started with OpenStack" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:52(para) -msgid "As an open source project, one of the unique aspects of OpenStack is that it has many different levels at which you can begin to engage with it: you don't have to do everything yourself." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:58(title) -msgid "Using OpenStack" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:60(para) -msgid "You could ask, \"Do I even need to build a cloud?\" If you want to start using a compute or storage service by just swiping your credit card, you can go to eNovance, HP, Rackspace, or other organizations to start using their public OpenStack clouds. Using their OpenStack cloud resources is similar to accessing the publicly available Amazon Web Services Elastic Compute Cloud (EC2) or Simple Storage Service (S3)." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:71(title) -msgid "Plug and Play OpenStack" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:73(para) -msgid "However, the enticing part of OpenStack might be to build your own private cloud, and there are several ways to accomplish this goal. Perhaps the simplest of all is an appliance-style solution. You purchase an appliance, unpack it, plug in the power and the network, and watch it transform into an OpenStack cloud with minimal additional configuration." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:79(para) -msgid "However, hardware choice is important for many applications, so if that applies to you, consider that there are several software distributions available that you can run on servers, storage, and network products of your choosing. 
Canonical (where OpenStack replaced Eucalyptus as the default cloud option in 2011), Red Hat, and SUSE offer enterprise OpenStack solutions and support. You may also want to take a look at some of the specialized distributions, such as those from Rackspace, Piston, SwiftStack, or Cloudscaling." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:88(para) -msgid "Alternatively, if you want someone to help guide you through the decisions about the underlying hardware or your applications, perhaps adding in a few features or integrating components along the way, consider contacting one of the system integrators with OpenStack experience, such as Mirantis or Metacloud." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:94(para) -msgid "If your preference is to build your own OpenStack expertise internally, a good way to kick-start that might be to attend or arrange a training session. The OpenStack Foundation has a Training Marketplace where you can look for nearby events. Also, the OpenStack community is working to produce open source training materials." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:104(title) -msgid "Roll Your Own OpenStack" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:106(para) -msgid "However, this guide has a different audience—those seeking flexibility from the OpenStack framework by deploying do-it-yourself solutions." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:110(para) -msgid "OpenStack is designed for horizontal scalability, so you can easily add new compute, network, and storage resources to grow your cloud over time. In addition to the pervasiveness of massive OpenStack public clouds, many organizations, such as PayPal, Intel, and Comcast, build large-scale private clouds. OpenStack offers much more than a typical software package because it lets you integrate a number of different technologies to construct a cloud. This approach provides great flexibility, but the number of options might be daunting at first." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:123(title) -msgid "Who This Book Is For" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:124(para) -msgid "This book is for those of you starting to run OpenStack clouds as well as those of you who were handed an operational one and want to keep it running well. Perhaps you're on a DevOps team, perhaps you are a system administrator starting to dabble in the cloud, or maybe you want to get on the OpenStack cloud team at your company. This book is for all of you." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:130(para) -msgid "This guide assumes that you are familiar with a Linux distribution that supports OpenStack, SQL databases, and virtualization. You must be comfortable administering and configuring multiple Linux machines for networking. You must install and maintain an SQL database and occasionally run queries against it." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:136(para) -msgid "One of the most complex aspects of an OpenStack cloud is the networking configuration. You should be familiar with concepts such as DHCP, Linux bridges, VLANs, and iptables. You must also have access to a network hardware expert who can configure the switches and routers required in your OpenStack cloud." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:142(para) -msgid "Cloud computing is quite an advanced topic, and this book requires a lot of background knowledge. 
However, if you are fairly new to cloud computing, we recommend that you make use of the glossary at the back of the book, as well as the online documentation for OpenStack and additional resources mentioned in this book in ." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:151(title) -msgid "Further Reading" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:152(para) -msgid "There are other books on the OpenStack documentation website that can help you get the job done." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:157(title) -msgid "OpenStack Guides" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:159(term) -msgid "OpenStack Installation Guides" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:161(para) -msgid "Describes a manual installation process, as in, by hand, without automation, for multiple distributions based on a packaging system:" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:168(link) ./doc/openstack-ops/ch_ops_resources.xml:16(link) -msgid "Installation Guide for openSUSE 13.2 and SUSE Linux Enterprise Server 12" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:174(link) -msgid "Installation Guide for Red Hat Enterprise Linux 7 and CentOS 7" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:180(link) ./doc/openstack-ops/ch_ops_resources.xml:27(link) -msgid "Installation Guide for Ubuntu 14.04 (LTS) Server" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:188(link) -msgid "OpenStack Configuration Reference" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:192(para) -msgid "Contains a reference listing of all configuration options for core and integrated OpenStack services by release version" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:199(link) ./doc/openstack-ops/ch_ops_resources.xml:31(link) -msgid "OpenStack Cloud Administrator Guide" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:203(para) -msgid "Contains how-to information for managing an OpenStack cloud as needed for your use cases, such as storage, computing, or software-defined networking" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:211(link) -msgid "OpenStack High Availability Guide" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:215(para) -msgid "Describes potential strategies for making your OpenStack services and related controllers and data stores highly available" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:222(link) -msgid "OpenStack Security Guide" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:226(para) -msgid "Provides best practices and conceptual information about securing an OpenStack cloud" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:232(link) -msgid "Virtual Machine Image Guide" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:236(para) -msgid "Shows you how to obtain, create, and modify virtual machine images that are compatible with OpenStack" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:242(link) -msgid "OpenStack End User Guide" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:246(para) -msgid "Shows OpenStack end users how to create and manage resources in an OpenStack cloud with the OpenStack dashboard and OpenStack client commands" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:253(link) -msgid "OpenStack Admin User Guide" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:257(para) -msgid "Shows OpenStack administrators how to create and manage resources in an OpenStack cloud with the OpenStack dashboard and OpenStack client commands" -msgstr "" - -#: 
./doc/openstack-ops/preface_ops.xml:265(link) -msgid "Networking Guide" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:269(para) -msgid "This guide targets OpenStack administrators seeking to deploy and manage OpenStack Networking (neutron)." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:275(link) -msgid "OpenStack API Guide" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:279(para) -msgid "A brief overview of how to send REST API requests to endpoints for OpenStack services" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:288(title) -msgid "How This Book Is Organized" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:290(para) -msgid "This book is organized into two parts: the architecture decisions for designing OpenStack clouds and the repeated operations for running OpenStack clouds." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:294(emphasis) -msgid "Part I:" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:301(para) -msgid "Because of all the decisions the other chapters discuss, this chapter describes the decisions made for this particular book and much of the justification for the example architecture." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:312(para) -msgid "While this book doesn't describe installation, we do recommend automation for deployment and configuration, discussed in this chapter." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:322(para) -msgid "The cloud controller is an invention for the sake of consolidating and describing which services run on which nodes. This chapter discusses hardware and network considerations as well as how to design the cloud controller for performance and separation of services." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:335(para) -msgid "This chapter describes the compute nodes, which are dedicated to running virtual machines. Some hardware choices come into play here, as well as logging and networking descriptions." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:346(para) -msgid "This chapter discusses the growth of your cloud resources through scaling and segregation considerations." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:356(para) -msgid "As with other architecture decisions, storage concepts within OpenStack offer many options. This chapter lays out the choices for you." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:366(para) -msgid "Your OpenStack cloud networking needs to fit into your existing networks while also enabling the best design for your users and administrators, and this chapter gives you in-depth information about networking decisions." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:374(emphasis) -msgid "Part II:" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:381(para) -msgid "This chapter is written to let you get your hands wrapped around your OpenStack cloud through command-line tools and understanding what is already set up in your cloud." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:392(para) -msgid "This chapter walks through user-enabling processes that all admins must face to manage users, give them quotas to parcel out resources, and so on." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:402(para) -msgid "This chapter shows you how to use OpenStack cloud resources and how to train your users." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:411(para) -msgid "This chapter goes into the common failures that the authors have seen while running clouds in production, including troubleshooting." 
-msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:421(para) -msgid "Because network troubleshooting is especially difficult with virtual resources, this chapter is chock-full of helpful tips and tricks for tracing network traffic, finding the root cause of networking failures, and debugging related services, such as DHCP and DNS." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:434(para) -msgid "This chapter shows you where OpenStack places logs and how to best read and manage logs for monitoring purposes." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:444(para) -msgid "This chapter describes what you need to back up within OpenStack as well as best practices for recovering backups." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:454(para) -msgid "For readers who need to get a specialized feature into OpenStack, this chapter describes how to use DevStack to write custom middleware or a custom scheduler to rebalance your resources." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:465(para) -msgid "Because OpenStack is so, well, open, this chapter is dedicated to helping you navigate the community and find out where you can help and where you can get help." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:475(para) -msgid "Much of OpenStack is driver-oriented, so you can plug in different solutions to the base set of services. This chapter describes some advanced configuration topics." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:486(para) -msgid "This chapter provides upgrade information based on the architectures used in this book." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:494(emphasis) -msgid "Back matter:" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:501(para) -msgid "You can read a small selection of use cases from the OpenStack community with some technical details and further resources." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:511(para) -msgid "These are shared legendary tales of image disappearances, VM massacres, and crazy troubleshooting techniques that result in hard-learned lessons and wisdom." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:522(para) -msgid "Read about how to track the OpenStack roadmap through the open and transparent development processes." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:531(para) -msgid "So many OpenStack resources are available online because of the fast-moving nature of the project, but there are also resources listed here that the authors found helpful while learning themselves." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:542(para) -msgid "A list of terms used in this book is included, which is a subset of the larger OpenStack glossary available online." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:551(title) -msgid "Why and How We Wrote This Book" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:553(para) -msgid "We wrote this book because we have deployed and maintained OpenStack clouds for at least a year and we wanted to share this knowledge with others. After months of being the point people for an OpenStack cloud, we also wanted to have a document to hand to our system administrators so that they'd know how to operate the cloud on a daily basis—both reactively and pro-actively. We wanted to provide more detailed technical information about the decisions that deployers make along the way." 
-msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:565(para) -msgid "Design and create an architecture for your first nontrivial OpenStack cloud. After you read this guide, you'll know which questions to ask and how to organize your compute, networking, and storage resources and the associated software packages." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:573(para) -msgid "Perform the day-to-day tasks required to administer a cloud." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:563(para) -msgid "We wrote this book to help you:" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:578(para) -msgid "We wrote this book in a book sprint, which is a facilitated, rapid development production method for books. For more information, see the BookSprints site. Your authors cobbled this book together in five days during February 2013, fueled by caffeine and the best takeout food that Austin, Texas, could offer." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:586(para) -msgid "On the first day, we filled white boards with colorful sticky notes to start to shape this nebulous book about how to architect and operate clouds:" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:596(para) -msgid "We wrote furiously from our own experiences and bounced ideas between each other. At regular intervals we reviewed the shape and organization of the book and further molded it, leading to what you see today." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:601(para) -msgid "The team includes:" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:605(term) -msgid "Tom Fifield" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:608(para) -msgid "After learning about scalability in computing from particle physics experiments, such as ATLAS at the Large Hadron Collider (LHC) at CERN, Tom worked on OpenStack clouds in production to support the Australian public research sector. Tom currently serves as an OpenStack community manager and works on OpenStack documentation in his spare time." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:619(term) -msgid "Diane Fleming" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:622(para) -msgid "Diane works on the OpenStack API documentation tirelessly. She helped out wherever she could on this project." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:629(term) -msgid "Anne Gentle" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:632(para) -msgid "Anne is the documentation coordinator for OpenStack and also served as an individual contributor to the Google Documentation Summit in 2011, working with the Open Street Maps team. She has worked on book sprints in the past, with FLOSS Manuals’ Adam Hyde facilitating. Anne lives in Austin, Texas." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:642(term) -msgid "Lorin Hochstein" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:645(para) -msgid "An academic turned software-developer-slash-operator, Lorin worked as the lead architect for Cloud Services at Nimbis Services, where he deploys OpenStack for technical computing applications. He has been working with OpenStack since the Cactus release. Previously, he worked on high-performance computing extensions for OpenStack at University of Southern California's Information Sciences Institute (USC-ISI)." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:657(term) -msgid "Adam Hyde" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:660(para) -msgid "Adam facilitated this book sprint. 
He also founded the book sprint methodology and is the most experienced book-sprint facilitator around. See for more information. Adam founded FLOSS Manuals—a community of some 3,000 individuals developing Free Manuals about Free Software. He is also the founder and project manager for Booktype, an open source project for writing, editing, and publishing books online and in print." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:673(term) -msgid "Jonathan Proulx" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:676(para) -msgid "Jon has been piloting an OpenStack cloud as a senior technical architect at the MIT Computer Science and Artificial Intelligence Lab so that his researchers can have as much computing power as they need. He started contributing to and reviewing the OpenStack documentation so that he could accelerate his learning." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:686(term) -msgid "Everett Toews" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:689(para) -msgid "Everett is a developer advocate at Rackspace, making OpenStack and the Rackspace Cloud easy to use. Sometimes developer, sometimes advocate, and sometimes operator, he's built web applications, taught workshops, given presentations around the world, and deployed OpenStack for production use by academia and business." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:699(term) -msgid "Joe Topjian" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:702(para) -msgid "Joe has designed and deployed several clouds at Cybera, a nonprofit that is building e-infrastructure to support entrepreneurs and local researchers in Alberta, Canada. He also actively maintains and operates these clouds as a systems architect, and his experiences have generated a wealth of troubleshooting skills for cloud environments." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:713(term) -msgid "OpenStack community members" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:716(para) -msgid "Many individual efforts keep a community book alive. Our community members updated content for this book year-round. Also, a year after the first sprint, Jon Proulx hosted a second two-day mini-sprint at MIT with the goal of updating the book for the latest release. Since the book's inception, more than 30 contributors have supported this book. We have a tool chain for reviews, continuous builds, and translations. Writers and developers continuously review patches, enter doc bugs, edit content, and fix doc bugs. We want to recognize their efforts!" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:728(para) -msgid "The following people have contributed to this book: Akihiro Motoki, Alejandro Avella, Alexandra Settle, Andreas Jaeger, Andy McCallum, Benjamin Stassart, Chandan Kumar, Chris Ricker, David Cramer, David Wittman, Denny Zhang, Emilien Macchi, Gauvain Pocentek, Ignacio Barrio, James E. Blair, Jay Clark, Jeff White, Jeremy Stanley, K Jonathan Harker, KATO Tomoyuki, Lana Brindley, Laura Alves, Lee Li, Lukasz Jernas, Mario B. Codeniera, Matthew Kassawara, Michael Still, Monty Taylor, Nermina Miller, Nigel Williams, Phil Hopkins, Russell Bryant, Sahid Orentino Ferdjaoui, Sandy Walsh, Sascha Peilicke, Sean M. Collins, Sergey Lukjanov, Shilla Saebi, Stephen Gordon, Summer Long, Uwe Stuehler, Vaibhav Bhatkar, Veronica Musso, Ying Chun \"Daisy\" Guo, Zhengguang Ou, and ZhiQiang Fan."
-msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:749(title) -msgid "How to Contribute to This Book" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:751(para) -msgid "The genesis of this book was an in-person event, but now that the book is in your hands, we want you to contribute to it. OpenStack documentation follows the coding principles of iterative work, with bug logging, investigating, and fixing. We also store the source content on GitHub and invite collaborators through the OpenStack Gerrit installation, which offers reviews. For the O'Reilly edition of this book, we are using the company's Atlas system, which also stores source content on GitHub and enables collaboration among contributors." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:761(para) -msgid "Learn more about how to contribute to the OpenStack docs at OpenStack Documentation Contributor Guide." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:765(para) -msgid "If you find a bug and can't fix it or aren't sure it's really a doc bug, log a bug at OpenStack Manuals. Tag the bug under Extra options with the ops-guide tag to indicate that the bug is in this guide. You can assign the bug to yourself if you know how to fix it. Also, a member of the OpenStack doc-core team can triage the doc bug." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:778(title) -msgid "Conventions Used in This Book" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:780(para) -msgid "The following typographical conventions are used in this book:" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:785(emphasis) -msgid "Italic" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:788(para) -msgid "Indicates new terms, URLs, email addresses, filenames, and file extensions." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:794(literal) -msgid "Constant width" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:797(para) -msgid "Used for program listings, as well as within paragraphs to refer to program elements such as variable or function names, databases, data types, environment variables, statements, and keywords." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:808(para) -msgid "Shows commands or other text that should be typed literally by the user." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:814(replaceable) -msgid "Constant width italic" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:817(para) -msgid "Shows text that should be replaced with user-supplied values or by values determined by context." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:823(term) -msgid "Command prompts" -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:826(para) -msgid "Commands prefixed with the # prompt should be executed by the root user. These examples can also be executed using the sudo command, if available." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:831(para) -msgid "Commands prefixed with the $ prompt can be executed by any user, including root." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:839(para) -msgid "This element signifies a tip or suggestion." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:843(para) -msgid "This element signifies a general note." -msgstr "" - -#: ./doc/openstack-ops/preface_ops.xml:847(para) -msgid "This element indicates a warning or caution." 
-msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:10(title) -msgid "Use Cases" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:12(para) -msgid "This appendix contains a small selection of use cases from the community, with more technical detail than usual. Further examples can be found on the OpenStack website." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:19(title) -msgid "NeCTAR" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:21(para) -msgid "Who uses it: researchers from the Australian publicly funded research sector. Use is across a wide variety of disciplines, with the purpose of instances ranging from running simple web servers to using hundreds of cores for high-throughput computing.NeCTAR Research Clouduse casesNeCTAROpenStack communityuse casesNeCTAR" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:40(title) ./doc/openstack-ops/app_usecases.xml:110(title) ./doc/openstack-ops/app_usecases.xml:202(title) ./doc/openstack-ops/app_usecases.xml:258(title) -msgid "Deployment" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:42(para) -msgid "Using OpenStack Compute cells, the NeCTAR Research Cloud spans eight sites with approximately 4,000 cores per site." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:45(para) -msgid "Each site runs a different configuration, as a resource cells in an OpenStack Compute cells setup. Some sites span multiple data centers, some use off compute node storage with a shared file system, and some use on compute node storage with a non-shared file system. Each site deploys the Image service with an Object Storage back end. A central Identity, dashboard, and Compute API service are used. A login to the dashboard triggers a SAML login with Shibboleth, which creates an account in the Identity service with an SQL back end. An Object Storage Global Cluster is used across several sites." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:56(para) -msgid "Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core and approximately 40 GB of ephemeral storage per core." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:59(para) -msgid "All sites are based on Ubuntu 14.04, with KVM as the hypervisor. The OpenStack version in use is typically the current stable version, with 5 to 10 percent back-ported code from trunk and modifications." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:66(title) ./doc/openstack-ops/app_usecases.xml:166(title) ./doc/openstack-ops/app_usecases.xml:227(title) ./doc/openstack-ops/app_usecases.xml:280(title) ./doc/openstack-ops/ch_ops_resources.xml:11(title) -msgid "Resources" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:70(link) -msgid "OpenStack.org case study" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:75(link) -msgid "NeCTAR-RC GitHub" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:80(link) -msgid "NeCTAR website" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:88(title) -msgid "MIT CSAIL" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:90(para) -msgid "Who uses it: researchers from the MIT Computer Science and Artificial Intelligence Lab.CSAIL (Computer Science and Artificial Intelligence Lab)MIT CSAIL (Computer Science and Artificial Intelligence Lab)use casesMIT CSAILOpenStack communityuse casesMIT CSAIL" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:112(para) -msgid "The CSAIL cloud is currently 64 physical nodes with a total of 768 physical cores and 3,456 GB of RAM. 
Persistent data storage is largely outside the cloud on NFS, with cloud resources focused on compute resources. There are more than 130 users in more than 40 projects, typically running 2,000–2,500 vCPUs in 300 to 400 instances." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:118(para) -msgid "We initially deployed on Ubuntu 12.04 with the Essex release of OpenStack using FlatDHCP multi-host networking." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:121(para) -msgid "The software stack is still Ubuntu 12.04 LTS, but now with OpenStack Havana from the Ubuntu Cloud Archive. KVM is the hypervisor, deployed using FAI and Puppet for configuration management. The FAI and Puppet combination is used lab-wide, not only for OpenStack. There is a single cloud controller node, which also acts as the network controller, with the remainder of the server hardware dedicated to compute nodes." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:129(para) -msgid "Host aggregates and instance-type extra specs are used to provide two different resource allocation ratios. The default resource allocation ratios we use are 4:1 CPU and 1.5:1 RAM. Compute-intensive workloads use instance types that require non-oversubscribed hosts where cpu_ratio and ram_ratio are both set to 1.0. Since we have hyper-threading enabled on our compute nodes, this provides one vCPU per CPU thread, or two vCPUs per physical core." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:138(para) -msgid "With our upgrade to Grizzly in August 2013, we moved to OpenStack Networking, neutron (quantum at the time). Compute nodes have two gigabit network interfaces and a separate management card for IPMI management. One network interface is used for node-to-node communications. The other is used as a trunk port for OpenStack-managed VLANs. The controller node uses two bonded 10 Gb network interfaces for its public IP communications. Big pipes are used here because images are served over this port, and it is also used to connect to iSCSI storage, back-ending the image storage and database. The controller node also has a gigabit interface that is used in trunk mode for OpenStack-managed VLAN traffic. This port handles traffic to the dhcp-agent and metadata-proxy." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:151(para) -msgid "We approximate the older nova-network multi-host HA setup by using \"provider VLAN networks\" that connect instances directly to existing publicly addressable networks and use existing physical routers as their default gateway. This means that if our network controller goes down, running instances still have their network available, and no single Linux host becomes a traffic bottleneck. We are able to do this because we have a sufficient supply of IPv4 addresses to cover all of our instances and thus don't need NAT and don't use floating IP addresses. We provide a single generic public network to all projects and additional existing VLANs on a project-by-project basis as needed. Individual projects are also allowed to create their own private GRE-based networks." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:170(link) -msgid "CSAIL homepage" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:178(title) -msgid "DAIR" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:180(para) -msgid "Who uses it: DAIR is an integrated virtual environment that leverages the CANARIE network to develop and test new information communication technology (ICT) and other digital technologies.
It combines such digital infrastructure as advanced networking and cloud computing and storage to create an environment for developing and testing innovative ICT applications, protocols, and services; performing at-scale experimentation for deployment; and facilitating a faster time to market.DAIRuse casesDAIROpenStack communityuse casesDAIR" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:204(para) -msgid "DAIR is hosted at two different data centers across Canada: one in Alberta and the other in Quebec. It consists of a cloud controller at each location, although one is designated the \"master\" controller that is in charge of central authentication and quotas. This is done through custom scripts and light modifications to OpenStack. DAIR is currently running Havana." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:211(para) -msgid "For Object Storage, each region has a swift environment." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:213(para) -msgid "A NetApp appliance is used in each region for both block storage and instance storage. There are future plans to move the instances off the NetApp appliance and onto a distributed file system such as Ceph or GlusterFS." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:218(para) -msgid "VlanManager is used extensively for network management. All servers have two bonded 10GbE NICs that are connected to two redundant switches. DAIR is set up to use single-node networking where the cloud controller is the gateway for all instances on all compute nodes. Internal OpenStack traffic (for example, storage traffic) does not go through the cloud controller." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:231(link) -msgid "DAIR homepage" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:239(title) -msgid "CERN" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:241(para) -msgid "Who uses it: researchers at CERN (European Organization for Nuclear Research) conducting high-energy physics research.CERN (European Organization for Nuclear Research)use casesCERNOpenStack communityuse casesCERN" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:260(para) -msgid "The environment is largely based on Scientific Linux 6, which is Red Hat compatible. We use KVM as our primary hypervisor, although tests are ongoing with Hyper-V on Windows Server 2008." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:264(para) -msgid "We use the Puppet Labs OpenStack modules to configure Compute, Image service, Identity, and dashboard. Puppet is used widely for instance configuration, and Foreman is used as a GUI for reporting and instance provisioning." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:269(para) -msgid "Users and groups are managed through Active Directory and imported into the Identity service using LDAP. CLIs are available for nova and Euca2ools to do this." -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:273(para) -msgid "There are three clouds currently running at CERN, totaling about 4,700 compute nodes, with approximately 120,000 cores. The CERN IT cloud aims to expand to 300,000 cores by 2015."
-msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:284(link) -msgid "“OpenStack in Production: A tale of 3 OpenStack Clouds”" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:289(link) -msgid "“Review of CERN Data Centre Infrastructure”" -msgstr "" - -#: ./doc/openstack-ops/app_usecases.xml:294(link) -msgid "“CERN Cloud Infrastructure User Guide”" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:12(title) -msgid "Logging and Monitoring" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:14(para) -msgid "As an OpenStack cloud is composed of so many different services, there are a large number of log files. This chapter aims to assist you in locating and working with them and describes other ways to track the status of your deployment.debugginglogging/monitoring; maintenance/debugging" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:24(title) -msgid "Where Are the Logs?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:26(para) -msgid "Most services use the convention of writing their log files to subdirectories of the /var/log directory, as listed in .cloud controllerslog informationlogging/monitoringlog location" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:39(caption) -msgid "OpenStack log locations" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:43(th) -msgid "Node type" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:45(th) -msgid "Service" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:47(th) -msgid "Log location" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:53(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:61(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:69(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:77(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:85(para) ./doc/openstack-ops/ch_ops_log_monitor.xml:93(para) -msgid "Cloud controller" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:55(code) -msgid "nova-*" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:57(code) -msgid "/var/log/nova" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:63(code) -msgid "glance-*" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:65(code) -msgid "/var/log/glance" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:71(code) -msgid "cinder-*" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:73(code) -msgid "/var/log/cinder" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:79(code) -msgid "keystone-*" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:81(code) -msgid "/var/log/keystone" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:87(code) -msgid "neutron-*" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:89(code) -msgid "/var/log/neutron" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:95(para) ./doc/openstack-ops/ch_ops_customize.xml:332(code) ./doc/openstack-ops/ch_ops_customize.xml:811(code) -msgid "horizon" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:97(code) -msgid "/var/log/apache2/" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:101(para) -msgid "All nodes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:103(para) -msgid "misc (swift, dnsmasq)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:105(code) -msgid "/var/log/syslog" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:111(para) -msgid "libvirt" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:113(code) -msgid 
"/var/log/libvirt/libvirtd.log" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:119(para) -msgid "Console (boot up messages) for VM instances:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:121(code) -msgid "/var/lib/nova/instances/instance-" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:122(code) -msgid "<instance id>/console.log" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:127(para) -msgid "Block Storage nodes" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:129(para) -msgid "cinder-volume" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:131(code) -msgid "/var/log/cinder/cinder-volume.log" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:139(title) -msgid "Reading the Logs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:141(para) -msgid "OpenStack services use the standard logging levels, at increasing severity: DEBUG, INFO, AUDIT, WARNING, ERROR, CRITICAL, and TRACE. That is, messages only appear in the logs if they are more \"severe\" than the particular log level, with DEBUG allowing all log statements through. For example, TRACE is logged only if the software has a stack trace, while INFO is logged for every message including those that are only for information.logging/monitoringlogging levels" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:153(para) -msgid "To disable DEBUG-level logging, edit /etc/nova/nova.conf as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:158(para) -msgid "Keystone is handled a little differently. To modify the logging level, edit the /etc/keystone/logging.conf file and look at the logger_root and handler_file sections." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:163(para) -msgid "Logging for horizon is configured in /etc/openstack_dashboard/local_settings.py. Because horizon is a Django web application, it follows the Django Logging framework conventions." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:169(para) -msgid "The first step in finding the source of an error is typically to search for a CRITICAL, TRACE, or ERROR message in the log starting at the bottom of the log file.logging/monitoringreading log messages" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:177(para) -msgid "Here is an example of a CRITICAL log message, with the corresponding TRACE (Python traceback) immediately following:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:211(para) -msgid "In this example, cinder-volumes failed to start and has provided a stack trace, since its volume back end has been unable to set up the storage volume—probably because the LVM volume that is expected from the configuration does not exist." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:216(para) -msgid "Here is an example error log:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:221(para) -msgid "In this error, a nova service has failed to connect to the RabbitMQ server because it got a connection refused error." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:226(title) -msgid "Tracing Instance Requests" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:228(para) -msgid "When an instance fails to behave properly, you will often have to trace activity associated with that instance across the log files of various nova-* services and across both the cloud controller and compute nodes.instancestracing instance requestslogging/monitoringtracing instance requests" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:241(para) -msgid "The typical way is to trace the UUID associated with an instance across the service logs." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:244(para) -msgid "Consider the following example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:253(para) -msgid "Here, the ID associated with the instance is faf7ded8-4a46-413b-b113-f19590746ffe. If you search for this string on the cloud controller in the /var/log/nova-*.log files, it appears in nova-api.log and nova-scheduler.log. If you search for this on the compute nodes in /var/log/nova-*.log, it appears in nova-network.log and nova-compute.log. If no ERROR or CRITICAL messages appear, the most recent log entry that reports this may provide a hint about what has gone wrong." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:267(title) -msgid "Adding Custom Logging Statements" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:269(para) -msgid "If there is not enough information in the existing logs, you may need to add your own custom logging statements to the nova-* services.customizationcustom log statementslogging/monitoringadding custom log statements" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:281(para) -msgid "The source files are located in /usr/lib/python2.7/dist-packages/nova." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:284(para) -msgid "To add logging statements, the following line should be near the top of the file. For most files, these should already be there:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:290(para) -msgid "To add a DEBUG logging statement, you would do:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:294(para) -msgid "You may notice that all the existing logging messages are preceded by an underscore and surrounded by parentheses, for example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:299(para) -msgid "This formatting is used to support translation of logging messages into different languages using the gettext internationalization library. You don't need to do this for your own custom log messages. However, if you want to contribute the code back to the OpenStack project that includes logging statements, you must surround your log messages with underscores and parentheses." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:309(title) -msgid "RabbitMQ Web Management Interface or rabbitmqctl" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:311(para) -msgid "Aside from connection failures, RabbitMQ log files are generally not useful for debugging OpenStack related issues. 
Instead, we recommend you use the RabbitMQ web management interface.RabbitMQlogging/monitoringRabbitMQ web management interface Enable it on your cloud controller:cloud controllersenabling RabbitMQ" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:330(para) -msgid "The RabbitMQ web management interface is accessible on your cloud controller at http://localhost:55672." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:334(para) -msgid "Ubuntu 12.04 installs RabbitMQ version 2.7.1, which uses port 55672. RabbitMQ versions 3.0 and above use port 15672 instead. You can check which version of RabbitMQ you have running on your local Ubuntu machine by doing:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:343(para) -msgid "An alternative to enabling the RabbitMQ web management interface is to use the rabbitmqctl commands. For example, rabbitmqctl list_queues | grep cinder displays any messages left in the queue. If there are messages, it's a possible sign that cinder services didn't connect properly to RabbitMQ and might have to be restarted." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:350(para) -msgid "Items to monitor for RabbitMQ include the number of items in each of the queues and the processing time statistics for the server." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:355(title) -msgid "Centrally Managing Logs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:357(para) -msgid "Because your cloud is most likely composed of many servers, you must check logs on each of those servers to properly piece an event together. A better solution is to send the logs of all servers to a central location so that they can all be accessed from the same area.logging/monitoringcentral log management" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:367(para) -msgid "Ubuntu uses rsyslog as the default logging service. Since it is natively able to send logs to a remote location, you don't have to install anything extra to enable this feature; just modify the configuration file. In doing this, consider running your logging over a management network or using an encrypted VPN to avoid interception." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:374(title) -msgid "rsyslog Client Configuration" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:376(para) -msgid "To begin, configure all OpenStack components to log to syslog in addition to their standard log file location. Also configure each component to log to a different syslog facility. This makes it easier to split the logs into individual components on the central server:rsyslog" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:384(para) -msgid "nova.conf:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:389(para) -msgid "glance-api.conf and glance-registry.conf:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:395(para) -msgid "cinder.conf:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:400(para) -msgid "keystone.conf:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:405(para) -msgid "By default, Object Storage logs to syslog." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:407(para) -msgid "Next, create /etc/rsyslog.d/client.conf with the following line:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:412(para) -msgid "This instructs rsyslog to send all logs to the IP listed. In this example, the IP points to the cloud controller."
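A minimal sketch of the client-side configuration just described, assuming the cloud controller is reachable at 192.168.1.10 and nova is assigned the LOG_LOCAL0 facility (the other services would each take the same two options with a different facility, such as LOG_LOCAL1, LOG_LOCAL2, and so on):

    # in nova.conf
    use_syslog = True
    syslog_log_facility = LOG_LOCAL0

    # /etc/rsyslog.d/client.conf -- forward everything to the log server
    *.* @192.168.1.10

The single @ forwards over UDP; rsyslog also supports @@ for TCP if you prefer reliable delivery.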
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:417(title) -msgid "rsyslog Server Configuration" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:419(para) -msgid "Designate a server as the central logging server. The best practice is to choose a server that is solely dedicated to this purpose. Create a file called /etc/rsyslog.d/server.conf with the following contents:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:444(para) -msgid "This example configuration handles the nova service only. It first configures rsyslog to act as a server that runs on port 514. Next, it creates a series of logging templates. Logging templates control where received logs are stored. Using the last example, a nova log from c01.example.com goes to the following locations:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:452(filename) -msgid "/var/log/rsyslog/c01.example.com/nova.log" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:456(filename) ./doc/openstack-ops/ch_ops_log_monitor.xml:468(filename) -msgid "/var/log/rsyslog/nova.log" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:460(para) -msgid "This is useful, as logs from c02.example.com go to:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:464(filename) -msgid "/var/log/rsyslog/c02.example.com/nova.log" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:472(para) -msgid "You have an individual log file for each compute node as well as an aggregated log that contains nova logs from all nodes." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:478(title) -msgid "Monitoring" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:480(para) -msgid "There are two types of monitoring: watching for problems and watching usage trends. The former ensures that all services are up and running, creating a functional cloud. The latter involves monitoring resource usage over time in order to make informed decisions about potential bottlenecks and upgrades.cloud controllersprocess monitoring and" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:493(title) -msgid "Nagios" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:495(para) -msgid "Nagios is an open source monitoring service. It's capable of executing arbitrary commands to check the status of server and network services, remotely executing arbitrary commands directly on servers, and allowing servers to push notifications back in the form of passive monitoring. Nagios has been around since 1999. Although newer monitoring services are available, Nagios is a tried-and-true systems administration staple.Nagios" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:507(title) -msgid "Process Monitoring" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:509(para) -msgid "A basic type of alert monitoring is to simply check and see whether a required process is running.monitoringprocess monitoringprocess monitoringlogging/monitoringprocess monitoring For example, ensure that the nova-api service is running on the cloud controller:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:536(para) -msgid "You can create automated alerts for critical processes by using Nagios and NRPE. 
For example, to ensure that the nova-compute process is running on compute nodes, create an alert on your Nagios server that looks like this:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:550(para) -msgid "Then on the actual compute node, create the following NRPE configuration:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:556(para) -msgid "Nagios checks that at least one nova-compute service is running at all times." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:561(title) -msgid "Resource Alerting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:563(para) -msgid "Resource alerting provides notifications when one or more resources are critically low. While the monitoring thresholds should be tuned to your specific OpenStack environment, monitoring resource usage is not specific to OpenStack at all—any generic type of alert will work fine.monitoringresource alertingalertsresourceresourcesresource alertinglogging/monitoringresource alerting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:585(para) -msgid "Some of the resources that you want to monitor include:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:589(para) -msgid "Disk usage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:593(para) -msgid "Server load" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:597(para) -msgid "Memory usage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:601(para) -msgid "Network I/O" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:605(para) -msgid "Available vCPUs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:609(para) -msgid "For example, to monitor disk capacity on a compute node with Nagios, add the following to your Nagios configuration:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:620(para) -msgid "On the compute node, add the following to your NRPE configuration:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:626(para) -msgid "Nagios alerts you with a WARNING when any disk on the compute node is 80 percent full and CRITICAL when 90 percent is full." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:631(title) -msgid "StackTach" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:633(para) -msgid "StackTach is a tool that collects and reports the notifications sent by nova. Notifications are essentially the same as logs but can be much more detailed. Nearly all OpenStack components are capable of generating notifications when significant events occur. Notifications are messages placed on the OpenStack queue (generally RabbitMQ) for consumption by downstream systems. An overview of notifications can be found at System Usage Data.StackTachlogging/monitoringStackTach tool" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:649(para) -msgid "To enable nova to send notifications, add the following to nova.conf:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:655(para) -msgid "Once nova is sending notifications, install and configure StackTach. StackTach workers for queue consumption and pipeline processing are configured to read these notifications from RabbitMQ servers and store them in a database. Users can query instances, requests, and servers by using the browser interface or the command-line tool, Stacky. Since StackTach is relatively new and constantly changing, installation instructions quickly become outdated.
Please refer to the StackTach Git repo for instructions as well as a demo video. Additional details on the latest developments can be discovered at the official page" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:666(title) -msgid "Logstash" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:667(para) -msgid "Logstash is a high-performance indexing and search engine for logs. Logs from Jenkins test runs are sent to Logstash, where they are indexed and stored. Logstash facilitates reviewing logs from multiple sources in a single test run, searching for errors or particular events within a test run, and searching for log event trends across test runs." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:677(para) -msgid "Log Pusher" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:680(para) -msgid "Log Indexer" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:683(para) -msgid "ElasticSearch" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:686(para) -msgid "Kibana" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:673(para) -msgid "There are four major layers in a Logstash setup, which are listed above. Each layer scales horizontally. As the number of logs grows, you can add more log pushers, more Logstash indexers, and more ElasticSearch nodes." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:692(para) -msgid "Logpusher is a pair of Python scripts that first listen to Jenkins build events and convert them into Gearman jobs. Gearman provides a generic application framework to farm out work to other machines or processes that are better suited to do the work. It allows you to do work in parallel, to load balance processing, and to call functions between languages. Logpusher then performs Gearman jobs to push log files into Logstash. The Logstash indexer reads these log events, filters them to remove unwanted lines, collapses multiple events together, and parses useful information before shipping them to ElasticSearch for storage and indexing. Kibana is a Logstash-oriented web client for ElasticSearch.Logstashlogging/monitoringLogstash" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:712(title) -msgid "OpenStack Telemetry" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:714(para) -msgid "An integrated OpenStack project (code-named ceilometer) collects metering and event data relating to OpenStack services. Data collected by the Telemetry module can be used for billing. Depending on the deployment configuration, collected data may be accessible to users. The Telemetry service provides a REST API documented at . You can read more about the module in the OpenStack Cloud Administrator Guide or in the developer documentation.monitoringmetering and telemetrytelemetry/meteringmetering/telemetryceilometerlogging/monitoringceilometer project" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:741(title) -msgid "OpenStack-Specific Resources" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:743(para) -msgid "Resources such as memory, disk, and CPU are generic resources that all servers (even non-OpenStack servers) have and are important to the overall health of the server. When dealing with OpenStack specifically, these resources are important for a second reason: ensuring that enough are available to launch instances. There are a few ways you can see OpenStack resource usage.monitoringOpenStack-specific resourcesresourcesgeneric vs.
OpenStack-specificlogging/monitoringOpenStack-specific resources The first is through the nova command:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:765(para) -msgid "This command displays a list of how many instances a tenant has running and some light usage statistics about the combined instances. This command is useful for a quick overview of your cloud, but it doesn't really get into a lot of details." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:770(para) -msgid "Next, the nova database contains three tables that store usage information." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:773(para) -msgid "The nova.quotas and nova.quota_usages tables store quota information. If a tenant's quota is different from the default quota settings, its quota is stored in the nova.quotas table. For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:794(para) -msgid "The nova.quota_usages table keeps track of how many resources the tenant currently has in use:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:810(para) -msgid "By comparing a tenant's hard limit with their current resource usage, you can see their usage percentage. For example, if this tenant is using 1 floating IP out of 10, then they are using 10 percent of their floating IP quota. Rather than doing the calculation manually, you can use SQL or the scripting language of your choice and create a formatted report:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:839(para) -msgid "The preceding information was generated by using a custom script that can be found on GitHub." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:844(para) -msgid "This script is specific to a certain OpenStack installation and must be modified to fit your environment. However, the logic should easily be transferable." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:851(title) -msgid "Intelligent Alerting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:853(para) -msgid "Intelligent alerting can be thought of as a form of continuous integration for operations. For example, you can easily check to see whether the Image service is up and running by ensuring that the glance-api and glance-registry processes are running or by seeing whether glance-api is responding on port 9292.monitoringintelligent alertingalertsintelligentlogging/monitoringintelligent alerting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:876(para) -msgid "But how can you tell whether images are being successfully uploaded to the Image service? Maybe the disk that the Image service is storing the images on is full or the S3 back end is down. You could naturally check this by doing a quick image upload:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:895(para) -msgid "By taking this script and rolling it into an alert for your monitoring system (such as Nagios), you now have an automated way of ensuring that image uploads to the Image Catalog are working." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:900(para) -msgid "You must remove the image after each test. Even better, test whether you can successfully delete an image from the Image service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:905(para) -msgid "Intelligent alerting takes considerably more time to plan and implement than the other alerts described in this chapter.
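For reference, here is a minimal sketch of the quick image-upload check mentioned above, using the glance CLI of that era; the image name, the awk extraction of the ID column, and the cleanup step are illustrative assumptions:

    #!/bin/bash
    # Upload a tiny test image, delete it again, and fail loudly on any error.
    set -e
    echo "test" > /tmp/test.txt
    ID=$(glance image-create --name=monitoring-test --container-format=bare \
         --disk-format=raw < /tmp/test.txt | awk '/ id /{print $4}')
    glance image-delete "$ID"   # always remove the image after each test
    rm /tmp/test.txt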
A good outline for implementing intelligent alerting is:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:911(para) -msgid "Review common actions in your cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:915(para) -msgid "Create ways to automatically test these actions." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:919(para) -msgid "Roll these tests into an alerting system." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:923(para) -msgid "Some other examples of intelligent alerting include:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:927(para) -msgid "Can instances launch and be destroyed?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:931(para) -msgid "Can users be created?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:935(para) -msgid "Can objects be stored and deleted?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:939(para) -msgid "Can volumes be created and destroyed?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:945(title) -msgid "Trending" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:947(para) -msgid "Trending can give you great insight into how your cloud is performing day to day. You can learn, for example, if a busy day was simply a rare occurrence or if you should start adding new compute nodes.monitoringtrendinglogging/monitoringtrendingmonitoring cloud performance withlogging/monitoringtrending" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:966(para) -msgid "Trending takes a slightly different approach than alerting. While alerting is interested in a binary result (whether a check succeeds or fails), trending records the current state of something at a certain point in time. Once enough points in time have been recorded, you can see how the value has changed over time.trendingvs. alertsbinarybinary results in trending" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:980(para) -msgid "All of the alert types mentioned earlier can also be used for trend reporting. Some other trend examples include:trendingreport examples" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:990(para) -msgid "The number of instances on each compute node" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:994(para) -msgid "The types of flavors in use" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:998(para) -msgid "The number of volumes in use" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1002(para) -msgid "The number of Object Storage requests each hour" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1006(para) -msgid "The number of nova-api requests each hour" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1011(para) -msgid "The I/O statistics of your storage services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1015(para) -msgid "As an example, recording nova-api usage can allow you to track the need to scale your cloud controller. By keeping an eye on nova-api requests, you can determine whether you need to spawn more nova-api processes or go as far as introducing an entirely new server to run nova-api.
To get an approximate count of the requests, look for standard INFO messages in /var/log/nova/nova-api.log:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1025(para) -msgid "You can obtain further statistics by looking for the number of successful requests:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1030(para) -msgid "By running this command periodically and keeping a record of the result, you can create a trending report over time that shows whether your nova-api usage is increasing, decreasing, or keeping steady." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1035(para) -msgid "A tool such as collectd can be used to store this information. While collectd is out of the scope of this book, a good starting point would be to use collectd to store the result as a COUNTER data type. More information can be found in collectd's documentation." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_log_monitor.xml:1047(para) -msgid "For stable operations, you want to detect failure promptly and determine causes efficiently. With a distributed system, it's even more important to track the right items to meet a service-level target. Learning where these logs are located in the file system or API gives you an advantage. This chapter also showed how to read, interpret, and manipulate information from OpenStack services so that you can monitor effectively." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:12(title) -msgid "Backup and Recovery" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:14(para) -msgid "Standard backup best practices apply when creating your OpenStack backup policy. For example, how often to back up your data is closely related to how quickly you need to recover from data loss.backup/recoveryconsiderations" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:24(para) -msgid "If you cannot have any data loss at all, you should also focus on a highly available deployment. The OpenStack High Availability Guide offers suggestions for elimination of a single point of failure that could cause system downtime. While it is not a completely prescriptive document, it offers methods and techniques for avoiding downtime and data loss.datapreventing loss of" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:37(para) -msgid "Other backup considerations include:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:41(para) -msgid "How many backups to keep?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:45(para) -msgid "Should backups be kept off-site?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:49(para) -msgid "How often should backups be tested?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:53(para) -msgid "Just as important as a backup policy is a recovery policy (or at least recovery testing)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:57(title) -msgid "What to Back Up" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:59(para) -msgid "While OpenStack is composed of many components and moving parts, backing up the critical data is quite simple.backup/recoveryitems included" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:66(para) -msgid "This chapter describes only how to back up configuration files and databases that the various OpenStack components need to run. This chapter does not describe how to back up objects inside Object Storage or data contained inside Block Storage. 
Generally these areas are left for users to back up on their own." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:74(title) -msgid "Database Backups" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:76(para) -msgid "The example OpenStack architecture designates the cloud controller as the MySQL server. This MySQL server hosts the databases for nova, glance, cinder, and keystone. With all of these databases in one place, it's very easy to create a database backup:databasesbackup/recovery ofbackup/recoverydatabases" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:91(para) -msgid "If you want to back up only a single database, you can instead run:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:96(para) -msgid "where nova is the database you want to back up." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:98(para) -msgid "You can easily automate this process by creating a cron job that runs the following script once per day:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:109(para) -msgid "This script dumps the entire MySQL database and deletes any backups older than seven days." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:114(title) -msgid "File System Backups" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:116(para) -msgid "This section discusses which files and directories should be backed up regularly, organized by service.file systemsbackup/recovery ofbackup/recoveryfile systems" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:128(title) ./doc/openstack-ops/section_arch_example-neutron.xml:309(td) -msgid "Compute" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:130(para) -msgid "The /etc/nova directory on both the cloud controller and compute nodes should be regularly backed up.cloud controllersfile system backups andcompute nodesbackup/recovery of" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:142(para) -msgid "/var/log/nova does not need to be backed up if you have all logs going to a central area. It is highly recommended to use a central logging server or back up the log directory." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:146(para) -msgid "/var/lib/nova is another important directory to back up. The exception to this is the /var/lib/nova/instances subdirectory on compute nodes. This subdirectory contains the KVM images of running instances. You would want to back up this directory only if you need to maintain backup copies of all instances. Under most circumstances, you do not need to do this, but this can vary from cloud to cloud, depending on your service levels. Also be aware that making a backup of a live KVM instance can cause that instance to not boot properly if it is ever restored from a backup." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:158(title) -msgid "Image Catalog and Delivery" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:160(para) -msgid "/etc/glance and /var/log/glance follow the same rules as their nova counterparts.Image servicebackup/recovery of" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:167(para) -msgid "/var/lib/glance should also be backed up. Take special notice of /var/lib/glance/images. If you are using a file-based back end for glance, /var/lib/glance/images is where the images are stored and care should be taken."
 - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:182(title) -msgid "Identity" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:184(para) -msgid "/etc/keystone and /var/log/keystone follow the same rules as other components." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:191(para) -msgid "/var/lib/keystone, although it should not contain any data being used, can also be backed up just in case." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:198(para) -msgid "/etc/cinder and /var/log/cinder follow the same rules as other components." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:207(para) -msgid "/var/lib/cinder should also be backed up." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:213(para) -msgid "/etc/swift is very important to have backed up. This directory contains the swift configuration files as well as the ring files and ring builder files, which, if lost, render the data on your cluster inaccessible. A best practice is to copy the builder files to all storage nodes along with the ring files. Multiple backup copies are spread throughout your storage cluster." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:234(title) -msgid "Recovering Backups" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:236(para) -msgid "Recovering backups is a fairly simple process. To begin, first ensure that the service you are recovering is not running. For example, to do a full recovery of nova on the cloud controller, first stop all nova services:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:258(para) -msgid "Now you can import a previously backed-up database:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:262(para) -msgid "You can also restore backed-up nova directories:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:267(para) -msgid "Once the files are restored, start everything back up:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:276(para) -msgid "Other services follow the same process, with their respective directories and databases." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_backup_recovery.xml:283(para) -msgid "Backup and subsequent recovery is one of the first tasks system administrators learn. However, each system has different items that need attention. By taking care of your database, image service, and appropriate file system locations, you can be assured that you can handle any event requiring recovery." -msgstr ""
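To make the recovery sequence above concrete, a sketch assuming Ubuntu-style upstart service names and a hypothetical backup location:

    # Stop nova services on the cloud controller (repeat for each nova service).
    stop nova-api
    stop nova-scheduler
    # Import the previously backed-up database, then restore configuration files.
    mysql nova < nova.sql
    cp -a /var/lib/backups/etc/nova/* /etc/nova/
    # Start everything back up.
    start nova-api
    start nova-scheduler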
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:10(title) -msgid "User-Facing Operations" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:12(para) -msgid "This guide is for OpenStack operators and does not seek to be an exhaustive reference for users, but as an operator, you should have a basic understanding of how to use the cloud facilities. This chapter looks at OpenStack from a basic user perspective, which helps you understand your users' needs and determine, when you get a trouble ticket, whether it is a user issue or a service issue. The main concepts covered are images, flavors, security groups, block storage, shared file system storage, and instances." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:21(title) ./doc/openstack-ops/ch_arch_cloud_controller.xml:585(title) -msgid "Images" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:25(para) -msgid "OpenStack images can often be thought of as \"virtual machine templates.\" Images can also be standard installation media such as ISO images. Essentially, they contain bootable file systems that are used to launch instances." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:36(title) -msgid "Adding Images" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:38(para) -msgid "Several pre-made images exist and can easily be imported into the Image service. A common image to add is the CirrOS image, which is very small and used for testing purposes. To add this image, simply do:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:50(para) -msgid "The glance image-create command provides a large set of options for working with your image. For example, the min-disk option is useful for images that require root disks of a certain size (for example, large Windows images). To view these options, do:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:58(para) -msgid "The location option is important to note. It does not copy the entire image into the Image service, but references an original location where the image can be found. Upon launching an instance of that image, the Image service accesses the image from the location specified." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:64(para) -msgid "The copy-from option copies the image from the location specified into the /var/lib/glance/images directory. The same thing is done when using the STDIN redirection with <, as shown in the example." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:69(para) -msgid "Run the following command to view the properties of existing images:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:77(title) -msgid "Adding Signed Images" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:79(para) -msgid "To provide a chain of trust from an end user to the Image service, and the Image service to Compute, an end user can import signed images into the Image service that can be verified in Compute. Appropriate Image service properties need to be set to enable signature verification. Currently, signature verification is provided in Compute only, but an accompanying feature in the Image service is targeted for Mitaka." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:87(para) -msgid "Prior to the steps below, an asymmetric keypair and certificate must be generated. In this example, these are called private_key.pem and new_cert.crt, respectively, and both reside in the current directory. Also note that the image in this example is cirros-0.3.4-x86_64-disk.img, but any image can be used." -msgstr ""
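One way to produce the keypair and certificate named above is with openssl; this is a sketch, and the key size and validity period are assumptions:

    # Create an RSA private key, a certificate request, and a self-signed certificate.
    openssl genrsa -out private_key.pem 4096
    openssl req -new -key private_key.pem -out new_cert.csr
    openssl x509 -req -days 365 -in new_cert.csr -signkey private_key.pem -out new_cert.crt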
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:93(para) -msgid "The following are steps needed to create the signature used for the signed images:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:97(para) -msgid "Retrieve image for upload" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:102(para) -msgid "Use private key to create a signature of the image" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:107(para) -msgid "Signature hash method = SHA-256" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:108(para) -msgid "Signature key type = RSA-PSS" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:104(para) -msgid "The following implicit values are being used to create the signature in this example: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:115(para) -msgid "Signature hash methods: SHA-224, SHA-256, SHA-384, and SHA-512" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:116(para) -msgid "Signature key types: DSA, ECC_SECT571K1, ECC_SECT409K1, ECC_SECT571R1, ECC_SECT409R1, ECC_SECP521R1, ECC_SECP384R1, and RSA-PSS" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:113(para) -msgid "The following options are currently supported: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:120(para) -msgid "Generate signature of image and convert it to a base64 representation:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:129(para) -msgid "Create context" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:143(para) -msgid "Encode certificate in DER format" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:156(para) -msgid "Upload Certificate in DER format to Castellan" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:167(para) -msgid "Upload Image to Image service, with Signature Metadata" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:173(para) -msgid "img_signature uses the signature called signature_64" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:176(para) -msgid "img_signature_certificate_uuid uses the value from cert_uuid in section 5 above" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:180(para) -msgid "img_signature_hash_method matches 'SHA-256' in section 2 above" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:183(para) -msgid "img_signature_key_type matches 'RSA-PSS' in section 2 above" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:170(para) -msgid "The following signature properties are used: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:204(para) -msgid "Signature verification will occur when Compute boots the signed image" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:206(para) -msgid "As of the Mitaka release, Compute supports instance signature validation. This is enabled by setting the verify_glance_signatures flag in nova.conf to TRUE. When enabled, Compute will automatically validate signed images prior to launching instances." -msgstr ""
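A sketch of the signing steps enumerated above, using SHA-256 with RSA-PSS padding as stated; the file names come from the example, and the base64 invocation assumes GNU coreutils:

    # Sign the image and convert the signature to base64.
    openssl dgst -sha256 -sigopt rsa_padding_mode:pss \
        -sign private_key.pem -out image.signature cirros-0.3.4-x86_64-disk.img
    signature_64=$(base64 -w 0 image.signature)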
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:213(title) -msgid "Sharing Images Between Projects" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:215(para) -msgid "In a multi-tenant cloud environment, users sometimes want to share their personal images or snapshots with other projects. This can be done on the command line with the glance tool by the owner of the image." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:228(para) -msgid "To share an image or snapshot with another project, do the following:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:233(para) -msgid "Obtain the UUID of the image:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:239(para) -msgid "Obtain the UUID of the project with which you want to share your image. Unfortunately, non-admin users are unable to use the keystone command to do this. The easiest solution is to obtain the UUID either from an administrator of the cloud or from a user located in the project." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:247(para) -msgid "Once you have both pieces of information, run the glance command:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:257(para) -msgid "Project 771ed149ef7e4b2b88665cc1c98f77ca will now have access to image 733d1c44-a2ea-414b-aca7-69decf20d810." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:264(title) -msgid "Deleting Images" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:266(para) -msgid "To delete an image, just execute:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:275(para) -msgid "Deleting an image does not affect instances or snapshots that were based on the image." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:281(title) -msgid "Other CLI Options" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:283(para) -msgid "A full set of options can be found using:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:292(para) -msgid "or the Command-Line Interface Reference." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:297(title) -msgid "The Image service and the Database" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:299(para) -msgid "The only thing that the Image service does not store in a database is the image itself. The Image service database has two main tables:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:313(literal) -msgid "images" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:317(literal) -msgid "image_properties" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:321(para) -msgid "Working directly with the database and SQL queries can provide you with custom lists and reports of images. Technically, you can update properties about images through the database, although this is not generally recommended." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:328(title) -msgid "Example Image service Database Queries" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:330(para) -msgid "One interesting example is modifying the table of images and the owner of that image. This can be easily done if you simply display the unique ID of the owner. This example goes one step further and displays the readable name of the owner:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:344(para) -msgid "Another example is displaying all properties for a certain image:" -msgstr ""
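As a sketch of the owner-name query described above, assuming the glance and keystone databases share one MySQL server and that the keystone table is named project (both assumptions):

    mysql -e "SELECT i.id, i.name, p.name AS owner \
              FROM glance.images i JOIN keystone.project p ON i.owner = p.id;"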
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:353(title) -msgid "Flavors" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:355(para) -msgid "Virtual hardware templates are called \"flavors\" in OpenStack, defining sizes for RAM, disk, number of cores, and so on. The default install provides five flavors." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:359(para) -msgid "These are configurable by admin users (the rights may also be delegated to other users by redefining the access controls for compute_extension:flavormanage in /etc/nova/policy.json on the nova-api server). To get the list of available flavors on your system, run:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:385(para) -msgid "The nova flavor-create command allows authorized users to create new flavors. Additional flavor manipulation commands can be shown with the command: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:389(para) -msgid "Flavors define a number of parameters, resulting in the user having a choice of what type of virtual machine to run, just as they would if they were purchasing a physical server. The following table lists the elements that can be set. Note in particular extra_specs, which can be used to define free-form characteristics, giving a lot of flexibility beyond just the size of RAM, CPU, and Disk." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:401(caption) -msgid "Flavor parameters" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:409(emphasis) -msgid "Column" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:417(para) -msgid "ID" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:419(para) -msgid "Unique ID (integer or UUID) for the flavor." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:423(para) ./doc/openstack-ops/ch_ops_user_facing.xml:2382(th) ./doc/openstack-ops/ch_arch_scaling.xml:78(th) -msgid "Name" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:425(para) -msgid "A descriptive name, such as xx.size_name, is conventional but not required, though some third-party tools may rely on it." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:431(para) -msgid "Memory_MB" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:433(para) -msgid "Virtual machine memory in megabytes." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:437(para) ./doc/openstack-ops/ch_arch_scaling.xml:84(th) -msgid "Disk" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:439(para) -msgid "Virtual root disk size in gigabytes. This is an ephemeral disk into which the base image is copied. You don't use it when you boot from a persistent volume. The \"0\" size is a special case that uses the native base image size as the size of the ephemeral root volume." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:447(para) ./doc/openstack-ops/ch_arch_scaling.xml:86(th) -msgid "Ephemeral" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:449(para) -msgid "Specifies the size of a secondary ephemeral data disk. This is an empty, unformatted disk and exists only for the life of the instance." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:455(para) -msgid "Swap" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:457(para) -msgid "Optional swap space allocation for the instance." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:464(para) -msgid "Number of virtual CPUs presented to the instance."
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:469(para) -msgid "RXTX_Factor" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:471(para) -msgid "Optional property that allows created servers to have a different bandwidth cap from that defined in the network they are attached to. This factor is multiplied by the rxtx_base property of the network. Default value is 1.0 (that is, the same as the attached network)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:483(para) -msgid "Is_Public" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:485(para) -msgid "Boolean value that indicates whether the flavor is available to all users or private. Private flavors do not get the current tenant assigned to them. Defaults to True." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:492(para) -msgid "extra_specs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:494(para) -msgid "Additional optional restrictions on which compute nodes the flavor can run on. This is implemented as key-value pairs that must match against the corresponding key-value pairs on compute nodes. Can be used to implement things like special resources (such as flavors that can run only on compute nodes with GPU hardware)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:505(title) -msgid "Private Flavors" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:507(para) -msgid "A user might need a custom flavor that is uniquely tuned for a project she is working on. For example, the user might require 128 GB of memory. If you create a new flavor as described above, the user would have access to the custom flavor, but so would all other tenants in your cloud. Sometimes this sharing isn't desirable. In this scenario, allowing all users to have access to a flavor with 128 GB of memory might cause your cloud to reach full capacity very quickly. To prevent this, you can restrict access to the custom flavor using the nova command:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:519(para) -msgid "To view a flavor's access list, do the following:" -msgstr ""
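A sketch of the private-flavor commands just described; the flavor name, ID, sizes, and project UUID are illustrative:

    # Create a private 128 GB flavor, grant one project access, then verify.
    nova flavor-create --is-public false m1.megamem 100 131072 80 16
    nova flavor-access-add 100 771ed149ef7e4b2b88665cc1c98f77ca
    nova flavor-access-list --flavor 100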
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:524(title) -msgid "Best Practices" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:526(para) -msgid "Once access to a flavor has been restricted, no other projects besides the ones granted explicit access will be able to see the flavor. This includes the admin project. Make sure to add the admin project in addition to the original project." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:531(para) -msgid "It's also helpful to allocate a specific numeric range for custom and private flavors. On UNIX-based systems, nonsystem accounts usually have a UID starting at 500. A similar approach can be taken with custom flavors. This helps you easily identify which flavors are custom, private, and public for the entire cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:540(title) -msgid "How Do I Modify an Existing Flavor?" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:542(para) -msgid "The OpenStack dashboard simulates the ability to modify a flavor by deleting an existing flavor and creating a new one with the same name." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:551(title) -msgid "Security Groups" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:553(para) -msgid "A common new-user issue with OpenStack is failing to set an appropriate security group when launching an instance. As a result, the user is unable to contact the instance on the network." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:564(para) -msgid "Security groups are sets of IP filter rules that are applied to an instance's networking. They are project specific, and project members can edit the default rules for their group and add new rule sets. All projects have a \"default\" security group, which is applied to instances that have no other security group defined. Unless changed, this security group denies all incoming traffic." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:572(title) -msgid "General Security Groups Configuration" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:574(para) -msgid "The nova.conf option allow_same_net_traffic (which defaults to true) globally controls whether the rules apply to hosts that share a network. When set to true, hosts on the same subnet are not filtered and are allowed to pass all types of traffic between them. On a flat network, this allows unfiltered communication among all instances from all projects. With VLAN networking, this allows access between instances within the same project. If allow_same_net_traffic is set to false, security groups are enforced for all connections. In this case, it is possible for projects to simulate allow_same_net_traffic by configuring their default security group to allow all traffic from their subnet." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:589(para) -msgid "As noted in the previous chapter, the number of rules per security group is controlled by the quota_security_group_rules quota, and the number of allowed security groups per project is controlled by the quota_security_groups quota." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:598(title) -msgid "End-User Configuration of Security Groups" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:600(para) -msgid "Security groups for the current project can be found on the OpenStack dashboard under Access & Security. To see details of an existing group, select the edit action for that security group. Modifying existing groups can be done from this edit interface. There is a Create Security Group button on the main Access & Security page for creating new groups. We discuss the terms used in these fields when we explain the command-line equivalents." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:611(title) -msgid "Setting with nova command" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:613(para) -msgid "From the command line, you can get a list of security groups for the project you're acting in using the nova command:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:627(para) ./doc/openstack-ops/ch_ops_user_facing.xml:741(para) -msgid "To view the details of the \"open\" security group:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:638(para) -msgid "These rules are all \"allow\" type rules, as the default is deny. The first column is the IP protocol (one of icmp, tcp, or udp), and the second and third columns specify the affected port range. The fourth column specifies the IP range in CIDR format. 
This example shows the full port range for all protocols allowed from all IPs." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:644(para) ./doc/openstack-ops/ch_ops_user_facing.xml:828(para) -msgid "When adding a new security group, you should pick a descriptive but brief name. This name shows up in brief descriptions of the instances that use it where the longer description field often does not. Seeing that an instance is using security group http is much easier to understand than bobs_group or secgrp1." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:651(para) -msgid "As an example, let's create a security group that allows web traffic anywhere on the Internet. We'll call this group global_http, which is clear and reasonably concise, encapsulating what is allowed and from where. From the command line, do:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:665(para) -msgid "This creates the empty security group. To make it do what we want, we need to add some rules:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:676(para) -msgid "Note that the arguments are positional, and the from-port and to-port arguments specify the local port range to allow. These arguments do not indicate the source and destination ports of the connection. More complex rule sets can be built up through multiple invocations of nova secgroup-add-rule. For example, if you want to pass both http and https traffic, do this:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:691(para) ./doc/openstack-ops/ch_ops_user_facing.xml:910(para) -msgid "Despite only outputting the newly added rule, this operation is additive:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:702(para) -msgid "The inverse operation is called secgroup-delete-rule, using the same format. Whole security groups can be removed with secgroup-delete." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:707(para) -msgid "To create security group rules for a cluster of instances, you want to use SourceGroups." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:710(para) -msgid "SourceGroups are a special dynamic way of defining the CIDR of allowed sources. The user specifies a SourceGroup (security group name) and then all the users' other instances using the specified SourceGroup are selected dynamically. This dynamic selection alleviates the need for individual rules to allow each new member of the cluster." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:717(para) -msgid "The code is structured like this: nova secgroup-add-group-rule <secgroup> <source-group> <ip-proto> <from-port> <to-port>. An example usage is shown here:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:724(para) ./doc/openstack-ops/ch_ops_user_facing.xml:949(para) -msgid "The \"cluster\" rule allows SSH access from any other instance that uses the global-http group." -msgstr ""
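Collected into one sketch, the nova-based flow described above (the description string is illustrative):

    # Create the group, then allow HTTP and HTTPS from anywhere.
    nova secgroup-create global_http "web traffic from the Internet"
    nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
    nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0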
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:728(title) -msgid "Setting with neutron command" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:730(para) -msgid "If your environment is using Neutron, you can configure security group settings using the neutron command. Get a list of security groups for the project you are acting in by using the following command:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:758(para) -msgid "These rules are all \"allow\" type rules, as the default is deny. This example shows the full port range for all protocols allowed from all IPs. This section describes the most common security-group-rule parameters:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:764(term) -msgid "direction" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:767(para) -msgid "The direction in which the security group rule is applied. Valid values are ingress or egress." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:773(term) -msgid "remote_ip_prefix" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:776(para) -msgid "This attribute value matches the specified IP prefix as the source IP address of the IP packet." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:782(term) -msgid "protocol" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:785(para) -msgid "The protocol that is matched by the security group rule. Valid values are null, tcp, udp, icmp, and icmpv6." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:793(term) -msgid "port_range_min" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:796(para) -msgid "The minimum port number in the range that is matched by the security group rule. If the protocol is TCP or UDP, this value must be less than or equal to the port_range_max attribute value. If the protocol is ICMP or ICMPv6, this value must be an ICMP or ICMPv6 type, respectively." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:806(term) -msgid "port_range_max" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:809(para) -msgid "The maximum port number in the range that is matched by the security group rule. The port_range_min attribute constrains the port_range_max attribute. If the protocol is ICMP or ICMPv6, this value must be an ICMP or ICMPv6 type, respectively." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:819(term) -msgid "ethertype" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:822(para) -msgid "Must be IPv4 or IPv6, and addresses represented in CIDR must match the ingress or egress rules." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:835(para) -msgid "This example creates a security group that allows web traffic anywhere on the Internet. We'll call this group global_http, which is clear and reasonably concise, encapsulating what is allowed and from where. From the command line, do:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:855(para) -msgid "Immediately after creation, the security group has only an allow egress rule. To make it do what we want, we need to add some rules:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:889(para) -msgid "More complex rule sets can be built up through multiple invocations of neutron security-group-rule-create. For example, if you want to pass both http and https traffic, do this:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:927(para) -msgid "The inverse operation is called security-group-rule-delete, specifying the security-group-rule ID. Whole security groups can be removed with security-group-delete." -msgstr ""
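The neutron-based equivalent, as a sketch built from the parameters described above:

    # Create the group, then allow ingress HTTP and HTTPS from anywhere.
    neutron security-group-create global_http
    neutron security-group-rule-create --direction ingress --ethertype IPv4 \
        --protocol tcp --port-range-min 80 --port-range-max 80 \
        --remote-ip-prefix 0.0.0.0/0 global_http
    neutron security-group-rule-create --direction ingress --ethertype IPv4 \
        --protocol tcp --port-range-min 443 --port-range-max 443 \
        --remote-ip-prefix 0.0.0.0/0 global_http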
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:932(para) -msgid "To create security group rules for a cluster of instances, use RemoteGroups." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:935(para) -msgid "RemoteGroups are a dynamic way of defining the CIDR of allowed sources. The user specifies a RemoteGroup (security group name) and then all the users' other instances using the specified RemoteGroup are selected dynamically. This dynamic selection alleviates the need for individual rules to allow each new member of the cluster." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:942(para) -msgid "The code is similar to the above example of security-group-rule-create. To use RemoteGroup, specify --remote-group-id instead of --remote-ip-prefix. For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:960(para) -msgid "OpenStack volumes are persistent block-storage devices that may be attached and detached from instances, but they can be attached to only one instance at a time. Similar to an external hard drive, they do not provide shared storage in the way a network file system or object store does. It is left to the operating system in the instance to put a file system on the block device and mount it, or not." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:979(para) -msgid "As with other removable disk technology, it is important that the operating system is not trying to make use of the disk before removing it. On Linux instances, this typically involves unmounting any file systems mounted from the volume. The OpenStack volume service cannot tell whether it is safe to remove volumes from an instance, so it does what it is told. If a user tells the volume service to detach a volume from an instance while it is being written to, you can expect some level of file system corruption as well as faults from whatever process within the instance was using the device." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:989(para) -msgid "There is nothing OpenStack-specific in being aware of the steps needed to access block devices from within the instance operating system, potentially formatting them for first use and being cautious when removing them. What is specific is how to create new volumes and attach and detach them from instances. These operations can all be done from the Volumes page of the dashboard or by using the cinder command-line client." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:997(para) -msgid "To add new volumes, you need only a name and a volume size in gigabytes. Either put these into the create volume web form or use the command line:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1003(para) -msgid "This creates a 10 GB volume named test-volume. To list existing volumes and the instances they are connected to, if any:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1014(para) -msgid "OpenStack Block Storage also allows creating snapshots of volumes. Remember that this is a block-level snapshot that is crash consistent, so it is best if the volume is not connected to an instance when the snapshot is taken and second best if the volume is not in use on the instance it is attached to. If the volume is under heavy use, the snapshot may have an inconsistent file system. In fact, by default, the volume service does not take a snapshot of a volume that is attached to an instance, though it can be forced to. To take a volume snapshot, either select Create Snapshot from the actions column next to the volume name on the dashboard volume page, or run this from the command line:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1040(para) -msgid "For more information about updating Block Storage volumes (for example, resizing or transferring), see the OpenStack End User Guide." -msgstr ""
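A compact sketch of the volume operations described above; the IDs are placeholders, and the attach step and device name are assumptions:

    # Create a 10 GB volume and list volumes.
    cinder create --display-name test-volume 10
    cinder list
    # Snapshot while the volume is detached, per the advice above.
    cinder snapshot-create --display-name test-snap <volume-uuid>
    # Attach the volume to a running instance.
    nova volume-attach <instance-uuid> <volume-uuid> /dev/vdb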
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1044(title) -msgid "Block Storage Creation Failures" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1046(para) -msgid "If a user tries to create a volume and the volume immediately goes into an error state, the best way to troubleshoot is to grep the cinder log files for the volume's UUID. First try the log files on the cloud controller, and then try the storage node where creation of the volume was attempted:" -msgstr ""
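A sketch of that search, with a placeholder UUID:

    # On the cloud controller first, then on the storage node:
    grep <volume-uuid> /var/log/cinder/*.log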
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1062(para) -msgid "Similar to Block Storage, the Shared File Systems service provides persistent storage, called a share, that can be used in multi-tenant environments. Users create and mount a share as a remote file system on any machine that allows mounting shares and has network access to the share exporter. This share can then be used for storing, sharing, and exchanging files. The default configuration of the Shared File Systems service depends on the back-end driver the admin chooses when starting the Shared File Systems service. For more information about existing back-end drivers, see the \"Share Backends\" section of the Shared File Systems service Developer Guide. For example, if an OpenStack Block Storage based back end is used, the Shared File Systems service takes care of everything, including VMs, networking, keypairs, and security groups. Other configurations require more detailed knowledge of share functionality to set up and tune specific parameters and modes of operation." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1087(para) -msgid "Create, update, delete, and force-delete shares" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1090(para) -msgid "Change access rules for shares, reset share state" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1093(para) -msgid "Specify quotas for existing users or tenants" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1096(para) -msgid "Create share networks" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1099(para) -msgid "Define new share types" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1102(para) -msgid "Perform operations with share snapshots: create, change name, create a share from a snapshot, delete" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1106(para) -msgid "Operate with consistency groups" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1109(para) -msgid "Use security services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1080(para) -msgid "Shares are remote mountable file systems, so users can mount a share on multiple hosts and have it accessed by multiple users at a time. With the Shared File Systems service, you can perform a large number of operations with shares: For more information on share management, see the “Share management” section of the “Shared File Systems” chapter in the OpenStack Cloud Administrator Guide. As for security services, remember that different drivers support different authentication methods, while the generic driver does not support security services at all (see the “Security services” section of the “Shared File Systems” chapter in the OpenStack Cloud Administrator Guide)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1124(para) -msgid "You can create a share in a network, list shares, and show information for, update, and delete a specified share. You can also create snapshots of shares (see the “Share snapshots” section of the “Shared File Systems” chapter in the OpenStack Cloud Administrator Guide)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1133(para) -msgid "There are default and specific share types that allow you to filter or choose back ends before you create a share. The functions and behavior of share types are similar to those of Block Storage volume types (see the “Share types” section of the “Shared File Systems” chapter in the OpenStack Cloud Administrator Guide)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1142(para) -msgid "To help users keep and restore their data, the Shared File Systems service provides a mechanism to create and operate snapshots (see the “Share snapshots” section of the “Shared File Systems” chapter in the OpenStack Cloud Administrator Guide)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1160(para) -msgid "LDAP" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1163(para) -msgid "Kerberos" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1166(para) -msgid "Microsoft Active Directory" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1150(para) -msgid "A security service stores configuration information for clients for authentication and authorization. Inside Manila, a share network can be associated with up to three security types (for detailed information, see the “Security services” section of the “Shared File Systems” chapter in the OpenStack Cloud Administrator Guide): " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1177(para) -msgid "Without interaction with share networks, in the so-called \"no share servers\" mode." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1181(para) -msgid "Interacting with share networks." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1171(para) -msgid "The Shared File Systems service differs from the principles implemented in Block Storage. It can work in two modes: The Networking service is used by the Shared File Systems service to operate directly with share servers. To switch on interaction with the Networking service, create a share specifying a share network. To use \"share servers\" mode even outside of OpenStack, a network plugin called StandaloneNetworkPlugin is used. In this case, provide network information in the configuration: IP range, network type, and segmentation ID. You can also add security services to a share network (see the “Networking” section of the “Shared File Systems” chapter in the OpenStack Cloud Administrator Guide)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1197(para) -msgid "The main idea of consistency groups is to enable you to create snapshots at the exact same point in time from multiple file system shares. Those snapshots can then be used to restore all shares that were associated with the consistency group (see the “Consistency groups” section of the “Shared File Systems” chapter in the OpenStack Cloud Administrator Guide)."
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1214(para) -msgid "Rate limits" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1217(para) -msgid "Absolute limits" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1229(para) -msgid "Max amount of space available for all shares" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1230(para) -msgid "Max number of shares" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1231(para) -msgid "Max number of shared networks" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1232(para) -msgid "Max number of share snapshots" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1233(para) -msgid "Max total amount of all snapshots" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1234(para) -msgid "Type and number of API calls that can be made in a specific time interval" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1207(para) -msgid "Shared File System storage allows administrators to set limits and quotas for specific tenants and users. Limits are the resource limitations that are allowed for each tenant or user. Limits consist of: Rate limits control the frequency at which users can issue specific API requests and are configured by administrators in a config file. The administrator can also specify quotas, also known as max values of absolute limits, per tenant, whereas users can see only the amount of resources they have consumed. The administrator can specify rate limits or quotas for the following resources: Users can see their rate limits and absolute limits by running the manila rate-limits and manila absolute-limits commands, respectively. For more details on limits and quotas, see the \"Quotas and limits\" subsection of the \"Share management\" section of the OpenStack Cloud Administrator Guide." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1253(para) -msgid "Create share" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1256(para) -msgid "Operating with a share" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1259(para) -msgid "Manage access to shares" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1262(para) -msgid "Create snapshots" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1265(para) -msgid "Create a share network" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1268(para) -msgid "Manage a share network" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1247(para) -msgid "This section lists several of the most important use cases that demonstrate the main functions and abilities of the Shared File Systems service: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1274(para) -msgid "The Shared File Systems service cannot warn you beforehand whether it is safe to write a large amount of data onto a certain share or to remove a consistency group that has a number of shares assigned to it. In such potentially erroneous situations, if a mistake happens, you can expect an error message or even shares or consistency groups ending up in an incorrect status. You can also expect some level of system corruption if a user tries to unmount an unmanaged share while a process is using it for data transfer." -msgstr ""
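The two commands named above can be run by any user to inspect their own limits:

    manila rate-limits
    manila absolute-limits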
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1286(title) -msgid "Create Share" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1293(para) -msgid "Check if there is an appropriate share type defined in the Shared File Systems service" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1298(para) -msgid "If such a share type does not exist, an Admin should create it using the manila type-create command before other users are able to use it" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1303(para) -msgid "Using a share network is optional. However, if you need one, check if there is an appropriate network defined in the Shared File Systems service by using the manila share-network-list command. For information on creating a share network, see below in this chapter." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1311(para) -msgid "Create a public share using manila create" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1314(para) -msgid "Make sure that the share has been created successfully and is ready to use (check the share status and see the share export location)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1288(para) -msgid "In this section, we examine the process of creating a simple share. It consists of several steps: The same procedure is described below, step by step and in more detail." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1324(para) -msgid "Before you start, make sure that the Shared File Systems service is installed on your OpenStack cluster and is ready to use." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1330(para) -msgid "By default, there are no share types defined in the Shared File Systems service, so you should check whether the required one has already been created: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1340(para) -msgid "If the share types list is empty or does not contain a type you need, create the required share type using this command: This command will create a public share type with the following parameters: name = netapp1, spec_driver_handles_share_servers = False" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1347(para) -msgid "You can now create a public share with the my_share_net network, the default share type, the NFS shared file systems protocol, and a size of 1 GB: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1380(para) -msgid "To confirm that creation has been successful, check for the share in the share list: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1428(para) -msgid "See the “Share Management” subsection of the “Shared File Systems” section of the Cloud Administration Guide for details on share management operations." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1391(para) -msgid "Check the share status and see the share export location. After creation, the share status should become available: The value is_public defines the level of visibility for the share: whether other tenants can see the share or not. By default, the share is private. Now you can mount the created share like a remote file system and use it for your purposes. " -msgstr ""
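Condensing the walkthrough above into one sketch, with the values from the example:

    # Create the share type (name and DHSS value as in the example), then the share.
    manila type-create netapp1 False
    manila create NFS 1 --name Share1 --share-network my_share_net --share-type default
    manila list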
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1440(title) -msgid "Manage Access To Shares" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1451(para) -msgid "rw: read and write (RW) access. This is the default value." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1457(para) -msgid "ro: read-only (RO) access." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1467(para) -msgid "ip: authenticates an instance through its IP address. A valid format is XX.XX.XX.XX or XX.XX.XX.XX/XX. For example 0.0.0.0/0." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1474(para) -msgid "cert: authenticates an instance through a TLS certificate. Specify the TLS identity as the IDENTKEY. A valid value is any string up to 64 characters long in the common name (CN) of the certificate. The meaning of a string depends on its interpretation." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1483(para) -msgid "user: authenticates by a specified user or group name. A valid value is an alphanumeric string that can contain some special characters and is from 4 to 32 characters long." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1492(para) -msgid "Do not mount a share without an access rule! This can lead to an exception." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1442(para) -msgid "Currently, you have a share and would like to control access to this share for other users. For this, you have to perform a number of steps and operations. Before you start managing access to the share, pay attention to the following important parameters. To grant or deny access to a share, specify one of these supported share access levels: Additionally, you should also specify one of these supported authentication methods: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1498(para) -msgid "Allow access to the share with IP access type and 10.254.0.4 IP address: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1513(para) -msgid "Mount the share: Then check whether the share mounted successfully and according to the specified access rules: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1528(para) -msgid "Different share features are supported by different share drivers. These examples use the generic (Cinder as a back end) driver, which does not support the user and cert authentication methods." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1537(para) -msgid "For details of the features supported by different drivers, see the “Manila share features support mapping” section of the Manila Developer Guide." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1550(title) -msgid "Manage Shares" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1552(para) -msgid "There are several other useful operations you can perform when working with shares." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1558(title) -msgid "Update Share" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1560(para) -msgid "To change the name of a share, update its description, or change its level of visibility for other tenants, use this command: Then check the attributes of the updated Share1: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1598(title) -msgid "Reset Share State" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1600(para) -msgid "Sometimes a share may appear and then hang in an erroneous or a transitional state. Unprivileged users do not have the appropriate access rights to correct this situation. However, with cloud administrator permissions, you can reset the share's state by using the command to reset the share state, where state indicates which state to assign to the share. Options include the available, error, creating, deleting, and error_deleting states." -msgstr ""
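Sketches of the access and reset operations just described, using the share name, IP address, and state values from the text:

    # Allow access by IP, then (as an administrator) reset a stuck share's state.
    manila access-allow Share1 ip 10.254.0.4
    manila reset-state --state available Share1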
 - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1613(para) -msgid "After running the command, check the share's status: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1652(title) -msgid "Delete Share" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1654(para) -msgid "If you do not need a share any more, you can delete it using a command like: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1661(para) -msgid "If you specified the consistency group while creating a share, you should provide the --consistency-group parameter to delete the share:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1672(para) -msgid "Sometimes a share hangs in one of the transitional states (that is, creating, deleting, managing, unmanaging, extending, and shrinking). In that case, to delete it, you need the force-delete command and administrative permissions to run it: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1682(para) -msgid "For more details and additional information about other cases, features, API commands, and so on, see the “Share Management” subsection of the “Shared File Systems” section of the Cloud Administration Guide." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1698(title) -msgid "Create Snapshots" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1700(para) -msgid "The Shared File Systems service provides a snapshot mechanism to help users restore their own data. To create a snapshot, use a command like: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1740(para) -msgid "For more details and additional information on snapshots, see the “Share Snapshots” subsection of the “Shared File Systems” section of the Cloud Administration Guide." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1720(para) -msgid "Then, if needed, update the name and description of the created snapshot: To make sure that the snapshot is available, run: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1755(title) -msgid "Create a Share Network" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1757(para) -msgid "To control a share network, the Shared File Systems service requires interaction with the Networking service to manage share servers on its own. If the selected driver runs in a mode that requires this kind of interaction, you need to specify the share network when a share is created. For information on share creation, see earlier in this chapter. First, check the existing list of share networks: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1772(para) -msgid "If the share network list is empty or does not contain the required network, create, for example, a share network with a private network and subnetwork. The segmentation_id, cidr, ip_version, and network_type share network attributes are automatically set to the values determined by the network provider." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1800(para) -msgid "Then check that the network was created by requesting the network list once again: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1815(para) -msgid "See the “Share Networks” subsection of the “Shared File Systems” section of the Cloud Administration Guide for more details." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1811(para) -msgid "Finally, to create a share that uses this share network, return to the Create Share use case described earlier in this chapter." -msgstr ""
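A sketch of the share-network creation described above; the neutron network and subnet IDs are placeholders:

    manila share-network-create --name my_share_net \
        --neutron-net-id <private-net-id> --neutron-subnet-id <private-subnet-id>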
" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1828(title) -msgid "Manage a Share Network" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1830(para) -msgid "There is a pair of useful commands that help manipulate share networks. To start, check the network list: If you configured the back-end with driver_handles_share_servers = True (with the share servers) and had already some operations in the Shared File Systems service, you can see manila_service_network in the neutron list of networks. This network was created by the share driver for internal usage. " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1856(para) -msgid "You also can see detailed information about the share network including network_type, segmentation_id fields: You also can add and remove the security services to the share network." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1881(para) -msgid "For details, see subsection \"Security Services\" of “Shared File Systems” section of Cloud Administration Guide document." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1901(para) -msgid "Instances are the running virtual machines within an OpenStack cloud. This section deals with how to work with them and their underlying images, their network properties, and how they are represented in the database.user traininginstances" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1911(title) -msgid "Starting Instances" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1913(para) -msgid "To launch an instance, you need to select an image, a flavor, and a name. The name needn't be unique, but your life will be simpler if it is because many tools will use the name in place of the UUID so long as the name is unique. You can start an instance from the dashboard from the Launch Instance button on the Instances page or by selecting the Launch Instance action next to an image or snapshot on the Images page.instancesstarting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1927(para) -msgid "On the command line, do this:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1931(para) -msgid "There are a number of optional items that can be specified. You should read the rest of this section before trying to start an instance, but this is the base command that later details are layered upon." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1935(para) -msgid "To delete instances from the dashboard, select the Terminate instance action next to the instance on the Instances page. From the command line, do this:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1942(para) -msgid "It is important to note that powering off an instance does not terminate it in the OpenStack sense." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1947(title) -msgid "Instance Boot Failures" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1949(para) -msgid "If an instance fails to start and immediately moves to an error state, there are a few different ways to track down what has gone wrong. Some of these can be done with normal user access, while others require access to your log server or compute nodes.instancesboot failures" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:1958(para) -msgid "The simplest reasons for nodes to fail to launch are quota violations or the scheduler being unable to find a suitable compute node on which to run the instance. 
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:1996(para)
-msgid "In this case, looking at the fault message shows NoValidHost, indicating that the scheduler was unable to match the instance requirements."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2000(para)
-msgid "If nova show does not sufficiently explain the failure, searching for the instance UUID in the nova-compute.log on the compute node it was scheduled on or the nova-scheduler.log on your scheduler hosts is a good place to start looking for lower-level problems."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2006(para)
-msgid "Using nova show as an admin user will show the compute node the instance was scheduled on as hostId. If the instance failed during scheduling, this field is blank."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2012(title)
-msgid "Using Instance-Specific Data"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2014(para)
-msgid "There are two main types of instance-specific data: metadata and user data.metadatainstance metadatainstancesinstance-specific data"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2026(title)
-msgid "Instance metadata"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2028(para)
-msgid "For Compute, instance metadata is a collection of key-value pairs associated with an instance. Compute reads and writes to these key-value pairs any time during the instance lifetime, from inside and outside the instance, when the end user uses the Compute API to do so. However, you cannot query the instance-associated key-value pairs with the metadata service that is compatible with the Amazon EC2 metadata service."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2036(para)
-msgid "For an example of instance metadata, users can generate and register SSH keys using the nova command:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2041(para)
-msgid "This creates a key named mykey, which you can associate with instances. The file mykey.pem is the private key, which should be saved to a secure location because it allows root access to instances the key is associated with."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2047(para)
-msgid "Use this command to register an existing key with OpenStack:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2053(para)
-msgid "You must have the matching private key to access instances associated with this key."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2057(para)
-msgid "To associate a key with an instance on boot, add --key_name mykey to your command line. For example:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2063(para)
-msgid "When booting a server, you can also add arbitrary metadata so that you can more easily identify it among other running instances. Use the --meta option with a key-value pair, where you can make up the string for both the key and the value. For example, you could add a description and also the creator of the server:"
-msgstr ""
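-# A minimal sketch of the key-pair and metadata commands referred to above,
-# assuming the python-novaclient CLI; key, image, and server names are
-# hypothetical:
-#
-#   $ nova keypair-add mykey > mykey.pem
-#   $ nova keypair-add --pub-key mykey.pub mykey
-#   $ nova boot --image ubuntu-14.04 --flavor 1 --key_name mykey \
-#       --meta description='Small test image' smallimage
-#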
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2072(para)
-msgid "When viewing the server information, you can see the metadata included on the metadata line:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2108(title)
-msgid "Instance user data"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2110(para)
-msgid "The user-data key is a special key in the metadata service that holds a file that cloud-aware applications within the guest instance can access. For example, cloud-init is an open source package that originated in Ubuntu but is available in most distributions; it handles early initialization of a cloud instance and makes use of this user data.user data"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2121(para)
-msgid "This user data can be put in a file on your local system and then passed in at instance creation with the flag --user-data <user-data-file>. For example:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2127(para)
-msgid "To understand the difference between user data and metadata, realize that user data is created before an instance is started. User data is accessible from within the instance when it is running. User data can be used to store configuration, a script, or anything the tenant wants."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2135(title)
-msgid "File injection"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2137(para)
-msgid "Arbitrary local files can also be placed into the instance file system at creation time by using the --file <dst-path=src-path> option. You may store up to five files.file injection"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2144(para)
-msgid "For example, let's say you have a special authorized_keys file named special_authorized_keysfile that for some reason you want to put on the instance instead of using the regular SSH key injection. In this case, you can use the following command:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2157(title)
-msgid "Associating Security Groups"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2159(para)
-msgid "Security groups, as discussed earlier, are typically required to allow network traffic to an instance, unless the default security group for a project has been modified to be more permissive.security groupsuser trainingsecurity groups"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2170(para)
-msgid "Adding security groups is typically done on instance boot. When launching from the dashboard, you do this on the Access & Security tab of the Launch Instance dialog. When launching from the command line, append --security-groups with a comma-separated list of security groups."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2177(para)
-msgid "It is also possible to add and remove security groups when an instance is running. Currently this is only available through the command-line tools. Here is an example:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2189(para)
-msgid "Where floating IPs are configured in a deployment, each project will have a limited number of floating IPs controlled by a quota. However, these need to be allocated to the project from the central pool prior to their use—usually by the administrator of the project. To allocate a floating IP to a project, use the Allocate IP To Project button on the Floating IPs tab of the Access & Security page of the dashboard. The command line can also be used:address poolIP addressesfloatinguser trainingfloating IPs"
-msgstr ""
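-# A minimal sketch of the user-data, file-injection, security-group, and
-# floating-IP commands referred to above, assuming the python-novaclient
-# CLI; file, group, and server names are hypothetical:
-#
-#   $ nova boot --image ubuntu-14.04 --flavor 1 \
-#       --user-data mydata.file userdatatest
-#   $ nova boot --image ubuntu-14.04 --flavor 1 \
-#       --file /root/.ssh/authorized_keys=special_authorized_keysfile authkeytest
-#   $ nova add-secgroup my-first-server open
-#   $ nova remove-secgroup my-first-server open
-#   $ nova floating-ip-create
-#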
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2210(para)
-msgid "Once allocated, a floating IP can be assigned to running instances from the dashboard either by selecting Associate Floating IP from the actions drop-down next to the IP on the Floating IPs tab of the Access & Security page or by making this selection next to the instance you want to associate it with on the Instances page. The inverse action, Dissociate Floating IP, is available from the Floating IPs tab of the Access & Security page and from the Instances page."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2222(para)
-msgid "To associate or disassociate a floating IP with a server from the command line, use the following commands:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2231(title)
-msgid "Attaching Block Storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2233(para)
-msgid "You can attach block storage to instances from the dashboard on the Volumes page. Click the Manage Attachments action next to the volume you want to attach.storageblock storageblock storageuser trainingblock storage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2248(para)
-msgid "To perform this action from the command line, run the following command:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2253(para)
-msgid "You can also specify block deviceblock device mapping at instance boot time through the nova command-line client with this option set:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2261(code)
-msgid "<dev-name>=<id>:<type>:<size(GB)>:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2260(phrase)
-msgid "The block device mapping format is "
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2262(code)
-msgid "<delete-on-terminate>"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2262(phrase)
-msgid ", where:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2267(term)
-msgid "dev-name"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2270(para)
-msgid "A device name where the volume is attached in the system at /dev/dev_name"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2276(term)
-msgid "id"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2279(para)
-msgid "The ID of the volume to boot from, as shown in the output of nova volume-list"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2285(term)
-msgid "type"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2288(para)
-msgid "Either snap, which means that the volume was created from a snapshot, or anything other than snap (a blank string is valid). In the example that follows, the volume was not created from a snapshot, so the field is left blank."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2297(term)
-msgid "size (GB)"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2300(para)
-msgid "The size of the volume in gigabytes. It is safe to leave this blank and have the Compute Service infer the size."
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2306(term) -msgid "delete-on-terminate" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2309(para) -msgid "A boolean to indicate whether the volume should be deleted when the instance is terminated. True can be specified as True or 1. False can be specified as False or 0." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2318(para) -msgid "The following command will boot a new instance and attach a volume at the same time. The volume of ID 13 will be attached as /dev/vdc. It is not a snapshot, does not specify a size, and will not be deleted when the instance is terminated:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2327(para) -msgid "If you have previously prepared block storage with a bootable file system image, it is even possible to boot from persistent block storage. The following command boots an image from the specified volume. It is similar to the previous command, but the image is omitted and the volume is now attached as /dev/vda:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2336(para) -msgid "Read more detailed instructions for launching an instance from a bootable volume in the OpenStack End User Guide." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2341(para) -msgid "To boot normally from an image and attach block storage, map to a device other than vda. You can find instructions for launching an instance and attaching a volume to the instance and for copying the image to the attached volume in the OpenStack End User Guide." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2352(title) -msgid "Taking Snapshots" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2354(para) -msgid "The OpenStack snapshot mechanism allows you to create new images from running instances. This is very convenient for upgrading base images or for taking a published image and customizing it for local use. To snapshot a running instance to an image using the CLI, do this:base imagesnapshotuser trainingsnapshots" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2370(para) -msgid "The dashboard interface for snapshots can be confusing because the snapshots and images are displayed in the Images page. However, an instance snapshot is an image. The only difference between an image that you upload directly to the Image Service and an image that you create by snapshot is that an image created by snapshot has additional properties in the glance database. 
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2370(para)
-msgid "The dashboard interface for snapshots can be confusing because the snapshots and images are displayed in the Images page. However, an instance snapshot is an image. The only difference between an image that you upload directly to the Image Service and an image that you create by snapshot is that an image created by snapshot has additional properties in the glance database. These properties are found in the image_properties table and include:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2384(th)
-msgid "Value"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2390(literal)
-msgid "image_type"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2392(para) ./doc/openstack-ops/ch_ops_user_facing.xml:2411(para)
-msgid "snapshot"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2396(literal)
-msgid "instance_uuid"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2398(para)
-msgid "<uuid of instance that was snapshotted>"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2402(literal)
-msgid "base_image_ref"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2404(para)
-msgid "<uuid of original image of instance that was snapshotted>"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2409(literal)
-msgid "image_location"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2417(title)
-msgid "Live Snapshots"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2419(para)
-msgid "Live snapshotting is a feature that allows users to snapshot running virtual machines without pausing them. These snapshots are simply disk-only snapshots. Snapshotting an instance can now be performed with no downtime (assuming QEMU 1.3+ and libvirt 1.0+ are used).live snapshots"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2428(title)
-msgid "Disable live snapshotting"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2429(para)
-msgid "If you use libvirt version 1.2.2, you may experience intermittent problems with live snapshot creation."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2433(para)
-msgid "To effectively disable libvirt live snapshotting until the problem is resolved, add the following setting to nova.conf."
-msgstr ""
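-# The setting itself is not extracted for translation. A sketch, assuming
-# the disable_libvirt_livesnapshot workaround option available in nova at
-# the time of writing:
-#
-#   [workarounds]
-#   disable_libvirt_livesnapshot = True
-#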
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2471(para) -msgid "To deal with the \"dirty\" buffer issue, we recommend using the sync command before snapshotting:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2476(para) -msgid "Running sync writes dirty buffers (buffered blocks that have been modified but not written yet to the disk block) to disk." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2480(para) -msgid "Just running sync is not enough to ensure that the file system is consistent. We recommend that you use the fsfreeze tool, which halts new access to the file system, and create a stable image on disk that is suitable for snapshotting. The fsfreeze tool supports several file systems, including ext3, ext4, and XFS. If your virtual machine instance is running on Ubuntu, install the util-linux package to get fsfreeze:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2490(para) -msgid "In the very common case where the underlying snapshot is done via LVM, the filesystem freeze is automatically handled by LVM." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2497(para) -msgid "If your operating system doesn't have a version of fsfreeze available, you can use xfs_freeze instead, which is available on Ubuntu in the xfsprogs package. Despite the \"xfs\" in the name, xfs_freeze also works on ext3 and ext4 if you are using a Linux kernel version 2.6.29 or greater, since it works at the virtual file system (VFS) level starting at 2.6.29. The xfs_freeze version supports the same command-line arguments as fsfreeze." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2506(para) -msgid "Consider the example where you want to take a snapshot of a persistent block storage volume, detected by the guest operating system as /dev/vdb and mounted on /mnt. The fsfreeze command accepts two arguments:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2514(term) -msgid "-f" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2517(para) -msgid "Freeze the system" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2522(term) -msgid "-u" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2525(para) -msgid "Thaw (unfreeze) the system" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2530(para) -msgid "To freeze the volume in preparation for snapshotting, you would do the following, as root, inside the instance:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2535(para) -msgid "You must mount the file system before you run the fsfreeze command." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2538(para) -msgid "When the fsfreeze -f command is issued, all ongoing transactions in the file system are allowed to complete, new write system calls are halted, and other calls that modify the file system are halted. Most importantly, all dirty data, metadata, and log information are written to disk." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2544(para) -msgid "Once the volume has been frozen, do not attempt to read from or write to the volume, as these operations hang. The operating system stops every I/O operation and any I/O attempts are delayed until the file system has been unfrozen." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_user_facing.xml:2549(para) -msgid "Once you have issued the fsfreeze command, it is safe to perform the snapshot. 
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2557(para)
-msgid "When the snapshot is done, you can thaw the file system with the following command, as root, inside of the instance:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2562(para)
-msgid "If you want to back up the root file system, you can't simply run the preceding command because it will freeze the prompt. Instead, run the following one-liner, as root, inside the instance:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2568(para)
-msgid "After this command it is common practice to call nova image-create from your workstation, and once done, press Enter in your instance shell to unfreeze it. Obviously you could automate this, but at least it will let you properly synchronize."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2574(title)
-msgid "Ensuring Snapshots of Windows Guests Are Consistent"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2576(para)
-msgid "Obtaining consistent snapshots of Windows VMs is conceptually similar to obtaining consistent snapshots of Linux VMs, although it requires additional utilities to coordinate with a Windows-only subsystem designed to facilitate consistent backups."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2581(para)
-msgid "Windows XP and later releases include a Volume Shadow Copy Service (VSS), which provides a framework so that compliant applications can be consistently backed up on a live filesystem. To use this framework, a VSS requestor is run that signals to the VSS service that a consistent backup is needed. The VSS service notifies compliant applications (called VSS writers) to quiesce their data activity. The VSS service then tells the copy provider to create a snapshot. Once the snapshot has been made, the VSS service unfreezes VSS writers and normal I/O activity resumes."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2591(para)
-msgid "QEMU provides a guest agent that can be run in guests running on KVM hypervisors. This guest agent, on Windows VMs, coordinates with the Windows VSS service to facilitate a workflow that ensures consistent snapshots. This feature requires at least QEMU 1.7. The relevant guest agent commands are:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2599(term)
-msgid "guest-file-flush"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2601(para)
-msgid "Write out \"dirty\" buffers to disk, similar to the Linux sync operation."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2607(term)
-msgid "guest-fsfreeze"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2609(para)
-msgid "Suspend I/O to the disks, similar to the Linux fsfreeze -f operation."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2615(term)
-msgid "guest-fsfreeze-thaw"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2617(para)
-msgid "Resume I/O to the disks, similar to the Linux fsfreeze -u operation."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2623(para)
-msgid "To obtain snapshots of a Windows VM, these commands can be scripted in sequence: flush the filesystems, freeze the filesystems, snapshot the filesystems, then unfreeze the filesystems. 
As with scripting similar workflows against Linux VMs, care must be taken when writing such a script to ensure that error handling is thorough and that filesystems will not be left in a frozen state."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2636(title)
-msgid "Instances in the Database"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2638(para)
-msgid "While instance information is stored in a number of database tables, the table you most likely need to look at in relation to user instances is the instances table.instancesdatabase informationdatabasesinstance information inuser traininginstances"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2654(para)
-msgid "The instances table carries most of the information related to both running and deleted instances. It has a bewildering array of fields; for an exhaustive list, look at the database. These are the most useful fields for operators looking to form queries:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2661(para)
-msgid "The deleted field is set to 1 if the instance has been deleted and NULL if it has not been deleted. This field is important for excluding deleted instances from your queries."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2668(para)
-msgid "The uuid field is the UUID of the instance and is used throughout other tables in the database as a foreign key. This ID is also reported in logs, the dashboard, and command-line tools to uniquely identify an instance."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2675(para)
-msgid "A collection of foreign keys is available to find relations to the instance. The most useful of these—user_id and project_id—are the UUIDs of the user who launched the instance and the project it was launched in."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2682(para)
-msgid "The host field tells which compute node is hosting the instance."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2687(para)
-msgid "The hostname field holds the name of the instance when it is launched. The display_name is initially the same as hostname but can be reset using the nova rename command."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2693(para)
-msgid "A number of time-related fields are useful for tracking when state changes happened on an instance:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2698(literal)
-msgid "created_at"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2702(literal)
-msgid "updated_at"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2706(literal)
-msgid "deleted_at"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2710(literal)
-msgid "scheduled_at"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2714(literal)
-msgid "launched_at"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2718(literal)
-msgid "terminated_at"
-msgstr ""
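-# A minimal sketch of an operator query built from the fields described
-# above; the MySQL database name "nova" and the project UUID are
-# hypothetical:
-#
-#   mysql> use nova;
-#   mysql> SELECT uuid, hostname, host, created_at
-#       ->   FROM instances
-#       ->  WHERE deleted IS NULL    -- per the description above, deleted is
-#       ->                           -- NULL for live instances
-#       ->    AND project_id = '<project-uuid>';
-#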
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2724(title)
-msgid "Good Luck!"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_user_facing.xml:2726(para)
-msgid "This section was intended as a brief introduction to some of the most useful of many OpenStack commands. For an exhaustive list, please refer to the Admin User Guide, and for additional hints and tips, see the Cloud Admin Guide. We hope your users remain happy and recognize your hard work! (For more hard work, turn the page to the next chapter, where we discuss the system-facing operations: maintenance, failures, and debugging.)"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:12(title)
-msgid "Compute Nodes"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:14(para)
-msgid "In this chapter, we discuss some of the choices you need to consider when building out your compute nodes. Compute nodes form the resource core of the OpenStack Compute cloud, providing the processing, memory, network, and storage resources to run instances."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:20(title)
-msgid "Choosing a CPU"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:22(para)
-msgid "The type of CPU in your compute node is a very important choice. First, ensure that the CPU supports virtualization by way of VT-x for Intel chips and AMD-V for AMD chips.CPUs (central processing units)choosingcompute nodesCPU choice"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:36(para)
-msgid "Consult the vendor documentation to check for virtualization support. For Intel, read “Does my processor support Intel® Virtualization Technology?”. For AMD, read AMD Virtualization. Note that your CPU may support virtualization but it may be disabled. Consult your BIOS documentation for how to enable CPU features.virtualization technologyAMD VirtualizationIntel Virtualization Technology"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:52(para)
-msgid "The number of cores that the CPU has also affects the decision. It's common for current CPUs to have up to 12 cores. Additionally, if an Intel CPU supports hyperthreading, those 12 cores are doubled to 24 cores. If you purchase a server that supports multiple CPUs, the number of cores is further multiplied.coreshyperthreadingmultithreading"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:67(title)
-msgid "Multithread Considerations"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:69(para)
-msgid "Hyper-Threading is Intel's proprietary simultaneous multithreading implementation used to improve parallelization on their CPUs. You might consider enabling Hyper-Threading to improve the performance of multithreaded applications."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:74(para)
-msgid "Whether you should enable Hyper-Threading on your CPUs depends upon your use case. For example, disabling Hyper-Threading can be beneficial in intense computing environments. 
We recommend that you do performance testing with your local workload with both Hyper-Threading on and off to determine what is more appropriate in your case.CPUs (central processing units)enabling hyperthreading on"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:88(title)
-msgid "Choosing a Hypervisor"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:124(link) ./doc/openstack-ops/section_arch_example-neutron.xml:76(para) ./doc/openstack-ops/section_arch_example-neutron.xml:166(term) ./doc/openstack-ops/section_arch_example-nova.xml:103(para)
-msgid "KVM"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:129(link)
-msgid "LXC"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:134(link)
-msgid "QEMU"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:139(link)
-msgid "VMware ESX/ESXi"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:144(link)
-msgid "Xen"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:149(link)
-msgid "Hyper-V"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:154(link)
-msgid "Docker"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:90(para)
-msgid "A hypervisor provides software to manage virtual machine access to the underlying hardware. The hypervisor creates, manages, and monitors virtual machines.DockerHyper-VESXi hypervisorESX hypervisorVMware APIQuick EMUlator (QEMU)Linux containers (LXC)kernel-based VM (KVM) hypervisorXen APIXenServer hypervisorhypervisorschoosingcompute nodeshypervisor choice OpenStack Compute supports many hypervisors to various degrees, including: "
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:158(para)
-msgid "Probably the most important factor in your choice of hypervisor is your current usage or experience. Aside from that, there are practical concerns to do with feature parity, documentation, and the level of community experience."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:163(para)
-msgid "For example, KVM is the most widely adopted hypervisor in the OpenStack community. Besides KVM, more deployments run Xen, LXC, VMware, and Hyper-V than the others listed. However, each of these lacks some feature support, or the documentation on how to use it with OpenStack is out of date."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:169(para)
-msgid "The best information available to support your choice is found on the Hypervisor Support Matrix and in the configuration reference."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:177(para)
-msgid "It is also possible to run multiple hypervisors in a single deployment using host aggregates or cells. However, an individual compute node can run only a single hypervisor at a time.hypervisorsrunning multiple"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:189(title)
-msgid "Instance Storage Solutions"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:191(para)
-msgid "As part of the procurement for a compute cluster, you must specify some storage for the disk on which the instantiated instance runs. 
There are three main approaches to providing this temporary-style storage, and it is important to understand the implications of the choice.storageinstance storage solutionsinstancesstorage solutionscompute nodesinstance storage solutions"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:209(para)
-msgid "They are:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:213(para)
-msgid "Off compute node storage—shared file system"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:217(para)
-msgid "On compute node storage—shared file system"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:221(para)
-msgid "On compute node storage—nonshared file system"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:225(para)
-msgid "In general, the questions you should ask when selecting storage are as follows:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:230(para)
-msgid "What is the platter count you can achieve?"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:234(para)
-msgid "Do more spindles result in better I/O despite network access?"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:239(para)
-msgid "Which one results in the best cost-performance scenario you're aiming for?"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:244(para)
-msgid "How do you manage the storage operationally?"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:248(para)
-msgid "Many operators use separate compute and storage hosts. Compute services and storage services have different requirements, and compute hosts typically require more CPU and RAM than storage hosts. Therefore, for a fixed budget, it makes sense to have different configurations for your compute nodes and your storage nodes: money spent on compute nodes is best invested in CPU and RAM, and money spent on storage nodes is best invested in block storage."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:256(para)
-msgid "However, if you are more restricted in the number of physical hosts you have available for creating your cloud and you want to be able to dedicate as many of your hosts as possible to running instances, it makes sense to run compute and storage on the same machines."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:261(para)
-msgid "We'll discuss the three main approaches to instance storage in the next few sections."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:267(title)
-msgid "Off Compute Node Storage—Shared File System"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:269(para)
-msgid "In this option, the disks storing the running instances are hosted in servers outside of the compute nodes.shared storagefile systemsshared"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:278(para)
-msgid "If you use separate compute and storage hosts, you can treat your compute hosts as \"stateless.\" As long as you don't have any instances currently running on a compute host, you can take it offline or wipe it completely without having any effect on the rest of your cloud. This simplifies maintenance for the compute hosts."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:284(para)
-msgid "There are several advantages to this approach:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:288(para)
-msgid "If a compute node fails, instances are usually easily recoverable."
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:293(para) -msgid "Running a dedicated storage system can be operationally simpler." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:298(para) -msgid "You can scale to any number of spindles." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:302(para) -msgid "It may be possible to share the external storage for other purposes." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:307(para) -msgid "The main downsides to this approach are:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:311(para) -msgid "Depending on design, heavy I/O usage from some instances can affect unrelated instances." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:316(para) ./doc/openstack-ops/ch_arch_compute_nodes.xml:350(para) -msgid "Use of the network can decrease performance." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:322(title) -msgid "On Compute Node Storage—Shared File System" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:324(para) -msgid "In this option, each compute node is specified with a significant amount of disk space, but a distributed file system ties the disks from each compute node into a single mount." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:328(para) -msgid "The main advantage of this option is that it scales to external storage when you require additional storage." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:331(para) -msgid "However, this option has several downsides:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:335(para) -msgid "Running a distributed file system can make you lose your data locality compared with nonshared storage." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:340(para) -msgid "Recovery of instances is complicated by depending on multiple hosts." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:345(para) ./doc/openstack-ops/ch_arch_compute_nodes.xml:387(para) -msgid "The chassis size of the compute node can limit the number of spindles able to be used in a compute node." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:356(title) -msgid "On Compute Node Storage—Nonshared File System" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:358(para) -msgid "In this option, each compute node is specified with enough disks to store the instances it hosts.file systemsnonshared" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:365(para) -msgid "There are two main reasons why this is a good idea:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:369(para) -msgid "Heavy I/O usage on one compute node does not affect instances on other compute nodes." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:374(para) -msgid "Direct I/O access can increase performance." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:378(para) -msgid "This has several downsides:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:382(para) -msgid "If a compute node fails, the instances running on that node are lost." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:392(para) -msgid "Migrations of instances from one node to another are more complicated and rely on features that may not continue to be developed." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:398(para) -msgid "If additional storage is required, this option does not scale." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:403(para) -msgid "Running a shared file system on a storage system apart from the computes nodes is ideal for clouds where reliability and scalability are the most important factors. Running a shared file system on the compute nodes themselves may be best in a scenario where you have to deploy to preexisting servers for which you have little to no control over their specifications. Running a nonshared file system on the compute nodes themselves is a good option for clouds with high I/O requirements and low concern for reliability.scalingfile system choice" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:418(title) -msgid "Issues with Live Migration" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:420(para) -msgid "We consider live migration an integral part of the operations of the cloud. This feature provides the ability to seamlessly move instances from one physical host to another, a necessity for performing upgrades that require reboots of the compute hosts, but only works well with shared storage.storagelive migrationmigrationlive migrationcompute nodeslive migration" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:438(para) -msgid "Live migration can also be done with nonshared storage, using a feature known as KVM live block migration. While an earlier implementation of block-based migration in KVM and QEMU was considered unreliable, there is a newer, more reliable implementation of block-based live migration as of QEMU 1.4 and libvirt 1.0.2 that is also compatible with OpenStack. However, none of the authors of this guide have first-hand experience using live block migration.block migration" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:451(title) -msgid "Choice of File System" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:453(para) -msgid "If you want to support shared-storage live migration, you need to configure a distributed file system.compute nodesfile system choicefile systemschoice ofstoragefile system choice" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:468(para) -msgid "Possible options include:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:472(para) -msgid "NFS (default for Linux)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:476(para) ./doc/openstack-ops/section_arch_example-neutron.xml:106(para) ./doc/openstack-ops/section_arch_example-neutron.xml:118(para) ./doc/openstack-ops/section_arch_example-neutron.xml:218(term) -msgid "GlusterFS" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:480(para) -msgid "MooseFS" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:484(para) -msgid "Lustre" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:488(para) -msgid "We've seen deployments with all, and recommend that you choose the one you are most familiar with operating. If you are not familiar with any of these, choose NFS, as it is the easiest to set up and there is extensive community knowledge about it." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:496(title) -msgid "Overcommitting" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:498(para) -msgid "OpenStack allows you to overcommit CPU and RAM on compute nodes. 
This allows you to increase the number of instances you can have running on your cloud, at the cost of reducing the performance of the instances.RAM overcommitCPUs (central processing units)overcommittingovercommittingcompute nodesovercommitting OpenStack Compute uses the following ratios by default:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:518(para)
-msgid "CPU allocation ratio: 16:1"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:522(para)
-msgid "RAM allocation ratio: 1.5:1"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:526(para)
-msgid "The default CPU allocation ratio of 16:1 means that the scheduler allocates up to 16 virtual cores per physical core. For example, if a physical node has 12 cores, the scheduler sees 192 available virtual cores. With typical flavor definitions of 4 virtual cores per instance, this ratio would provide 48 instances on a physical node."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:532(para)
-msgid "The formula for the number of virtual instances on a compute node is (OR*PC)/VC, where:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:537(emphasis)
-msgid "OR"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:540(para)
-msgid "CPU overcommit ratio (virtual cores per physical core)"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:545(emphasis)
-msgid "PC"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:548(para)
-msgid "Number of physical cores"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:553(emphasis)
-msgid "VC"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:556(para)
-msgid "Number of virtual cores per instance"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:561(para)
-msgid "Similarly, the default RAM allocation ratio of 1.5:1 means that the scheduler allocates instances to a physical node as long as the total amount of RAM associated with the instances is less than 1.5 times the amount of RAM available on the physical node."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:566(para)
-msgid "For example, if a physical node has 48 GB of RAM, the scheduler allocates instances to that node until the sum of the RAM associated with the instances reaches 72 GB (such as nine instances, in the case where each instance has 8 GB of RAM)."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:571(para)
-msgid "You must select the appropriate CPU and RAM allocation ratio for your particular use case."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:576(title)
-msgid "Logging"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:578(para)
-msgid "Logging is detailed more fully in . However, it is an important design consideration to take into account before commencing operations of your cloud.logging/monitoringcompute nodes andcompute nodeslogging"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:591(para)
-msgid "OpenStack produces a great deal of useful logging information; however, for the information to be useful for operations purposes, you should consider having a central logging server to send logs to, and a log parsing/analysis system (such as logstash)."
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:599(title) ./doc/openstack-ops/ch_ops_resources.xml:59(title) -msgid "Networking" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:601(para) -msgid "Networking in OpenStack is a complex, multifaceted challenge. See .compute nodesnetworking" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_compute_nodes.xml:612(para) -msgid "Compute nodes are the workhorse of your cloud and the place where your users' applications will run. They are likely to be affected by your decisions on what to deploy and how you deploy it. Their requirements should be reflected in the choices you make." -msgstr "" - -#: ./doc/openstack-ops/part_operations.xml:9(title) -msgid "Operations" -msgstr "" - -#: ./doc/openstack-ops/part_operations.xml:12(para) -msgid "Congratulations! By now, you should have a solid design for your cloud. We now recommend that you turn to the OpenStack Installation Guides (), which contains a step-by-step guide on how to manually install the OpenStack packages and dependencies on your cloud." -msgstr "" - -#: ./doc/openstack-ops/part_operations.xml:18(para) -msgid "While it is important for an operator to be familiar with the steps involved in deploying OpenStack, we also strongly encourage you to evaluate configuration-management tools, such as Puppet or Chef, which can help automate this deployment process.ChefPuppet" -msgstr "" - -#: ./doc/openstack-ops/part_operations.xml:28(para) -msgid "In the remainder of this guide, we assume that you have successfully deployed an OpenStack cloud and are able to perform basic operations such as adding images, booting instances, and attaching volumes." -msgstr "" - -#: ./doc/openstack-ops/part_operations.xml:32(para) -msgid "As your focus turns to stable operations, we recommend that you do skim the remainder of this book to get a sense of the content. Some of this content is useful to read in advance so that you can put best practices into effect to simplify your life in the long run. Other content is more useful as a reference that you might turn to when an unexpected event occurs (such as a power failure), or to troubleshoot a particular problem." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:88(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_1201.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:207(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_1202.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:12(title) -msgid "Network Troubleshooting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:14(para) -msgid "Network troubleshooting can unfortunately be a very difficult and confusing procedure. A network issue can cause a problem at several points in the cloud. Using a logical troubleshooting procedure can help mitigate the confusion and more quickly isolate where exactly the network issue is. 
This chapter aims to give you the information you need to identify any issues for either nova-network or OpenStack Networking (neutron) with Linux Bridge or Open vSwitch.OpenStack Networking (neutron)troubleshootingLinux Bridgetroubleshootingnetwork troubleshootingtroubleshooting"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:35(title)
-msgid "Using \"ip a\" to Check Interface States"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:37(para)
-msgid "On compute nodes and nodes running nova-network, use the following command to see information about interfaces, including information about IPs, VLANs, and whether your interfaces are up:ip a commandinterface states, checkingtroubleshootingchecking interface states"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:52(para)
-msgid "If you're encountering any sort of networking difficulty, one good initial sanity check is to make sure that your interfaces are up. For example:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:66(para)
-msgid "You can safely ignore the state of virbr0, which is a default bridge created by libvirt and not used by OpenStack."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:71(title)
-msgid "Visualizing nova-network Traffic in the Cloud"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:73(para)
-msgid "If you are logged in to an instance and ping an external host—for example, Google—the ping packet takes the route shown in .ping packetstroubleshootingnova-network traffic"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:84(title)
-msgid "Traffic route for ping packet"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:95(para)
-msgid "The instance generates a packet and places it on the virtual Network Interface Card (NIC) inside the instance, such as eth0."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:101(para)
-msgid "The packet transfers to the virtual NIC of the compute host, such as vnet1. You can find out what vnet NIC is being used by looking at the /etc/libvirt/qemu/instance-xxxxxxxx.xml file."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:109(para)
-msgid "From the vnet NIC, the packet transfers to a bridge on the compute node, such as br100."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:112(para)
-msgid "If you run FlatDHCPManager, one bridge is on the compute node. If you run VlanManager, one bridge exists for each VLAN."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:115(para)
-msgid "To see which bridge the packet will use, run the command: "
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:118(para)
-msgid "Look for the vnet NIC. You can also reference nova.conf and look for the flat_interface_bridge option."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:124(para)
-msgid "The packet transfers to the main NIC of the compute node. You can also see this NIC in the brctl output, or you can find it by referencing the flat_interface option in nova.conf."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:131(para)
-msgid "After the packet is on this NIC, it transfers to the compute node's default gateway. The packet is now most likely out of your control at this point. The diagram depicts an external gateway. However, in the default configuration with multi-host, the compute host is the gateway."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:139(para)
-msgid "Reverse the direction to see the path of a ping reply. From this path, you can see that a single packet travels across four different NICs. If a problem occurs with any of these NICs, a network issue occurs."
-msgstr ""
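-# A minimal sketch of the interface-state and bridge checks referred to
-# above (the <screen> blocks are not extracted for translation):
-#
-#   # ip a
-#   # brctl show
-#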
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:145(title)
-msgid "Visualizing OpenStack Networking Service Traffic in the Cloud"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:148(para)
-msgid "OpenStack Networking has many more degrees of freedom than nova-network does because of its pluggable back end. It can be configured with open source or vendor proprietary plug-ins that control software defined networking (SDN) hardware or plug-ins that use Linux native facilities on your hosts, such as Open vSwitch or Linux Bridge.troubleshootingOpenStack traffic"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:159(para)
-msgid "The networking chapter of the OpenStack Cloud Administrator Guide shows a variety of networking scenarios and their connection paths. The purpose of this section is to give you the tools to troubleshoot the various components involved, however they are plumbed together in your environment."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:167(para)
-msgid "For this example, we will use the Open vSwitch (OVS) back end. Other back-end plug-ins will have very different flow paths. OVS is the most popularly deployed network driver, according to the October 2015 OpenStack User Survey, with 41 percent more sites using it than the Linux Bridge driver. We'll describe each step in turn, with for reference."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:176(para)
-msgid "The instance generates a packet and places it on the virtual NIC inside the instance, such as eth0."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:181(para)
-msgid "The packet transfers to a Test Access Point (TAP) device on the compute host, such as tap690466bc-92. You can find out what TAP is being used by looking at the /etc/libvirt/qemu/instance-xxxxxxxx.xml file."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:187(para)
-msgid "The TAP device name is constructed using the first 11 characters of the port ID (10 hex digits plus an included '-'), so another means of finding the device name is to use the neutron command. This returns a pipe-delimited list, the first item of which is the port ID. For example, to get the port ID associated with IP address 10.0.0.10, do this:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:197(para)
-msgid "Taking the first 11 characters, we can construct a device name of tapff387e54-9e from this output."
-msgstr ""
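-# A minimal sketch of the port lookup described above, assuming the neutron
-# CLI; grepping the default table output for the fixed IP returns the row
-# whose first pipe-delimited field is the port ID:
-#
-#   # neutron port-list | grep 10.0.0.10
-#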
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:203(title)
-msgid "Neutron network paths"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:214(para)
-msgid "The TAP device is connected to the integration bridge, br-int. This bridge connects all the instance TAP devices and any other bridges on the system. In this example, we have int-br-eth1 and patch-tun. int-br-eth1 is one half of a veth pair connecting to the bridge br-eth1, which handles VLAN networks trunked over the physical Ethernet device eth1. patch-tun is an Open vSwitch internal port that connects to the br-tun bridge for GRE networks."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:224(para)
-msgid "The TAP devices and veth devices are normal Linux network devices and may be inspected with the usual tools, such as ip and tcpdump. Open vSwitch internal devices, such as patch-tun, are only visible within the Open vSwitch environment. If you try to run tcpdump -i patch-tun, it will raise an error, saying that the device does not exist."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:232(para)
-msgid "It is possible to watch packets on internal interfaces, but it does take a little bit of networking gymnastics. First you need to create a dummy network device that normal Linux tools can see. Then you need to add it to the bridge containing the internal interface you want to snoop on. Finally, you need to tell Open vSwitch to mirror all traffic to or from the internal port onto this dummy port. After all this, you can then run tcpdump on the dummy interface and see the traffic on the internal port."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:242(title)
-msgid "To capture packets from the patch-tun internal interface on integration bridge, br-int:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:246(para)
-msgid "Create and bring up a dummy interface, snooper0:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:256(para)
-msgid "Add device snooper0 to bridge br-int:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:264(para)
-msgid "Create mirror of patch-tun to snooper0 (returns UUID of mirror port):"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:274(para)
-msgid "Profit. You can now see traffic on patch-tun by running tcpdump -i snooper0."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:279(para)
-msgid "Clean up by clearing all mirrors on br-int and deleting the dummy interface:"
-msgstr ""
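-# A consolidated sketch of the five steps above; the mirror name "mymirror"
-# is hypothetical:
-#
-#   # ip link add name snooper0 type dummy
-#   # ip link set dev snooper0 up
-#   # ovs-vsctl add-port br-int snooper0
-#   # ovs-vsctl -- set Bridge br-int mirrors=@m \
-#       -- --id=@snooper0 get Port snooper0 \
-#       -- --id=@patch-tun get Port patch-tun \
-#       -- --id=@m create Mirror name=mymirror \
-#          select-dst-port=@patch-tun select-src-port=@patch-tun \
-#          output-port=@snooper0
-#   # tcpdump -i snooper0
-#   # ovs-vsctl clear Bridge br-int mirrors
-#   # ip link delete dev snooper0
-#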
This is the provider:segmentation_id as returned by the networking service:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:330(para) -msgid "Grep for the provider:segmentation_id, 2113 in this case, in the output of ovs-ofctl dump-flows br-int:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:340(para) -msgid "Here you can see that packets received on port ID 1 with the VLAN tag 2113 are modified to carry the internal VLAN tag 7. Digging a little deeper, you can confirm that port 1 is in fact int-br-eth1:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:378(para) -msgid "The next step depends on whether the virtual network is configured to use 802.1q VLAN tags or GRE:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:383(para) -msgid "VLAN-based networks exit the integration bridge via veth interface int-br-eth1 and arrive on the bridge br-eth1 on the other member of the veth pair phy-br-eth1. Packets on this interface arrive with internal VLAN tags and are translated to external tags in the reverse of the process described above:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:395(para) -msgid "Packets, now tagged with the external VLAN tag, then exit onto the physical network via eth1. The layer-2 switch this interface is connected to must be configured to accept traffic with the VLAN ID used. The next hop for this packet must also be on the same layer-2 network." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:403(para) -msgid "GRE-based networks are passed via patch-tun to the tunnel bridge br-tun, arriving on interface patch-int. This bridge also contains one port for each GRE tunnel peer, so one for each compute node and network node in your network. The ports are named sequentially from gre-1 onward." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:410(para) -msgid "Matching gre-<n> interfaces to tunnel endpoints is possible by looking at the Open vSwitch state:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:421(para) -msgid "In this case, gre-1 is a tunnel from IP 10.10.128.21, which should match a local interface on this node, to IP 10.10.128.16 on the remote side." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:425(para) -msgid "These tunnels use the regular routing tables on the host to route the resulting GRE packet, so there is no requirement that GRE endpoints are all on the same layer-2 network, unlike VLAN encapsulation." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:430(para) -msgid "All interfaces on the br-tun are internal to Open vSwitch. To monitor traffic on them, you need to set up a mirror port as described above for patch-tun in the br-int bridge." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:435(para) -msgid "All translation of GRE tunnels to and from internal VLANs happens on this bridge." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:441(title) -msgid "To discover which internal VLAN tag is in use for a GRE tunnel by using the ovs-ofctl command:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:445(para) -msgid "Find the provider:segmentation_id of the network you're interested in.
This is the same field used for the VLAN ID in VLAN-based networks:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:460(para) -msgid "Grep for 0x<provider:segmentation_id>, 0x3 in this case, in the output of ovs-ofctl dump-flows br-tun:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:487(para) -msgid "Here, you see three flows related to this GRE tunnel. The first is the translation from inbound packets with this tunnel ID to internal VLAN ID 1. The second shows a unicast flow to output port 53 for packets destined for MAC address fa:16:3e:a6:48:24. The third shows the translation from the internal VLAN representation to the GRE tunnel ID flooded to all output ports. For further details of the flow descriptions, see the man page for ovs-ofctl. As in the previous VLAN example, numeric port IDs can be matched with their named representations by examining the output of ovs-ofctl show br-tun." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:503(para) -msgid "The packet is then received on the network node. Note that any traffic to the l3-agent or dhcp-agent will be visible only within their network namespace. Watching any interfaces outside those namespaces, even those that carry the network traffic, will show only broadcast packets, such as Address Resolution Protocol (ARP) requests; unicast traffic to the router or DHCP address will not be seen. See Dealing with Network Namespaces for details on how to run commands within these namespaces." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:514(para) -msgid "Alternatively, it is possible to configure VLAN-based networks to use external routers rather than the l3-agent shown here, so long as the external router is on the same VLAN:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:520(para) -msgid "VLAN-based networks are received as tagged packets on a physical network interface, eth1 in this example. Just as on the compute node, this interface is a member of the br-eth1 bridge." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:527(para) -msgid "GRE-based networks will be passed to the tunnel bridge br-tun, which behaves just like the GRE interfaces on the compute node." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:535(para) -msgid "Next, the packets from either input go through the integration bridge, again just as on the compute node." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:540(para) -msgid "The packet then makes it to the l3-agent. This is actually another TAP device within the router's network namespace. Router namespaces are named in the form qrouter-<router-uuid>. Running ip a within the namespace will show the TAP device name, qr-e6256f7d-31 in this example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:557(para) -msgid "The qg-<n> interface in the l3-agent router namespace sends the packet on to its next hop through device eth2 on the external bridge br-ex. This bridge is constructed similarly to br-eth1 and may be inspected in the same way." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:565(para) -msgid "This external bridge also includes a physical network interface, eth2 in this example, which finally lands the packet on the external network destined for an external router or destination."
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:572(para) -msgid "DHCP agents running on OpenStack networks run in namespaces similar to the l3-agents. DHCP namespaces are named qdhcp-<uuid> and have a TAP device on the integration bridge. Debugging of DHCP issues usually involves working inside this network namespace." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:583(title) -msgid "Finding a Failure in the Path" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:585(para) -msgid "Use ping to quickly find where a failure exists in the network path. In an instance, first see whether you can ping an external host, such as google.com. If you can, then there shouldn't be a network problem at all." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:590(para) -msgid "If you can't, try pinging the IP address of the compute node where the instance is hosted. If you can ping this IP, then the problem is somewhere between the compute node and that compute node's gateway." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:594(para) -msgid "If you can't ping the IP address of the compute node, the problem is between the instance and the compute node. This includes the bridge connecting the compute node's main NIC with the vnet NIC of the instance." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:599(para) -msgid "One last test is to launch a second instance and see whether the two instances can ping each other. If they can, the issue might be related to the firewall on the compute node.path failurestroubleshootingdetecting path failures" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:611(title) ./doc/openstack-ops/ch_ops_resources.xml:72(code) -msgid "tcpdump" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:613(para) -msgid "One great, although very in-depth, way of troubleshooting network issues is to use tcpdump. We recommended using tcpdump at several points along the network path to correlate where a problem might be. If you prefer working with a GUI, either live or by using a tcpdump capture, do also check out Wireshark.tcpdump" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:623(para) -msgid "For example, run the following command:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:628(para) -msgid "Run this on the command line of the following areas:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:632(para) -msgid "An external server outside of the cloud" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:636(para) -msgid "A compute node" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:640(para) -msgid "An instance running on that compute node" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:644(para) -msgid "In this example, these locations have the following IP addresses:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:656(para) -msgid "Next, open a new shell to the instance and then ping the external host where tcpdump is running. 
If the network path to the external server and back is fully functional, you see something like the following:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:661(para) -msgid "On the external server:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:671(para) -msgid "On the compute node:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:692(para) -msgid "On the instance:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:698(para) -msgid "Here, the external server received the ping request and sent a ping reply. On the compute node, you can see that both the ping and ping reply successfully passed through. You might also see duplicate packets on the compute node, as seen above, because tcpdump captured the packet on both the bridge and outgoing interface." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:706(title) -msgid "iptables" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:708(para) -msgid "Through nova-network or neutron, OpenStack Compute automatically manages iptables, including forwarding packets to and from instances on a compute node, forwarding floating IP traffic, and managing security group rules. In addition to managing the rules, comments (if supported) will be inserted in the rules to help indicate the purpose of the rule." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:722(para) -msgid "The following comments are added to the rule set as appropriate:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:726(para) -msgid "Perform source NAT on outgoing traffic." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:729(para) -msgid "Default drop rule for unmatched traffic." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:732(para) -msgid "Direct traffic from the VM interface to the security group chain." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:736(para) -msgid "Jump to the VM-specific chain." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:739(para) -msgid "Direct incoming traffic from the VM to the security group chain." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:743(para) -msgid "Allow traffic from defined IP/MAC pairs." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:746(para) -msgid "Drop traffic without an IP/MAC allow rule." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:749(para) -msgid "Allow DHCP client traffic." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:752(para) -msgid "Prevent DHCP spoofing by the VM." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:755(para) -msgid "Send unmatched traffic to the fallback chain." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:758(para) -msgid "Drop packets that are not associated with a state." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:761(para) -msgid "Direct packets associated with a known session to the RETURN chain." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:765(para) -msgid "Allow IPv6 ICMP traffic so that RA packets are permitted."
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:769(para) -msgid "Run the following command to view the current iptables configuration:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:775(para) -msgid "If you modify the configuration, it reverts the next time you restart nova-network or neutron-server. You must use OpenStack to manage iptables." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:783(title) -msgid "Network Configuration in the Database for nova-network" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:785(para) -msgid "With nova-network, the nova database table contains a few tables with networking information:databasesnova-network troubleshootingtroubleshootingnova-network database" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:799(literal) -msgid "fixed_ips" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:802(para) -msgid "Contains each possible IP address for the subnet(s) added to Compute. This table is related to the instances table by way of the fixed_ips.instance_uuid column." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:810(literal) -msgid "floating_ips" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:813(para) -msgid "Contains each floating IP address that was added to Compute. This table is related to the fixed_ips table by way of the floating_ips.fixed_ip_id column." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:824(para) -msgid "Not entirely network specific, but it contains information about the instance that is utilizing the fixed_ip and optional floating_ip." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:831(para) -msgid "From these tables, you can see that a floating IP is technically never directly related to an instance; it must always go through a fixed IP." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:836(title) -msgid "Manually Disassociating a Floating IP" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:838(para) -msgid "Sometimes an instance is terminated but the floating IP was not correctly de-associated from that instance. Because the database is in an inconsistent state, the usual tools to disassociate the IP no longer work. 
To fix this, you must manually update the database." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:850(para) -msgid "First, find the UUID of the instance in question:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:854(para) -msgid "Next, find the fixed IP entry for that UUID:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:858(para) -msgid "You can now get the related floating IP entry:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:862(para) -msgid "And finally, you can disassociate the floating IP:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:867(para) -msgid "You can optionally also deallocate the IP from the user's pool:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:876(title) -msgid "Debugging DHCP Issues with nova-network" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:878(para) -msgid "One common networking problem is that an instance boots successfully but is not reachable because it failed to obtain an IP address from dnsmasq, which is the DHCP server that is launched by the nova-network service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:891(para) -msgid "The simplest way to identify whether this is the problem with your instance is to look at the console output of your instance. If DHCP failed, you can retrieve the console log by doing:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:897(para) -msgid "If your instance failed to obtain an IP through DHCP, some messages should appear in the console. For example, for the Cirros image, you see output that looks like the following:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:911(para) -msgid "After you establish that the instance booted properly, the task is to figure out where the failure is." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:914(para) -msgid "A DHCP problem might be caused by a misbehaving dnsmasq process. First, debug by checking logs and then restart the dnsmasq processes only for that project (tenant). In VLAN mode, there is a dnsmasq process for each tenant. If restarting the targeted dnsmasq processes doesn't help, the simplest way to rule out dnsmasq as the cause is to kill all of the dnsmasq processes on the machine and restart nova-network. As a last resort, do this as root:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:926(para) -msgid "Use openstack-nova-network on RHEL/CentOS/Fedora but nova-network on Ubuntu/Debian." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:931(para) -msgid "Several minutes after nova-network is restarted, you should see new dnsmasq processes running:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:955(para) -msgid "If your instances are still not able to obtain IP addresses, the next thing to check is whether dnsmasq is seeing the DHCP requests from the instance. On the machine that is running the dnsmasq process, which is the compute host if running in multi-host mode, look at /var/log/syslog to see the dnsmasq output.
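The syslog excerpt itself was elided in this extraction; as a sketch, a successful exchange follows the DHCPDISCOVER/OFFER/REQUEST/ACK sequence (the timestamps, PID, MAC address, IP, and hostname here are hypothetical):

Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPDISCOVER(br100) fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPOFFER(br100) 192.168.100.3 fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPREQUEST(br100) 192.168.100.3 fa:16:3e:56:0b:6f
Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPACK(br100) 192.168.100.3 fa:16:3e:56:0b:6f test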
If dnsmasq is seeing the request properly and handing out an IP, the output looks like the exchange sketched above." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:971(para) -msgid "If you do not see the DHCPDISCOVER, a problem exists with the packet getting from the instance to the machine running dnsmasq. If you see all of the preceding output and your instances are still not able to obtain IP addresses, then the packet is able to get from the instance to the host running dnsmasq, but it is not able to make the return trip." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:978(para) -msgid "You might also see a message such as this:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:983(para) -msgid "This may be a dnsmasq and/or nova-network related issue. (For the preceding example, the problem happened to be that dnsmasq did not have any more IP addresses to give away because there were no more fixed IPs available in the OpenStack Compute database.)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:988(para) -msgid "If there's a suspicious-looking dnsmasq log message, take a look at the command-line arguments to the dnsmasq processes to see if they look correct:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:994(para) -msgid "The output looks something like the following:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1023(para) -msgid "The output shows three different dnsmasq processes. The dnsmasq process that has the DHCP subnet range of 192.168.122.0 belongs to libvirt and can be ignored. The other two dnsmasq processes belong to nova-network. The two processes are actually related—one is simply the parent process of the other. The arguments of the dnsmasq processes should correspond to the details you configured nova-network with." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1031(para) -msgid "If the problem does not seem to be related to dnsmasq itself, at this point use tcpdump on the interfaces to determine where the packets are getting lost." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1035(para) -msgid "DHCP traffic uses UDP. The client sends from port 68 to port 67 on the server. Try to boot a new instance and then systematically listen on the NICs until you identify the one that isn't seeing the traffic. To use tcpdump to listen to ports 67 and 68 on br100, you would do:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1043(para) -msgid "You should be doing sanity checks on the interfaces using commands such as ip a and brctl show to ensure that the interfaces are actually up and configured the way that you think they are." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1050(title) -msgid "Debugging DNS Issues" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1052(para) -msgid "If you are able to use SSH to log into an instance, but it takes a very long time (on the order of a minute) to get a prompt, then you might have a DNS issue. The reason a DNS issue can cause this problem is that the SSH server does a reverse DNS lookup on the IP address that you are connecting from.
If DNS lookup isn't working on your instances, then you must wait for the DNS reverse lookup timeout to occur for the SSH login process to complete." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1068(para) -msgid "When debugging DNS issues, start by making sure that the host where the dnsmasq process for that instance runs is able to correctly resolve. If the host cannot resolve, then the instances won't be able to either." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1073(para) -msgid "A quick way to check whether DNS is working is to resolve a hostname inside your instance by using the host command. If DNS is working, you should see:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1082(para) -msgid "If you're running the Cirros image, it doesn't have the \"host\" program installed, in which case you can use ping to try to access a machine by hostname to see whether it resolves. If DNS is working, the first line of ping would be:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1090(para) -msgid "If the instance fails to resolve the hostname, you have a DNS problem. For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1096(para) -msgid "In an OpenStack cloud, the dnsmasq process acts as the DNS server for the instances in addition to acting as the DHCP server. A misbehaving dnsmasq process may be the source of DNS-related issues inside the instance. As mentioned in the previous section, the simplest way to rule out a misbehaving dnsmasq process is to kill all the dnsmasq processes on the machine and restart nova-network. However, be aware that this command affects everyone running instances on this node, including tenants that have not seen the issue. As a last resort, as root:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1109(para) -msgid "After the dnsmasq processes start again, check whether DNS is working." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1112(para) -msgid "If restarting the dnsmasq process doesn't fix the issue, you might need to use tcpdump to look at the packets to trace where the failure is. The DNS server listens on UDP port 53. You should see the DNS request on the bridge (such as br100) of your compute node. Let's say you start listening with tcpdump on the compute node:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1122(para) -msgid "Then, if you use SSH to log into your instance and try to ping openstack.org, you should see something like:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1135(title) -msgid "Troubleshooting Open vSwitch" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1137(para) -msgid "Open vSwitch, as used in the previous OpenStack Networking examples, is a full-featured multilayer virtual switch licensed under the open source Apache 2.0 license. Full documentation can be found at the project's website.
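The example listings for the ovs-vsctl checks in this subsection were elided in extraction; as a quick sketch of the first sanity check, with hypothetical output from a compute node:

# ovs-vsctl list-br
br-int
br-tun
eth1-br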
In practice, given the preceding configuration, the most common issues involve making sure that the required bridges (br-int, br-tun, and br-ex) exist and have the proper ports connected to them." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1154(para) -msgid "The Open vSwitch driver should and usually does manage this automatically, but it is useful to know how to do this by hand with the ovs-vsctl command. This command has many more subcommands than we will use here; see the man page or use ovs-vsctl --help for the full listing." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1160(para) -msgid "To list the bridges on a system, use ovs-vsctl list-br. This example shows a compute node that has an internal bridge and a tunnel bridge. VLAN networks are trunked through the eth1 network interface:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1171(para) -msgid "Working from the physical interface inwards, we can see the chain of ports and bridges. First, the bridge eth1-br, which contains the physical network interface eth1 and the virtual interface phy-eth1-br:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1181(para) -msgid "Next, the internal bridge, br-int, contains int-eth1-br, which pairs with phy-eth1-br to connect to the physical network shown in the previous bridge; patch-tun, which is used to connect to the GRE tunnel bridge; and the TAP devices that connect to the instances currently running on the system:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1196(para) -msgid "The tunnel bridge, br-tun, contains the patch-int interface and gre-<N> interfaces for each peer it connects to via GRE, one for each compute and network node in your cluster:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1210(para) -msgid "If any of these links is missing or incorrect, it suggests a configuration error. Bridges can be added with ovs-vsctl add-br, and ports can be added to bridges with ovs-vsctl add-port. While running these by hand can be useful for debugging, it is imperative that manual changes that you intend to keep be reflected back into your configuration files." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1219(title) -msgid "Dealing with Network Namespaces" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1221(para) -msgid "Linux network namespaces are a kernel feature the networking service uses to support multiple isolated layer-2 networks with overlapping IP address ranges. The support may be disabled, but it is on by default. If it is enabled in your environment, your network nodes will run their dhcp-agents and l3-agents in isolated namespaces. Network interfaces and traffic on those interfaces will not be visible in the default namespace." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1237(para) -msgid "To see whether you are using namespaces, run ip netns:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1248(para) -msgid "L3-agent router namespaces are named qrouter-<router_uuid>, and dhcp-agent namespaces are named qdhcp-<net_uuid>. This output shows a network node with four networks running dhcp-agents, one of which is also running an l3-agent router.
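The ip netns listing itself was elided in extraction; output of the sort described would look like this (UUIDs hypothetical):

# ip netns
qdhcp-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5
qdhcp-a4d00c60-f005-400e-a24c-1bf8b8308f98
qdhcp-fe178706-9942-4600-9224-b2ae7c61db71
qdhcp-0a1d0a27-cffa-4de3-92c5-9d3fd3f2e74d
qrouter-8a4ce760-ab55-4f2f-8ec5-a2e858ce0d39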
It's important to know which network you need to be working in. A list of existing networks and their UUIDs can be obtained by running neutron net-list with administrative credentials." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1258(para) -msgid "Once you've determined which namespace you need to work in, you can use any of the debugging tools mentioned earlier by prefixing the command with ip netns exec <namespace>. For example, to see what network interfaces exist in the first qdhcp namespace returned above, do this:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1278(para) -msgid "From this output, you can see that the DHCP server on that network is using the tape6256f7d-31 device and has an IP address of 10.0.1.100. Seeing the address 169.254.169.254, you can also see that the dhcp-agent is running a metadata-proxy service. Any of the commands mentioned previously in this chapter can be run in the same way. It is also possible to run a shell, such as bash, and have an interactive session within the namespace. In the latter case, exiting the shell returns you to the top-level default namespace." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_network_troubleshooting.xml:1291(para) -msgid "The authors have spent too much time looking at packet dumps in order to distill this information for you. We trust that, following the methods outlined in this chapter, you will have an easier time! Aside from working with the tools and steps above, don't forget that sometimes an extra pair of eyes goes a long way." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:13(title) ./doc/openstack-ops/bk_ops_guide.xml:34(productname) -msgid "OpenStack" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:21(link) -msgid "Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and Fedora 22" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:35(emphasis) -msgid "OpenStack Cloud Computing Cookbook" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:35(link) -msgid " (Packt Publishing)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:41(title) -msgid "Cloud (General)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:44(link) -msgid "“The NIST Definition of Cloud Computing”" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:50(title) -msgid "Python" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:53(emphasis) -msgid "Dive Into Python" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:53(link) ./doc/openstack-ops/ch_ops_resources.xml:103(link) -msgid " (Apress)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:62(emphasis) -msgid "TCP/IP Illustrated, Volume 1: The Protocols, 2/E" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:62(link) -msgid " (Pearson)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:67(emphasis) -msgid "The TCP/IP Guide" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:67(link) ./doc/openstack-ops/ch_ops_resources.xml:90(link) -msgid " (No Starch Press)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:71(link) -msgid "“A Tutorial and Primer”" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:77(title) -msgid "Systems Administration" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:80(emphasis) -msgid "UNIX and Linux Systems Administration Handbook" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:80(link) -msgid " (Prentice Hall)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:87(title)
-msgid "Virtualization" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:90(emphasis) -msgid "The Book of Xen" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:96(title) ./doc/openstack-ops/ch_ops_maintenance.xml:870(title) -msgid "Configuration Management" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:99(link) -msgid "Puppet Labs Documentation" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_resources.xml:103(emphasis) -msgid "Pro Puppet" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/ch_arch_provision.xml:156(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0201.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:12(title) -msgid "Provisioning and Deployment" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:14(para) -msgid "A critical part of a cloud's scalability is the amount of effort that it takes to run your cloud. To minimize the operational cost of running your cloud, set up and use an automated deployment and configuration infrastructure with a configuration management system, such as Puppet or Chef. Combined, these systems greatly reduce manual effort and the chance for operator error.cloud computingminimizing costs of" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:25(para) -msgid "This infrastructure includes systems to automatically install the operating system's initial configuration and later coordinate the configuration of all services automatically and centrally, which reduces both manual effort and the chance for error. Examples include Ansible, CFEngine, Chef, Puppet, and Salt. You can even use OpenStack to deploy OpenStack, named TripleO (OpenStack On OpenStack).PuppetChef" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:37(title) -msgid "Automated Deployment" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:39(para) -msgid "An automated deployment system installs and configures operating systems on new servers, without intervention, after the absolute minimum amount of manual work, including physical racking, MAC-to-IP assignment, and power configuration. Typically, solutions rely on wrappers around PXE boot and TFTP servers for the basic operating system install and then hand off to an automated configuration management system.deploymentprovisioning/deploymentprovisioning/deploymentautomated deployment" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:55(para) -msgid "Both Ubuntu and Red Hat Enterprise Linux include mechanisms for configuring the operating system, including preseed and kickstart, that you can use after a network boot. Typically, these are used to bootstrap an automated configuration system. Alternatively, you can use an image-based approach for deploying the operating system, such as systemimager. You can use both approaches with a virtualized infrastructure, such as when you run VMs to separate your control services and physical infrastructure." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:64(para) -msgid "When you create a deployment plan, focus on a few vital areas because they are very hard to modify post deployment. 
The next two sections talk about configurations for:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:70(para) -msgid "Disk partitioning and disk array setup for scalability" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:74(para) -msgid "Networking configuration just for PXE booting" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:79(title) -msgid "Disk Partitioning and RAID" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:81(para) -msgid "At the very base of any operating system are the hard drives on which the operating system (OS) is installed." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:92(para) -msgid "You must complete the following configurations on the server's hard drives:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:97(para) -msgid "Partitioning, which provides greater flexibility for layout of operating system and swap space, as described below." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:102(para) -msgid "Adding to a RAID array (RAID stands for redundant array of independent disks), based on the number of disks you have available, so that you can add capacity as your cloud grows. Some options are described in more detail below." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:109(para) -msgid "The simplest option to get started is to use one hard drive with two partitions:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:114(para) -msgid "File system to store files and directories, where all the data lives, including the root partition that starts and runs the system" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:120(para) -msgid "Swap space to free up memory for processes, as an independent area of the physical disk used only for swapping and nothing else" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:126(para) -msgid "RAID is not used in this simplistic one-drive setup because, generally, for production clouds you want to ensure that if one disk fails, another can take its place. Instead, for production, use more than one disk. The number of disks determines what types of RAID arrays to build." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:132(para) -msgid "We recommend that you choose one of the following multiple disk options:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:137(term) -msgid "Option 1" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:140(para) -msgid "Partition all drives in the same way in a horizontal fashion, as shown in the Partition setup of drives figure." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:144(para) -msgid "With this option, you can assign different partitions to different RAID arrays. You can allocate partition 1 of disk one and two to the /boot partition mirror. You can make partition 2 of all disks the root partition mirror. You can use partition 3 of all disks for a cinder-volumes LVM partition running on a RAID 10 array." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:152(title) -msgid "Partition setup of drives" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:161(para) -msgid "While you might end up with unused partitions, such as partition 1 in disk three and four of this example, this option allows for maximum utilization of disk space. I/O performance might be an issue as a result of all disks being used for all tasks."
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:170(term) -msgid "Option 2" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:173(para) -msgid "Add all raw disks to one large RAID array, either hardware or software based. You can partition this large array with the boot, root, swap, and LVM areas. This option is simple to implement and uses all partitions. However, disk I/O might suffer." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:182(term) -msgid "Option 3" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:185(para) -msgid "Dedicate entire disks to certain partitions. For example, you could allocate disk one and two entirely to the boot, root, and swap partitions under a RAID 1 mirror. Then, allocate disk three and four entirely to the LVM partition, also under a RAID 1 mirror. Disk I/O should be better because I/O is focused on dedicated tasks. However, the LVM partition is much smaller." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:197(para) -msgid "You may find that you can automate the partitioning itself. For example, MIT uses Fully Automatic Installation (FAI) to do the initial PXE-based partition and then install using a combination of min/max and percentage-based partitioning.Fully Automatic Installation (FAI)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:206(para) -msgid "As with most architecture choices, the right answer depends on your environment. If you are using existing hardware, you know the disk density of your servers and can determine some decisions based on the options above. If you are going through a procurement process, your user's requirements also help you determine hardware purchases. Here are some examples from a private cloud providing web developers custom environments at AT&T. This example is from a specific deployment, so your existing hardware or procurement opportunity may vary from this. AT&T uses three types of hardware in its deployment:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:218(para) -msgid "Hardware for controller nodes, used for all stateless OpenStack API services. About 32–64 GB memory, small attached disk, one processor, varied number of cores, such as 6–12." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:224(para) -msgid "Hardware for compute nodes. Typically 256 or 144 GB memory, two processors, 24 cores. 4–6 TB direct attached storage, typically in a RAID 5 configuration." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:230(para) -msgid "Hardware for storage nodes. Typically for these, the disk space is optimized for the lowest cost per GB of storage while maintaining rack-space efficiency." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:236(para) -msgid "Again, the right answer depends on your environment. You have to make your decision based on the trade-offs between space utilization, simplicity, and I/O performance." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:242(title) -msgid "Network Configuration" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:244(para) -msgid "Network configuration is a very large topic that spans multiple areas of this book. For now, make sure that your servers can PXE boot and successfully communicate with the deployment server.networksconfiguration of" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:253(para) -msgid "For example, you usually cannot configure NICs for VLANs when PXE booting. Additionally, you usually cannot PXE boot with bonded NICs. 
If you run into this scenario, consider using a simple 1 GbE switch in a private network on which only your cloud communicates." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:261(title) -msgid "Automated Configuration" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:263(para) -msgid "The purpose of automatic configuration management is to establish and maintain the consistency of a system without human intervention. You want to maintain consistency in your deployments so that you can have the same cloud every time, repeatably. Proper use of automatic configuration-management tools ensures that components of the cloud systems are in particular states, in addition to simplifying deployment and configuration change propagation." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:277(para) -msgid "These tools also make it possible to test and roll back changes, as they are fully repeatable. Conveniently, a large body of work has been done by the OpenStack community in this space. Puppet, a configuration management tool, even provides official modules for OpenStack projects in an OpenStack infrastructure system known as Puppet OpenStack. Chef configuration management is provided within the OpenStack community's Chef repositories. Additional configuration management systems include Juju, Ansible, and Salt. Also, PackStack is a command-line utility for Red Hat Enterprise Linux and derivatives that uses Puppet modules to support rapid deployment of OpenStack on existing servers over an SSH connection." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:292(para) -msgid "An integral part of a configuration-management system is the set of items that it controls. You should carefully consider all of the items that you want, or do not want, to be automatically managed. For example, you may not want to automatically format hard drives with user data." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:299(title) -msgid "Remote Management" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:301(para) -msgid "In our experience, most operators don't sit right next to the servers running the cloud, and many don't necessarily enjoy visiting the data center. OpenStack should be entirely remotely configurable, but sometimes not everything goes according to plan." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:311(para) -msgid "In this instance, having out-of-band access to nodes running OpenStack components is a boon. The IPMI protocol is the de facto standard here, and acquiring hardware that supports it is highly recommended to achieve that lights-out data center aim." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:316(para) -msgid "Consider remote power control as well. While IPMI usually controls the server's power state, having remote access to the PDU that the server is plugged into can really be useful for situations when everything seems wedged." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:323(title) -msgid "Parting Thoughts for Provisioning and Deploying OpenStack" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:325(para) -msgid "You can save time by understanding the use cases for the cloud you want to create. Use cases for OpenStack are varied.
Some include object storage only; others require preconfigured compute resources to speed development-environment setup; and others need fast provisioning of compute resources that are already secured per tenant with private networks. Your users may need highly redundant servers to make sure their legacy applications continue to run. Perhaps a goal would be to architect these legacy applications so that they run on multiple instances in a cloudy, fault-tolerant way, but not make it a goal to add to those clusters over time. Your users may indicate that they need scaling considerations because of heavy Windows server use." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:342(para) -msgid "You can save resources by looking at the best fit for the hardware you have in place already. You might have some high-density storage hardware available. You could format and repurpose those servers for OpenStack Object Storage. All of these considerations and input from users help you build your use case and your deployment plan." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:349(para) -msgid "For further research about OpenStack deployment, investigate the supported and documented preconfigured, prepackaged installers for OpenStack from companies such as Canonical, Cisco, Cloudscaling, IBM, Metacloud, Mirantis, Piston, Rackspace, Red Hat, SUSE, and SwiftStack." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_provision.xml:369(para) -msgid "The decisions you make with respect to provisioning and deployment will affect your day-to-day, week-to-week, and month-to-month maintenance of the cloud. Your configuration management will be able to evolve over time. However, more thought and design need to be done for upfront choices about deployment, disk partitioning, and network configuration." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/part_architecture.xml:82(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0001.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:10(title) -msgid "Architecture" -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:13(para) -msgid "Designing an OpenStack cloud is a great achievement. It requires a robust understanding of the requirements and needs of the cloud's users to determine the best possible configuration to meet them. OpenStack provides a great deal of flexibility to achieve your needs, and this part of the book aims to shine light on many of the decisions you need to make during the process." -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:20(para) -msgid "To design, deploy, and configure OpenStack, administrators must understand the logical architecture. A diagram can help you envision all the integrated services within OpenStack and how they interact with each other." -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:31(para) -msgid "OpenStack modules are one of the following types:" -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:35(term) -msgid "Daemon" -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:38(para) -msgid "Runs as a background process.
On Linux platforms, a daemon is usually installed as a service." -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:48(term) -msgid "Script" -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:51(para) -msgid "Installs a virtual environment and runs tests." -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:59(term) -msgid "Command-line interface (CLI)" -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:62(para) -msgid "Enables users to submit API calls to OpenStack services through commands." -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:70(para) -msgid "As shown, end users can interact through the dashboard, CLIs, and APIs. All services authenticate through a common Identity service, and individual services interact with each other through public APIs, except where privileged administrator commands are necessary. The figure below shows the most common, but not the only, logical architecture for an OpenStack cloud." -msgstr "" - -#: ./doc/openstack-ops/part_architecture.xml:77(title) -msgid "OpenStack Logical Architecture" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:12(title) -msgid "Designing for Cloud Controllers and Cloud Management" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:15(para) -msgid "OpenStack is designed to be massively horizontally scalable, which allows all services to be distributed widely. However, to simplify this guide, we have decided to discuss services of a more central nature, using the concept of a cloud controller. A cloud controller is just a conceptual simplification. In the real world, you design an architecture for your cloud controller that enables high availability so that if any node fails, another can take over the required tasks. In reality, cloud controller tasks are spread out across more than a single node." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:33(para) -msgid "The cloud controller provides the central management system for OpenStack deployments. Typically, the cloud controller manages authentication and sends messaging to all the systems through a message queue." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:38(para) -msgid "For many deployments, the cloud controller is a single node. However, to have high availability, you have to take a few considerations into account, which we'll cover in this chapter."
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:42(para) -msgid "The cloud controller manages the following services for the cloud:cloud controllersservices managed by" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:51(term) ./doc/openstack-ops/ch_ops_maintenance.xml:1014(title) -msgid "Databases" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:54(para) -msgid "Tracks current information about users and instances, for example, in a database, typically one database instance managed per service" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:61(term) -msgid "Message queue services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:64(para) -msgid "All AMQP—Advanced Message Queue Protocol—messages for services are received and sent according to the queue brokerAdvanced Message Queuing Protocol (AMQP)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:73(term) -msgid "Conductor services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:76(para) -msgid "Proxy requests to a database" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:81(term) -msgid "Authentication and authorization for identity management" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:84(para) -msgid "Indicates which users can do what actions on certain cloud resources; quota management is spread out among services, howeverauthentication" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:93(term) -msgid "Image-management services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:96(para) -msgid "Stores and serves images with metadata on each, for launching in the cloud" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:102(term) -msgid "Scheduling services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:105(para) -msgid "Indicates which resources to use first; for example, spreading out where instances are launched based on an algorithm" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:111(term) -msgid "User dashboard" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:114(para) -msgid "Provides a web-based front end for users to consume OpenStack cloud services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:120(term) -msgid "API endpoints" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:123(para) -msgid "Offers each service's REST API access, where the API endpoint catalog is managed by the Identity service" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:129(para) -msgid "For our example, the cloud controller has a collection of nova-* components that represent the global state of the cloud; talks to services such as authentication; maintains information about the cloud in a database; communicates to all compute nodes and storage workers through a queue; and provides API access. 
Each service running on a designated cloud controller may be broken out into separate nodes for scalability or availability." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:143(para) -msgid "As another example, you could use pairs of servers for a collective cloud controller—one active, one standby—for redundant nodes providing a given set of related services, such as:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:149(para) -msgid "Front end web for API requests, the scheduler for choosing which compute node to boot an instance on, Identity services, and the dashboard" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:155(para) -msgid "Database and message queue server (such as MySQL, RabbitMQ)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:159(para) -msgid "Image service for the image management" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:163(para) -msgid "Now that you see the myriad designs for controlling your cloud, read more about the further considerations to help with your design decisions." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:168(title) -msgid "Hardware Considerations" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:170(para) -msgid "A cloud controller's hardware can be the same as a compute node's, though you may want to further specify based on the size and type of cloud that you run." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:182(para) -msgid "It's also possible to use virtual machines for all or some of the services that the cloud controller manages, such as the message queuing. In this guide, we assume that all services are running directly on the cloud controller." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:187(para) -msgid "The table below contains common considerations to review when sizing hardware for the cloud controller design." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:200(caption) -msgid "Cloud controller hardware sizing considerations" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:208(th) -msgid "Consideration" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:210(th) -msgid "Ramification" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:216(para) -msgid "How many instances will run at once?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:218(para) -msgid "Size your database server accordingly, and scale out beyond one cloud controller if many instances will report status at the same time and if scheduling where a new instance starts up needs computing power." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:225(para) -msgid "How many compute nodes will run at once?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:227(para) -msgid "Ensure that your messaging queue handles requests successfully and size accordingly." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:232(para) -msgid "How many users will access the API?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:234(para) -msgid "If many users will make multiple requests, make sure that the cloud controller's CPU can handle the load."
- -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:239(para) -msgid "How many users will access the dashboard versus the REST API directly?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:243(para) -msgid "The dashboard makes many requests, even more than the API access, so add even more CPU if your dashboard is the main interface for your users." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:249(para) -msgid "How many nova-api services do you run at once for your cloud?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:252(para) -msgid "You need to size the controller with a core per service." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:257(para) -msgid "How long does a single instance run?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:259(para) -msgid "Starting instances and deleting instances is demanding on the compute node but also demanding on the controller node because of all the API queries and scheduling needs." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:265(para) -msgid "Does your authentication system also verify externally?" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:268(para) -msgid "External systems such as LDAP or Active Directory require network connectivity between the cloud controller and an external authentication system. Also ensure that the cloud controller has the CPU power to keep up with requests." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:279(title) -msgid "Separation of Services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:281(para) -msgid "While our example contains all central services in a single location, it is possible and indeed often a good idea to separate services onto different physical servers. is a list of deployment scenarios we've seen and their justifications." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:302(caption) -msgid "Deployment scenarios" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:310(th) -msgid "Scenario" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:312(th) -msgid "Justification" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:318(para) -msgid "Run glance-* servers on the swift-proxy server." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:321(para) -msgid "This deployment felt that the spare I/O on the Object Storage proxy server was sufficient and that the Image Delivery portion of glance benefited from being on physical hardware and having good connectivity to the Object Storage back end it was using." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:329(para) -msgid "Run a central dedicated database server." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:331(para) -msgid "This deployment used a central dedicated server to provide the databases for all services. This approach simplified operations by isolating database server updates and allowed for the simple creation of slave database servers for failover." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:338(para) -msgid "Run one VM per service." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:340(para) -msgid "This deployment ran central services on a set of servers running KVM. 
A dedicated VM was created for each service (nova-scheduler, rabbitmq, database, etc.). This assisted the deployment with scaling because administrators could tune the resources given to each virtual machine based on the load it received (something that was not well understood during installation)." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:350(para) -msgid "Use an external load balancer." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:352(para) -msgid "This deployment had an expensive hardware load balancer in its organization. It ran multiple nova-api and swift-proxy servers on different physical servers and used the load balancer to switch between them." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:360(para) -msgid "One choice that always comes up is whether to virtualize. Some services, such as nova-compute, swift-proxy, and swift-object servers, should not be virtualized. However, control servers can often be happily virtualized—the performance penalty can usually be offset by simply running more of the service." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:368(title) ./doc/openstack-ops/section_arch_example-neutron.xml:80(para) ./doc/openstack-ops/section_arch_example-nova.xml:107(para) -msgid "Database" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:370(para) -msgid "OpenStack Compute uses an SQL database to store and retrieve stateful information. MySQL is the popular database choice in the OpenStack community." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:382(para) -msgid "Loss of the database leads to errors. As a result, we recommend that you cluster your database to make it failure tolerant. Configuring and maintaining a database cluster is done outside OpenStack and is determined by the database software you choose to use in your cloud environment. MySQL/Galera is a popular option for MySQL-based databases." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:390(title) -msgid "Message Queue" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:392(para) -msgid "Most OpenStack services communicate with each other using the message queue. For example, Compute communicates to block storage services and networking services through the message queue. Also, you can optionally enable notifications for any service. RabbitMQ, Qpid, and 0mq are all popular choices for a message-queue service. In general, if the message queue fails or becomes inaccessible, the cluster grinds to a halt and ends up in a read-only state, with information stuck at the point where the last message was sent. Accordingly, we recommend that you cluster the message queue. Be aware that clustered message queues can be a pain point for many OpenStack deployments. While RabbitMQ has native clustering support, there have been reports of issues when running it at a large scale. While other queuing solutions are available, such as 0mq and Qpid, 0mq does not offer stateful queues. Qpid is the messaging system of choice for Red Hat and its derivatives. Qpid does not have native clustering capabilities and requires a supplemental service, such as Pacemaker or Corosync. For your message queue, you need to determine what level of data loss you are comfortable with and whether to use an OpenStack project's ability to retry multiple MQ hosts in the event of a failure, such as using Compute's ability to do so." -msgstr ""
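The ability to retry multiple MQ hosts mentioned above can be pictured as a small failover loop. The sketch below is a toy illustration under assumed names (BROKERS, the injected connect callable); real deployments configure this in the services themselves rather than writing such code.

    # Sketch of the "retry multiple MQ hosts" idea (hypothetical helper;
    # the broker URLs and connect() are stand-ins, not real configuration).
    import itertools, time

    BROKERS = ["amqp://mq1.example.com", "amqp://mq2.example.com"]

    def publish_with_failover(connect, message, retries=6, delay=0.1):
        """Try each broker in round-robin order until one accepts the message."""
        for _, url in zip(range(retries), itertools.cycle(BROKERS)):
            try:
                return connect(url).send(message)
            except ConnectionError:
                time.sleep(delay)  # broker unreachable; fail over to the next
        raise RuntimeError("message queue unavailable on all hosts")

    if __name__ == "__main__":
        class Chan:  # trivial stand-in channel for the demo
            def send(self, m): return f"sent: {m}"
        print(publish_with_failover(lambda url: Chan(), "hello"))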
- -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:431(title) -msgid "Conductor Services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:433(para) -msgid "In previous versions of OpenStack, all nova-compute services required direct access to the database hosted on the cloud controller. This was problematic for two reasons: security and performance. With regard to security, if a compute node is compromised, the attacker inherently has access to the database. With regard to performance, nova-compute calls to the database are single-threaded and blocking. This creates a performance bottleneck because database requests are fulfilled serially rather than in parallel." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:449(para) -msgid "The conductor service resolves both of these issues by acting as a proxy for the nova-compute service. Now, instead of nova-compute directly accessing the database, it contacts the nova-conductor service, and nova-conductor accesses the database on nova-compute's behalf. Since nova-compute no longer has direct access to the database, the security issue is resolved. Additionally, nova-conductor is a nonblocking service, so requests from all compute nodes are fulfilled in parallel." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:461(para) -msgid "If you are using nova-network and multi-host networking in your cloud environment, nova-compute still requires direct access to the database." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:468(para) -msgid "The nova-conductor service is horizontally scalable. To make nova-conductor highly available and fault tolerant, just launch more instances of the nova-conductor process, either on the same server or across multiple servers." -msgstr ""
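The proxy pattern that nova-conductor implements can be sketched in a few lines. The classes below are simplified stand-ins, not nova code: the point is that the compute side holds no database handle, only a reference to the conductor.

    # Simplified sketch of the conductor proxy pattern: compute nodes send
    # database operations to the conductor instead of holding DB credentials.
    # (Illustrative stand-ins, not nova's actual classes.)

    class Conductor:
        """Runs on the controller; the only component that touches the DB."""
        def __init__(self, db):
            self.db = db

        def instance_update(self, instance_id, **values):
            self.db.setdefault(instance_id, {}).update(values)
            return self.db[instance_id]

    class Compute:
        """Runs on a compute node; knows only the conductor, not the DB."""
        def __init__(self, conductor):
            self.conductor = conductor

        def report_state(self, instance_id, state):
            # No direct DB access: a compromised compute node cannot
            # read or modify other tenants' records.
            return self.conductor.instance_update(instance_id, state=state)

    db = {}
    compute = Compute(Conductor(db))
    print(compute.report_state("inst-1", "ACTIVE"))  # {'state': 'ACTIVE'}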
- -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:476(title) -msgid "Application Programming Interface (API)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:478(para) -msgid "All public access, whether direct, through a command-line client, or through the web-based dashboard, uses the API service. Find the API reference at ." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:491(para) -msgid "You must choose whether you want to support the Amazon EC2 compatibility APIs, or just the OpenStack APIs. One issue you might encounter when running both APIs is an inconsistent experience when referring to images and instances." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:496(para) -msgid "For example, the EC2 API refers to instances using IDs that contain hexadecimal, whereas the OpenStack API uses names and digits. Similarly, the EC2 API tends to rely on DNS aliases for contacting virtual machines, as opposed to OpenStack, which typically lists IP addresses." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:510(para) -msgid "If OpenStack is not set up correctly, it is easy to create scenarios in which users are unable to contact their instances because they have only an incorrect DNS alias. Despite this, EC2 compatibility can assist users migrating to your cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:515(para) -msgid "As with databases and message queues, having more than one API server is a good thing. Traditional HTTP load-balancing techniques can be used to achieve a highly available nova-api service." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:526(title) -msgid "Extensions" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:528(para) -msgid "The API Specifications define the core actions, capabilities, and media types of the OpenStack API. A client can always depend on the availability of this core API, and implementers are always required to support it in its entirety. Requiring strict adherence to the core API allows clients to rely upon a minimal level of functionality when interacting with multiple implementations of the same API." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:546(para) -msgid "The OpenStack Compute API is extensible. An extension adds capabilities to an API beyond those defined in the core. The introduction of new features, MIME types, actions, states, headers, parameters, and resources can all be accomplished by means of extensions to the core API. This allows the introduction of new features in the API without requiring a version change and allows the introduction of vendor-specific niche functionality." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:556(title) -msgid "Scheduling" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:558(para) -msgid "The scheduling services are responsible for determining the compute or storage node where a virtual machine or block storage volume should be created. The scheduling services receive creation requests for these resources from the message queue and then begin the process of determining the appropriate node where the resource should reside. This process is done by applying a series of user-configurable filters against the available collection of nodes." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:574(para) -msgid "There are currently two schedulers: nova-scheduler for virtual machines and cinder-scheduler for block storage volumes. Both schedulers are able to scale horizontally, so for high-availability purposes, or for very large or high-schedule-frequency installations, you should consider running multiple instances of each scheduler. The schedulers all listen to the shared message queue, so no special load balancing is required." -msgstr ""
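The filter-based placement just described can be sketched compactly: each filter accepts or rejects a candidate host, and filters are applied in sequence. The classes below are simplified stand-ins for scheduler filters, not the real implementations, and the host attributes are hypothetical.

    # Sketch of the filtering idea behind the schedulers: apply each
    # user-configurable filter in turn against the candidate hosts.
    # (Simplified stand-ins, not nova's actual filter classes.)

    class RamFilter:
        def host_passes(self, host, spec):
            return host["free_ram_mb"] >= spec["ram_mb"]

    class AvailabilityZoneFilter:
        def host_passes(self, host, spec):
            return host["az"] == spec.get("az", host["az"])

    def filter_hosts(hosts, spec, filters):
        for f in filters:
            hosts = [h for h in hosts if f.host_passes(h, spec)]
        return hosts

    hosts = [{"name": "c1", "free_ram_mb": 2048, "az": "nova"},
             {"name": "c2", "free_ram_mb": 512,  "az": "nova"}]
    print(filter_hosts(hosts, {"ram_mb": 1024},
                       [RamFilter(), AvailabilityZoneFilter()]))  # -> c1 only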
- -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:587(para) -msgid "The OpenStack Image service consists of two parts: glance-api and glance-registry. The former is responsible for the delivery of images; the compute node uses it to download images from the back end. The latter maintains the metadata information associated with virtual machine images and requires a database." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:614(para) -msgid "The glance-api part is an abstraction layer that allows a choice of back end. Currently, it supports:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:619(term) -msgid "OpenStack Object Storage" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:622(para) -msgid "Allows you to store images as objects." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:627(term) -msgid "File system" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:630(para) -msgid "Uses any traditional file system to store the images as files." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:636(term) -msgid "S3" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:639(para) -msgid "Allows you to fetch images from Amazon S3." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:644(term) -msgid "HTTP" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:647(para) -msgid "Allows you to fetch images from a web server. You cannot write images by using this mode." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:653(para) -msgid "If you have an OpenStack Object Storage service, we recommend using this as a scalable place to store your images. You can also use a file system with sufficient performance or Amazon S3; the HTTP back end is an option only if you do not need the ability to upload new images through OpenStack." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:660(title) -msgid "Dashboard" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:662(para) -msgid "The OpenStack dashboard (horizon) provides a web-based user interface to the various OpenStack components. The dashboard includes an end-user area for users to manage their virtual infrastructure and an admin area for cloud operators to manage the OpenStack environment as a whole." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:674(para) -msgid "The dashboard is implemented as a Python web application that normally runs in Apache httpd. Therefore, you may treat it the same as any other web application, provided it can reach the API servers (including their admin endpoints) over the network." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:685(title) -msgid "Authentication and Authorization" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:687(para) -msgid "The concepts supporting OpenStack's authentication and authorization are derived from well-understood and widely used systems of a similar nature. Users have credentials they can use to authenticate, and they can be a member of one or more groups (known as projects or tenants, interchangeably)." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:703(para) -msgid "For example, a cloud administrator might be able to list all instances in the cloud, whereas a user can see only those in their current group. Resource quotas, such as the number of cores that can be used, disk space, and so on, are associated with a project." -msgstr ""
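The admin-versus-owner distinction just described is exactly the kind of rule the policy.json file (discussed next) expresses. The sketch below is a toy evaluator for one such rule; it is not OpenStack's policy engine, and the action and role names are illustrative.

    # Minimal sketch of a policy.json-style authorization check
    # (illustrative only; real deployments use OpenStack's policy engine).
    import json

    policy = json.loads('{"compute:start": "rule:admin_or_owner"}')

    def admin_or_owner(creds, target):
        return "admin" in creds["roles"] or creds["user_id"] == target["user_id"]

    def enforce(action, creds, target):
        if policy.get(action) == "rule:admin_or_owner":
            return admin_or_owner(creds, target)
        return False  # deny unknown actions by default

    creds = {"user_id": "alice", "roles": ["member"]}
    print(enforce("compute:start", creds, {"user_id": "alice"}))  # True: owner
    print(enforce("compute:start", creds, {"user_id": "bob"}))    # False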
- -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:708(para) -msgid "OpenStack Identity provides authentication decisions and user attribute information, which is then used by the other OpenStack services to perform authorization. The policy is set in the policy.json file. For information on how to configure these, see ." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:723(para) -msgid "OpenStack Identity supports different plug-ins for authentication decisions and identity storage. Examples of these plug-ins include:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:728(para) -msgid "In-memory key-value store (a simplified internal storage structure)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:733(para) -msgid "SQL database (such as MySQL or PostgreSQL)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:737(para) -msgid "Memcached (a distributed memory object caching system)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:741(para) -msgid "LDAP (such as OpenLDAP or Microsoft's Active Directory)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:745(para) -msgid "Many deployments use the SQL database; however, LDAP is also a popular choice for those with existing authentication infrastructure that needs to be integrated." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:751(title) -msgid "Network Considerations" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:753(para) -msgid "Because the cloud controller handles so many different services, it must be able to handle the amount of traffic that hits it. For example, if you choose to host the OpenStack Image service on the cloud controller, the cloud controller should be able to support the transferring of the images at an acceptable speed." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:771(para) -msgid "As another example, if you choose to use single-host networking where the cloud controller is the network gateway for all instances, then the cloud controller must support the total amount of traffic that travels between your cloud and the public Internet." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_cloud_controller.xml:776(para) -msgid "We recommend that you use a fast NIC, such as 10 Gbps. You can also choose to use two 10 Gbps NICs and bond them together. While you might not be able to get a full bonded 20 Gbps speed, different transmission streams use different NICs. For example, if the cloud controller transfers two images, each image uses a different NIC and gets a full 10 Gbps of bandwidth." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/section_arch_example-neutron.xml:490(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0101.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. 
-#: ./doc/openstack-ops/section_arch_example-neutron.xml:514(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0102.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/section_arch_example-neutron.xml:536(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0103.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/section_arch_example-neutron.xml:546(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0104.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/section_arch_example-neutron.xml:556(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0105.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/section_arch_example-neutron.xml:566(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_0106.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:16(title) -msgid "Example Architecture—OpenStack Networking" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:18(para) -msgid "This chapter provides an example architecture using OpenStack Networking, also known as the Neutron project, in a highly available environment." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:23(title) ./doc/openstack-ops/section_arch_example-nova.xml:27(title) -msgid "Overview" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:25(para) -msgid "A highly-available environment can be put into place if you require an environment that can scale horizontally, or want your cloud to continue to be operational in case of node failure. 
This example architecture has been written based on the current default feature set of OpenStack Havana, with an emphasis on high availability." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:38(title) ./doc/openstack-ops/section_arch_example-nova.xml:62(title) -msgid "Components" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:47(th) ./doc/openstack-ops/section_arch_example-nova.xml:71(th) -msgid "Component" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:49(th) ./doc/openstack-ops/section_arch_example-nova.xml:73(th) -msgid "Details" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:55(para) ./doc/openstack-ops/section_arch_example-nova.xml:79(para) -msgid "OpenStack release" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:61(para) ./doc/openstack-ops/section_arch_example-nova.xml:85(para) -msgid "Host operating system" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:63(para) -msgid "Red Hat Enterprise Linux 6.5" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:67(para) ./doc/openstack-ops/section_arch_example-nova.xml:93(para) -msgid "OpenStack package repository" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:69(link) -msgid "Red Hat Distributed OpenStack (RDO)" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:74(para) ./doc/openstack-ops/section_arch_example-nova.xml:101(para) -msgid "Hypervisor" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:82(para) ./doc/openstack-ops/section_arch_example-neutron.xml:176(term) -msgid "MySQL" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:86(para) ./doc/openstack-ops/section_arch_example-nova.xml:113(para) -msgid "Message queue" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:88(para) ./doc/openstack-ops/section_arch_example-neutron.xml:188(term) -msgid "Qpid" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:92(para) ./doc/openstack-ops/section_arch_example-nova.xml:120(para) -msgid "Networking service" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:98(para) -msgid "Tenant Network Separation" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:100(para) ./doc/openstack-ops/section_arch_example-neutron.xml:208(term) -msgid "VLAN" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:104(para) -msgid "Image service back end" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:110(para) -msgid "Identity driver" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:112(para) ./doc/openstack-ops/section_arch_example-nova.xml:147(para) -msgid "SQL" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:116(para) -msgid "Block Storage back end" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:125(title) ./doc/openstack-ops/section_arch_example-nova.xml:250(title) -msgid "Rationale" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:127(para) -msgid "This example architecture has been selected based on the current default feature set of OpenStack Havana, with an emphasis on high availability. 
This architecture is currently being deployed in an internal Red Hat OpenStack cloud and used to run hosted and shared services, which by their nature must be highly available." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:138(para) -msgid "This architecture's components have been selected for the following reasons:" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:143(term) -msgid "Red Hat Enterprise Linux" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:146(para) -msgid "You must choose an operating system that can run on all of the physical nodes. This example architecture is based on Red Hat Enterprise Linux, which offers reliability, long-term support, and certified testing, and is hardened. Enterprise customers, now moving into OpenStack usage, typically require these advantages." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:156(term) -msgid "RDO" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:159(para) -msgid "The Red Hat Distributed OpenStack package offers an easy way to download the most current OpenStack release that is built for the Red Hat Enterprise Linux platform." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:169(para) -msgid "KVM is the supported hypervisor of choice for Red Hat Enterprise Linux (and is included in the distribution). It is feature complete and free from licensing charges and restrictions." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:179(para) -msgid "MySQL is used as the database back end for all databases in the OpenStack environment. MySQL is the supported database of choice for Red Hat Enterprise Linux (and is included in the distribution); the database is open source, scalable, and handles memory well." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:191(para) -msgid "Apache Qpid offers 100 percent compatibility with the Advanced Message Queuing Protocol standard, and its broker is available for both C++ and Java." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:201(para) -msgid "OpenStack Networking offers sophisticated networking functionality, including Layer 2 (L2) network segregation and provider networks." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:211(para) -msgid "Using a virtual local area network offers broadcast control, security, and physical layer transparency. If needed, use VXLAN to extend your address space." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:221(para) -msgid "GlusterFS offers scalable storage. As your environment grows, you can continue to add more storage nodes (instead of being restricted, for example, by an expensive storage array)." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:232(title) ./doc/openstack-ops/section_arch_example-nova.xml:394(title) -msgid "Detailed Description" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:235(title) ./doc/openstack-ops/section_arch_example-neutron.xml:248(caption) -msgid "Node types" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:237(para) -msgid "This section gives you a breakdown of the different nodes that make up the OpenStack environment. A node is a physical machine that is provisioned with an operating system and runs a defined software stack on top of it. 
 provides node descriptions and specifications." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:258(th) -msgid "Type" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:262(th) -msgid "Example hardware" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:268(td) -msgid "Controller" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:270(para) -msgid "Controller nodes are responsible for running the management software services needed for the OpenStack environment to function. These nodes:" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:276(para) -msgid "Provide the front door that people access as well as the API services that all other components in the environment talk to." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:282(para) -msgid "Run a number of services in a highly available fashion, utilizing Pacemaker and HAProxy to provide a virtual IP and load-balancing functions so all controller nodes are being used." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:289(para) -msgid "Supply highly available \"infrastructure\" services, such as MySQL and Qpid, that underpin all the services." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:295(para) -msgid "Provide what is known as \"persistent storage\" through services run on the host as well. This persistent storage is backed onto the storage nodes for reliability." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:302(para) -msgid "See ." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:303(para) ./doc/openstack-ops/section_arch_example-neutron.xml:327(para) ./doc/openstack-ops/section_arch_example-neutron.xml:369(para) ./doc/openstack-ops/section_arch_example-neutron.xml:384(para) -msgid "Model: Dell R620" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:303(para) ./doc/openstack-ops/section_arch_example-neutron.xml:341(para) ./doc/openstack-ops/section_arch_example-neutron.xml:384(para) -msgid "CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:304(para) ./doc/openstack-ops/section_arch_example-neutron.xml:370(para) ./doc/openstack-ops/section_arch_example-neutron.xml:385(para) -msgid "Memory: 32 GB" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:304(para) ./doc/openstack-ops/section_arch_example-neutron.xml:370(para) -msgid "Disk: two 300 GB 10000 RPM SAS Disks" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:305(para) ./doc/openstack-ops/section_arch_example-neutron.xml:345(para) ./doc/openstack-ops/section_arch_example-neutron.xml:386(para) -msgid "Network: two 10G network ports" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:310(para) -msgid "Compute nodes run the virtual machine instances in OpenStack. They:" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:315(para) -msgid "Run the bare minimum of services needed to facilitate these instances." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:319(para) -msgid "Use local storage on the node for the virtual machines so that no VM migration or instance recovery at node failure is possible." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:324(phrase) -msgid "See ." 
-msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:327(para) -msgid "CPU: 2x Intel® Xeon® CPU E5-2650 0 @ 2.00 GHz" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:328(para) -msgid "Memory: 128 GB" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:329(para) -msgid "Disk: two 600 GB 10000 RPM SAS Disks" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:329(para) -msgid "Network: four 10G network ports (for future-proofing expansion)" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:333(td) -msgid "Storage" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:334(para) -msgid "Storage nodes store all the data required for the environment, including disk images in the Image service library, and the persistent storage volumes created by the Block Storage service. Storage nodes use GlusterFS technology to keep the data highly available and scalable." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:339(para) -msgid "See ." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:341(para) -msgid "Model: Dell R720xd" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:342(para) -msgid "Memory: 64 GB" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:343(para) -msgid "Disk: two 500 GB 7200 RPM SAS Disks and twenty-four 600 GB 10000 RPM SAS Disks" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:344(para) -msgid "RAID controller: PERC H710P Integrated RAID Controller, 1 GB NV Cache" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:349(td) -msgid "Network" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:350(para) -msgid "Network nodes are responsible for doing all the virtual networking needed for people to create public or private networks and uplink their virtual machines into external networks. Network nodes:" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:357(para) -msgid "Form the only ingress and egress point for instances running on top of OpenStack." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:361(para) -msgid "Run all of the environment's networking services, with the exception of the networking API service (which runs on the controller node)." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:367(para) -msgid "See ." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:369(para) -msgid "CPU: 1x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:371(para) -msgid "Network: five 10G network ports" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:375(td) -msgid "Utility" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:376(para) -msgid "Utility nodes are used by internal administration staff only to provide a number of basic system administration functions needed to get the environment up and running and to maintain the hardware, OS, and software on which it runs." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:379(para) -msgid "These nodes run services such as provisioning, configuration management, monitoring, or GlusterFS management software. They are not required to scale, although these machines are usually backed up." 
-msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:385(para) -msgid "Disk: two 500 GB 7200 RPM SAS Disks" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:394(title) -msgid "Networking layout" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:396(para) -msgid "The network contains all the management devices for all hardware in the environment (for example, by including Dell iDRAC7 devices for the hardware nodes, and management interfaces for network switches). The network is accessed by internal staff only when diagnosing or recovering a hardware issue." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:403(title) -msgid "OpenStack internal network" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:405(para) -msgid "This network is used for OpenStack management functions and traffic, including services needed for the provisioning of physical nodes (PXE, TFTP, kickstart), traffic between various OpenStack node types using OpenStack APIs and messages (for example, nova-compute talking to keystone or cinder-volume talking to nova-api), and all traffic for storage data to the storage layer underneath via the Gluster protocol. All physical nodes have at least one network interface (typically eth0) in this network. This network is only accessible from other VLANs on port 22 (for SSH access to manage machines)." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:423(title) -msgid "Public Network" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:427(para) -msgid "IP addresses for public-facing interfaces on the controller nodes (through which end users will access the OpenStack services)" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:433(para) -msgid "A range of publicly routable, IPv4 network addresses to be used by OpenStack Networking for floating IPs. You may be restricted in your access to IPv4 addresses; a large range of IPv4 addresses is not necessary." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:440(para) -msgid "Routers for private networks created within OpenStack." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:425(para) -msgid "This network is a combination of: " -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:445(para) -msgid "This network is connected to the controller nodes so users can access the OpenStack interfaces, and connected to the network nodes to provide VMs with publicly routable traffic functionality. The network is also connected to the utility machines so that any utility services that need to be made public (such as system monitoring) can be accessed." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:454(title) -msgid "VM traffic network" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:456(para) -msgid "This is a closed network that is not publicly routable and is simply used as a private, internal network for traffic between virtual machines in OpenStack, and between the virtual machines and the network nodes that provide L3 routes out to the public network (and floating IPs for connections back in to the VMs). Because this is a closed network, we are using a different address space to the others to clearly define the separation. Only Compute and OpenStack Networking nodes need to be connected to this network." -msgstr ""
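A quick way to verify the "different address space" point above is to check that the planned ranges do not overlap. The ranges in the sketch below are hypothetical examples, not the ones used in this deployment.

    # Illustrative check that the networks described above use
    # non-overlapping address spaces (example ranges are hypothetical).
    import ipaddress
    from itertools import combinations

    nets = {
        "internal":   ipaddress.ip_network("10.10.0.0/16"),
        "public":     ipaddress.ip_network("203.0.113.0/24"),
        "vm-traffic": ipaddress.ip_network("172.16.0.0/12"),
    }

    for (a, na), (b, nb) in combinations(nets.items(), 2):
        assert not na.overlaps(nb), f"{a} overlaps {b}"
    print("address plan is cleanly separated")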
- -#: ./doc/openstack-ops/section_arch_example-neutron.xml:468(title) -msgid "Node connectivity" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:470(para) -msgid "The following section details how the nodes are connected to the different networks (see ) and what other considerations need to take place (for example, bonding) when connecting nodes to the networks." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:476(title) -msgid "Initial deployment" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:478(para) -msgid "Initially, the connection setup should revolve around keeping the connectivity simple and straightforward in order to minimize deployment complexity and time to deploy. The deployment shown in aims to have one 10G connection available to all compute nodes, while still leveraging bonding on appropriate nodes for maximum performance." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:486(title) -msgid "Basic node deployment" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:497(title) -msgid "Connectivity for maximum performance" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:499(para) -msgid "If the networking performance of the basic layout is not enough, you can move to , which provides two 10G network links to all instances in the environment as well as providing more network bandwidth to the storage layer." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:510(title) -msgid "Performance node deployment" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:522(title) -msgid "Node diagrams" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:524(para) -msgid "The following diagrams ( through ) include logical information about the different types of nodes, indicating what services will be running on top of them and how they interact with each other. The diagrams also illustrate how the availability and scalability of services are achieved." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:542(title) -msgid "Compute node" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:552(title) -msgid "Network node" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:562(title) -msgid "Storage node" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:574(title) -msgid "Example Component Configuration" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:695(para) -msgid "Because Pacemaker is cluster software, the software itself handles its own availability, leveraging corosync and cman underneath." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:702(para) -msgid "If you use the GlusterFS native client, no virtual IP is needed, since the client knows all about nodes after initial connection and automatically routes around failures on the client side." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:709(para) -msgid "If you use the NFS or SMB adaptor, you will need a virtual IP on which to mount the GlusterFS volumes." -msgstr ""
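The client-side failover described for the GlusterFS native client can be illustrated with a toy connection helper: a client that knows every storage node can skip a failed one without a virtual IP. The hostnames are hypothetical, and the port is assumed to be the glusterd management port; the real native client does all of this transparently at mount time.

    # Toy illustration of client-side failover (hypothetical stand-in;
    # not how the GlusterFS native client is actually implemented).
    import socket

    STORAGE_NODES = ["storage1.example.com", "storage2.example.com"]

    def connect_first_available(nodes, port=24007, timeout=2.0):
        for host in nodes:
            try:
                return socket.create_connection((host, port), timeout=timeout)
            except OSError:
                continue  # node down: route around it, try the next
        raise RuntimeError("no storage node reachable")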
- -#: ./doc/openstack-ops/section_arch_example-neutron.xml:691(para) -msgid "Pacemaker is the clustering software used to ensure the availability of services running on the controller and network nodes: " -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:825(para) -msgid "Configured to use Qpid, qpid_heartbeat = 10, configured to use Memcached for caching, configured to use libvirt, configured to use neutron." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:833(para) -msgid "Configured nova-consoleauth to use Memcached for session management (so that it can have multiple copies and run in a load balancer)." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:837(para) -msgid "The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are run on all controller nodes, ensuring at least one instance will be available in case of node failure. Compute is also behind HAProxy, which detects when the software fails and routes requests around the failing instance." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:842(para) -msgid "Nova-compute and nova-conductor services, which run on the compute nodes, are only needed to run services on that node, so availability of those services is coupled tightly to the nodes that are available. As long as a compute node is up, it will have the needed services running on top of it." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:895(para) -msgid "The OpenStack Networking service is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects if the software fails and routes requests around the failing instance." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:899(para) -msgid "OpenStack Networking's ovs-agent, l3-agent, dhcp-agent, and metadata-agent services run on the network nodes, as LSB resources inside of Pacemaker. This means that in the case of network node failure, services are kept running on another node. Finally, the ovs-agent service is also run on all compute nodes, and in case of compute node failure, the other nodes will continue to function using the copy of the service running on them." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-neutron.xml:576(para) -msgid " and include example configuration and considerations for both third-party and OpenStack components:
Third-party component configuration (columns: Component, Tuning, Availability, Scalability)
Component: MySQL. Tuning: binlog-format = row. Availability: Master/master replication. However, both nodes are not used at the same time. Replication keeps all nodes as close to being up to date as possible (although the asynchronous nature of the replication means a fully consistent state is not possible). Connections to the database only happen through a Pacemaker virtual IP, ensuring that most problems that occur with master-master replication can be avoided. Scalability: Not heavily considered. Once load on the MySQL server increases enough that scalability needs to be considered, multiple masters or a master/slave setup can be used.
Component: Qpid. Tuning: max-connections=1000, worker-threads=20, connection-backlog=10, SASL security enabled with SASL-BASIC authentication. Availability: Qpid is added as a resource to the Pacemaker software that runs on controller nodes where Qpid is situated. This ensures only one Qpid instance is running at one time, and the node with the Pacemaker virtual IP will always be the node running Qpid. Scalability: Not heavily considered. However, Qpid can be changed to run on all controller nodes for scalability and availability purposes, and removed from Pacemaker.
Component: HAProxy. Tuning: maxconn 3000. Availability: HAProxy is a software layer-7 load balancer used to front all clustered OpenStack API components and do SSL termination. HAProxy can be added as a resource to the Pacemaker software that runs on the controller nodes where HAProxy is situated. This ensures that only one HAProxy instance is running at one time, and the node with the Pacemaker virtual IP will always be the node running HAProxy. Scalability: Not considered. HAProxy has small enough performance overheads that a single instance should scale enough for this level of workload. If extra scalability is needed, keepalived or other Layer-4 load balancing can be introduced to be placed in front of multiple copies of HAProxy.
Component: Memcached. Tuning: MAXCONN=\"8192\", CACHESIZE=\"30457\". Availability: Memcached is a fast in-memory key-value cache software that is used by OpenStack components for caching data and increasing performance. Memcached runs on all controller nodes, ensuring that should one go down, another instance of Memcached is available. Scalability: Not considered. A single instance of Memcached should be able to scale to the desired workloads. If scalability is desired, HAProxy can be placed in front of Memcached (in raw TCP mode) to utilize multiple Memcached instances for scalability. However, this might cause cache consistency issues.
Component: Pacemaker. Tuning: Configured to use corosync and cman as a cluster communication stack/quorum manager, and as a two-node cluster. Scalability: If more nodes need to be made cluster aware, Pacemaker can scale to 64 nodes.
Component: GlusterFS. Tuning: glusterfs performance profile \"virt\" enabled on all volumes. Volumes are set up in two-node replication. Availability: GlusterFS is a clustered file system that is run on the storage nodes to provide persistent scalable data storage in the environment. Because all connections to gluster use the gluster native mount points, the gluster instances themselves provide availability and failover functionality. Scalability: The scalability of GlusterFS storage can be achieved by adding in more storage volumes.
OpenStack component configuration (columns: Component, Node type, Tuning, Availability, Scalability)
Component: Dashboard (horizon). Node type: Controller. Tuning: Configured to use Memcached as a session store, neutron support is enabled, can_set_mount_point = False. Availability: The dashboard is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance. Scalability: The dashboard is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for the dashboard as more nodes are added.
Component: Identity (keystone). Node type: Controller. Tuning: Configured to use Memcached for caching and PKI for tokens. Availability: Identity is run on all controller nodes, ensuring at least one instance will be available in case of node failure. Identity also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance. Scalability: Identity is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Identity as more nodes are added.
Component: Image service (glance). Node type: Controller. Tuning: /var/lib/glance/images is a GlusterFS native mount to a Gluster volume off the storage layer. Availability: The Image service is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance. Scalability: The Image service is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for the Image service as more nodes are added.
Component: Compute (nova). Node type: Controller, Compute. Scalability: The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Compute as more nodes are added. The scalability of services running on the compute nodes (compute, conductor) is achieved linearly by adding in more compute nodes.
Component: Block Storage (cinder). Node type: Controller. Tuning: Configured to use Qpid, qpid_heartbeat = 10, configured to use a Gluster volume from the storage layer as the back end for Block Storage, using the Gluster native client. Availability: Block Storage API, scheduler, and volume services are run on all controller nodes, ensuring at least one instance will be available in case of node failure. Block Storage also sits behind HAProxy, which detects if the software fails and routes requests around the failing instance. Scalability: Block Storage API, scheduler, and volume services are run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Block Storage as more nodes are added.
Component: OpenStack Networking (neutron). Node type: Controller, Compute, Network. Tuning: Configured to use Qpid, qpid_heartbeat = 10, kernel namespace support enabled, tenant_network_type = vlan, allow_overlapping_ips = true, bridge_uplinks = br-ex:em2, bridge_mappings = physnet1:br-ex. Scalability: The OpenStack Networking server service is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for OpenStack Networking as more nodes are added. Scalability of services running on the network nodes is not currently supported by OpenStack Networking, so they are not considered. One copy of the services should be sufficient to handle the workload. Scalability of the ovs-agent running on compute nodes is achieved by adding in more compute nodes as necessary." -msgstr ""
" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:12(title) -msgid "Upstream OpenStack" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:14(para) -msgid "OpenStack is founded on a thriving community that is a source of help and welcomes your contributions. This chapter details some of the ways you can interact with the others involved." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:19(title) -msgid "Getting Help" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:21(para) -msgid "There are several avenues available for seeking assistance. The quickest way is to help the community help you. Search the Q&A sites, mailing list archives, and bug lists for issues similar to yours. If you can't find anything, follow the directions for reporting bugs or use one of the channels for support, which are listed below.mailing listsOpenStackdocumentationhelp, resources fortroubleshootinggetting helpOpenStack communitygetting help from" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:44(para) -msgid "Your first port of call should be the official OpenStack documentation, found on . You can get questions answered on ." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:49(para) -msgid "Mailing lists are also a great place to get help. The wiki page has more information about the various lists. As an operator, the main lists you should be aware of are:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:56(link) -msgid "General list" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:60(para) -msgid "openstack@lists.openstack.org. The scope of this list is the current state of OpenStack. This is a very high-traffic mailing list, with many, many emails per day." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:67(link) -msgid "Operators list" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:71(para) -msgid "openstack-operators@lists.openstack.org. This list is intended for discussion among existing OpenStack cloud operators, such as yourself. Currently, this list is relatively low traffic, on the order of one email a day." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:79(link) -msgid "Development list" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:83(para) -msgid "openstack-dev@lists.openstack.org. The scope of this list is the future state of OpenStack. This is a high-traffic mailing list, with multiple emails per day." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:90(para) -msgid "We recommend that you subscribe to the general list and the operator list, although you must set up filters to manage the volume for the general list. You'll also find links to the mailing list archives on the mailing list wiki page, where you can search through the discussions." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:96(para) -msgid "Multiple IRC channels are available for general questions and developer discussions. The general discussion channel is #openstack on irc.freenode.net." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:103(title) -msgid "Reporting Bugs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:105(para) -msgid "As an operator, you are in a very good position to report unexpected behavior with your cloud. Since OpenStack is flexible, you may be the only individual to report a particular issue. 
Every issue is important to fix, so it is essential to learn how to easily submit a bug report." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:121(para) -msgid "All OpenStack projects use Launchpad for bug tracking. You'll need to create an account on Launchpad before you can submit a bug report." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:126(para) -msgid "Once you have a Launchpad account, reporting a bug is as simple as identifying the project or projects that are causing the issue. Sometimes this is more difficult than expected, but those working on the bug triage are happy to help relocate issues if they are not in the right place initially:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:134(para) -msgid "Report a bug in nova." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:139(para) -msgid "Report a bug in python-novaclient." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:144(para) -msgid "Report a bug in swift." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:149(para) -msgid "Report a bug in python-swiftclient." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:154(para) -msgid "Report a bug in glance." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:159(para) -msgid "Report a bug in python-glanceclient." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:164(para) -msgid "Report a bug in keystone." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:169(para) -msgid "Report a bug in python-keystoneclient." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:174(para) -msgid "Report a bug in neutron." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:179(para) -msgid "Report a bug in python-neutronclient." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:184(para) -msgid "Report a bug in cinder." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:189(para) -msgid "Report a bug in python-cinderclient." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:194(para) -msgid "Report a bug in manila." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:199(para) -msgid "Report a bug in python-manilaclient." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:204(para) -msgid "Report a bug in python-openstackclient." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:209(para) -msgid "Report a bug in horizon." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:214(para) -msgid "Report a bug with the documentation." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:219(para) -msgid "Report a bug with the API documentation." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:224(para) -msgid "To write a good bug report, the following process is essential. First, search for the bug to make sure there is no bug already filed for the same issue. If you find one, be sure to click on \"This bug affects X people. Does this bug affect you?\" If you can't find the issue, then enter the details of your report. 
It should at least include the following (a minimal example report is sketched at the end of this section):" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:232(para) -msgid "The release, or milestone, or commit ID corresponding to the software that you are running" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:237(para) -msgid "The operating system and version where you've identified the bug" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:242(para) -msgid "Steps to reproduce the bug, including what went wrong" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:246(para) -msgid "A description of the expected results versus what you actually saw" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:251(para) -msgid "Portions of your log files, trimmed to only the relevant excerpts" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:256(para) -msgid "When you do this, the bug is created with:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:260(para) -msgid "Status: New" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:264(para) -msgid "In the bug comments, you can contribute instructions on how to fix a given bug, and set it to Triaged. Or you can directly fix it: assign the bug to yourself, set it to In progress, branch the code, implement the fix, and propose your change for merging. But let's not get ahead of ourselves; there are bug triaging tasks as well." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:272(title) -msgid "Confirming and Prioritizing" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:274(para) -msgid "This stage is about checking that a bug is real and assessing its impact. Some of these steps require bug supervisor rights (usually limited to core teams). If the bug lacks the information needed to reproduce it or to assess its importance, it is set to:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:281(para) -msgid "Status: Incomplete" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:285(para) -msgid "Once you have reproduced the issue (or are 100 percent confident that this is indeed a valid bug) and have permissions to do so, set:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:291(para) -msgid "Status: Confirmed" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:295(para) -msgid "Core developers also prioritize the bug, based on its impact:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:300(para) -msgid "Importance: <Bug impact>" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:304(para) -msgid "The bug impacts are categorized as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:310(para) -msgid "Critical if the bug prevents a key feature from working properly (regression) for all users (or without a simple workaround) or results in data loss" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:316(para) -msgid "High if the bug prevents a key feature from working properly for some users (or with a workaround)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:321(para) -msgid "Medium if the bug prevents a secondary feature from working properly" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:326(para) -msgid "Low if the bug is mostly cosmetic" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:330(para) -msgid "Wishlist if the bug is not really a bug but rather a welcome change in behavior" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:335(para) -msgid "If the bug contains the solution, or a patch, set the bug status to Triaged." 
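As referenced above, a minimal sketch of a report that covers the checklist; every value here is invented purely for illustration:

    Summary: instances lose network connectivity under sustained traffic
    Release/commit: 2013.2.2 (Havana), installed from distribution packages
    Environment: Ubuntu 12.04 LTS, KVM, nova-network in FlatDHCP multi-host mode
    Steps to reproduce: launch an instance, then generate heavy inbound traffic for ~30 minutes
    Expected vs. actual: instance stays reachable vs. instance drops off the network
    Log excerpt: the relevant lines from /var/log/nova/nova-network.log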
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:340(title) -msgid "Bug Fixing" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:342(para) -msgid "At this stage, a developer works on a fix. During that time, to avoid duplicating the work, the developer should set:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:347(para) -msgid "Status: In Progress" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:351(para) -msgid "Assignee: <yourself>" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:355(para) -msgid "When the fix is ready, the developer proposes a change and gets the change reviewed." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:360(title) -msgid "After the Change Is Accepted" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:362(para) -msgid "After the change is reviewed, accepted, and lands in master, it automatically moves to:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:367(para) -msgid "Status: Fix Committed" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:371(para) -msgid "When the fix makes it into a milestone or release branch, it automatically moves to:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:376(para) -msgid "Milestone: Milestone the bug was fixed in" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:380(para) -msgid "Status: Fix Released" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:387(title) -msgid "Join the OpenStack Community" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:389(para) -msgid "Since you've made it this far in the book, you should consider becoming an official individual member of the community and join the OpenStack Foundation. The OpenStack Foundation is an independent body providing shared resources to help achieve the OpenStack mission by protecting, empowering, and promoting OpenStack software and the community around it, including users, developers, and the entire ecosystem. We all share the responsibility to make this community the best it can possibly be, and signing up to be a member is the first step to participating. Like the software, individual membership within the OpenStack Foundation is free and accessible to anyone.OpenStack communityjoining" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:407(title) -msgid "How to Contribute to the Documentation" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:409(para) -msgid "OpenStack documentation efforts encompass operator and administrator docs, API docs, and user docs.OpenStack communitycontributing to" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:416(para) -msgid "The genesis of this book was an in-person event, but now that the book is in your hands, we want you to contribute to it. OpenStack documentation follows the coding principles of iterative work, with bug logging, investigating, and fixing." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:421(para) -msgid "Just like the code, is updated constantly using the Gerrit review system, with source stored in git.openstack.org in the openstack-manuals repository and the api-site repository." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:428(para) -msgid "To review the documentation before it's published, go to the OpenStack Gerrit server at  and search for project:openstack/openstack-manuals or project:openstack/api-site." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:435(para) -msgid "See the How To Contribute page on the wiki for more information on the steps you need to take to submit your first documentation review or change." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:441(title) -msgid "Security Information" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:443(para) -msgid "As a community, we take security very seriously and follow a specific process for reporting potential issues. We vigilantly pursue fixes and regularly eliminate exposures. You can report security issues you discover through this specific process. The OpenStack Vulnerability Management Team is a very small group of experts in vulnerability management drawn from the OpenStack community. The team's job is facilitating the reporting of vulnerabilities, coordinating security fixes and handling progressive disclosure of the vulnerability information. Specifically, the team is responsible for the following functions:vulnerability tracking/managementsecurity issuesreporting/fixing vulnerabilitiesOpenStack communitysecurity information" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:466(term) -msgid "Vulnerability management" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:469(para) -msgid "All vulnerabilities discovered by community members (or users) can be reported to the team." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:475(term) -msgid "Vulnerability tracking" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:478(para) -msgid "The team will curate a set of vulnerability related issues in the issue tracker. Some of these issues are private to the team and the affected product leads, but once remediation is in place, all vulnerabilities are public." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:486(term) -msgid "Responsible disclosure" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:489(para) -msgid "As part of our commitment to work with the security community, the team ensures that proper credit is given to security researchers who responsibly report issues in OpenStack." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:496(para) -msgid "We provide two ways to report issues to the OpenStack Vulnerability Management Team, depending on how sensitive the issue is:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:501(para) -msgid "Open a bug in Launchpad and mark it as a \"security bug.\" This makes the bug private and accessible to only the Vulnerability Management Team." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:507(para) -msgid "If the issue is extremely sensitive, send an encrypted email to one of the team's members. Find their GPG keys at OpenStack Security." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:514(para) -msgid "You can find the full list of security-oriented teams you can join at Security Teams. The vulnerability management process is fully documented at Vulnerability Management." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:522(title) -msgid "Finding Additional Information" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_upstream.xml:524(para) -msgid "In addition to this book, there are many other sources of information about OpenStack. The OpenStack website is a good starting point, with OpenStack Docs and OpenStack API Docs providing technical documentation about OpenStack. 
The OpenStack wiki contains a lot of general information that cuts across the OpenStack projects, including a list of recommended tools. Finally, there are a number of blogs aggregated at Planet OpenStack." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:10(title) -msgid "Tales From the Cryp^H^H^H^H Cloud" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:12(para) -msgid "Herein lies a selection of tales from OpenStack cloud operators. Read, and learn from their wisdom." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:16(title) -msgid "Double VLAN" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:17(para) -msgid "I was on-site in Kelowna, British Columbia, Canada, setting up a new OpenStack cloud. The deployment was fully automated: Cobbler deployed the OS on the bare metal, bootstrapped it, and Puppet took over from there. I had run the deployment scenario so many times in practice and took for granted that everything was working." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:23(para) -msgid "On my last day in Kelowna, I was on a conference call from my hotel. In the background, I was fooling around on the new cloud. I launched an instance and logged in. Everything looked fine. Out of boredom, I ran a command and all of a sudden the instance locked up." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:29(para) -msgid "Thinking it was just a one-off issue, I terminated the instance and launched a new one. By then, the conference call ended and I was off to the data center." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:32(para) -msgid "At the data center, I was finishing up some tasks and remembered the lock-up. I logged into the new instance and ran the same command again. It worked. Phew. I decided to run it one more time. It locked up." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:36(para) -msgid "After reproducing the problem several times, I came to the unfortunate conclusion that this cloud did indeed have a problem. Even worse, my time was up in Kelowna and I had to return to Calgary." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:40(para) -msgid "Where do you even begin troubleshooting something like this? An instance that just randomly locks up when a command is issued. Is it the image? Nope, it happens on all images. Is it the compute node? Nope, all nodes. Is the instance locked up? No! New SSH connections work just fine!" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:45(para) -msgid "We reached out for help. A networking engineer suggested it was an MTU issue. Great! MTU! Something to go on! What's MTU and why would it cause a problem?" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:48(para) -msgid "MTU is the maximum transmission unit. It specifies the maximum number of bytes that the interface accepts for each packet. If two interfaces have two different MTUs, bytes might get chopped off and weird things happen, such as random session lockups." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:54(para) -msgid "Not all packets have a size of 1500. Running a short command over SSH might only create a single packet of less than 1500 bytes. However, running a command with heavy output requires several packets of 1500 bytes." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:60(para) -msgid "OK, so where is the MTU issue coming from? Why haven't we seen this in any other deployment? What's new in this situation? 
Well, new data center, new uplink, new switches, new model of switches, new servers, first time using this model of servers… so, basically everything was new. Wonderful. We toyed around with raising the MTU in various areas: the switches, the NICs on the compute nodes, the virtual NICs in the instances; we even had the data center raise the MTU for our uplink interface. Some changes worked, some didn't. This line of troubleshooting didn't feel right, though. We shouldn't have to be changing the MTU in these areas." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:72(para) -msgid "As a last resort, our network admin (Alvaro) and I sat down with four terminal windows, a pencil, and a piece of paper. In one window, we ran ping. In the second window, we ran a packet capture on the cloud controller; in the third, on the compute node; and the fourth captured on the instance. For background, this cloud was a multi-node, non-multi-host setup." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:80(para) -msgid "One cloud controller acted as a gateway to all compute nodes. VlanManager was used for the network config. This means that the cloud controller and all compute nodes had a different VLAN for each OpenStack project. We used the -s option of ping to change the packet size. We watched as sometimes packets would fully return, sometimes they'd only make it out and never back in, and sometimes the packets would stop at a random point. We changed the capture to start displaying the hex dump of each packet. We pinged between every combination of outside, controller, compute, and instance." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:92(para) -msgid "Finally, Alvaro noticed something. When a packet from the outside hits the cloud controller, it should not be tagged with a VLAN. We verified this as true. When the packet went from the cloud controller to the compute node, it should have a VLAN tag only if it was destined for an instance. This was still true. When the ping reply was sent from the instance, it should be in a VLAN. True. When it came back to the cloud controller and on its way out to the Internet, it should no longer have a VLAN. False. Uh oh. It looked as though the VLAN part of the packet was not being removed." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:103(para) -msgid "That made no sense." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:104(para) -msgid "While bouncing this idea around in our heads, I was randomly typing commands on the compute node: " -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:111(para) -msgid "\"Hey Alvaro, can you run a VLAN on top of a VLAN?\"" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:113(para) -msgid "\"If you did, you'd add an extra 4 bytes to the packet.\"" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:115(para) -msgid "Then it all made sense… " -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:119(para) -msgid "In nova.conf, vlan_interface specifies which interface OpenStack should attach all VLANs to. The correct setting should have been (reconstructed in the sketch below): " -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:123(para) -msgid "As this would be the server's bonded NIC." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:124(para) -msgid "vlan20 is the VLAN that the data center gave us for outgoing Internet access. It's a correct VLAN and is also attached to bond0." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:127(para) -msgid "By mistake, I configured OpenStack to attach all tenant VLANs to vlan20 instead of bond0, thereby stacking one VLAN on top of another. 
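The elided setting can be reconstructed from the story itself, which names both the option and the two interfaces involved; a minimal sketch:

    # /etc/nova/nova.conf
    # Intended: attach all tenant VLANs to the physical bonded NIC
    vlan_interface=bond0
    # The mistake: attaching them to vlan20, itself a VLAN riding on bond0,
    # which stacked one VLAN on top of another
    # vlan_interface=vlan20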
This added an extra 4 bytes to each packet and caused a packet of 1504 bytes to be sent out, which caused problems when it arrived at an interface that only accepted 1500." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:133(para) -msgid "As soon as this setting was fixed, everything worked." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:137(title) -msgid "\"The Issue\"" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:138(para) -msgid "At the end of August 2012, a post-secondary school in Alberta, Canada, migrated its infrastructure to an OpenStack cloud. As luck would have it, within the first day or two of it running, one of their servers just disappeared from the network. Blip. Gone." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:143(para) -msgid "After restarting the instance, everything was back up and running. We reviewed the logs and saw that at some point, network communication stopped and then everything went idle. We chalked this up to a random occurrence." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:148(para) -msgid "A few nights later, it happened again." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:149(para) -msgid "We reviewed both sets of logs. The one thing that stood out the most was DHCP. At the time, OpenStack, by default, set DHCP leases for one minute (it's now two minutes). This means that every instance contacts the cloud controller (DHCP server) to renew its fixed IP. For some reason, this instance could not renew its IP. We correlated the instance's logs with the logs on the cloud controller and put together a conversation:" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:160(para) -msgid "Instance tries to renew IP." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:163(para) -msgid "Cloud controller receives the renewal request and sends a response." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:167(para) -msgid "Instance \"ignores\" the response and re-sends the renewal request." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:171(para) -msgid "Cloud controller receives the second request and sends a new response." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:175(para) -msgid "Instance begins sending a renewal request to 255.255.255.255 since it hasn't heard back from the cloud controller." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:180(para) -msgid "The cloud controller receives the 255.255.255.255 request and sends a third response." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:185(para) -msgid "The instance finally gives up." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:188(para) -msgid "With this information in hand, we were sure that the problem had to do with DHCP. We thought that for some reason, the instance wasn't getting a new IP address and with no IP, it shut itself off from the network." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:192(para) -msgid "A quick Google search turned up this: DHCP lease errors in VLAN mode (https://lists.launchpad.net/openstack/msg11696.html), which further supported our DHCP theory." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:197(para) -msgid "An initial idea was to just increase the lease time. If the instance renewed only once a week, the chances of this problem happening would be tremendously smaller than with renewal every minute. This didn't solve the problem, though; it was just covering it up." 
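For reference, the band-aid considered here maps to a single nova-network setting; a sketch, assuming the dhcp_lease_time option of that era (verify the flag name against your release):

    # /etc/nova/nova.conf
    # Lengthen DHCP leases to mask, not fix, the failed renewals
    dhcp_lease_time=604800    # seconds; one week instead of a one-minute lease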
-msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:202(para) -msgid "We decided to have run on this instance and see if we could catch it in action again. Sure enough, we did." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:205(para) -msgid "The looked very, very weird. In short, it looked as though network communication stopped before the instance tried to renew its IP. Since there is so much DHCP chatter from a one minute lease, it's very hard to confirm it, but even with only milliseconds difference between packets, if one packet arrives first, it arrived first, and if that packet reported network issues, then it had to have happened before DHCP." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:213(para) -msgid "Additionally, this instance in question was responsible for a very, very large backup job each night. While \"The Issue\" (as we were now calling it) didn't happen exactly when the backup happened, it was close enough (a few hours) that we couldn't ignore it." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:218(para) -msgid "Further days go by and we catch The Issue in action more and more. We find that dhclient is not running after The Issue happens. Now we're back to thinking it's a DHCP issue. Running /etc/init.d/networking restart brings everything back up and running." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:223(para) -msgid "Ever have one of those days where all of the sudden you get the Google results you were looking for? Well, that's what happened here. I was looking for information on dhclient and why it dies when it can't renew its lease and all of the sudden I found a bunch of OpenStack and dnsmasq discussions that were identical to the problem we were seeing!" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:230(para) -msgid "Problem with Heavy Network IO and Dnsmasq (http://www.gossamer-threads.com/lists/openstack/operators/18197)" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:236(para) -msgid "instances losing IP address while running, due to No DHCPOFFER (http://www.gossamer-threads.com/lists/openstack/dev/14696)" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:242(para) -msgid "Seriously, Google." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:243(para) -msgid "This bug report was the key to everything: KVM images lose connectivity with bridged network (https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/997978)" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:249(para) -msgid "It was funny to read the report. It was full of people who had some strange network problem but didn't quite explain it in the same way." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:252(para) -msgid "So it was a qemu/kvm bug." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:253(para) -msgid "At the same time of finding the bug report, a co-worker was able to successfully reproduce The Issue! How? He used to spew a ton of bandwidth at an instance. Within 30 minutes, the instance just disappeared from the network." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:258(para) -msgid "Armed with a patched qemu and a way to reproduce, we set out to see if we've finally solved The Issue. After 48 hours straight of hammering the instance with bandwidth, we were confident. The rest is history. You can search the bug report for \"joe\" to find my comments and actual tests." 
-msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:266(title) -msgid "Disappearing Images" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:267(para) -msgid "At the end of 2012, Cybera (a nonprofit with a mandate to oversee the development of cyberinfrastructure in Alberta, Canada) deployed an updated OpenStack cloud for their DAIR project (http://www.canarie.ca/en/dair-program/about). A few days into production, a compute node locks up. Upon rebooting the node, I checked to see what instances were hosted on that node so I could boot them on behalf of the customer. Luckily, only one instance." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:278(para) -msgid "The command wasn't working, so I used , but it immediately came back with an error saying it was unable to find the backing disk. In this case, the backing disk is the Glance image that is copied to /var/lib/nova/instances/_base when the image is used for the first time. Why couldn't it find it? I checked the directory and sure enough it was gone." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:287(para) -msgid "I reviewed the nova database and saw the instance's entry in the nova.instances table. The image that the instance was using matched what virsh was reporting, so no inconsistency there." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:291(para) -msgid "I checked Glance and noticed that this image was a snapshot that the user created. At least that was good newsthis user would have been the only user affected." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:295(para) -msgid "Finally, I checked StackTach and reviewed the user's events. They had created and deleted several snapshotsmost likely experimenting. Although the timestamps didn't match up, my conclusion was that they launched their instance and then deleted the snapshot and it was somehow removed from /var/lib/nova/instances/_base. None of that made sense, but it was the best I could come up with." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:302(para) -msgid "It turns out the reason that this compute node locked up was a hardware issue. We removed it from the DAIR cloud and called Dell to have it serviced. Dell arrived and began working. Somehow or another (or a fat finger), a different compute node was bumped and rebooted. Great." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:308(para) -msgid "When this node fully booted, I ran through the same scenario of seeing what instances were running so I could turn them back on. There were a total of four. Three booted and one gave an error. It was the same error as before: unable to find the backing disk. Seriously, what?" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:314(para) -msgid "Again, it turns out that the image was a snapshot. The three other instances that successfully started were standard cloud images. Was it a problem with snapshots? That didn't make sense." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:318(para) -msgid "A note about DAIR's architecture: /var/lib/nova/instances is a shared NFS mount. This means that all compute nodes have access to it, which includes the _base directory. Another centralized area is /var/log/rsyslog on the cloud controller. This directory collects all OpenStack logs from all compute nodes. I wondered if there were any entries for the file that is reporting: " -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:333(para) -msgid "Ah-hah! So OpenStack was deleting it. But why?" 
-msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:334(para) -msgid "A feature was introduced in Essex to periodically check and see if there were any _base files not in use. If there were, OpenStack Compute would delete them. This idea sounds innocent enough and has some good qualities to it. But how did this feature end up turned on? It was disabled by default in Essex. As it should be. It was decided to be turned on in Folsom (https://bugs.launchpad.net/nova/+bug/1029674). I cannot emphasize enough that:" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:346(emphasis) -msgid "Actions which delete things should not be enabled by default." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:349(para) -msgid "Disk space is cheap these days. Data recovery is not." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:351(para) -msgid "Secondly, DAIR's shared /var/lib/nova/instances directory contributed to the problem. Since all compute nodes have access to this directory, all compute nodes periodically review the _base directory. If there is only one instance using an image, and the node that the instance is on is down for a few minutes, it won't be able to mark the image as still in use. Therefore, the image seems like it's not in use and is deleted. When the compute node comes back online, the instance hosted on that node is unable to start." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:364(title) -msgid "The Valentine's Day Compute Node Massacre" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:365(para) -msgid "Although the title of this story is much more dramatic than the actual event, I don't think, or hope, that I'll have the opportunity to use \"Valentine's Day Massacre\" again in a title." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:369(para) -msgid "This past Valentine's Day, I received an alert that a compute node was no longer available in the cloudmeaning, showed this particular node with a status of XXX." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:374(para) -msgid "I logged into the cloud controller and was able to both and SSH into the problematic compute node which seemed very odd. Usually if I receive this type of alert, the compute node has totally locked up and would be inaccessible." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:379(para) -msgid "After a few minutes of troubleshooting, I saw the following details:" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:383(para) -msgid "A user recently tried launching a CentOS instance on that node" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:387(para) -msgid "This user was the only user on the node (new node)" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:391(para) -msgid "The load shot up to 8 right before I received the alert" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:395(para) -msgid "The bonded 10gb network device (bond0) was in a DOWN state" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:399(para) -msgid "The 1gb NIC was still alive and active" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:402(para) -msgid "I looked at the status of both NICs in the bonded pair and saw that neither was able to communicate with the switch port. Seeing as how each NIC in the bond is connected to a separate switch, I thought that the chance of a switch port dying on each switch at the same time was quite improbable. I concluded that the 10gb dual port NIC had died and needed replaced. 
I created a ticket for the hardware support department at the data center where the node was hosted. I felt lucky that this was a new node and no one else was hosted on it yet." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:412(para) -msgid "An hour later I received the same alert, but for another compute node. Crap. OK, now there's definitely a problem going on. Just like the original node, I was able to log in by SSH. The bond0 NIC was DOWN but the 1gb NIC was active." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:417(para) -msgid "And the best part: the same user had just tried creating a CentOS instance. What?" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:419(para) -msgid "I was totally confused at this point, so I texted our network admin to see if he was available to help. He logged in to both switches and immediately saw the problem: the switches detected spanning tree packets coming from the two compute nodes and immediately shut the ports down to prevent spanning tree loops: " -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:432(para) -msgid "He re-enabled the switch ports and the two compute nodes immediately came back to life." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:434(para) -msgid "Unfortunately, this story has an open ending... we're still looking into why the CentOS image was sending out spanning tree packets. Further, we're researching a proper way to prevent this from happening. It's a bigger issue than one might think. While it's extremely important for switches to prevent spanning tree loops, it's very problematic to have an entire compute node be cut off from the network when this happens. If a compute node is hosting 100 instances and one of them sends a spanning tree packet, that instance has effectively DDOS'd the other 99 instances." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:445(para) -msgid "This is an ongoing and hot topic in networking circles, especially with the rise of virtualization and virtual switches." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:450(title) -msgid "Down the Rabbit Hole" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:451(para) -msgid "Users being able to retrieve console logs from running instances is a boon for support: many times they can figure out what's going on inside their instance and fix it without bothering you. Unfortunately, sometimes overzealous logging of failures can cause problems of its own." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:457(para) -msgid "A report came in: VMs were launching slowly, or not at all. Cue the standard checks: nothing on Nagios, but there was a spike in network traffic toward the current master of our RabbitMQ cluster. Investigation started, but soon the other parts of the queue cluster were leaking memory like a sieve. Then the alert came in: the master Rabbit server went down and connections failed over to the slave." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:464(para) -msgid "At that time, our control services were hosted by another team and we didn't have much debugging information to determine what was going on with the master, and we could not reboot it. That team noted that it failed without alert, but managed to reboot it. After an hour, the cluster had returned to its normal state and we went home for the day." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:471(para) -msgid "Diagnosis continued the next morning, kick-started by another identical failure. 
We quickly got the message queue running again, and tried to work out why Rabbit was suffering from so much network traffic. Enabling debug logging on nova-api quickly brought understanding. The log was scrolling by faster than we'd ever seen before. CTRL+C on that and we could plainly see the contents of a system log spewing failures over and over again: a system log from one of our users' instances." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:483(para) -msgid "After finding the instance ID, we headed over to /var/lib/nova/instances to find the console.log: " -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:491(para) -msgid "Sure enough, the user had been periodically refreshing the console log page on the dashboard, and the 5 GB file was traversing the Rabbit cluster to reach the dashboard." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:495(para) -msgid "We called them and asked them to stop for a while, and they were happy to abandon the horribly broken VM. After that, we started monitoring the size of console logs." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:499(para) -msgid "To this day, the issue (https://bugs.launchpad.net/nova/+bug/832507) doesn't have a permanent resolution, but we look forward to the discussion at the next summit." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:507(title) -msgid "Havana Haunted by the Dead" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:508(para) -msgid "Felix Lee of Academia Sinica Grid Computing Centre in Taiwan contributed this story." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:510(para) -msgid "I just upgraded OpenStack from Grizzly to Havana 2013.2-2 using the RDO repository and everything was running pretty well, except the EC2 API." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:513(para) -msgid "I noticed that the API would suffer from a heavy load and respond slowly to particular EC2 requests such as RunInstances." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:516(para) -msgid "Output from /var/log/nova/nova-api.log on Havana:" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:525(para) -msgid "This request took over two minutes to process, but executed quickly on another co-existing Grizzly deployment using the same hardware and system configuration." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:528(para) -msgid "Output from /var/log/nova/nova-api.log on Grizzly:" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:537(para) -msgid "While monitoring system resources, I noticed a significant increase in memory consumption while the EC2 API processed this request. I thought it wasn't handling memory properly, possibly not releasing memory. If the API received several of these requests, memory consumption quickly grew until the system ran out of RAM and began using swap. Each node has 48 GB of RAM and the \"nova-api\" process would consume all of it within minutes. Once this happened, the entire system would become unusably slow until I restarted the nova-api service." -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:547(para) -msgid "So, I found myself wondering what changed in the EC2 API on Havana that might cause this to happen. Was it a bug, or normal behavior that I now need to work around?" 
-msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:550(para) -msgid "After digging into the nova (OpenStack Compute) code, I noticed two areas in api/ec2/cloud.py potentially impacting my system:" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:562(para) -msgid "Since my database contained many recordsover 1 million metadata records and over 300,000 instance records in \"deleted\" or \"errored\" stateseach search took a long time. I decided to clean up the database by first archiving a copy for backup and then performing some deletions using the MySQL client. For example, I ran the following SQL command to remove rows of instances deleted for over a year:" -msgstr "" - -#: ./doc/openstack-ops/app_crypt.xml:570(para) -msgid "Performance increased greatly after deleting the old records and my new deployment continues to behave well." -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/section_arch_example-nova.xml:411(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_01in01.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#. When image changes, this message will be marked fuzzy or untranslated for you. -#. It doesn't matter what you translate it to: it's not used at all. -#: ./doc/openstack-ops/section_arch_example-nova.xml:450(None) -msgid "@@image: 'http://git.openstack.org/cgit/openstack/operations-guide/plain/doc/openstack-ops/figures/osog_01in02.png'; md5=THIS FILE DOESN'T EXIST" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:12(title) -msgid "Example Architecture—Legacy Networking (nova)" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:14(para) -msgid "This particular example architecture has been upgraded from Grizzly to Havana and tested in production environments where many public IP addresses are available for assignment to multiple instances. You can find a second example architecture that uses OpenStack Networking (neutron) after this section. Each example offers high availability, meaning that if a particular node goes down, another node with the same configuration can take over the tasks so that the services continue to be available.HavanaGrizzly" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:29(para) -msgid "The simplest architecture you can build upon for Compute has a single cloud controller and multiple compute nodes. The simplest architecture for Object Storage has five nodes: one for identifying users and proxying requests to the API, then four for storage itself to provide enough replication for eventual consistency. 
This example architecture does not dictate a particular number of nodes, but shows the thinking and considerations that went into choosing this architecture, including the features offered." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:87(para) -msgid "Ubuntu 12.04 LTS or Red Hat Enterprise Linux 6.5, including derivatives such as CentOS and Scientific Linux" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:95(para) -msgid "Ubuntu Cloud Archive or RDO*" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:109(para) -msgid "MySQL*" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:115(para) -msgid "RabbitMQ for Ubuntu; Qpid for Red Hat Enterprise Linux and derivatives" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:122(literal) -msgid "nova-network" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:126(para) -msgid "Network manager" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:132(para) -msgid "Single nova-network or multi-host?" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:135(para) -msgid "multi-host*" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:139(para) -msgid "Image service (glance) back end" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:141(para) -msgid "file" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:145(para) -msgid "Identity (keystone) driver" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:151(para) -msgid "Block Storage (cinder) back end" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:153(para) -msgid "LVM/iSCSI" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:157(para) -msgid "Live Migration back end" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:159(para) -msgid "Shared storage using NFS*" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:170(para) -msgid "An asterisk (*) indicates when the example architecture deviates from the settings of a default installation. We'll offer explanations for those deviations next." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:209(para) -msgid "Dashboard: You probably want to offer a dashboard, but your users may be more interested in API access only." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:215(para) -msgid "Block storage: You don't have to offer users block storage if their use case only needs ephemeral storage on compute nodes, for example." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:221(para) -msgid "Floating IP address: Floating IP addresses are public IP addresses that you allocate from a predefined pool to assign to virtual machines at launch. Floating IP addresses ensure that the public IP address is available whenever an instance is booted. Not every organization can offer thousands of public floating IP addresses for thousands of instances, so this feature is considered optional." 
-msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:232(para) -msgid "Live migration: If you need to move running virtual machine instances from one host to another with little or no service interruption, you would enable live migration, but it is considered optional." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:239(para) -msgid "Object storage: You may choose to store machine images on a file system rather than in object storage if you do not have the extra hardware for the required replication and redundancy that OpenStack Object Storage offers." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:205(para) -msgid "The following features of OpenStack are supported by the example architecture documented in this guide, but are optional:" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:252(para) -msgid "This example architecture has been selected based on the current default feature set of OpenStack Havana, with an emphasis on stability. We believe that many clouds that currently run OpenStack in production have made similar choices.legacy networking (nova)rationale for choice of" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:262(para) -msgid "You must first choose the operating system that runs on all of the physical nodes. While OpenStack is supported on several distributions of Linux, we used Ubuntu 12.04 LTS (Long Term Support), which is used by the majority of the development community, has feature completeness compared with other distributions and has clear future support plans." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:269(para) -msgid "We recommend that you do not use the default Ubuntu OpenStack install packages and instead use the Ubuntu Cloud Archive. The Cloud Archive is a package repository supported by Canonical that allows you to upgrade to future OpenStack releases while remaining on Ubuntu 12.04." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:276(para) -msgid "KVM as a hypervisor complements the choice of Ubuntu—being a matched pair in terms of support, and also because of the significant degree of attention it garners from the OpenStack development community (including the authors, who mostly use KVM). It is also feature complete, free from licensing charges and restrictions.kernel-based VM (KVM) hypervisorhypervisorsKVM" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:289(para) -msgid "MySQL follows a similar trend. Despite its recent change of ownership, this database is the most tested for use with OpenStack and is heavily documented. We deviate from the default database, SQLite, because SQLite is not an appropriate database for production usage." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:295(para) -msgid "The choice of RabbitMQ over other AMQP compatible options that are gaining support in OpenStack, such as ZeroMQ and Qpid, is due to its ease of use and significant testing in production. It also is the only option that supports features such as Compute cells. We recommend clustering with RabbitMQ, as it is an integral component of the system and fairly simple to implement due to its inbuilt nature.Advanced Message Queuing Protocol (AMQP)" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:305(para) -msgid "As discussed in previous chapters, there are several options for networking in OpenStack Compute. 
We recommend FlatDHCP with multi-host networking mode for high availability, running one nova-network daemon per OpenStack compute host (a minimal nova.conf sketch appears at the end of this section). This provides a robust mechanism for ensuring network interruptions are isolated to individual compute hosts, and allows for the direct use of hardware network gateways." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:314(para) -msgid "Live Migration is supported by way of shared storage, with NFS as the distributed file system." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:318(para) -msgid "Acknowledging that many small-scale deployments see running Object Storage just for the storage of virtual machine images as too costly, we opted for the file back end in the OpenStack Image service (Glance). If your cloud will include Object Storage, you can easily add it as a back end." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:324(para) -msgid "We chose the SQL back end for Identity over others, such as LDAP. This back end is simple to install and is robust. The authors acknowledge that many installations want to bind with existing directory services and caution that this requires careful understanding of the array of options available." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:332(para) -msgid "Block Storage (cinder) is installed natively on external storage nodes and uses the LVM/iSCSI plug-in. Most Block Storage plug-ins are tied to particular vendor products and implementations, limiting their use to consumers of those hardware platforms, but LVM/iSCSI is robust and stable on commodity hardware." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:341(para) -msgid "While the cloud can be run without the OpenStack Dashboard, we consider it to be indispensable, not just for user interaction with the cloud, but also as a tool for operators. Additionally, the dashboard's use of Django makes it a flexible framework for extension." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:349(title) -msgid "Why not use OpenStack Networking?" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:351(para) -msgid "This example architecture does not use OpenStack Networking, because it does not yet support multi-host networking and our organizations (university, government) have access to a large range of publicly accessible IPv4 addresses." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:362(title) -msgid "Why use multi-host networking?" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:364(para) -msgid "In a default OpenStack deployment, there is a single nova-network service that runs within the cloud (usually on the cloud controller) that provides services such as network address translation (NAT), DHCP, and DNS to the guest instances. If the single node that runs the nova-network service goes down, you cannot access your instances, and the instances cannot access the Internet. 
The single node that runs the nova-network service can become a bottleneck if excessive network traffic comes in and goes out of the cloud." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:385(para) -msgid "Multi-host is a high-availability option for the network configuration, where the nova-network service is run on every compute node instead of running on only a single node." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:396(para) -msgid "The reference architecture consists of multiple compute nodes, a cloud controller, an external NFS storage server for instance storage, and an OpenStack Block Storage server for volume storage. A network time service (Network Time Protocol, or NTP) synchronizes time on all the nodes. FlatDHCPManager in multi-host mode is used for the networking. A logical diagram for this example architecture shows which services are running on each node:" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:416(para) -msgid "The cloud controller runs the dashboard, the API services, the database (MySQL), a message queue server (RabbitMQ), the scheduler for choosing compute resources (nova-scheduler), Identity services (keystone, nova-consoleauth), Image services (glance-api, glance-registry), services for console access of guests, and Block Storage services, including the scheduler for storage resources (cinder-api and cinder-scheduler)." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:429(para) -msgid "Compute nodes are where the computing resources are held, and in our example architecture, they run the hypervisor (KVM), libvirt (the driver for the hypervisor, which enables live migration from node to node), nova-compute, nova-api-metadata (generally only used when running in multi-host mode, it retrieves instance-specific metadata), nova-vncproxy, and nova-network." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:437(para) -msgid "The network consists of two switches, one for the management or private traffic, and one that covers public access, including floating IPs. To support this, the cloud controller and the compute nodes have two network cards. The OpenStack Block Storage and NFS storage servers only need to access the private network and therefore only need one network card, but multiple cards run in a bonded configuration are recommended if possible. Floating IP access is direct to the Internet, whereas Flat IP access goes through a NAT. To envision the network traffic, use this diagram:" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:457(title) -msgid "Optional Extensions" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:459(para) -msgid "You can extend this reference architecture as follows:" -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:468(para) -msgid "Add additional cloud controllers (see )." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:473(para) -msgid "Add an OpenStack Storage service (see the Object Storage chapter in the OpenStack Installation Guide for your distribution)." -msgstr "" - -#: ./doc/openstack-ops/section_arch_example-nova.xml:479(para) -msgid "Add additional OpenStack Block Storage hosts (see )." 
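As promised above, a minimal sketch of the nova.conf settings behind the FlatDHCP multi-host recommendation; the flag names are drawn from nova-network of this era and should be verified against your release:

    # /etc/nova/nova.conf on each compute host
    network_manager=nova.network.manager.FlatDHCPManager
    multi_host=True
    send_arp_for_ha=True    # announce failover via gratuitous ARP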
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:12(title) -msgid "Lay of the Land" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:14(para) -msgid "This chapter helps you set up your working environment and use it to take a look around your cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:18(title) -msgid "Using the OpenStack Dashboard for Administration" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:20(para) -msgid "As a cloud administrative user, you can use the OpenStack dashboard to create and manage projects, users, images, and flavors. Users are allowed to create and manage images within specified projects and to share images, depending on the Image service configuration. Typically, the policy configuration allows admin users only to set quotas and create and manage services. The dashboard provides an Admin tab with a System Panel and an Identity tab. These interfaces give you access to system information and usage as well as to settings for configuring what end users can do. Refer to the OpenStack Admin User Guide for detailed how-to information about using the dashboard as an admin user.working environmentdashboarddashboard" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:41(title) -msgid "Command-Line Tools" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:43(para) -msgid "We recommend using a combination of the OpenStack command-line interface (CLI) tools and the OpenStack dashboard for administration. Some users with a background in other cloud technologies may be using the EC2 Compatibility API, which uses naming conventions somewhat different from the native API. We highlight those differences.working environmentcommand-line tools" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:53(para) -msgid "We strongly suggest that you install the command-line clients from the Python Package Index (PyPI) instead of from the distribution packages. The clients are under heavy development, and it is very likely at any given time that the version of the packages distributed by your operating-system vendor are out of date.command-line toolsPython Package Index (PyPI)pip utilityPython Package Index (PyPI)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:68(para) -msgid "The pip utility is used to manage package installation from the PyPI archive and is available in the python-pip package in most Linux distributions. 
Each OpenStack project has its own client, so depending on which services your site runs, install some or all of the following packages:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:96(para) -msgid "python-novaclient (nova CLI)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:100(para) -msgid "python-glanceclient (glance CLI)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:104(para) -msgid "python-keystoneclient (keystone CLI)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:109(para) -msgid "python-cinderclient (cinder CLI)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:113(para) -msgid "python-swiftclient (swift CLI)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:117(para) -msgid "python-neutronclient (neutron CLI)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:122(title) -msgid "Installing the Tools" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:124(para) -msgid "To install (or upgrade) a package from the PyPI archive with pip, as root:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:133(para) -msgid "To remove the package:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:137(para) -msgid "If you need even newer versions of the clients, pip can install directly from the upstream git repository using the -e flag. You must specify a name for the Python egg that is installed. For example:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:145(para) -msgid "If you support the EC2 API on your cloud, you should also install the euca2ools package or some other EC2 API tool so that you can get the same view your users have. Using EC2 API-based tools is mostly out of the scope of this guide, though we discuss getting credentials for use with it." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:153(title) -msgid "Administrative Command-Line Tools" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:155(para) -msgid "There are also several *-manage command-line tools. These are installed with the project's services on the cloud controller and do not need to be installed separately:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:167(literal) -msgid "nova-manage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:171(literal) -msgid "glance-manage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:175(literal) -msgid "keystone-manage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:179(literal) -msgid "cinder-manage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:183(para) -msgid "Unlike the CLI tools mentioned above, the *-manage tools must be run from the cloud controller, as root, because they need read access to config files such as /etc/nova/nova.conf and to make queries directly against the database rather than against the OpenStack API endpoints." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:199(para) -msgid "The existence of the *-manage tools is a legacy issue. It is a goal of the OpenStack project to eventually migrate all of the remaining functionality in the *-manage tools into the API-based tools. 
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:137(para)
-msgid "If you need even newer versions of the clients, pip can install directly from the upstream git repository using the -e flag. You must specify a name for the Python egg that is installed. For example:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:145(para)
-msgid "If you support the EC2 API on your cloud, you should also install the euca2ools package or some other EC2 API tool so that you can get the same view your users have. Using EC2 API-based tools is mostly out of the scope of this guide, though we discuss getting credentials for use with it."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:153(title)
-msgid "Administrative Command-Line Tools"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:155(para)
-msgid "There are also several *-manage command-line tools. These are installed with the project's services on the cloud controller and do not need to be installed separately:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:167(literal)
-msgid "nova-manage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:171(literal)
-msgid "glance-manage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:175(literal)
-msgid "keystone-manage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:179(literal)
-msgid "cinder-manage"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:183(para)
-msgid "Unlike the CLI tools mentioned above, the *-manage tools must be run from the cloud controller, as root, because they need read access to config files such as /etc/nova/nova.conf and must make queries directly against the database rather than against the OpenStack API endpoints."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:199(para)
-msgid "The existence of the *-manage tools is a legacy issue. It is a goal of the OpenStack project to eventually migrate all of the remaining functionality in the *-manage tools into the API-based tools. Until that day, you need to SSH into the cloud controller node to perform some maintenance operations that require one of the *-manage tools."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:215(title)
-msgid "Getting Credentials"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:217(para)
-msgid "You must have the appropriate credentials if you want to use the command-line tools to make queries against your OpenStack cloud. By far, the easiest way to obtain authentication credentials to use with command-line clients is to use the OpenStack dashboard. Select Project, click the Project tab, and click Access & Security in the Compute category. On the Access & Security page, click the API Access tab to display two buttons, Download OpenStack RC File and Download EC2 Credentials, which let you generate files that you can source in your shell to populate the environment variables the command-line tools require to know where your service endpoints and your authentication information are. The user you logged in to the dashboard as dictates the filename of the openrc file, such as demo-openrc.sh. When logged in as admin, the file is named admin-openrc.sh."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:244(para)
-msgid "The generated file looks something like this:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:272(para)
-msgid "This does not save your password in plain text, which is a good thing. But when you source or run the script, it prompts you for your password and then stores your response in the environment variable OS_PASSWORD. It is important to note that this does require interactivity. It is possible to store a value directly in the script if you require a noninteractive operation, but you then need to be extremely cautious with the security and permissions of this file."
-msgstr ""
-
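-# Reference example (illustrative, not the verbatim <screen> contents; the
-# URL and names are placeholders): a downloaded openrc file typically exports
-# variables along these lines, prompting for the password as described above:
-#
-#   export OS_AUTH_URL=http://203.0.113.10:5000/v2.0
-#   export OS_TENANT_NAME=demo
-#   export OS_USERNAME=demo
-#   echo "Please enter your OpenStack Password: "
-#   read -sr OS_PASSWORD_INPUT
-#   export OS_PASSWORD=$OS_PASSWORD_INPUT
-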
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:351(title) -msgid "Using cURL for further inspection" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:353(para) -msgid "Underlying the use of the command-line tools is the OpenStack API, which is a RESTful API that runs over HTTP. There may be cases where you want to interact with the API directly or need to use it because of a suspected bug in one of the CLI tools. The best way to do this is to use a combination of cURL and another tool, such as jq, to parse the JSON from the responses.authentication tokenscURL" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:366(para) -msgid "The first thing you must do is authenticate with the cloud using your credentials to get an authentication token." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:370(para) -msgid "Your credentials are a combination of username, password, and tenant (project). You can extract these values from the openrc.sh discussed above. The token allows you to interact with your other service endpoints without needing to reauthenticate for every request. Tokens are typically good for 24 hours, and when the token expires, you are alerted with a 401 (Unauthorized) response and you can request another token.catalog" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:383(para) -msgid "Look at your OpenStack service catalog:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:395(para) -msgid "Read through the JSON response to get a feel for how the catalog is laid out." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:398(para) -msgid "To make working with subsequent requests easier, store the token in an environment variable:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:408(para) -msgid "Now you can refer to your token on the command line as $TOKEN." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:413(para) -msgid "Pick a service endpoint from your service catalog, such as compute. Try a request, for example, listing instances (servers):" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:424(para) -msgid "To discover how API requests should be structured, read the OpenStack API Reference. To chew through the responses using jq, see the jq Manual." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:429(para) -msgid "The -s flag used in the cURL commands above are used to prevent the progress meter from being shown. If you are having trouble running cURL commands, you'll want to remove it. Likewise, to help you troubleshoot cURL commands, you can include the -v flag to show you the verbose output. There are many more extremely useful features in cURL; refer to the man page for all the options." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:440(title) -msgid "Servers and Services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:442(para) -msgid "As an administrator, you have a few ways to discover what your OpenStack cloud looks like simply by using the OpenStack tools available. 
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:440(title)
-msgid "Servers and Services"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:442(para)
-msgid "As an administrator, you have a few ways to discover what your OpenStack cloud looks like simply by using the OpenStack tools available. This section gives you an idea of how to get an overview of your cloud, its shape, size, and current state."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:464(para)
-msgid "First, you can discover what servers belong to your OpenStack cloud by running:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:469(para)
-msgid "The output looks like the following:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:483(para)
-msgid "The output shows that there are five compute nodes and one cloud controller. You see a smiley face, such as :-), which indicates that the services are up and running. If a service is no longer available, the :-) symbol changes to XXX. This is an indication that you should troubleshoot why the service is down."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:490(para)
-msgid "If you are using cinder, run the following command to see a similar listing:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:503(para)
-msgid "With these two tables, you now have a good overview of what servers and services make up your cloud."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:506(para)
-msgid "You can also use the Identity service (keystone) to see what services are available in your cloud as well as what endpoints have been configured for the services."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:514(para)
-msgid "The following command requires you to have your shell environment configured with the proper administrative variables:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:537(para)
-msgid "The preceding output has been truncated to show only two services. You will see one service entry for each service that your cloud provides. Note how the endpoint domain can be different depending on the endpoint type. Different endpoint domains per type are not required, but this can be done for different reasons, such as endpoint privacy or network traffic segregation."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:544(para)
-msgid "You can find the version of the Compute installation by using the nova-manage command:"
-msgstr ""
-
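-# Reference example (illustrative; the guide's <screen> blocks carry the
-# actual commands, which are not reproduced in this catalog): the service
-# listing and version check above are typically run on the cloud controller
-# along these lines:
-#
-#   # nova-manage service list
-#   # nova-manage version
-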
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:550(title)
-msgid "Diagnose Your Compute Nodes"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:552(para)
-msgid "You can obtain extra information about virtual machines that are running—their CPU usage, the memory, the disk I/O or network I/O—per instance, by running the nova diagnostics command with a server ID:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:567(para)
-msgid "The output of this command varies depending on the hypervisor because hypervisors support different attributes. The following demonstrates the difference between the two most popular hypervisors. Here is example output when the hypervisor is Xen: While the command should work with any hypervisor that is controlled through libvirt (KVM, QEMU, or LXC), it has been tested only with KVM. Here is the example output when the hypervisor is KVM:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:617(title)
-msgid "Network Inspection"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:619(para)
-msgid "To see which fixed IP networks are configured in your cloud, you can use the nova command-line client to get the IP ranges:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:637(para)
-msgid "The nova-manage tool can provide some additional details:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:645(para)
-msgid "This output shows that two networks are configured, each network containing 255 IPs (a /24 subnet). The first network has been assigned to a certain project, while the second network is still open for assignment. You can assign this network manually; otherwise, it is automatically assigned when a project launches its first instance."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:651(para)
-msgid "To find out whether any floating IPs are available in your cloud, run:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:659(para)
-msgid "Here, two floating IPs are available. The first has been allocated to a project, while the other is unallocated."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:664(title)
-msgid "Users and Projects"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:666(para)
-msgid "To see a list of projects that have been added to the cloud, run:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:693(para)
-msgid "To see a list of users, run:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:709(para)
-msgid "Sometimes a user and a group have a one-to-one mapping. This happens for standard system accounts, such as cinder, glance, nova, and swift, or when only one user is part of a group."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:716(title)
-msgid "Running Instances"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:718(para)
-msgid "To see a list of running instances, run:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:742(para)
-msgid "Unfortunately, this command does not tell you various details about the running instances, such as what compute node the instance is running on, what flavor the instance is, and so on. You can use the following command to view details about individual instances:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:752(para)
-msgid "For example:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:782(para)
-msgid "This output shows that the instance was created from an Ubuntu 12.04 image using a flavor of m1.small and is hosted on the compute node c02.example.com."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_lay_of_land.xml:791(para)
-msgid "We hope you have enjoyed this quick tour of your working environment, including how to interact with your cloud and extract useful information. From here, you can use the Admin User Guide as your reference for all of the command-line functionality in your cloud."
-msgstr ""
-
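-# Reference example (illustrative; the instance identifier is a placeholder):
-# the listing and per-instance detail commands referenced in the Running
-# Instances entries above are typically:
-#
-#   $ nova list
-#   $ nova show <instance-uuid-or-name>
-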
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_examples.xml:12(title) -msgid "Architecture Examples" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_examples.xml:14(para) -msgid "To understand the possibilities that OpenStack offers, it's best to start with basic architecture that has been tested in production environments. We offer two examples with basic pivots on the base operating system (Ubuntu and Red Hat Enterprise Linux) and the networking architecture. There are other differences between these two examples and this guide provides reasons for each choice made." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_examples.xml:21(para) -msgid "Because OpenStack is highly configurable, with many different back ends and network configuration options, it is difficult to write documentation that covers all possible OpenStack deployments. Therefore, this guide defines examples of architecture to simplify the task of documenting, as well as to provide the scope for this guide. Both of the offered architecture examples are currently running in production and serving users." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_examples.xml:29(para) -msgid "As always, refer to the if you are unclear about any of the terminology mentioned in architecture examples." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_examples.xml:39(title) -msgid "Parting Thoughts on Architecture Examples" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_examples.xml:41(para) -msgid "With so many considerations and options available, our hope is to provide a few clearly-marked and tested paths for your OpenStack exploration. If you're looking for additional ideas, check out , the OpenStack Installation Guides, or the OpenStack User Stories page." -msgstr "" - -#: ./doc/openstack-ops/bk_ops_guide.xml:16(title) -msgid "OpenStack Operations Guide" -msgstr "" - -#: ./doc/openstack-ops/bk_ops_guide.xml:18(titleabbrev) -msgid "OpenStack Ops Guide" -msgstr "" - -#: ./doc/openstack-ops/bk_ops_guide.xml:26(orgname) ./doc/openstack-ops/bk_ops_guide.xml:32(holder) -msgid "OpenStack Foundation" -msgstr "" - -#: ./doc/openstack-ops/bk_ops_guide.xml:31(year) -msgid "2014" -msgstr "" - -#: ./doc/openstack-ops/bk_ops_guide.xml:38(remark) -msgid "Copyright details are filled in by the template." -msgstr "" - -#: ./doc/openstack-ops/bk_ops_guide.xml:43(para) -msgid "This book provides information about designing and operating OpenStack clouds." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:12(title) -msgid "Maintenance, Failures, and Debugging" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:14(para) -msgid "Downtime, whether planned or unscheduled, is a certainty when running a cloud. This chapter aims to provide useful information for dealing proactively, or reactively, with these occurrences.maintenance/debuggingtroubleshooting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:26(title) -msgid "Cloud Controller and Storage Proxy Failures and Maintenance" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:28(para) -msgid "The cloud controller and storage proxy are very similar to each other when it comes to expected and unexpected downtime. One of each server type typically runs in the cloud, which makes them very noticeable when they are not running." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:33(para) -msgid "For the cloud controller, the good news is if your cloud is using the FlatDHCP multi-host HA network mode, existing instances and volumes continue to operate while the cloud controller is offline. For the storage proxy, however, no storage traffic is possible until it is back up and running." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:42(title) ./doc/openstack-ops/ch_ops_maintenance.xml:174(title) -msgid "Planned Maintenance" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:44(para) -msgid "One way to plan for cloud controller or storage proxy maintenance is to simply do it off-hours, such as at 1 a.m. or 2 a.m. This strategy affects fewer users. If your cloud controller or storage proxy is too important to have unavailable at any point in time, you must look into high-availability options.cloud controllersplanned maintenance ofmaintenance/debuggingcloud controller planned maintenance" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:62(title) -msgid "Rebooting a Cloud Controller or Storage Proxy" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:64(para) -msgid "All in all, just issue the \"reboot\" command. The operating system cleanly shuts down services and then automatically reboots. If you want to be very thorough, run your backup jobs just before you reboot.maintenance/debuggingrebooting followingstoragestorage proxy maintenancerebootcloud controller or storage proxycloud controllersrebooting" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:89(title) -msgid "After a Cloud Controller or Storage Proxy Reboots" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:91(para) -msgid "After a cloud controller reboots, ensure that all required services were successfully started. The following commands use ps and grep to determine if nova, glance, and keystone are currently running:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:101(para) -msgid "Also check that all services are functioning. The following set of commands sources the openrc file, then runs some basic glance, nova, and openstack commands. If the commands work as expected, you can be confident that those services are in working condition:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:112(para) -msgid "For the storage proxy, ensure that the Object Storage service has resumed:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:117(para) -msgid "Also check that it is functioning:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:125(title) -msgid "Total Cloud Controller Failure" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:127(para) -msgid "The cloud controller could completely fail if, for example, its motherboard goes bad. Users will immediately notice the loss of a cloud controller since it provides core functionality to your cloud environment. If your infrastructure monitoring does not alert you that your cloud controller has failed, your users definitely will. Unfortunately, this is a rough situation. The cloud controller is an integral part of your cloud. If you have only one controller, you will have many missing services if it goes down.cloud controllerstotal failure ofmaintenance/debuggingcloud controller total failure" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:144(para) -msgid "To avoid this situation, create a highly available cloud controller cluster. 
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:125(title)
-msgid "Total Cloud Controller Failure"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:127(para)
-msgid "The cloud controller could completely fail if, for example, its motherboard goes bad. Users will immediately notice the loss of a cloud controller since it provides core functionality to your cloud environment. If your infrastructure monitoring does not alert you that your cloud controller has failed, your users definitely will. Unfortunately, this is a rough situation. The cloud controller is an integral part of your cloud. If you have only one controller, you will have many missing services if it goes down."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:144(para)
-msgid "To avoid this situation, create a highly available cloud controller cluster. This is outside the scope of this document, but you can read more in the OpenStack High Availability Guide."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:150(para)
-msgid "The next best approach is to use a configuration-management tool, such as Puppet, to automatically build a cloud controller. This should not take more than 15 minutes if you have a spare server available. After the controller rebuilds, restore any backups taken (see )."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:156(para)
-msgid "Also, in practice, the nova-compute services on the compute nodes do not always reconnect cleanly to rabbitmq hosted on the controller when it comes back up after a long reboot; a restart of the nova services on the compute nodes is required."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:166(title)
-msgid "Compute Node Failures and Maintenance"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:168(para)
-msgid "Sometimes a compute node either crashes unexpectedly or requires a reboot for maintenance reasons."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:176(para)
-msgid "If you need to reboot a compute node due to planned maintenance (such as a software or hardware upgrade), first ensure that all hosted instances have been moved off the node. If your cloud is utilizing shared storage, use the nova live-migration command. First, get a list of instances that need to be moved:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:193(para)
-msgid "Next, migrate them one by one:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:197(para)
-msgid "If you are not using shared storage, you can use the --block-migrate option:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:202(para)
-msgid "After you have migrated all instances, ensure that the nova-compute service has stopped:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:208(para)
-msgid "If you use a configuration-management system, such as Puppet, that ensures the nova-compute service is always running, you can temporarily move the init files:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:216(para)
-msgid "Next, shut down your compute node, perform your maintenance, and turn the node back on. You can reenable the nova-compute service by undoing the previous commands:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:223(para)
-msgid "Then start the nova-compute service:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:227(para)
-msgid "You can now optionally migrate the instances back to their original compute node."
-msgstr ""
-
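-# Reference example (illustrative; host names and the UUID are placeholders):
-# the migration workflow described above, one instance at a time:
-#
-#   # nova list --host c01.example.com --all-tenants
-#   # nova live-migration <uuid> c02.example.com
-#   # nova live-migration --block-migrate <uuid>    # without shared storage
-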
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:234(title)
-msgid "After a Compute Node Reboots"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:236(para)
-msgid "When you reboot a compute node, first verify that it booted successfully. This includes ensuring that the nova-compute service is running:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:251(para)
-msgid "Also ensure that it has successfully connected to the AMQP server:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:257(para)
-msgid "After the compute node is successfully running, you must deal with the instances that are hosted on that compute node because none of them are running. Depending on your SLA with your users or customers, you might have to start each instance and ensure that it starts correctly."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:269(para)
-msgid "You can create a list of instances that are hosted on the compute node by running the following command:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:282(para)
-msgid "After you have the list, you can use the nova command to start each instance:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:288(para)
-msgid "Any time an instance shuts down unexpectedly, it might have problems on boot. For example, the instance might require an fsck on the root partition. If this happens, the user can use the dashboard VNC console to fix this."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:294(para)
-msgid "If an instance does not boot, meaning virsh list never shows the instance as even attempting to boot, do the following on the compute node:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:300(para)
-msgid "Try executing the nova reboot command again. You should see an error message about why the instance was not able to boot."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:304(para)
-msgid "In most cases, the error is the result of something in libvirt's XML file (/etc/libvirt/qemu/instance-xxxxxxxx.xml) that no longer exists. You can enforce re-creation of the XML file as well as rebooting the instance by running the following command:"
-msgstr ""
-
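-# Reference example (illustrative; the UUID is a placeholder): regenerating
-# the XML while rebooting is typically a hard reboot:
-#
-#   # nova reboot --hard <uuid>
-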
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:315(title)
-msgid "Inspecting and Recovering Data from Failed Instances"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:317(para)
-msgid "In some scenarios, instances are running but are inaccessible through SSH and do not respond to any command. The VNC console could be displaying a boot failure or kernel panic error messages. This could be an indication of file system corruption on the VM itself. If you need to recover files or inspect the content of the instance, qemu-nbd can be used to mount the disk."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:329(para)
-msgid "If you access or view the user's content and data, get approval first!"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:337(para)
-msgid "To access the instance's disk (/var/lib/nova/instances/instance-xxxxxx/disk), use the following steps:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:343(para)
-msgid "Suspend the instance using the virsh command."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:348(para)
-msgid "Connect the qemu-nbd device to the disk."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:352(para) ./doc/openstack-ops/ch_ops_maintenance.xml:412(para)
-msgid "Mount the qemu-nbd device."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:356(para)
-msgid "Unmount the device after inspecting."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:360(para)
-msgid "Disconnect the qemu-nbd device."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:364(para)
-msgid "Resume the instance."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:368(para)
-msgid "If you do not follow steps 4 through 6, OpenStack Compute cannot manage the instance any longer. It fails to respond to any command issued by OpenStack Compute, and it is marked as shut down."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:372(para)
-msgid "Once you mount the disk file, you should be able to access it and treat it as a collection of normal directories with files and a directory structure. However, we do not recommend that you edit or touch any files because this could change the access control lists (ACLs) that are used to determine which accounts can perform what operations on files and directories. Changing ACLs can make the instance unbootable if it is not already."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:384(para)
-msgid "Suspend the instance using the virsh command, taking note of the internal ID:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:399(para)
-msgid "Connect the qemu-nbd device to the disk:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:414(para)
-msgid "The qemu-nbd device tries to export the instance disk's different partitions as separate devices. For example, if vda is the disk and vda1 is the root partition, qemu-nbd exports the device as /dev/nbd0 and /dev/nbd0p1, respectively:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:422(para)
-msgid "You can now access the contents of /mnt, which correspond to the first partition of the instance's disk."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:425(para)
-msgid "To examine the secondary or ephemeral disk, use an alternate mount point if you want both primary and secondary drives mounted at the same time:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:458(para)
-msgid "Once you have completed the inspection, unmount the mount point and release the qemu-nbd device:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:467(para)
-msgid "Resume the instance using virsh:"
-msgstr ""
-
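-# Reference example (an illustrative end-to-end session; the instance ID and
-# device names are placeholders):
-#
-#   # virsh suspend instance-00000981
-#   # qemu-nbd -c /dev/nbd0 /var/lib/nova/instances/instance-00000981/disk
-#   # mount /dev/nbd0p1 /mnt/        # first (root) partition
-#   # (inspect files under /mnt, then clean up)
-#   # umount /mnt
-#   # qemu-nbd -d /dev/nbd0
-#   # virsh resume instance-00000981
-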
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:485(title)
-msgid "Volumes"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:487(para)
-msgid "If the affected instances also had attached volumes, first generate a list of instance and volume UUIDs:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:505(para)
-msgid "You should see a result similar to the following:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:515(para)
-msgid "Next, manually detach and reattach the volumes, where X is the proper mount point:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:521(para)
-msgid "Be sure that the instance has successfully booted and is at a login screen before doing the above."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:528(title)
-msgid "Total Compute Node Failure"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:530(para)
-msgid "Compute nodes can fail the same way a cloud controller can fail. A motherboard failure or some other type of hardware failure can cause an entire compute node to go offline. When this happens, all instances running on that compute node will not be available. Just like with a cloud controller failure, if your infrastructure monitoring does not detect a failed compute node, your users will notify you because of their lost instances."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:546(para)
-msgid "If a compute node fails and won't be fixed for a few hours (or at all), you can relaunch all instances that are hosted on the failed node if you use shared storage for /var/lib/nova/instances."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:551(para)
-msgid "To do this, generate a list of instance UUIDs that are hosted on the failed node by running the following query on the nova database:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:558(para)
-msgid "Next, update the nova database to indicate that all instances that used to be hosted on c01.example.com are now hosted on c02.example.com:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:565(para)
-msgid "If you're using the Networking service ML2 plug-in, update the Networking service database to indicate that all ports that used to be hosted on c01.example.com are now hosted on c02.example.com:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:576(para)
-msgid "After that, use the nova command to reboot all instances that were on c01.example.com while regenerating their XML files at the same time:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:582(para)
-msgid "Finally, reattach volumes using the same method described in the section Volumes."
-msgstr ""
-
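-# Reference example (illustrative SQL along the lines described above; table
-# and column names assume the nova schema of this era):
-#
-#   mysql> SELECT uuid FROM instances WHERE host = 'c01.example.com' AND deleted = 0;
-#   mysql> UPDATE instances SET host = 'c02.example.com' WHERE host = 'c01.example.com' AND deleted = 0;
-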
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:589(title)
-msgid "/var/lib/nova/instances"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:591(para)
-msgid "It's worth mentioning this directory in the context of failed compute nodes. This directory contains the libvirt KVM file-based disk images for the instances that are hosted on that compute node. If you are not running your cloud in a shared storage environment, this directory is unique across all compute nodes."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:603(para)
-msgid "/var/lib/nova/instances contains two types of directories."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:606(para)
-msgid "The first is the _base directory. This contains all the cached base images from glance for each unique image that has been launched on that compute node. Files ending in _20 (or a different number) are the ephemeral base images."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:611(para)
-msgid "The other directories are titled instance-xxxxxxxx. These directories correspond to instances running on that compute node. The files inside are related to one of the files in the _base directory. They're essentially differential-based files containing only the changes made from the original _base directory."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:618(para)
-msgid "All files and directories in /var/lib/nova/instances are uniquely named. The files in _base are uniquely titled for the glance image that they are based on, and the directory names instance-xxxxxxxx are uniquely titled for that particular instance. For example, if you copy all data from /var/lib/nova/instances on one compute node to another, you do not overwrite any files or cause any damage to images that have the same unique name, because they are essentially the same file."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:627(para)
-msgid "Although this method is not documented or supported, you can use it when your compute node is permanently offline but you have instances locally stored on it."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:636(title)
-msgid "Storage Node Failures and Maintenance"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:638(para)
-msgid "Because of the high redundancy of Object Storage, dealing with object storage node issues is a lot easier than dealing with compute node issues."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:645(title)
-msgid "Rebooting a Storage Node"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:647(para)
-msgid "If a storage node requires a reboot, simply reboot it. Requests for data hosted on that node are redirected to other copies while the server is rebooting."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:665(title)
-msgid "Shutting Down a Storage Node"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:667(para)
-msgid "If you need to shut down a storage node for an extended period of time (one or more days), consider removing the node from the storage ring. For example:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:682(para)
-msgid "Next, redistribute the ring files to the other nodes:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:689(para)
-msgid "These actions effectively take the storage node out of the storage cluster."
-msgstr ""
-
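-# Reference example (illustrative; the node IP is a placeholder and the
-# builder file names assume the standard account/container/object rings):
-# removing a node from the rings and rebalancing, as described above:
-#
-#   # swift-ring-builder account.builder remove 203.0.113.45
-#   # swift-ring-builder container.builder remove 203.0.113.45
-#   # swift-ring-builder object.builder remove 203.0.113.45
-#   # swift-ring-builder account.builder rebalance
-#   # swift-ring-builder container.builder rebalance
-#   # swift-ring-builder object.builder rebalance
-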
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:732(para) -msgid "Because it is recommended to not use partitions on a swift disk, simply format the disk as a whole:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:737(para) -msgid "Finally, mount the disk:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:741(para) -msgid "Swift should notice the new disk and that no data exists. It then begins replicating the data to the disk from the other existing replicas." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:750(title) -msgid "Handling a Complete Failure" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:752(para) -msgid "A common way of dealing with the recovery from a full system failure, such as a power outage of a data center, is to assign each service a priority, and restore in order. shows an example.service restorationmaintenance/debuggingcomplete failures" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:765(caption) -msgid "Example service restoration priority list" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:769(th) -msgid "Priority" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:771(th) -msgid "Services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:777(para) ./doc/openstack-ops/ch_arch_scaling.xml:94(para) ./doc/openstack-ops/ch_arch_scaling.xml:106(para) -msgid "1" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:779(para) -msgid "Internal network connectivity" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:783(para) ./doc/openstack-ops/ch_arch_scaling.xml:118(para) -msgid "2" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:785(para) -msgid "Backing storage services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:789(para) -msgid "3" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:791(para) -msgid "Public network connectivity for user virtual machines" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:796(para) ./doc/openstack-ops/ch_arch_scaling.xml:130(para) -msgid "4" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:798(para) -msgid "nova-compute, nova-network, cinder hosts" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:803(para) -msgid "5" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:805(para) -msgid "User virtual machines" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:809(para) -msgid "10" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:811(para) -msgid "Message queue and database services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:815(para) -msgid "15" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:817(para) -msgid "Keystone services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:821(para) -msgid "20" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:823(literal) -msgid "cinder-scheduler" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:827(para) -msgid "21" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:829(para) -msgid "Image Catalog and Delivery services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:833(para) -msgid "22" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:835(para) -msgid "nova-scheduler services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:839(para) -msgid "98" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:841(literal) -msgid "cinder-api" -msgstr "" - -#: 
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:750(title)
-msgid "Handling a Complete Failure"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:752(para)
-msgid "A common way of dealing with the recovery from a full system failure, such as a power outage of a data center, is to assign each service a priority, and restore in order. shows an example."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:765(caption)
-msgid "Example service restoration priority list"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:769(th)
-msgid "Priority"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:771(th)
-msgid "Services"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:777(para) ./doc/openstack-ops/ch_arch_scaling.xml:94(para) ./doc/openstack-ops/ch_arch_scaling.xml:106(para)
-msgid "1"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:779(para)
-msgid "Internal network connectivity"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:783(para) ./doc/openstack-ops/ch_arch_scaling.xml:118(para)
-msgid "2"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:785(para)
-msgid "Backing storage services"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:789(para)
-msgid "3"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:791(para)
-msgid "Public network connectivity for user virtual machines"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:796(para) ./doc/openstack-ops/ch_arch_scaling.xml:130(para)
-msgid "4"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:798(para)
-msgid "nova-compute, nova-network, cinder hosts"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:803(para)
-msgid "5"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:805(para)
-msgid "User virtual machines"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:809(para)
-msgid "10"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:811(para)
-msgid "Message queue and database services"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:815(para)
-msgid "15"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:817(para)
-msgid "Keystone services"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:821(para)
-msgid "20"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:823(literal)
-msgid "cinder-scheduler"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:827(para)
-msgid "21"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:829(para)
-msgid "Image Catalog and Delivery services"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:833(para)
-msgid "22"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:835(para)
-msgid "nova-scheduler services"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:839(para)
-msgid "98"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:841(literal)
-msgid "cinder-api"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:845(para)
-msgid "99"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:847(para)
-msgid "nova-api services"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:851(para)
-msgid "100"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:853(para)
-msgid "Dashboard node"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:858(para)
-msgid "Use this example priority list to ensure that user-affected services are restored as soon as possible, but not before a stable environment is in place. Of course, despite being listed as a single-line item, each step requires significant work. For example, just after starting the database, you should check its integrity, or, after starting the nova services, you should verify that the hypervisor matches the database and fix any mismatches."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:872(para)
-msgid "Maintaining an OpenStack cloud requires that you manage multiple physical servers, and this number might grow over time. Because managing nodes manually is error prone, we strongly recommend that you use a configuration-management tool. These tools automate the process of ensuring that all your nodes are configured properly and encourage you to maintain your configuration information (such as packages and configuration options) in a version-controlled repository."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:892(para)
-msgid "Several configuration-management tools are available, and this guide does not recommend a specific one. The two most popular ones in the OpenStack community are Puppet, with available OpenStack Puppet modules; and Chef, with available OpenStack Chef recipes. Other newer configuration tools include Juju, Ansible, and Salt; and more mature configuration management tools include CFEngine and Bcfg2."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:913(title)
-msgid "Working with Hardware"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:915(para)
-msgid "As for your initial deployment, you should ensure that all hardware is appropriately burned in before adding it to production. Run software that uses the hardware to its limits—maxing out RAM, CPU, disk, and network. Many options are available, and normally double as benchmark software, so you also get a good idea of the performance of your system."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:933(title)
-msgid "Adding a Compute Node"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:935(para)
-msgid "If you find that you have reached or are reaching the capacity limit of your computing resources, you should plan to add additional compute nodes. Adding more nodes is quite easy. The process for adding compute nodes is the same as when the initial compute nodes were deployed to your cloud: use an automated deployment system to bootstrap the bare-metal server with the operating system and then have a configuration-management system install and configure OpenStack Compute. Once the Compute service has been installed and configured in the same way as the other compute nodes, it automatically attaches itself to the cloud. The cloud controller notices the new node(s) and begins scheduling instances to launch there."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:959(para)
-msgid "If your OpenStack Block Storage nodes are separate from your compute nodes, the same procedure still applies because the same queuing and polling system is used in both services."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:963(para)
-msgid "We recommend that you use the same hardware for new compute and block storage nodes. At the very least, ensure that the CPUs are similar in the compute nodes to not break live migration."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:971(title)
-msgid "Adding an Object Storage Node"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:973(para)
-msgid "Adding a new object storage node is different from adding compute or block storage nodes. You still want to initially configure the server by using your automated deployment and configuration-management systems. After that is done, you need to add the local disks of the object storage node into the object storage ring. The exact command to do this is the same command that was used to add the initial disks to the ring. Simply rerun this command on the object storage proxy server for all disks on the new object storage node. Once this has been done, rebalance the ring and copy the resulting ring files to the other storage nodes."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:989(para)
-msgid "If your new object storage node has a different number of disks than the original nodes have, the command to add the new node is different from the original commands. These parameters vary from environment to environment."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:999(title)
-msgid "Replacing Components"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:1001(para)
-msgid "Failures of hardware are common in large-scale deployments such as an infrastructure cloud. Consider your processes and balance time saving against availability. For example, an Object Storage cluster can easily live with dead disks in it for some period of time if it has sufficient capacity. Or, if your compute installation is not full, you could consider live migrating instances off a host with a RAM failure until you have time to deal with the problem."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:1016(para)
-msgid "Almost all OpenStack components have an underlying database to store persistent information. Usually this database is MySQL. Normal MySQL administration is applicable to these databases. OpenStack does not configure the databases out of the ordinary. Basic administration includes performance tweaking, high availability, backup, recovery, and repairing. For more information, see a standard MySQL administration guide."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:1032(para)
-msgid "You can perform a couple of tricks with the database to either more quickly retrieve information or fix a data inconsistency error—for example, an instance was terminated, but the status was not updated in the database. These tricks are discussed throughout this book."
-msgstr ""
-
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1040(title) -msgid "Database Connectivity" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1042(para) -msgid "Review the component's configuration file to see how each OpenStack component accesses its corresponding database. Look for either sql_connection or simply connection. The following command uses grep to display the SQL connection string for nova, glance, cinder, and keystone:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1048(emphasis) -msgid "grep -hE \"connection ?=\" /etc/nova/nova.conf /etc/glance/glance-*.conf /etc/cinder/cinder.conf /etc/keystone/keystone.conf" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1056(para) -msgid "The connection strings take this format:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1064(title) -msgid "Performance and Optimizing" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1066(para) -msgid "As your cloud grows, MySQL is utilized more and more. If you suspect that MySQL might be becoming a bottleneck, you should start researching MySQL optimization. The MySQL manual has an entire section dedicated to this topic: Optimization Overview." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1078(title) -msgid "HDWMY" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1080(para) -msgid "Here's a quick list of various to-do items for each hour, day, week, month, and year. Please note that these tasks are neither required nor definitive but helpful ideas:maintenance/debuggingschedule of tasks" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1091(title) -msgid "Hourly" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1095(para) -msgid "Check your monitoring system for alerts and act on them." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1100(para) -msgid "Check your ticket queue for new tickets." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1108(title) -msgid "Daily" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1112(para) -msgid "Check for instances in a failed or weird state and investigate why." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1117(para) -msgid "Check for security patches and apply them as needed." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1125(title) -msgid "Weekly" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1131(para) -msgid "User quotas" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1135(para) -msgid "Disk space" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1139(para) -msgid "Image usage" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1143(para) -msgid "Large instances" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1147(para) -msgid "Network usage (bandwidth and IP usage)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1129(para) -msgid "Check cloud usage: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1153(para) -msgid "Verify your alert mechanisms are still working." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1161(title) -msgid "Monthly" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1165(para) -msgid "Check usage and trends over the past month." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1169(para) -msgid "Check for user accounts that should be removed." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1173(para) -msgid "Check for operator accounts that should be removed." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1181(title) -msgid "Quarterly" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1185(para) -msgid "Review usage and trends over the past quarter." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1189(para) -msgid "Prepare any quarterly reports on usage and statistics." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1193(para) -msgid "Review and plan any necessary cloud additions." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1197(para) -msgid "Review and plan any major OpenStack upgrades." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1205(title) -msgid "Semiannually" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1209(para) -msgid "Upgrade OpenStack." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1213(para) -msgid "Clean up after an OpenStack upgrade (any unused or new services to be aware of?)." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1223(title) -msgid "Determining Which Component Is Broken" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1225(para) -msgid "OpenStack's collection of different components interact with each other strongly. For example, uploading an image requires interaction from nova-api, glance-api, glance-registry, keystone, and potentially swift-proxy. As a result, it is sometimes difficult to determine exactly where problems lie. Assisting in this is the purpose of this section.logging/monitoringtailing logsmaintenance/debuggingdetermining component affected" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1244(title) -msgid "Tailing Logs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1246(para) -msgid "The first place to look is the log file related to the command you are trying to run. For example, if nova list is failing, try tailing a nova log file and running the command again:tailing logs" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1253(para) ./doc/openstack-ops/ch_ops_maintenance.xml:1268(para) -msgid "Terminal 1:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1257(para) ./doc/openstack-ops/ch_ops_maintenance.xml:1272(para) -msgid "Terminal 2:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1261(para) -msgid "Look for any errors or traces in the log file. For more information, see ." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1264(para) -msgid "If the error indicates that the problem is with another component, switch to tailing that component's log file. For example, if nova cannot access glance, look at the glance-api log:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1276(para) -msgid "Wash, rinse, and repeat until you find the core cause of the problem." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1283(title) -msgid "Running Daemons on the CLI" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1285(para) -msgid "Unfortunately, sometimes the error is not apparent from the log files. In this case, switch tactics and use a different command; maybe run the service directly on the command line. 
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:1283(title)
-msgid "Running Daemons on the CLI"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:1285(para)
-msgid "Unfortunately, sometimes the error is not apparent from the log files. In this case, switch tactics and use a different command; maybe run the service directly on the command line. For example, if the glance-api service refuses to start and stay running, try launching the daemon from the command line:"
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:1300(para)
-msgid "The -H flag is required when running the daemons with sudo because some daemons will write files relative to the user's home directory, and this write may fail if -H is left off."
-msgstr ""
-
-#: ./doc/openstack-ops/ch_ops_maintenance.xml:1299(para)
-msgid "This might print the error and cause of the problem."
-msgstr ""
-
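-# Reference example (illustrative): launching the daemon in the foreground
-# as its service user, per the -H note above:
-#
-#   # sudo -u glance -H glance-api
-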
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1381(title) -msgid "OpenStack Block Storage service" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1382(para) -msgid "OpenStack Block Storage service is similar to the Image service, so start by checking Identity-related services, and the back-end storage. Additionally, both the Block Storage and Image services rely on AMQP and SQL functionality, so consider these when debugging." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1392(title) -msgid "OpenStack Compute service" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1393(para) -msgid "Services related to OpenStack Compute are normally fairly fast and rely on a couple of backend services: Identity for authentication and authorization), and AMQP for interoperability. Any slowness related to services is normally related to one of these. Also, as with all other services, SQL is used extensively." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1404(title) -msgid "OpenStack Networking service" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1405(para) -msgid "Slowness in the OpenStack Networking service can be caused by services that it relies upon, but it can also be related to either physical or virtual networking. For example: network namespaces that do not exist or are not tied to interfaces correctly; DHCP daemons that have hung or are not running; a cable being physically disconnected; a switch not being configured correctly. When debugging Networking service problems, begin by verifying all physical networking functionality (switch configuration, physical cabling, etc.). After the physical networking is verified, check to be sure all of the Networking services are running (neutron-server, neutron-dhcp-agent, etc.), then check on AMQP and SQL back ends." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1422(title) -msgid "AMQP broker" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1423(para) -msgid "Regardless of which AMQP broker you use, such as RabbitMQ, there are common issues which not only slow down operations, but can also cause real problems. Sometimes messages queued for services stay on the queues and are not consumed. This can be due to dead or stagnant services and can be commonly cleared up by either restarting the AMQP-related services or the OpenStack service in question." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1435(title) -msgid "SQL back end" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1436(para) -msgid "Whether you use SQLite or an RDBMS (such as MySQL), SQL interoperability is essential to a functioning OpenStack environment. A large or fragmented SQLite file can cause slowness when using files as a back end. A locked or long-running query can cause delays for most RDBMS services. In this case, do not kill the query immediately, but look into it to see if it is a problem with something that is hung, or something that is just taking a long time to run and needs to finish on its own. The administration of an RDBMS is outside the scope of this document, but it should be noted that a properly functioning RDBMS is essential to most OpenStack services." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1457(title) -msgid "Uninstalling" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1459(para) -msgid "While we'd always recommend using your automated deployment system to reinstall systems from scratch, sometimes you do need to remove OpenStack from a system the hard way. Here's how:uninstall operationmaintenance/debugginguninstalling" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1472(para) -msgid "Remove all packages." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1476(para) -msgid "Remove remaining files." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1480(para) -msgid "Remove databases." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_maintenance.xml:1484(para) -msgid "These steps depend on your underlying distribution, but in general you should be looking for \"purge\" commands in your package manager, like aptitude purge ~c $package. Following this, you can look for orphaned files in the directories referenced throughout this guide. To uninstall the database properly, refer to the manual appropriate for the product in use." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:15(title) -msgid "Scaling" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:17(para) -msgid "Whereas traditional applications required larger hardware to scale (\"vertical scaling\"), cloud-based applications typically request more, discrete hardware (\"horizontal scaling\"). If your cloud is successful, eventually you must add resources to meet the increasing demand.scalingvertical vs. horizontal" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:27(para) -msgid "To suit the cloud paradigm, OpenStack itself is designed to be horizontally scalable. Rather than switching to larger servers, you procure more servers and simply install identically configured services. Ideally, you scale out and load balance among groups of functionally identical services (for example, compute nodes or nova-api nodes), that communicate on a message bus." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:35(title) -msgid "The Starting Point" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:37(para) -msgid "Determining the scalability of your cloud and how to improve it is an exercise with many variables to balance. No one solution meets everyone's scalability goals. However, it is helpful to track a number of metrics. Since you can define virtual hardware templates, called \"flavors\" in OpenStack, you can start to make scaling decisions based on the flavors you'll provide. These templates define sizes for memory in RAM, root disk size, amount of ephemeral data disk space available, and number of cores for starters.virtual machine (VM)hardwarevirtual hardwareflavorscalingmetrics for" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:58(para) -msgid "The default OpenStack flavors are shown in ." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:64(caption) -msgid "OpenStack default flavors" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:80(th) -msgid "Virtual cores" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:82(th) -msgid "Memory" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:92(para) -msgid "m1.tiny" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:96(para) -msgid "512 MB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:98(para) -msgid "1 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:100(para) -msgid "0 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:104(para) -msgid "m1.small" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:108(para) -msgid "2 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:110(para) ./doc/openstack-ops/ch_arch_scaling.xml:122(para) ./doc/openstack-ops/ch_arch_scaling.xml:134(para) ./doc/openstack-ops/ch_arch_scaling.xml:146(para) -msgid "10 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:112(para) -msgid "20 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:116(para) -msgid "m1.medium" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:120(para) -msgid "4 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:124(para) -msgid "40 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:128(para) -msgid "m1.large" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:132(para) -msgid "8 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:136(para) -msgid "80 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:140(para) -msgid "m1.xlarge" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:142(para) -msgid "8" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:144(para) -msgid "16 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:148(para) -msgid "160 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:156(para) -msgid "The number of virtual machines (VMs) you expect to run, ((overcommit fraction " -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:162(para) -msgid "How much storage is required (flavor disk size " -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:153(para) -msgid "The starting point for most is the core count of your cloud. By applying some ratios, you can gather information about: You can use these ratios to determine how much additional infrastructure you need to support your cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:168(para) -msgid "Here is an example using the ratios for gathering scalability information for the number of VMs expected as well as the storage needed. The following numbers support (200 / 2) 16 = 1600 VM instances and require 80 TB of storage for /var/lib/nova/instances:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:175(para) -msgid "200 physical cores." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:179(para) -msgid "Most instances are size m1.medium (two virtual cores, 50 GB of storage)." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:184(para) -msgid "Default CPU overcommit ratio (cpu_allocation_ratio in nova.conf) of 16:1." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:189(para) -msgid "However, you need more than the core count alone to estimate the load that the API services, database servers, and queue servers are likely to encounter. You must also consider the usage patterns of your cloud." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:194(para) -msgid "As a specific example, compare a cloud that supports a managed web-hosting platform with one running integration tests for a development project that creates one VM per code commit. In the former, the heavy work of creating a VM happens only every few months, whereas the latter puts constant heavy load on the cloud controller. You must consider your average VM lifetime, as a larger number generally means less load on the cloud controller.cloud controllersscalability and" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:206(para) -msgid "Aside from the creation and termination of VMs, you must consider the impact of users accessing the service—particularly on nova-api and its associated database. Listing instances garners a great deal of information and, given the frequency with which users run this operation, a cloud with a large number of users can increase the load significantly. This can occur even without their knowledge—leaving the OpenStack dashboard instances tab open in the browser refreshes the list of VMs every 30 seconds." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:215(para) -msgid "After you consider these factors, you can determine how many cloud controller cores you require. A typical eight core, 8 GB of RAM server is sufficient for up to a rack of compute nodes — given the above caveats." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:220(para) -msgid "You must also consider key hardware specifications for the performance of user VMs, as well as budget and performance needs, including storage performance (spindles/core), memory availability (RAM/core), network bandwidthbandwidthhardware specifications and (Gbps/core), and overall CPU performance (CPU/core)." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:230(para) -msgid "For a discussion of metric tracking, including how to extract metrics from your cloud, see ." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:237(title) -msgid "Adding Cloud Controller Nodes" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:239(para) -msgid "You can facilitate the horizontal expansion of your cloud by adding nodes. Adding compute nodes is straightforward—they are easily picked up by the existing installation. However, you must consider some important points when you design your cluster to be highly available.compute nodesaddinghigh availabilityconfiguration optionshigh availabilitycloud controller nodesaddingscalingadding cloud controller nodes" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:263(para) -msgid "Recall that a cloud controller node runs several different services. You can install services that communicate only using the message queue internally—nova-scheduler and nova-console—on a new server for expansion. However, other integral parts require more care." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:269(para) -msgid "You should load balance user-facing services such as dashboard, nova-api, or the Object Storage proxy. Use any standard HTTP load-balancing method (DNS round robin, hardware load balancer, or software such as Pound or HAProxy). One caveat with dashboard is the VNC proxy, which uses the WebSocket protocol—something that an L7 load balancer might struggle with. See also Horizon session storage." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:279(para) -msgid "You can configure some services, such as nova-api and glance-api, to use multiple processes by changing a flag in their configuration file—allowing them to share work between multiple cores on the one machine." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:285(para) -msgid "Several options are available for MySQL load balancing, and the supported AMQP brokers have built-in clustering support. Information on how to configure these and many of the other services can be found in .Advanced Message Queuing Protocol (AMQP)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:296(title) -msgid "Segregating Your Cloud" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:298(para) -msgid "When you want to offer users different regions to provide legal considerations for data storage, redundancy across earthquake fault lines, or for low-latency API calls, you segregate your cloud. Use one of the following OpenStack methods to segregate your cloud: cells, regions, availability zones, or host aggregates.segregation methodsscalingcloud segregation" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:312(para) -msgid "Each method provides different functionality and can be best divided into two groups:" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:317(para) -msgid "Cells and regions, which segregate an entire cloud and result in running separate Compute deployments." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:322(para) -msgid "Availability zones and host aggregates, which merely divide a single Compute deployment." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:328(para) -msgid " provides a comparison view of each segregation method currently provided by OpenStack Compute.endpointsAPI endpoint" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:337(caption) -msgid "OpenStack segregation methods" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:343(th) -msgid "Cells" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:345(th) -msgid "Regions" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:347(th) -msgid "Availability zones" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:349(th) -msgid "Host aggregates" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:355(emphasis) -msgid "Use when you need" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:358(para) -msgid "A single API endpoint for compute, or you require a second level of scheduling." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:361(para) -msgid "Discrete regions with separate API endpoints and no coordination between regions." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:364(para) -msgid "Logical separation within your nova deployment for physical isolation or redundancy." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:367(para) -msgid "To schedule a group of hosts with common features." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:372(emphasis) -msgid "Example" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:374(para) -msgid "A cloud with multiple sites where you can schedule VMs \"anywhere\" or on a particular site." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:377(para) -msgid "A cloud with multiple sites, where you schedule VMs to a particular site and you want a shared infrastructure." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:380(para) -msgid "A single-site cloud with equipment fed by separate power supplies." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:383(para) -msgid "Scheduling to hosts with trusted hardware support." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:388(emphasis) -msgid "Overhead" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:390(para) -msgid "Considered experimental." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:390(para) -msgid "A new service, nova-cells." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:391(para) -msgid "Each cell has a full nova installation except nova-api." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:394(para) -msgid "A different API endpoint for every region." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:395(para) -msgid "Each region has a full nova installation." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:398(para) ./doc/openstack-ops/ch_arch_scaling.xml:400(para) -msgid "Configuration changes to nova.conf." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:404(emphasis) -msgid "Shared services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:407(para) ./doc/openstack-ops/ch_arch_scaling.xml:409(para) ./doc/openstack-ops/ch_arch_scaling.xml:411(para) ./doc/openstack-ops/ch_arch_scaling.xml:413(para) -msgid "Keystone" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:407(code) -msgid "nova-api" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:411(para) ./doc/openstack-ops/ch_arch_scaling.xml:413(para) -msgid "All nova services" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:419(title) -msgid "Cells and Regions" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:421(para) -msgid "OpenStack Compute cells are designed to allow running the cloud in a distributed fashion without having to use more complicated technologies, or be invasive to existing nova installations. Hosts in a cloud are partitioned into groups called cells. Cells are configured in a tree. The top-level cell (\"API cell\") has a host that runs the nova-api service, but no nova-compute services. Each child cell runs all of the other typical nova-* services found in a regular installation, except for the nova-api service. Each cell has its own message queue and database service and also runs nova-cells, which manages the communication between the API cell and child cells.scalingcells and regionscellscloud segregationregion" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:444(para) -msgid "This allows for a single API server being used to control access to multiple cloud installations. Introducing a second level of scheduling (the cell selection), in addition to the regular nova-scheduler selection of hosts, provides greater flexibility to control where virtual machines are run." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:450(para) -msgid "Unlike having a single API endpoint, regions have a separate API endpoint per installation, allowing for a more discrete separation. Users wanting to run instances across sites have to explicitly select a region. However, the additional complexity of a running a new service is not required." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:456(para) -msgid "The OpenStack dashboard (horizon) can be configured to use multiple regions. This can be configured through the parameter." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:461(title) -msgid "Availability Zones and Host Aggregates" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:463(para) -msgid "You can use availability zones, host aggregates, or both to partition a nova deployment.scalingavailability zones" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:470(para) -msgid "Availability zones are implemented through and configured in a similar way to host aggregates." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:473(para) -msgid "However, you use them for different reasons." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:476(title) -msgid "Availability zone" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:478(para) -msgid "This enables you to arrange OpenStack compute hosts into logical groups and provides a form of physical isolation and redundancy from other availability zones, such as by using a separate power supply or network equipment.availability zone" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:485(para) -msgid "You define the availability zone in which a specified compute host resides locally on each server. An availability zone is commonly used to identify a set of servers that have a common attribute. For instance, if some of the racks in your data center are on a separate power source, you can put servers in those racks in their own availability zone. Availability zones can also help separate different classes of hardware." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:493(para) -msgid "When users provision resources, they can specify from which availability zone they want their instance to be built. This allows cloud consumers to ensure that their application resources are spread across disparate machines to achieve high availability in the event of hardware failure." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:501(title) -msgid "Host aggregates zone" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:503(para) -msgid "This enables you to partition OpenStack Compute deployments into logical groups for load balancing and instance distribution. You can use host aggregates to further partition an availability zone. For example, you might use host aggregates to partition an availability zone into groups of hosts that either share common resources, such as storage and network, or have a special property, such as trusted computing hardware.scalinghost aggregatehost aggregate" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:517(para) -msgid "A common use of host aggregates is to provide information for use with the nova-scheduler. For example, you might use a host aggregate to group a set of hosts that share specific flavors or images." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:522(para) -msgid "The general case for this is setting key-value pairs in the aggregate metadata and matching key-value pairs in flavor's extra_specs metadata. The AggregateInstanceExtraSpecsFilter in the filter scheduler will enforce that instances be scheduled only on hosts in aggregates that define the same key to the same value." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:528(para) -msgid "An advanced use of this general concept allows different flavor types to run with different CPU and RAM allocation ratios so that high-intensity computing loads and low-intensity development and testing systems can share the same cloud without either starving the high-use systems or wasting resources on low-utilization systems. This works by setting metadata in your host aggregates and matching extra_specs in your flavor types." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:537(para) -msgid "The first step is setting the aggregate metadata keys cpu_allocation_ratio and ram_allocation_ratio to a floating-point value. The filter schedulers AggregateCoreFilter and AggregateRamFilter will use those values rather than the global defaults in nova.conf when scheduling to hosts in the aggregate. It is important to be cautious when using this feature, since each host can be in multiple aggregates but should have only one allocation ratio for each resources. It is up to you to avoid putting a host in multiple aggregates that define different values for the same resource." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:551(para) -msgid "This is the first half of the equation. To get flavor types that are guaranteed a particular ratio, you must set the extra_specs in the flavor type to the key-value pair you want to match in the aggregate. For example, if you define extra_specscpu_allocation_ratio to \"1.0\", then instances of that type will run in aggregates only where the metadata key cpu_allocation_ratio is also defined as \"1.0.\" In practice, it is better to define an additional key-value pair in the aggregate metadata to match on rather than match directly on cpu_allocation_ratio or core_allocation_ratio. This allows better abstraction. For example, by defining a key overcommit and setting a value of \"high,\" \"medium,\" or \"low,\" you could then tune the numeric allocation ratios in the aggregates without also needing to change all flavor types relating to them." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:570(para) -msgid "Previously, all services had an availability zone. Currently, only the nova-compute service has its own availability zone. Services such as nova-scheduler, nova-network, and nova-conductor have always spanned all availability zones." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:581(para) -msgid "nova host-list (os-hosts)" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:585(para) -msgid "euca-describe-availability-zones verbose" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:589(para) -msgid "nova-manage service list" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:577(para) -msgid "When you run any of the following operations, the services appear in their own internal availability zone (CONF.internal_service_availability_zone): The internal availability zone is hidden in euca-describe-availability_zones (nonverbose)." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:594(para) -msgid "CONF.node_availability_zone has been renamed to CONF.default_availability_zone and is used only by the nova-api and nova-scheduler services." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:599(para) -msgid "CONF.node_availability_zone still works but is deprecated." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:607(title) -msgid "Scalable Hardware" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:609(para) -msgid "While several resources already exist to help with deploying and installing OpenStack, it's very important to make sure that you have your deployment planned out ahead of time. This guide presumes that you have at least set aside a rack for the OpenStack cloud but also offers suggestions for when and what to scale." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:616(title) -msgid "Hardware Procurement" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:618(para) -msgid "“The Cloud” has been described as a volatile environment where servers can be created and terminated at will. While this may be true, it does not mean that your servers must be volatile. Ensuring that your cloud's hardware is stable and configured correctly means that your cloud environment remains up and running. Basically, put effort into creating a stable hardware environment so that you can host a cloud that users may treat as unstable and volatile.serversavoiding volatility inhardwarescalability planningscalinghardware procurement" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:638(para) -msgid "OpenStack can be deployed on any hardware supported by an OpenStack-compatible Linux distribution." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:641(para) -msgid "Hardware does not have to be consistent, but it should at least have the same type of CPU to support instance migration." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:644(para) -msgid "The typical hardware recommended for use with OpenStack is the standard value-for-money offerings that most hardware vendors stock. It should be straightforward to divide your procurement into building blocks such as \"compute,\" \"object storage,\" and \"cloud controller,\" and request as many of these as you need. Alternatively, should you be unable to spend more, if you have existing servers—provided they meet your performance requirements and virtualization technology—they are quite likely to be able to support OpenStack." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:655(title) -msgid "Capacity Planning" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:657(para) -msgid "OpenStack is designed to increase in size in a straightforward manner. Taking into account the considerations that we've mentioned in this chapter—particularly on the sizing of the cloud controller—it should be possible to procure additional compute or object storage nodes as needed. New nodes do not need to be the same specification, or even vendor, as existing nodes.capabilityscaling andweightcapacity planningscalingcapacity planning" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:676(para) -msgid "For compute nodes, nova-scheduler will take care of differences in sizing having to do with core count and RAM amounts; however, you should consider that the user experience changes with differing CPU speeds. When adding object storage nodes, a weight should be specified that reflects the capability of the node." -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:683(para) -msgid "Monitoring the resource usage and user growth will enable you to know when to procure. details some useful metrics." 
-msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:689(title) -msgid "Burn-in Testing" -msgstr "" - -#: ./doc/openstack-ops/ch_arch_scaling.xml:691(para) -msgid "The chances of failure for the server's hardware are high at the start and the end of its life. As a result, dealing with hardware failures while in production can be avoided by appropriate burn-in testing to attempt to trigger the early-stage failures. The general principle is to stress the hardware to its limits. Examples of burn-in tests include running a CPU or disk benchmark for several days.testingburn-in testingtroubleshootingburn-in testingburn-in testingscalingburn-in testing" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:12(title) -msgid "Customization" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:14(para) -msgid "OpenStack might not do everything you need it to do out of the box. To add a new feature, you can follow different paths.customizationpaths available" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:22(para) -msgid "To take the first path, you can modify the OpenStack code directly. Learn how to contribute, follow the code review workflow, make your changes, and contribute them back to the upstream OpenStack project. This path is recommended if the feature you need requires deep integration with an existing project. The community is always open to contributions and welcomes new functionality that follows the feature-development guidelines. This path still requires you to use DevStack for testing your feature additions, so this chapter walks you through the DevStack environment.OpenStack communitycustomization and" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:37(para) -msgid "For the second path, you can write new features and plug them in using changes to a configuration file. If the project where your feature would need to reside uses the Python Paste framework, you can create middleware for it and plug it in through configuration. There may also be specific ways of customizing a project, such as creating a new scheduler driver for Compute or a custom tab for the dashboard." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:44(para) -msgid "This chapter focuses on the second path for customizing OpenStack by providing two examples for writing new features. The first example shows how to modify Object Storage (swift) middleware to add a new feature, and the second example provides a new scheduler feature for OpenStack Compute (nova). To customize OpenStack this way you need a development environment. The best way to get an environment up and running quickly is to run DevStack within your cloud." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:53(title) -msgid "Create an OpenStack Development Environment" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:55(para) -msgid "To create a development environment, you can use DevStack. DevStack is essentially a collection of shell scripts and configuration files that builds an OpenStack development environment for you. You use it to create such an environment for developing a new feature.customizationdevelopment environment creation fordevelopment environments, creatingDevStackdevelopment environment creation" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:71(para) -msgid "You can find all of the documentation at the DevStack website." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:76(title) -msgid "To run DevStack on an instance in your OpenStack cloud:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:80(para) -msgid "Boot an instance from the dashboard or the nova command-line interface (CLI) with the following parameters:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:85(para) -msgid "Name: devstack" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:89(para) -msgid "Image: Ubuntu 14.04 LTS" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:93(para) -msgid "Memory Size: 4 GB RAM" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:97(para) -msgid "Disk Size: minimum 5 GB" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:101(para) -msgid "If you are using the nova client, specify --flavor 3 for the nova boot command to get adequate memory and disk sizes." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:107(para) -msgid "Log in and set up DevStack. Here's an example of the commands you can use to set up DevStack on a virtual machine:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:114(replaceable) -msgid "username" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:114(replaceable) -msgid "my.instance.ip.address" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:112(para) -msgid "Log in to the instance: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:118(para) -msgid "Update the virtual machine's operating system: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:124(para) -msgid "Install git: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:130(para) -msgid "Clone the devstack repository: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:137(para) -msgid "Change to the devstack repository: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:145(para) -msgid "(Optional) If you've logged in to your instance as the root user, you must create a \"stack\" user; otherwise you'll run into permission issues. If you've logged in as a user other than root, you can skip these steps:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:152(para) -msgid "Run the DevStack script to create the stack user:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:158(para) -msgid "Give ownership of the devstack directory to the stack user:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:165(para) -msgid "Set some permissions you can use to view the DevStack screen later:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:172(para) -msgid "Switch to the stack user:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:180(para) -msgid "Edit the local.conf configuration file that controls what DevStack will deploy. Copy the example local.conf file at the end of this section ():" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:189(para) -msgid "Run the stack script that will install OpenStack: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:195(para) -msgid "When the stack script is done, you can open the screen session it started to view all of the running OpenStack services: " -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:202(para) -msgid "Press CtrlA followed by 0 to go to the first screen window." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:214(para) -msgid "The stack.sh script takes a while to run. Perhaps you can take this opportunity to join the OpenStack Foundation." 
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:221(para) -msgid "Screen is a useful program for viewing many related services at once. For more information, see the GNU screen quick reference." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:229(para) -msgid "Now that you have an OpenStack development environment, you're free to hack around without worrying about damaging your production deployment. provides a working environment for running OpenStack Identity, Compute, Block Storage, Image service, the OpenStack dashboard, and Object Storage as the starting point." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:236(title) -msgid "local.conf" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:253(title) -msgid "Customizing Object Storage (Swift) Middleware" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:255(para) -msgid "OpenStack Object Storage, known as swift when reading the code, is based on the Python Paste framework. The best introduction to its architecture is A Do-It-Yourself Framework. Because of the swift project's use of this framework, you are able to add features to a project by placing some custom code in a project's pipeline without having to change any of the core code.Paste frameworkPythonswiftswift middlewareObject Storagecustomization ofcustomizationObject StorageDevStackcustomizing Object Storage (swift)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:284(para) -msgid "Imagine a scenario where you have public access to one of your containers, but what you really want is to restrict access to that to a set of IPs based on a whitelist. In this example, we'll create a piece of middleware for swift that allows access to a container from only a set of IP addresses, as determined by the container's metadata items. Only those IP addresses that you explicitly whitelist using the container's metadata will be able to access the container." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:293(para) -msgid "This example is for illustrative purposes only. It should not be used as a container IP whitelist solution without further development and extensive security testing.security issuesmiddleware example" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:302(para) -msgid "When you join the screen session that stack.sh starts with screen -r stack, you see a screen for each service running, which can be a few or several, depending on how many services you configured DevStack to run." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:307(para) -msgid "The asterisk * indicates which screen window you are viewing. 
This example shows we are viewing the key (for keystone) screen window:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:312(para) -msgid "The purposes of the screen windows are as follows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:316(code) ./doc/openstack-ops/ch_ops_customize.xml:795(code) -msgid "shell" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:319(para) ./doc/openstack-ops/ch_ops_customize.xml:798(para) -msgid "A shell where you can get some work done" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:324(code) -msgid "key*" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:327(para) ./doc/openstack-ops/ch_ops_customize.xml:806(para) -msgid "The keystone service" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:335(para) ./doc/openstack-ops/ch_ops_customize.xml:814(para) -msgid "The horizon dashboard web application" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:340(code) -msgid "s-{name}" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:343(para) -msgid "The swift services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:349(title) -msgid "To create the middleware and plug it in through Paste configuration:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:352(para) -msgid "All of the code for OpenStack lives in /opt/stack. Go to the swift directory in the shell screen and edit your middleware module." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:357(para) -msgid "Change to the directory where Object Storage is installed:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:364(para) -msgid "Create the ip_whitelist.py Python source code file:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:371(para) -msgid "Copy the code in into ip_whitelist.py. The following code is a middleware example that restricts access to a container based on IP address as explained at the beginning of the section. Middleware passes the request on to another application. This example uses the swift \"swob\" library to wrap Web Server Gateway Interface (WSGI) requests and responses into objects for swift to interact with. When you're done, save and close the file." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:381(title) -msgid "ip_whitelist.py" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:478(para) -msgid "There is a lot of useful information in env and conf that you can use to decide what to do with the request. To find out more about what properties are available, you can insert the following log statement into the __init__ method:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:486(para) -msgid "and the following log statement into the __call__ method:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:493(para) -msgid "To plug this middleware into the swift Paste pipeline, you edit one configuration file, /etc/swift/proxy-server.conf:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:501(para) -msgid "Find the [filter:ratelimit] section in /etc/swift/proxy-server.conf, and copy in the following configuration section after it:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:517(para) -msgid "Find the [pipeline:main] section in /etc/swift/proxy-server.conf, and add ip_whitelist after ratelimit to the list like so. When you're done, save and close the file:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:527(para) -msgid "Restart the swift proxy service to make swift use your middleware. 
Start by switching to the swift-proxy screen:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:533(para) -msgid "Press Ctrl+A followed by 3." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:541(para) ./doc/openstack-ops/ch_ops_customize.xml:1040(para) -msgid "Press Ctrl+C to kill the service." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:549(para) ./doc/openstack-ops/ch_ops_customize.xml:1048(para) -msgid "Press Up Arrow to bring up the last command." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:553(para) ./doc/openstack-ops/ch_ops_customize.xml:1052(para) -msgid "Press Enter to run it." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:559(para) -msgid "Test your middleware with the swift CLI. Start by switching to the shell screen and finish by switching back to the swift-proxy screen to check the log output:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:565(para) -msgid "Press Ctrl+A followed by 0." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:573(para) -msgid "Make sure you're in the devstack directory:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:580(para) -msgid "Source openrc to set up your environment variables for the CLI:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:587(para) -msgid "Create a container called middleware-test:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:594(para) -msgid "Press Ctrl+A followed by 3 to check the log output." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:605(para) -msgid "Among the log statements you'll see the lines:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:610(para) -msgid "These two statements are produced by our middleware and show that the request was sent from our DevStack instance and was allowed." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:616(para) -msgid "Test the middleware from outside DevStack on a remote machine that has access to your DevStack instance:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:621(para) -msgid "Install the keystone and swift clients on your local machine:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:628(para) -msgid "Attempt to list the objects in the middleware-test container:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:641(para) -msgid "Press Ctrl+A followed by 3 to check the log output. Look at the swift log statements again, and among the log statements, you'll see the lines:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:656(para) -msgid "Here we can see that the request was denied because the remote IP address wasn't in the set of allowed IPs." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:661(para) -msgid "Back in your DevStack instance on the shell screen, add some metadata to your container to allow the request from the remote machine:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:667(para) -msgid "Press Ctrl+A followed by 0." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:675(para) -msgid "Add metadata to the container to allow the IP:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:681(para) -msgid "Now try the command from Step 10 again and it succeeds. There are no objects in the container, so there is nothing to list; however, there is also no error to report."
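The ip_whitelist.py listing and the console steps above are elided in this template. Below is a simplified, framework-free sketch of the idea for orientation only; the guide's real middleware uses swift's swob library and reads the allowed addresses from the container's metadata rather than from static configuration:

    # ip_whitelist.py -- simplified sketch, not the guide's full listing
    class IPWhitelistMiddleware(object):
        def __init__(self, app, conf):
            self.app = app
            # Assumption: a space-separated allow list in the filter config;
            # the real middleware consults container metadata per request.
            self.allowed_ips = set(conf.get('allowed_ips', '127.0.0.1').split())

        def __call__(self, env, start_response):
            remote_ip = env.get('REMOTE_ADDR', '')
            if remote_ip not in self.allowed_ips:
                start_response('403 Forbidden',
                               [('Content-Type', 'text/plain')])
                return [('IP %s is not allowed\n' % remote_ip).encode('utf-8')]
            return self.app(env, start_response)

    def filter_factory(global_conf, **local_conf):
        conf = dict(global_conf, **local_conf)

        def ip_whitelist(app):
            return IPWhitelistMiddleware(app, conf)
        return ip_whitelist

The Paste section added to /etc/swift/proxy-server.conf presumably points at this factory, along the lines of:

    [filter:ip_whitelist]
    paste.filter_factory = swift.common.middleware.ip_whitelist:filter_factory

And a sketch of the elided test sequence; the metadata key matches the sketch middleware and is an assumption, with 203.0.113.5 standing in for your remote machine's address:

    $ cd devstack && source openrc
    $ swift post middleware-test
    $ swift list middleware-test        # from the remote machine: denied at first
    $ swift post middleware-test -m allowed-ips:203.0.113.5
    $ swift list middleware-test        # now succeeds (empty listing, no error)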
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:690(para) -msgid "Functional testing like this is not a replacement for proper unit and integration testing, but it serves to get you started.testingfunctional testingfunctional testing" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:701(para) -msgid "You can follow a similar pattern in other projects that use the Python Paste framework. Simply create a middleware module and plug it in through configuration. The middleware runs in sequence as part of that project's pipeline and can call out to other services as necessary. No project core code is touched. Look for a pipeline value in the project's conf or ini configuration files in /etc/<project> to identify projects that use Paste." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:710(para) -msgid "When your middleware is done, we encourage you to open source it and let the community know on the OpenStack mailing list. Perhaps others need the same functionality. They can use your code, provide feedback, and possibly contribute. If enough support exists for it, perhaps you can propose that it be added to the official swift middleware." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:719(title) -msgid "Customizing the OpenStack Compute (nova) Scheduler" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:721(para) -msgid "Many OpenStack projects allow for customization of specific features using a driver architecture. You can write a driver that conforms to a particular interface and plug it in through configuration. For example, you can easily plug in a new scheduler for Compute. The existing schedulers for Compute are feature full and well documented at Scheduling. However, depending on your user's use cases, the existing schedulers might not meet your requirements. You might need to create a new scheduler.customizationOpenStack Compute (nova) Schedulerschedulerscustomization ofDevStackcustomizing OpenStack Compute (nova) scheduler" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:743(para) -msgid "To create a scheduler, you must inherit from the class nova.scheduler.driver.Scheduler. Of the five methods that you can override, you must override the two methods marked with an asterisk (*) below:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:750(code) -msgid "update_service_capabilities" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:754(code) -msgid "hosts_up" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:758(code) -msgid "group_hosts" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:762(para) -msgid "* schedule_run_instance" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:766(para) -msgid "* select_destinations" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:770(para) -msgid "To demonstrate customizing OpenStack, we'll create an example of a Compute scheduler that randomly places an instance on a subset of hosts, depending on the originating IP address of the request and the prefix of the hostname. Such an example could be useful when you have a group of users on a subnet and you want all of their instances to start within some subset of your hosts." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:778(para) -msgid "This example is for illustrative purposes only. 
It should not be used as a scheduler for Compute without further development and testing." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:787(para) -msgid "When you join the screen session that stack.sh starts with screen -r stack, you are greeted with many screen windows:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:819(code) -msgid "n-{name}" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:822(para) -msgid "The nova services" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:827(code) -msgid "n-sch" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:830(para) -msgid "The nova scheduler service" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:836(title) -msgid "To create the scheduler and plug it in through configuration:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:840(para) -msgid "The code for OpenStack lives in /opt/stack, so go to the nova directory and edit your scheduler module. Change to the directory where nova is installed:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:849(para) -msgid "Create the ip_scheduler.py Python source code file:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:856(para) -msgid "The code in is a driver that will schedule servers to hosts based on IP address as explained at the beginning of the section. Copy the code into ip_scheduler.py. When you're done, save and close the file." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:863(title) -msgid "ip_scheduler.py" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:991(para) -msgid "There is a lot of useful information in context, request_spec, and filter_properties that you can use to decide where to schedule the instance. To find out more about what properties are available, you can insert the following log statements into the schedule_run_instance method of the scheduler above:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1004(para) -msgid "To plug this scheduler into nova, edit one configuration file, /etc/nova/nova.conf:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1011(para) -msgid "Find the scheduler_driver config and change it like so:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1018(para) -msgid "Restart the nova scheduler service to make nova use your scheduler. Start by switching to the n-sch screen:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1023(para) -msgid "Press Ctrl+A followed by 9." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1031(para) -msgid "Press Ctrl+A followed by N until you reach the n-sch screen." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1058(para) -msgid "Test your scheduler with the nova CLI. Start by switching to the shell screen and finish by switching back to the n-sch screen to check the log output:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1064(para) -msgid "Press Ctrl+A followed by 0."
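The configuration value is elided in the step above; given the module created in this section, it presumably reads scheduler_driver = nova.scheduler.ip_scheduler.IPScheduler in /etc/nova/nova.conf. The ip_scheduler.py listing is also elided; as orientation only, a standalone Python sketch of the selection rule it implements (the subnet-to-prefix map and host names are illustrative, and the real driver wraps such logic in the nova.scheduler.driver.Scheduler interface):

    import random

    # Assumption: the first three octets of the requesting IP choose a
    # hostname prefix for the candidate subset.
    SUBNET_TO_PREFIX = {
        '10.1.0': 'devhost',
        '10.2.0': 'prodhost',
    }

    def select_host(originating_ip, hosts):
        subnet = '.'.join(originating_ip.split('.')[:3])
        prefix = SUBNET_TO_PREFIX.get(subnet, '')
        candidates = [h for h in hosts if prefix and h.startswith(prefix)]
        if not candidates:
            raise RuntimeError('No valid host found for %s' % originating_ip)
        return random.choice(candidates)

    print(select_host('10.1.0.22', ['devhost1', 'devhost2', 'prodhost1']))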
-msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1072(para) -msgid "Make sure you're in the devstack directory:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1079(para) -msgid "Source openrc to set up your environment variables for the CLI:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1086(para) -msgid "Put the image ID for the only installed image into an environment variable:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1093(para) -msgid "Boot a test server:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1101(para) -msgid "Switch back to the n-sch screen. Among the log statements, you'll see the line:" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1112(para) -msgid "Functional testing like this is not a replacement for proper unit and integration testing, but it serves to get you started." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1116(para) -msgid "A similar pattern can be followed in other projects that use the driver architecture. Simply create a module and class that conform to the driver interface and plug it in through configuration. Your code runs when that feature is used and can call out to other services as necessary. No project core code is touched. Look for a \"driver\" value in the project's .conf configuration files in /etc/<project> to identify projects that use a driver architecture." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1125(para) -msgid "When your scheduler is done, we encourage you to open source it and let the community know on the OpenStack mailing list. Perhaps others need the same functionality. They can use your code, provide feedback, and possibly contribute. If enough support exists for it, perhaps you can propose that it be added to the official Compute schedulers." -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1134(title) -msgid "Customizing the Dashboard (Horizon)" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1136(para) -msgid "The dashboard is based on the Python Django web application framework. The best guide to customizing it has already been written and can be found at Building on Horizon.DjangoPythondashboardDevStackcustomizing dashboardcustomizationdashboard" -msgstr "" - -#: ./doc/openstack-ops/ch_ops_customize.xml:1160(para) -msgid "When operating an OpenStack cloud, you may discover that your users can be quite demanding. If OpenStack doesn't do what your users need, it may be up to you to fulfill those requirements. This chapter provided you with some options for customization and gave you the tools you need to get started." -msgstr "" - -#. Put one translator per line, in the form of NAME , YEAR1, YEAR2 -#: ./doc/openstack-ops/ch_ops_customize.xml:0(None) -msgid "translator-credits" -msgstr "" - diff --git a/doc/openstack-ops/openstack.ent b/doc/openstack-ops/openstack.ent deleted file mode 100644 index 2f1898f0..00000000 --- a/doc/openstack-ops/openstack.ent +++ /dev/null @@ -1,19 +0,0 @@ - - - - - - - - - -COPY'> -GET'> -HEAD'> -PUT'> -POST'> -DELETE'> - diff --git a/doc/openstack-ops/part_architecture.xml b/doc/openstack-ops/part_architecture.xml deleted file mode 100644 index 77319388..00000000 --- a/doc/openstack-ops/part_architecture.xml +++ /dev/null @@ -1,101 +0,0 @@ - - - Architecture - - - Designing an OpenStack cloud is a great achievement. It requires a - robust understanding of the requirements and needs of the cloud's users to - determine the best possible configuration to meet them. 
OpenStack provides - a great deal of flexibility to meet your needs, and this part of the - book aims to shine light on many of the decisions you need to make during - the process. - - To design, deploy, and configure OpenStack, administrators must - understand the logical architecture. A diagram can help you envision all - the integrated services within OpenStack and how they interact with each - other. - - OpenStack modules are one of the following types: - - - - Daemon - - - Runs as a background process. On Linux platforms, a daemon is - usually installed as a service. - - - - - - Script - - - Installs a virtual environment and runs tests. - - - - - - Command-line interface (CLI) - - - Enables users to submit API calls to OpenStack services - through commands. - - - - - - As shown, end users can interact through the dashboard, CLIs, and - APIs. All services authenticate through a common Identity service, and - individual services interact with each other through public APIs, except - where privileged administrator commands are necessary. shows the most common, but not the - only logical architecture for an OpenStack cloud. -
- OpenStack Logical Architecture (<link - xlink:href="http://docs.openstack.org/openstack-ops/content/figures/2/figures/osog_0001.png" - />) - - - - - -
-
- - - - - - - - - - - - - - -
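To make the CLI module type described above concrete, here is a minimal sketch of a session (the client name and commands assume a Havana-era python-novaclient install; the image and flavor names are placeholders):

.. code-block:: console

   $ source openrc                # export OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, OS_TENANT_NAME
   $ nova list                    # the CLI submits a GET to the Compute API on your behalf
   $ nova boot --image cirros --flavor m1.tiny test-server

Each command authenticates through the common Identity service and then calls the relevant public API, which is exactly the interaction path described above.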
diff --git a/doc/openstack-ops/part_operations.xml b/doc/openstack-ops/part_operations.xml deleted file mode 100644 index 7c76295b..00000000 --- a/doc/openstack-ops/part_operations.xml +++ /dev/null @@ -1,62 +0,0 @@ - - - Operations - - - Congratulations! By now, you should have a solid design for your - cloud. We now recommend that you turn to the OpenStack Installation Guides - (), - which contains a step-by-step guide on how to manually install the OpenStack - packages and dependencies on your cloud. - - While it is important for an operator to be familiar with the steps - involved in deploying OpenStack, we also strongly encourage you to - evaluate configuration-management tools, such as - Puppet or Chef, which can - help automate this deployment process. - Chef - - Puppet - - - In the remainder of this guide, we assume that you have successfully - deployed an OpenStack cloud and are able to perform basic operations such - as adding images, booting instances, and attaching volumes. - - As your focus turns to stable operations, we recommend that you do - skim the remainder of this book to get a sense of the content. Some of - this content is useful to read in advance so that you can put best - practices into effect to simplify your life in the long run. Other content - is more useful as a reference that you might turn to when an unexpected - event occurs (such as a power failure), or to troubleshoot a particular - problem. - - - - - - - - - - - - - - - - - - - - - - - - diff --git a/doc/openstack-ops/pom.xml b/doc/openstack-ops/pom.xml deleted file mode 100644 index b83be7a3..00000000 --- a/doc/openstack-ops/pom.xml +++ /dev/null @@ -1,111 +0,0 @@ - - - 4.0.0 - - org.openstack.docs - openstack-ops-manual - 1.0.0 - jar - OpenStack Manuals - - - - local - 1 - - - - - - - - - - com.rackspace.cloud.api - clouddocs-maven-plugin - 2.1.4 - - - - generate-webhelp - - generate-webhelp - - generate-sources - - - ${comments.enabled} - . - openstack-ops - 1 - UA-17511903-1 - - 1 - 1 - 0 - 1 - 0 - - appendix toc,title - article/appendix nop - article toc,title - book toc,title,figure,table,equation - chapter toc - part toc,title - acknowledgements toc,title - preface toc - qandadiv toc - qandaset toc - reference toc,title - section toc - set toc,title - - - openstack-ops - openstack-ops-manual - 7.44in - 9.68in - 1 - 1 - - - - - - true - . - bk_ops_guide.xml - openstack - ${basedir}/../glossary/glossary-terms.xml - http://docs.openstack.org/openstack-ops/content/ - - - - - - - - Rackspace Research Repositories - - true - - - - rackspace-research - Rackspace Research Repository - http://maven.research.rackspacecloud.com/content/groups/public/ - - - - - rackspace-research - Rackspace Research Repository - http://maven.research.rackspacecloud.com/content/groups/public/ - - - - - diff --git a/doc/openstack-ops/preface_ops.xml b/doc/openstack-ops/preface_ops.xml deleted file mode 100644 index 49fa6d8d..00000000 --- a/doc/openstack-ops/preface_ops.xml +++ /dev/null @@ -1,850 +0,0 @@ - - - - - Preface - - OpenStack is an open source platform that lets you build an - Infrastructure as a Service (IaaS) cloud that runs on commodity - hardware. - -
- Introduction to OpenStack - - OpenStack believes in open source, open design, and open - development, all in an open community that encourages - participation by anyone. The long-term vision for OpenStack is - to produce a ubiquitous open source cloud computing platform - that meets the needs of public and private cloud providers - regardless of size. OpenStack services control large pools of - compute, storage, and networking resources throughout a data - center. - - The technology behind OpenStack consists of a series of - interrelated projects delivering various components for a cloud - infrastructure solution. Each service provides an open API so - that all of these resources can be managed through a dashboard - that gives administrators control while empowering users to - provision resources through a web interface, a command-line - client, or software development kits that support the API. Many - OpenStack APIs are extensible, meaning you can keep - compatibility with a core set of calls while providing access to - more resources and innovating through API extensions. The - OpenStack project is a global collaboration of developers and - cloud computing technologists. The project produces an open - standard cloud computing platform for both public and private - clouds. By focusing on ease of implementation, massive - scalability, a variety of rich features, and tremendous - extensibility, the project aims to deliver a practical and - reliable cloud solution for all types of organizations. -
- -
- Getting Started with OpenStack - - As an open source project, one of the unique aspects of - OpenStack is that it has many different levels at which you can - begin to engage with it—you don't have to do everything - yourself. - -
- Using OpenStack - - You could ask, "Do I even need to build a cloud?" If you - want to start using a compute or storage service by just - swiping your credit card, you can go to eNovance, HP, - Rackspace, or other organizations to start using their public - OpenStack clouds. Using their OpenStack cloud resources is - similar to accessing the publicly available Amazon Web - Services Elastic Compute Cloud (EC2) or Simple Storage - Solution (S3). -
- -
- Plug and Play OpenStack - - However, the enticing part of OpenStack might be to build your own - private cloud, and there are several ways to accomplish this goal. - Perhaps the simplest of all is an appliance-style solution. You purchase - an appliance, unpack it, plug in the power and the network, and watch it - transform into an OpenStack cloud with minimal additional configuration. - - However, hardware choice is important for many applications, so if - that applies to you, consider that there are several software - distributions available that you can run on servers, storage, and - network products of your choosing. Canonical (where OpenStack replaced - Eucalyptus as the default cloud option in 2011), Red Hat, and SUSE offer - enterprise OpenStack solutions and support. You may also want to take a - look at some of the specialized distributions, such as those from - Rackspace, Piston, SwiftStack, or Cloudscaling. - - Alternatively, if you want someone to help guide you through the - decisions about the underlying hardware or your applications, perhaps - adding in a few features or integrating components along the way, - consider contacting one of the system integrators with OpenStack - experience, such as Mirantis or Metacloud. - - If your preference is to build your own OpenStack expertise - internally, a good way to kick-start that might be to attend or arrange - a training session. The OpenStack Foundation has a Training Marketplace where - you can look for nearby events. Also, the OpenStack community is working to produce open - source training materials. -
- -
- Roll Your Own OpenStack - - However, this guide has a different audience—those seeking - flexibility from the OpenStack framework by deploying - do-it-yourself solutions. - - OpenStack is designed for horizontal scalability, so you - can easily add new compute, network, and storage resources to - grow your cloud over time. In addition to the pervasiveness of - massive OpenStack public clouds, many organizations, such as - PayPal, Intel, and Comcast, build large-scale private clouds. - OpenStack offers much more than a typical software package - because it lets you integrate a number of different - technologies to construct a cloud. This approach provides - great flexibility, but the number of options might be daunting - at first. -
-
-
- Who This Book Is For
- This book is for those of you starting to run OpenStack clouds as well as those of you who were handed an operational one and want to keep it running well. Perhaps you're on a DevOps team, perhaps you are a system administrator starting to dabble in the cloud, or maybe you want to get on the OpenStack cloud team at your company. This book is for all of you.
- This guide assumes that you are familiar with a Linux distribution that supports OpenStack, SQL databases, and virtualization. You must be comfortable administering and configuring multiple Linux machines for networking. You must install and maintain an SQL database and occasionally run queries against it.
- One of the most complex aspects of an OpenStack cloud is the networking configuration. You should be familiar with concepts such as DHCP, Linux bridges, VLANs, and iptables. You must also have access to a network hardware expert who can configure the switches and routers required in your OpenStack cloud.
- Cloud computing is quite an advanced topic, and this book requires a lot of background knowledge. However, if you are fairly new to cloud computing, we recommend that you make use of the glossary at the back of the book, as well as the online documentation for OpenStack and the additional resources mentioned in this book.
-
- Further Reading
- There are other books on the OpenStack documentation website that can help you get the job done.
- OpenStack Guides
- OpenStack Installation Guides: Describes a manual installation process, as in, by hand, without automation, for multiple distributions based on a packaging system:
- Installation Guide for openSUSE 13.2 and SUSE Linux Enterprise Server 12
- Installation Guide for Red Hat Enterprise Linux 7 and CentOS 7
- Installation Guide for Ubuntu 14.04 (LTS) Server
- OpenStack Configuration Reference: Contains a reference listing of all configuration options for core and integrated OpenStack services by release version
- OpenStack Administrator Guide: Contains how-to information for managing an OpenStack cloud as needed for your use cases, such as storage, computing, or software-defined networking
- OpenStack High Availability Guide: Describes potential strategies for making your OpenStack services and related controllers and data stores highly available
- OpenStack Security Guide: Provides best practices and conceptual information about securing an OpenStack cloud
- Virtual Machine Image Guide: Shows you how to obtain, create, and modify virtual machine images that are compatible with OpenStack
- OpenStack End User Guide: Shows OpenStack end users how to create and manage resources in an OpenStack cloud with the OpenStack dashboard and OpenStack client commands
- OpenStack Admin User Guide: Shows OpenStack administrators how to create and manage resources in an OpenStack cloud with the OpenStack dashboard and OpenStack client commands
- Networking Guide: This guide targets OpenStack administrators seeking to deploy and manage OpenStack Networking (neutron)
- OpenStack API Guide: A brief overview of how to send REST API requests to endpoints for OpenStack services
-
-
- -
- How This Book Is Organized - - This book is organized into two parts: the architecture - decisions for designing OpenStack clouds and the repeated - operations for running OpenStack clouds. - - Part I: - - - - - - - Because of all the decisions the other chapters - discuss, this chapter describes the decisions made for - this particular book and much of the justification for the - example architecture. - - - - - - - - While this book doesn't describe installation, we do - recommend automation for deployment and configuration, - discussed in this chapter. - - - - - - - - The cloud controller is an invention for the sake of - consolidating and describing which services run on which - nodes. This chapter discusses hardware and network - considerations as well as how to design the cloud - controller for performance and separation of - services. - - - - - - - - This chapter describes the compute nodes, which are - dedicated to running virtual machines. Some hardware - choices come into play here, as well as logging and - networking descriptions. - - - - - - - - This chapter discusses the growth of your cloud - resources through scaling and segregation - considerations. - - - - - - - - As with other architecture decisions, storage concepts - within OpenStack offer many options. This - chapter lays out the choices for you. - - - - - - - - Your OpenStack cloud networking needs to fit into your - existing networks while also enabling the best design for - your users and administrators, and this chapter gives you - in-depth information about networking decisions. - - - - - Part II: - - - - - - - This chapter is written to let you get your hands - wrapped around your OpenStack cloud through command-line - tools and understanding what is already set up in your - cloud. - - - - - - - - This chapter walks through user-enabling processes - that all admins must face to manage users, give them - quotas to parcel out resources, and so on. - - - - - - - - This chapter shows you how to use OpenStack cloud - resources and how to train your users. - - - - - - - - This chapter goes into the common failures that the - authors have seen while running clouds in production, - including troubleshooting. - - - - - - - - Because network troubleshooting is especially - difficult with virtual resources, this chapter is - chock-full of helpful tips and tricks for tracing network - traffic, finding the root cause of networking failures, - and debugging related services, such as DHCP and - DNS. - - - - - - - - This chapter shows you where OpenStack places logs and - how to best read and manage logs for monitoring - purposes. - - - - - - - - This chapter describes what you need to back up within - OpenStack as well as best practices for recovering - backups. - - - - - - - - For readers who need to get a specialized feature into - OpenStack, this chapter describes how to use DevStack to - write custom middleware or a custom scheduler to rebalance - your resources. - - - - - - - - Because OpenStack is so, well, open, this chapter is - dedicated to helping you navigate the community and find - out where you can help and where you can get help. - - - - - - - - Much of OpenStack is driver-oriented, so you can plug - in different solutions to the base set of services. This - chapter describes some advanced configuration topics. - - - - - - - - This chapter provides upgrade information based on the - architectures used in this book. 
- - - - - - - Back matter: - - - - - - - You can read a small selection of use cases from the - OpenStack community with some technical details and - further resources. - - - - - - - - These are shared legendary tales of image - disappearances, VM massacres, and crazy troubleshooting - techniques that result in hard-learned lessons and wisdom. - - - - - - - - Read about how to track the OpenStack roadmap through - the open and transparent development processes. - - - - - - - - So many OpenStack resources are available online - because of the fast-moving nature of the project, but - there are also resources listed here that the authors - found helpful while learning themselves. - - - - - - - - A list of terms used in this book is included, which - is a subset of the larger OpenStack glossary available - online. - - - -
- -
- Why and How We Wrote This Book - - We wrote this book because we have deployed and maintained - OpenStack clouds for at least a year and we wanted to - share this knowledge with others. After months of being the - point people for an OpenStack cloud, we also wanted to have a - document to hand to our system administrators so that they'd - know how to operate the cloud on a daily basis—both reactively - and pro-actively. We wanted to provide more detailed technical - information about the decisions that deployers make along the - way. - - We wrote this book to help you: - - Design and create an architecture for your first - nontrivial OpenStack cloud. After you read this guide, - you'll know which questions to ask and how to organize - your compute, networking, and storage resources and the - associated software packages. - - - - Perform the day-to-day tasks required to administer a - cloud. - - - - We wrote this book in a book sprint, which is a facilitated, - rapid development production method for books. For more - information, see the BookSprints site. Your authors cobbled this book - together in five days during February 2013, fueled by caffeine - and the best takeout food that Austin, Texas, could offer. - - On the first day, we filled white boards with colorful - sticky notes to start to shape this nebulous book about how to - architect and operate clouds: - - - - - - - - We wrote furiously from our own experiences and bounced - ideas between each other. At regular intervals we reviewed the - shape and organization of the book and further molded it, - leading to what you see today. - - The team includes: - - - - Tom Fifield - - - After learning about scalability in computing from - particle physics experiments, such as ATLAS at the Large - Hadron Collider (LHC) at CERN, Tom worked on OpenStack - clouds in production to support the Australian public - research sector. Tom currently serves as an OpenStack - community manager and works on OpenStack documentation in - his spare time. - - - - - Diane Fleming - - - Diane works on the OpenStack API documentation - tirelessly. She helped out wherever she could on this - project. - - - - - Anne Gentle - - - Anne is the documentation coordinator for OpenStack - and also served as an individual contributor to the Google - Documentation Summit in 2011, working with the Open Street - Maps team. She has worked on book sprints in the past, - with FLOSS Manuals’ Adam Hyde facilitating. Anne lives in - Austin, Texas. - - - - - Lorin Hochstein - - - An academic turned software-developer-slash-operator, - Lorin worked as the lead architect for Cloud Services at - Nimbis Services, where he deploys OpenStack for technical - computing applications. He has been working with OpenStack - since the Cactus release. Previously, he worked on - high-performance computing extensions for OpenStack at - University of Southern California's Information Sciences - Institute (USC-ISI). - - - - - Adam Hyde - - - Adam facilitated this book sprint. He also founded the - book sprint methodology and is the most experienced - book-sprint facilitator around. See for more - information. Adam founded FLOSS Manuals—a community of - some 3,000 individuals developing Free Manuals about Free - Software. He is also the founder and project manager for - Booktype, an open source project for writing, editing, and - publishing books online and in print. 
- - - - - Jonathan Proulx - - - Jon has been piloting an OpenStack cloud as a senior - technical architect at the MIT Computer Science and - Artificial Intelligence Lab for his researchers to have as - much computing power as they need. He started contributing - to OpenStack documentation and reviewing the documentation - so that he could accelerate his learning. - - - - - Everett Toews - - - Everett is a developer advocate at Rackspace making - OpenStack and the Rackspace Cloud easy to use. Sometimes - developer, sometimes advocate, and sometimes operator, - he's built web applications, taught workshops, given - presentations around the world, and deployed OpenStack for - production use by academia and business. - - - - - Joe Topjian - - - Joe has designed and deployed several clouds at - Cybera, a nonprofit where they are building - e-infrastructure to support entrepreneurs and local - researchers in Alberta, Canada. He also actively maintains - and operates these clouds as a systems architect, and his - experiences have generated a wealth of troubleshooting - skills for cloud environments. - - - - - OpenStack community members - - - Many individual efforts keep a community book alive. - Our community members updated content for this book - year-round. Also, a year after the first sprint, Jon - Proulx hosted a second two-day mini-sprint at MIT with the - goal of updating the book for the latest release. Since - the book's inception, more than 30 contributors have - supported this book. We have a tool chain for reviews, - continuous builds, and translations. Writers and - developers continuously review patches, enter doc bugs, - edit content, and fix doc bugs. We want to recognize their - efforts! - - The following people have contributed to this book: - Akihiro Motoki, Alejandro Avella, Alexandra Settle, - Andreas Jaeger, Andy McCallum, Benjamin Stassart, Chandan - Kumar, Chris Ricker, David Cramer, David Wittman, Denny - Zhang, Emilien Macchi, Gauvain Pocentek, Ignacio Barrio, - James E. Blair, Jay Clark, Jeff White, Jeremy Stanley, K - Jonathan Harker, KATO Tomoyuki, Lana Brindley, Laura - Alves, Lee Li, Lukasz Jernas, Mario B. Codeniera, Matthew - Kassawara, Michael Still, Monty Taylor, Nermina Miller, - Nigel Williams, Phil Hopkins, Russell Bryant, Sahid - Orentino Ferdjaoui, Sandy Walsh, Sascha Peilicke, Sean M. - Collins, Sergey Lukjanov, Shilla Saebi, Stephen Gordon, - Summer Long, Uwe Stuehler, Vaibhav Bhatkar, Veronica - Musso, Ying Chun "Daisy" Guo, Zhengguang Ou, and ZhiQiang - Fan. - - - -
- -
- How to Contribute to This Book - - The genesis of this book was an in-person event, but now - that the book is in your hands, we want you to contribute to it. - OpenStack documentation follows the coding principles of - iterative work, with bug logging, investigating, and fixing. We - also store the source content on GitHub and invite collaborators - through the OpenStack Gerrit installation, which offers reviews. - For the O'Reilly edition of this book, we are using the - company's Atlas system, which also stores source content on - GitHub and enables collaboration among contributors. - - Learn more about how to contribute to the OpenStack docs at - OpenStack - Documentation Contributor Guide. - - If you find a bug and can't fix it or aren't sure it's - really a doc bug, log a bug at OpenStack Manuals. - Tag the bug under Extra options with the - ops-guide tag to indicate that the bug is - in this guide. You can assign the bug to yourself if you know - how to fix it. Also, a member of the OpenStack doc-core team can - triage the doc bug. -
- - - -
- Conventions Used in This Book - - The following typographical conventions are used in this - book: - - - - Italic - - - Indicates new terms, URLs, email addresses, filenames, - and file extensions. - - - - - Constant width - - - Used for program listings, as well as within - paragraphs to refer to program elements such as variable - or function names, databases, data types, environment - variables, statements, and keywords. - - - - - Constant width bold - - - Shows commands or other text that should be typed - literally by the user. - - - - - Constant width italic - - - Shows text that should be replaced with user-supplied - values or by values determined by context. - - - - - Command prompts - - - Commands prefixed with the # prompt - should be executed by the root user. - These examples can also be executed using the - sudo command, if available. - - Commands prefixed with the $ prompt - can be executed by any user, including - root. - - - - - - This element signifies a tip or suggestion. - - - - This element signifies a general note. - - - - This element indicates a warning or caution. - -
-
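A minimal illustration of the prompt convention above (the service name is a placeholder):

.. code-block:: console

   $ nova list                                  # any user can run this
   # service openstack-nova-api restart         # requires root
   $ sudo service openstack-nova-api restart    # equivalent, via sudo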
diff --git a/doc/openstack-ops/section_arch_example-neutron.xml b/doc/openstack-ops/section_arch_example-neutron.xml deleted file mode 100644 index 0f54e893..00000000 --- a/doc/openstack-ops/section_arch_example-neutron.xml +++ /dev/null @@ -1,926 +0,0 @@ - - -%openstack; -]> -
- - - Example Architecture—OpenStack Networking - - This chapter provides an example architecture using OpenStack - Networking, also known as the Neutron project, in a highly available - environment. - -
- Overview
- A highly available environment can be put into place if you require an environment that can scale horizontally, or you want your cloud to remain operational in the case of node failure. This example architecture is based on the current default feature set of OpenStack Havana, with an emphasis on high availability.
- RDO (Red Hat Distributed OpenStack) - OpenStack Networking (neutron) - component overview
-
- Components - - - - - - - - - Component - - Details - - - - - - OpenStack release - - Havana - - - - Host operating system - - Red Hat Enterprise Linux 6.5 - - - - OpenStack package repository - - Red Hat - Distributed OpenStack (RDO) - - - - Hypervisor - - KVM - - - - Database - - MySQL - - - - Message queue - - Qpid - - - - Networking service - - OpenStack Networking - - - - Tenant Network Separation - - VLAN - - - - Image service back end - - GlusterFS - - - - Identity driver - - SQL - - - - Block Storage back end - - GlusterFS - - - -
- -
- Rationale
- This example architecture has been selected based on the current default feature set of OpenStack Havana, with an emphasis on high availability. This architecture is currently deployed in an internal Red Hat OpenStack cloud and used to run hosted and shared services, which by their nature must be highly available.
- OpenStack Networking (neutron) - rationale for choice of
- This architecture's components have been selected for the following reasons:
- Red Hat Enterprise Linux: You must choose an operating system that can run on all of the physical nodes. This example architecture is based on Red Hat Enterprise Linux, which offers reliability, long-term support, and certified testing, and which is hardened. Enterprise customers, now moving into OpenStack usage, typically require these advantages.
- RDO: The Red Hat Distributed OpenStack package offers an easy way to download the most current OpenStack release that is built for the Red Hat Enterprise Linux platform.
- KVM: KVM is the supported hypervisor of choice for Red Hat Enterprise Linux (and is included in the distribution). It is feature complete and free from licensing charges and restrictions.
- MySQL: MySQL is used as the database back end for all databases in the OpenStack environment. MySQL is the supported database of choice for Red Hat Enterprise Linux (and is included in the distribution); the database is open source and scalable, and it handles memory well.
- Qpid: Apache Qpid offers 100 percent compatibility with the Advanced Message Queuing Protocol standard, and its broker is available for both C++ and Java.
- OpenStack Networking: OpenStack Networking offers sophisticated networking functionality, including Layer 2 (L2) network segregation and provider networks.
- VLAN: Using a virtual local area network offers broadcast control, security, and physical layer transparency. If needed, use VXLAN to extend your address space.
- GlusterFS: GlusterFS offers scalable storage. As your environment grows, you can continue to add more storage nodes (instead of being restricted, for example, by an expensive storage array). A short volume-creation sketch follows this list.
-
-
- -
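As a sketch of how the GlusterFS choice plays out operationally, creating a replicated volume with the "virt" profile enabled might look like the following. The host names, brick paths, and volume name are assumptions; the two-node replication and "virt" profile match the component configuration table later in this section:

.. code-block:: console

   $ gluster volume create vm-images replica 2 \
       storage1:/bricks/vm-images storage2:/bricks/vm-images
   $ gluster volume set vm-images group virt
   $ gluster volume start vm-images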
- Detailed Description - -
- Node types
- This section gives you a breakdown of the different nodes that make up the OpenStack environment. A node is a physical machine that is provisioned with an operating system and runs a defined software stack on top of it. The following table provides node descriptions and specifications.
- OpenStack Networking (neutron) - detailed description of
-
Node types

Type | Description | Example hardware

Controller | Controller nodes are responsible for running the management software services needed for the OpenStack environment to function. These nodes: (1) provide the front door that people access as well as the API services that all other components in the environment talk to; (2) run a number of services in a highly available fashion, utilizing Pacemaker and HAProxy to provide a virtual IP and load-balancing functions so that all controller nodes are being used; (3) supply highly available "infrastructure" services, such as MySQL and Qpid, that underpin all the services; (4) provide what is known as "persistent storage" through services run on the host as well, backed onto the storage nodes for reliability. | Model: Dell R620; CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz; Memory: 32 GB; Disk: two 300 GB 10000 RPM SAS disks; Network: two 10G network ports

Compute | Compute nodes run the virtual machine instances in OpenStack. They: (1) run the bare minimum of services needed to facilitate these instances; (2) use local storage on the node for the virtual machines, so no VM migration or instance recovery at node failure is possible. | Model: Dell R620; CPU: 2x Intel® Xeon® CPU E5-2650 0 @ 2.00 GHz; Memory: 128 GB; Disk: two 600 GB 10000 RPM SAS disks; Network: four 10G network ports (for future-proofing expansion)

Storage | Storage nodes store all the data required for the environment, including disk images in the Image service library and the persistent storage volumes created by the Block Storage service. Storage nodes use GlusterFS technology to keep the data highly available and scalable. | Model: Dell R720xd; CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz; Memory: 64 GB; Disk: two 500 GB 7200 RPM SAS disks and twenty-four 600 GB 10000 RPM SAS disks; RAID controller: PERC H710P Integrated RAID Controller, 1 GB NV cache; Network: two 10G network ports

Network | Network nodes are responsible for doing all the virtual networking needed for people to create public or private networks and uplink their virtual machines into external networks. Network nodes: (1) form the only ingress and egress point for instances running on top of OpenStack; (2) run all of the environment's networking services, with the exception of the networking API service (which runs on the controller node). | Model: Dell R620; CPU: 1x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz; Memory: 32 GB; Disk: two 300 GB 10000 RPM SAS disks; Network: five 10G network ports

Utility | Utility nodes are used by internal administration staff only to provide a number of basic system administration functions needed to get the environment up and running and to maintain the hardware, OS, and software on which it runs. These nodes run services such as provisioning, configuration management, monitoring, or GlusterFS management software. They are not required to scale, although these machines are usually backed up. | Model: Dell R620; CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz; Memory: 32 GB; Disk: two 500 GB 7200 RPM SAS disks; Network: two 10G network ports

-
- -
- Networking layout - - The network contains all the management devices for all hardware - in the environment (for example, by including Dell iDrac7 devices for - the hardware nodes, and management interfaces for network switches). The - network is accessed by internal staff only when diagnosing or recovering - a hardware issue. - -
- OpenStack internal network - - This network is used for OpenStack management functions and - traffic, including services needed for the provisioning of physical - nodes (pxe, tftp, - kickstart), traffic between various OpenStack node - types using OpenStack APIs and messages (for example, - nova-compute talking to keystone - or cinder-volume talking to - nova-api), and all traffic for storage data to the - storage layer underneath by the Gluster protocol. All physical nodes - have at least one network interface (typically - eth0) in this network. This network is only - accessible from other VLANs on port 22 (for ssh - access to manage machines). -
- - - -
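One way to express the "port 22 only" restriction described above is a small iptables policy on each node's management interface. This is only a sketch; the interface name and the management CIDR are assumptions:

.. code-block:: console

   # Accept return traffic and anything from within the management network itself
   iptables -A INPUT -i eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
   iptables -A INPUT -i eth0 -s 10.1.0.0/16 -j ACCEPT
   # From other VLANs, accept SSH only; drop everything else
   iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT
   iptables -A INPUT -i eth0 -j DROP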
- Public Network - - This network is a combination of: - - IP addresses for public-facing interfaces on the - controller nodes (which end users will access the OpenStack - services) - - - - A range of publicly routable, IPv4 network addresses to be - used by OpenStack Networking for floating IPs. You may be - restricted in your access to IPv4 addresses; a large range of - IPv4 addresses is not necessary. - - - - Routers for private networks created within - OpenStack. - - - - This network is connected to the controller nodes so users can - access the OpenStack interfaces, and connected to the network nodes to - provide VMs with publicly routable traffic functionality. The network - is also connected to the utility machines so that any utility services - that need to be made public (such as system monitoring) can be - accessed. -
- -
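For example, allocating one of these floating IPs and attaching it to an instance's port with the Havana-era neutron client looks roughly like this (the network name and the UUIDs are placeholders):

.. code-block:: console

   $ neutron floatingip-create public
   $ neutron floatingip-associate FLOATINGIP_UUID PORT_UUID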
- VM traffic network - - This is a closed network that is not publicly routable and is - simply used as a private, internal network for traffic between virtual - machines in OpenStack, and between the virtual machines and the - network nodes that provide l3 routes out to the public network (and - floating IPs for connections back in to the VMs). Because this is a - closed network, we are using a different address space to the others - to clearly define the separation. Only Compute and OpenStack - Networking nodes need to be connected to this network. -
-
- -
- Node connectivity - - The following section details how the nodes are connected to the - different networks (see ) and what - other considerations need to take place (for example, bonding) when - connecting nodes to the networks. - -
- Initial deployment
- Initially, the connection setup should revolve around keeping the connectivity simple and straightforward in order to minimize deployment complexity and time to deploy. The deployment shown in the following figure aims to have 1 × 10G connectivity available to all compute nodes, while still leveraging bonding on appropriate nodes for maximum performance.
-
- Basic node deployment - - - - - - -
-
- -
- Connectivity for maximum performance
- If the networking performance of the basic layout is not enough, you can move to the layout shown in the following figure, which provides 2 × 10G network links to all instances in the environment as well as more network bandwidth to the storage layer.
- bandwidth - obtaining maximum performance
-
- Performance node deployment - - - - - - -
-
-
- -
- Node diagrams
- The following diagrams include logical information about the different types of nodes, indicating which services run on top of them and how they interact with each other. The diagrams also illustrate how the availability and scalability of services are achieved.
-
- Controller node - - - - - - -
- -
- Compute node - - - - - - -
- -
- Network node - - - - - - -
- -
- Storage node - - - - - - -
-
-
- -
- Example Component Configuration
- The following tables include example configuration and considerations for both third-party and OpenStack components:
- OpenStack Networking (neutron) - third-party component configuration
-
Third-party component configuration

Component | Tuning | Availability | Scalability

MySQL | binlog-format = row | Master/master replication. However, only one node is used at a time. Replication keeps all nodes as close to being up to date as possible (although the asynchronous nature of the replication means a fully consistent state is not possible). Connections to the database only happen through a Pacemaker virtual IP, ensuring that most problems that occur with master-master replication can be avoided. | Not heavily considered. Once load on the MySQL server increases enough that scalability needs to be considered, multiple masters or a master/slave setup can be used.

Qpid | max-connections=1000, worker-threads=20, connection-backlog=10; SASL security enabled with SASL-BASIC authentication | Qpid is added as a resource to the Pacemaker software that runs on the controller nodes where Qpid is situated. This ensures that only one Qpid instance is running at one time, and the node with the Pacemaker virtual IP will always be the node running Qpid. | Not heavily considered. However, Qpid can be changed to run on all controller nodes for scalability and availability purposes, and removed from Pacemaker.

HAProxy | maxconn 3000 | HAProxy is a software layer-7 load balancer used to front all clustered OpenStack API components and to terminate SSL. HAProxy can be added as a resource to the Pacemaker software that runs on the controller nodes where HAProxy is situated. This ensures that only one HAProxy instance is running at one time, and the node with the Pacemaker virtual IP will always be the node running HAProxy. | Not considered. HAProxy has small enough performance overheads that a single instance should scale enough for this level of workload. If extra scalability is needed, keepalived or other Layer-4 load balancing can be introduced and placed in front of multiple copies of HAProxy.

Memcached | MAXCONN="8192" CACHESIZE="30457" | Memcached is fast in-memory key-value cache software that is used by OpenStack components for caching data and increasing performance. Memcached runs on all controller nodes, ensuring that should one go down, another instance of Memcached is available. | Not considered. A single instance of Memcached should be able to scale to the desired workloads. If scalability is desired, HAProxy can be placed in front of Memcached (in raw tcp mode) to utilize multiple Memcached instances for scalability. However, this might cause cache consistency issues.

Pacemaker | Configured to use corosync and cman as a cluster communication stack/quorum manager, and as a two-node cluster. | Pacemaker is the clustering software used to ensure the availability of services running on the controller and network nodes: (1) because Pacemaker is cluster software, the software itself handles its own availability, leveraging corosync and cman underneath; (2) if you use the GlusterFS native client, no virtual IP is needed, since the client knows all about nodes after the initial connection and automatically routes around failures on the client side; (3) if you use the NFS or SMB adaptor, you will need a virtual IP on which to mount the GlusterFS volumes. | If more nodes need to be made cluster aware, Pacemaker can scale to 64 nodes.

GlusterFS | The glusterfs performance profile "virt" is enabled on all volumes. Volumes are set up in two-node replication. | GlusterFS is a clustered file system that is run on the storage nodes to provide persistent, scalable data storage in the environment. Because all connections to gluster use the gluster native mount points, the gluster instances themselves provide availability and failover functionality. | The scalability of GlusterFS storage can be achieved by adding in more storage volumes.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
OpenStack component configuration

Component | Node type | Tuning | Availability | Scalability

Dashboard (horizon) | Controller | Configured to use Memcached as a session store, neutron support is enabled, can_set_mount_point = False | The dashboard is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance. | The dashboard is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for the dashboard as more nodes are added.

Identity (keystone) | Controller | Configured to use Memcached for caching and PKI for tokens. | Identity is run on all controller nodes, ensuring at least one instance will be available in case of node failure. Identity also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance. | Identity is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Identity as more nodes are added.

Image service (glance) | Controller | /var/lib/glance/images is a GlusterFS native mount to a Gluster volume off the storage layer. | The Image service is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects when the software fails and routes requests around the failing instance. | The Image service is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for the Image service as more nodes are added.

Compute (nova) | Controller, Compute | Configured to use Qpid with qpid_heartbeat = 10, to use Memcached for caching, to use libvirt, and to use neutron. Configured nova-consoleauth to use Memcached for session management (so that it can have multiple copies and run behind a load balancer). | The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are run on all controller nodes, ensuring at least one instance will be available in case of node failure. Compute is also behind HAProxy, which detects when the software fails and routes requests around the failing instance. The nova-compute and nova-conductor services, which run on the compute nodes, are only needed to run services on that node, so availability of those services is coupled tightly to the nodes that are available. As long as a compute node is up, it will have the needed services running on top of it. | The nova API, scheduler, objectstore, cert, consoleauth, conductor, and vncproxy services are run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Compute as more nodes are added. The scalability of services running on the compute nodes (compute, conductor) is achieved linearly by adding in more compute nodes.

Block Storage (cinder) | Controller | Configured to use Qpid with qpid_heartbeat = 10, and to use a Gluster volume from the storage layer as the back end for Block Storage, using the Gluster native client. | Block Storage API, scheduler, and volume services are run on all controller nodes, ensuring at least one instance will be available in case of node failure. Block Storage also sits behind HAProxy, which detects if the software fails and routes requests around the failing instance. | Block Storage API, scheduler, and volume services are run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for Block Storage as more nodes are added.

OpenStack Networking (neutron) | Controller, Compute, Network | Configured to use Qpid with qpid_heartbeat = 10, kernel namespace support enabled, tenant_network_type = vlan, allow_overlapping_ips = true, bridge_uplinks = br-ex:em2, bridge_mappings = physnet1:br-ex (a file-level sketch of these settings follows these tables) | The OpenStack Networking service is run on all controller nodes, ensuring at least one instance will be available in case of node failure. It also sits behind HAProxy, which detects if the software fails and routes requests around the failing instance. OpenStack Networking's ovs-agent, l3-agent, dhcp-agent, and metadata-agent services run on the network nodes as lsb resources inside of Pacemaker. This means that in the case of network node failure, services are kept running on another node. Finally, the ovs-agent service is also run on all compute nodes, and in case of compute node failure, the other nodes will continue to function using the copy of the service running on them. | The OpenStack Networking server service is run on all controller nodes, so scalability can be achieved with additional controller nodes. HAProxy allows scalability for OpenStack Networking as more nodes are added. Scalability of the services running on the network nodes is not currently supported by OpenStack Networking, so they are not considered; one copy of the services should be sufficient to handle the workload. Scalability of the ovs-agent running on compute nodes is achieved by adding in more compute nodes as necessary.
-
-
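To tie the tuning values above back to concrete files, here is a sketch of where the OpenStack Networking settings would land on a Havana-era RDO node. The file paths, section names, and the VLAN range are assumptions; only the option values come from the tables:

.. code-block:: ini

   # /etc/neutron/neutron.conf (sketch)
   [DEFAULT]
   allow_overlapping_ips = true

   # Open vSwitch plug-in configuration (sketch)
   [ovs]
   tenant_network_type = vlan
   network_vlan_ranges = physnet1:1000:2000
   bridge_mappings = physnet1:br-ex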
diff --git a/doc/openstack-ops/section_arch_example-nova.xml b/doc/openstack-ops/section_arch_example-nova.xml deleted file mode 100644 index 8bfef42c..00000000 --- a/doc/openstack-ops/section_arch_example-nova.xml +++ /dev/null @@ -1,484 +0,0 @@ - -
- - - Example Architecture—Legacy Networking (nova) - - This particular example architecture has been upgraded from Grizzly to - Havana and tested in production environments where many public IP addresses - are available for assignment to multiple instances. You can find a second - example architecture that uses OpenStack Networking (neutron) after this - section. Each example offers high availability, meaning that if a particular - node goes down, another node with the same configuration can take over the - tasks so that the services continue to be available. - Havana - - Grizzly - - -
- Overview - - The simplest architecture you can build upon for Compute has a - single cloud controller and multiple compute nodes. The simplest - architecture for Object Storage has five nodes: one for identifying users - and proxying requests to the API, then four for storage itself to provide - enough replication for eventual consistency. This example architecture - does not dictate a particular number of nodes, but shows the thinking - and considerations that went into - choosing this architecture including the features offered. - CentOS - - RDO (Red Hat Distributed OpenStack) - - Ubuntu - - legacy networking (nova) - - component overview - - example architectures - - legacy networking; OpenStack networking - - Object Storage - - simplest architecture for - - Compute - - simplest architecture for - - -
- Components
- Component | Details
- OpenStack release | Havana
- Host operating system | Ubuntu 12.04 LTS or Red Hat Enterprise Linux 6.5, including derivatives such as CentOS and Scientific Linux
- OpenStack package repository | Ubuntu Cloud Archive or RDO*
- Hypervisor | KVM
- Database | MySQL*
- Message queue | RabbitMQ for Ubuntu; Qpid for Red Hat Enterprise Linux and derivatives
- Networking service | nova-network
- Network manager | FlatDHCP
- Single nova-network or multi-host? | multi-host*
- Image service (glance) back end | file
- Identity (keystone) driver | SQL
- Block Storage (cinder) back end | LVM/iSCSI
- Live Migration back end | Shared storage using NFS*
- Object storage | OpenStack Object Storage (swift)
- An asterisk (*) indicates when the example architecture deviates from the settings of a default installation. We'll offer explanations for those deviations next.
- objects - object storage - storage - migration - live migration - IP addresses - floating - floating IP address - block storage - dashboard - legacy networking (nova) - features supported by
- The following features of OpenStack are supported by the example architecture documented in this guide, but are optional:
- Dashboard: You probably want to offer a dashboard, but your users may be more interested in API access only.
- Block storage: You don't have to offer users block storage if their use case only needs ephemeral storage on compute nodes, for example.
- Floating IP addresses: Floating IP addresses are public IP addresses that you allocate from a predefined pool to assign to virtual machines at launch. Floating IP addresses ensure that the public IP address is available whenever an instance is booted. Not every organization can offer thousands of public floating IP addresses for thousands of instances, so this feature is considered optional.
- Live migration: If you need to move running virtual machine instances from one host to another with little or no service interruption, you would enable live migration, but it is considered optional.
- Object storage: You may choose to store machine images on a file system rather than in object storage if you do not have the extra hardware for the required replication and redundancy that OpenStack Object Storage offers.
-
- -
- Rationale - - This example architecture has been selected based on the current - default feature set of OpenStack Havana, with an - emphasis on stability. We believe that many clouds that currently run - OpenStack in production have made similar choices. - legacy networking (nova) - - rationale for choice of - - - You must first choose the operating system that runs on all of the - physical nodes. While OpenStack is supported on several distributions of - Linux, we used Ubuntu 12.04 LTS (Long Term - Support), which is used by the majority of the development - community, has feature completeness compared with other distributions - and has clear future support plans. - - We recommend that you do not use the default Ubuntu OpenStack - install packages and instead use the Ubuntu Cloud Archive. The - Cloud Archive is a package repository supported by Canonical that allows - you to upgrade to future OpenStack releases while remaining on Ubuntu - 12.04. - - KVM as a hypervisor - complements the choice of Ubuntu—being a matched pair in terms of - support, and also because of the significant degree of attention it - garners from the OpenStack development community (including the authors, - who mostly use KVM). It is also feature complete, free from licensing - charges and restrictions. - kernel-based VM (KVM) hypervisor - - hypervisors - - KVM - - - MySQL follows a similar trend. Despite its - recent change of ownership, this database is the most tested for use - with OpenStack and is heavily documented. We deviate from the default - database, SQLite, because SQLite is not an - appropriate database for production usage. - - The choice of RabbitMQ over other AMQP - compatible options that are gaining support in OpenStack, such as ZeroMQ - and Qpid, is due to its ease of use and significant testing in - production. It also is the only option that supports features such as - Compute cells. We recommend clustering with RabbitMQ, as it is an - integral component of the system and fairly simple to implement due to - its inbuilt nature. - Advanced Message Queuing Protocol (AMQP) - - - As discussed in previous chapters, there are several options for - networking in OpenStack Compute. We recommend - FlatDHCP and to use Multi-Host - networking mode for high availability, running one - nova-network daemon per OpenStack compute host. This - provides a robust mechanism for ensuring network interruptions are - isolated to individual compute hosts, and allows for the direct use of - hardware network gateways. - - Live Migration is supported by way of shared - storage, with NFS as the distributed file - system. - - Acknowledging that many small-scale deployments see running Object - Storage just for the storage of virtual machine images as too costly, we - opted for the file back end in the OpenStack Image service (Glance). If - your cloud will include Object Storage, you can easily add it as a - back end. - - We chose the SQL back end for Identity - over others, such as LDAP. This back end is simple - to install and is robust. The authors acknowledge that many - installations want to bind with existing directory services and caution - careful understanding of the array of options - available. - - Block Storage (cinder) is installed natively on external storage - nodes and uses the LVM/iSCSI plug-in. Most Block - Storage plug-ins are tied to particular vendor products and - implementations limiting their use to consumers of those hardware - platforms, but LVM/iSCSI is robust and stable on commodity - hardware. 
- - - - While the cloud can be run without the OpenStack - Dashboard, we consider it to be indispensable, not just for - user interaction with the cloud, but also as a tool for operators. - Additionally, the dashboard's use of Django makes it a flexible - framework for extension. -
- -
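As a sketch, enabling the Cloud Archive on an up-to-date Ubuntu 12.04 system looked roughly like this (the havana pocket matches the release used in this example):

.. code-block:: console

   $ sudo apt-get install -y ubuntu-cloud-keyring
   $ sudo add-apt-repository cloud-archive:havana
   $ sudo apt-get update && sudo apt-get dist-upgrade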
- Why not use OpenStack Networking? - - This example architecture does not use OpenStack Networking, - because it does not yet support multi-host networking - and our organizations (university, government) have access to a large - range of publicly-accessible IPv4 addresses. - legacy networking (nova) - - vs. OpenStack Networking (neutron) - -
- -
- Why use multi-host networking? - - In a default OpenStack deployment, there is a single - nova-network service that runs within the cloud (usually on - the cloud controller) that provides services such as network address - translation (NAT), DHCP, and DNS to the guest instances. If the single - node that runs the nova-network service goes down, you - cannot access your instances, and the instances cannot access the - Internet. The single node that runs the nova-network - service can become a bottleneck if excessive network traffic comes in - and goes out of the cloud. - networks - - multi-host - - multi-host networking - - legacy networking (nova) - - benefits of multi-host networking - - - - Multi-host is - a high-availability option for the network configuration, where the - nova-network service is run on every compute node - instead of running on only a single node. - -
-
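A minimal sketch of the corresponding legacy nova.conf flags, set identically on every compute host (the interface names are assumptions):

.. code-block:: ini

   # /etc/nova/nova.conf (sketch)
   [DEFAULT]
   network_manager = nova.network.manager.FlatDHCPManager
   multi_host = True
   flat_network_bridge = br100
   flat_interface = eth1    # private/management NIC
   public_interface = eth0  # publicly routable NIC

With multi_host enabled, each compute host runs its own nova-network daemon and provides NAT, DHCP, and DNS for its own instances, which is what isolates network interruptions to individual hosts.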
- -
- Detailed Description - - The reference architecture consists of multiple compute nodes, a - cloud controller, an external NFS storage server for instance storage, and - an OpenStack Block Storage server for volume - storage. - legacy networking (nova) - - detailed description - A network time service (Network Time Protocol, or NTP) - synchronizes time on all the nodes. FlatDHCPManager in multi-host mode is - used for the networking. A logical diagram for this example architecture - shows which services are running on each node: - - - - - - - - - - The cloud controller runs the dashboard, the API services, the - database (MySQL), a message queue server (RabbitMQ), the scheduler for - choosing compute resources (nova-scheduler), Identity - services (keystone, nova-consoleauth), Image services - (glance-api, glance-registry), services for - console access of guests, and Block Storage services, including the - scheduler for storage resources (cinder-api and - cinder-scheduler). - cloud controllers - - duties of - - - Compute nodes are where the computing resources are held, and in our - example architecture, they run the hypervisor (KVM), libvirt (the driver - for the hypervisor, which enables live migration from node to node), - nova-compute, nova-api-metadata (generally only - used when running in multi-host mode, it retrieves instance-specific - metadata), nova-vncproxy, and - nova-network. - - The network consists of two switches, one for the management or - private traffic, and one that covers public access, including floating - IPs. To support this, the cloud controller and the compute nodes have two - network cards. The OpenStack Block Storage and NFS storage servers only - need to access the private network and therefore only need one network - card, but multiple cards run in a bonded configuration are recommended if - possible. Floating IP access is direct to the Internet, whereas Flat IP - access goes through a NAT. To envision the network traffic, use this - diagram: - - - - - - - - -
- -
- Optional Extensions - - You can extend this reference architecture as - legacy networking (nova) - - optional extensions - follows: - - - - Add additional cloud controllers (see ). - - - - Add an OpenStack Storage service (see the Object Storage chapter - in the OpenStack Installation Guide for your - distribution). - - - - Add additional OpenStack Block Storage hosts (see ). - - -
-
diff --git a/doc/ops-guide/setup.cfg b/doc/ops-guide/setup.cfg deleted file mode 100644 index 0ae3c3df..00000000 --- a/doc/ops-guide/setup.cfg +++ /dev/null @@ -1,30 +0,0 @@ -[metadata] -name = openstackopsguide -summary = OpenStack Operations Guide -author = OpenStack -author-email = openstack-docs@lists.openstack.org -home-page = http://docs.openstack.org/ -classifier = -Environment :: OpenStack -Intended Audience :: Information Technology -Intended Audience :: System Administrators -License :: OSI Approved :: Apache Software License -Operating System :: POSIX :: Linux -Topic :: Documentation - -[global] -setup-hooks = - pbr.hooks.setup_hook - -[files] - -[build_sphinx] -all_files = 1 -build-dir = build -source-dir = source - -[wheel] -universal = 1 - -[pbr] -warnerrors = True diff --git a/doc/ops-guide/setup.py b/doc/ops-guide/setup.py deleted file mode 100644 index 73637574..00000000 --- a/doc/ops-guide/setup.py +++ /dev/null @@ -1,30 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -# implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT -import setuptools - -# In python < 2.7.4, a lazy loading of package `pbr` will break -# setuptools if some other modules registered functions in `atexit`. -# solution from: http://bugs.python.org/issue15881#msg170215 -try: - import multiprocessing # noqa -except ImportError: - pass - -setuptools.setup( - setup_requires=['pbr'], - pbr=True) diff --git a/doc/ops-guide/source/acknowledgements.rst b/doc/ops-guide/source/acknowledgements.rst deleted file mode 100644 index ad027b78..00000000 --- a/doc/ops-guide/source/acknowledgements.rst +++ /dev/null @@ -1,51 +0,0 @@ -================ -Acknowledgements -================ - -The OpenStack Foundation supported the creation of this book with plane -tickets to Austin, lodging (including one adventurous evening without -power after a windstorm), and delicious food. For about USD $10,000, we -could collaborate intensively for a week in the same room at the -Rackspace Austin office. The authors are all members of the OpenStack -Foundation, which you can join. Go to the `Foundation web -site `_. - -We want to acknowledge our excellent host Rackers at Rackspace in -Austin: - -- Emma Richards of Rackspace Guest Relations took excellent care of our - lunch orders and even set aside a pile of sticky notes that had - fallen off the walls. - -- Betsy Hagemeier, a Fanatical Executive Assistant, took care of a room - reshuffle and helped us settle in for the week. - -- The Real Estate team at Rackspace in Austin, also known as "The - Victors," were super responsive. - -- Adam Powell in Racker IT supplied us with bandwidth each day and - second monitors for those of us needing more screens. - -- On Wednesday night we had a fun happy hour with the Austin OpenStack - Meetup group and Racker Katie Schmidt took great care of our group. 
-
-We also had some excellent input from outside of the room:
-
-- Tim Bell from CERN gave us feedback on the outline before we started
-  and reviewed it mid-week.
-
-- Sébastien Han has written excellent blogs and generously gave his
-  permission for re-use.
-
-- Oisin Feeley read it, made some edits, and provided emailed feedback
-  right when we asked.
-
-Inside the book sprint room with us each day was our book sprint
-facilitator Adam Hyde. Without his tireless support and encouragement,
-we would have thought a book of this scope was impossible in five days.
-Adam has proven the effectiveness of the book sprint method again and
-again. He creates both tools and faith in collaborative authoring at
-`www.booksprints.net `_.
-
-We couldn't have pulled it off without so much supportive help and
-encouragement.
diff --git a/doc/ops-guide/source/app_crypt.rst b/doc/ops-guide/source/app_crypt.rst
deleted file mode 100644
index cb1d42fe..00000000
--- a/doc/ops-guide/source/app_crypt.rst
+++ /dev/null
@@ -1,542 +0,0 @@
-=================================
-Tales From the Cryp^H^H^H^H Cloud
-=================================
-
-Herein lies a selection of tales from OpenStack cloud operators. Read,
-and learn from their wisdom.
-
-Double VLAN
-~~~~~~~~~~~
-
-I was on-site in Kelowna, British Columbia, Canada setting up a new
-OpenStack cloud. The deployment was fully automated: Cobbler deployed
-the OS on the bare metal, bootstrapped it, and Puppet took over from
-there. I had run the deployment scenario so many times in practice and
-took for granted that everything was working.
-
-On my last day in Kelowna, I was in a conference call from my hotel. In
-the background, I was fooling around on the new cloud. I launched an
-instance and logged in. Everything looked fine. Out of boredom, I ran
-:command:`ps aux` and all of a sudden the instance locked up.
-
-Thinking it was just a one-off issue, I terminated the instance and
-launched a new one. By then, the conference call had ended and I was off
-to the data center.
-
-At the data center, I was finishing up some tasks and remembered the
-lock-up. I logged into the new instance and ran :command:`ps aux` again.
-It worked. Phew. I decided to run it one more time. It locked up.
-
-After reproducing the problem several times, I came to the unfortunate
-conclusion that this cloud did indeed have a problem. Even worse, my
-time was up in Kelowna and I had to return to Calgary.
-
-Where do you even begin troubleshooting something like this? An instance
-that just randomly locks up when a command is issued. Is it the image?
-Nope—it happens on all images. Is it the compute node? Nope—all nodes.
-Is the instance locked up? No! New SSH connections work just fine!
-
-We reached out for help. A networking engineer suggested it was an MTU
-issue. Great! MTU! Something to go on! What's MTU and why would it cause
-a problem?
-
-MTU is the maximum transmission unit. It specifies the maximum number of
-bytes that the interface accepts for each packet. If two interfaces have
-two different MTUs, bytes might get chopped off and weird things
-happen—such as random session lockups.
-
-.. note::
-
-   Not all packets have a size of 1500. Running the :command:`ls` command
-   over SSH might only create a single packet of less than 1500 bytes.
-   However, running a command with heavy output, such as
-   :command:`ps aux`, requires several packets of 1500 bytes.
-
-OK, so where is the MTU issue coming from? Why haven't we seen this in
-any other deployment? What's new in this situation?
Well, new data center, new uplink, new switches, new model of switches,
-new servers, first time using this model of servers… so, basically
-everything was new. Wonderful. We toyed around with raising the MTU in
-various areas: the switches, the NICs on the compute nodes, the virtual
-NICs in the instances; we even had the data center raise the MTU for
-our uplink interface. Some changes worked, some didn't. This line of
-troubleshooting didn't feel right, though. We shouldn't have to be
-changing the MTU in these areas.
-
-As a last resort, our network admin (Alvaro) and I sat down with four
-terminal windows, a pencil, and a piece of paper. In one window, we ran
-ping. In the second window, we ran ``tcpdump`` on the cloud controller.
-In the third, ``tcpdump`` on the compute node. And the fourth had
-``tcpdump`` on the instance. For background, this cloud was a
-multi-node, non-multi-host setup.
-
-One cloud controller acted as a gateway to all compute nodes.
-VlanManager was used for the network config. This means that the cloud
-controller and all compute nodes had a different VLAN for each OpenStack
-project. We used the :option:`-s` option of ``ping`` to change the
-packet size. We watched as sometimes packets would fully return,
-sometimes they'd only make it out and never back in, and sometimes the
-packets would stop at a random point. We changed ``tcpdump`` to start
-displaying the hex dump of the packet. We pinged between every
-combination of outside, controller, compute, and instance.
-
-Finally, Alvaro noticed something. When a packet from the outside hits
-the cloud controller, it should not be configured with a VLAN. We
-verified this as true. When the packet went from the cloud controller to
-the compute node, it should only have a VLAN if it was destined for an
-instance. This was still true. When the ping reply was sent from the
-instance, it should be in a VLAN. True. When it came back to the cloud
-controller and on its way out to the Internet, it should no longer have
-a VLAN. False. Uh oh. It looked as though the VLAN part of the packet
-was not being removed.
-
-That made no sense.
-
-While bouncing this idea around in our heads, I was randomly typing
-commands on the compute node:
-
-.. code-block:: console
-
-   $ ip a
-   …
-   10: vlan100@vlan20: mtu 1500 qdisc noqueue master br100 state UP
-   …
-
-"Hey Alvaro, can you run a VLAN on top of a VLAN?"
-
-"If you did, you'd add an extra 4 bytes to the packet…"
-
-Then it all made sense…
-
-.. code-block:: console
-
-   $ grep vlan_interface /etc/nova/nova.conf
-   vlan_interface=vlan20
-
-In ``nova.conf``, ``vlan_interface`` specifies what interface OpenStack
-should attach all VLANs to. The correct setting should have been:
-
-.. code-block:: ini
-
-   vlan_interface=bond0
-
-as this was the server's bonded NIC.
-
-vlan20 is the VLAN that the data center gave us for outgoing Internet
-access. It's a correct VLAN and is also attached to bond0.
-
-By mistake, I configured OpenStack to attach all tenant VLANs to vlan20
-instead of bond0, thereby stacking one VLAN on top of another. This
-added an extra 4 bytes to each packet, so a packet of 1504 bytes would
-be sent out, which caused problems when it arrived at an interface that
-only accepted 1500.
-
-As soon as this setting was fixed, everything worked.
-
-"The Issue"
-~~~~~~~~~~~
-
-At the end of August 2012, a post-secondary school in Alberta, Canada
-migrated its infrastructure to an OpenStack cloud.
As luck would have it, within the first day or two of it running, one
-of their servers just disappeared from the network. Blip. Gone.
-
-After restarting the instance, everything was back up and running. We
-reviewed the logs and saw that at some point, network communication
-stopped and then everything went idle. We chalked this up to a random
-occurrence.
-
-A few nights later, it happened again.
-
-We reviewed both sets of logs. The one thing that stood out the most was
-DHCP. At the time, OpenStack, by default, set DHCP leases for one minute
-(it's now two minutes). This means that every instance contacts the
-cloud controller (DHCP server) to renew its fixed IP. For some reason,
-this instance could not renew its IP. We correlated the instance's logs
-with the logs on the cloud controller and put together a conversation:
-
-#. Instance tries to renew IP.
-
-#. Cloud controller receives the renewal request and sends a response.
-
-#. Instance "ignores" the response and re-sends the renewal request.
-
-#. Cloud controller receives the second request and sends a new
-   response.
-
-#. Instance begins sending a renewal request to ``255.255.255.255``
-   since it hasn't heard back from the cloud controller.
-
-#. The cloud controller receives the ``255.255.255.255`` request and
-   sends a third response.
-
-#. The instance finally gives up.
-
-With this information in hand, we were sure that the problem had to do
-with DHCP. We thought that for some reason, the instance wasn't getting
-a new IP address, and with no IP, it shut itself off from the network.
-
-A quick Google search turned up this: `DHCP lease errors in VLAN
-mode `_
-(https://lists.launchpad.net/openstack/msg11696.html) which further
-supported our DHCP theory.
-
-An initial idea was to just increase the lease time. If the instance
-renewed only once every week, the chances of this problem happening
-would be tremendously smaller than every minute. This didn't solve the
-problem, though. It was just covering the problem up.
-
-We decided to have ``tcpdump`` run on this instance and see if we could
-catch it in action again. Sure enough, we did.
-
-The ``tcpdump`` looked very, very weird. In short, it looked as though
-network communication stopped before the instance tried to renew its
-IP. Since there is so much DHCP chatter from a one-minute lease, it's
-very hard to confirm, but even with only milliseconds' difference
-between packets, if one packet arrives first, it arrived first, and if
-that packet reported network issues, then it had to have happened
-before DHCP.
-
-Additionally, the instance in question was responsible for a very, very
-large backup job each night. While "The Issue" (as we were now calling
-it) didn't happen exactly when the backup happened, it was close enough
-(a few hours) that we couldn't ignore it.
-
-More days went by and we caught The Issue in action more and more. We
-found that dhclient was not running after The Issue happened. Now we
-were back to thinking it was a DHCP issue. Running
-``/etc/init.d/networking restart`` brought everything back up and
-running.
-
-Ever have one of those days where all of a sudden you get the Google
-results you were looking for? Well, that's what happened here. I was
-looking for information on dhclient and why it dies when it can't renew
-its lease, and all of a sudden I found a bunch of OpenStack and dnsmasq
-discussions that were identical to the problem we were seeing!
-
-`Problem with Heavy Network IO and
-Dnsmasq `_
-(http://www.gossamer-threads.com/lists/openstack/operators/18197)
-
-`instances losing IP address while running, due to No
-DHCPOFFER `_
-(http://www.gossamer-threads.com/lists/openstack/dev/14696)
-
-Seriously, Google.
-
-This bug report was the key to everything: `KVM images lose connectivity
-with bridged
-network `_
-(https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/997978)
-
-It was funny to read the report. It was full of people who had some
-strange network problem but didn't quite explain it in the same way.
-
-So it was a qemu/kvm bug.
-
-Around the same time that we found the bug report, a co-worker was able
-to successfully reproduce The Issue! How? He used ``iperf`` to spew a
-ton of bandwidth at an instance. Within 30 minutes, the instance just
-disappeared from the network.
-
-Armed with a patched qemu and a way to reproduce the problem, we set
-out to see if we had finally solved The Issue. After 48 hours straight
-of hammering the instance with bandwidth, we were confident. The rest
-is history. You can search the bug report for "joe" to find my comments
-and actual tests.
-
-Disappearing Images
-~~~~~~~~~~~~~~~~~~~
-
-At the end of 2012, Cybera (a nonprofit with a mandate to oversee the
-development of cyberinfrastructure in Alberta, Canada) deployed an
-updated OpenStack cloud for their `DAIR
-project `_
-(http://www.canarie.ca/en/dair-program/about). A few days into
-production, a compute node locked up. Upon rebooting the node, I checked
-to see what instances were hosted on that node so I could boot them on
-behalf of the customer. Luckily, only one instance.
-
-The :command:`nova reboot` command wasn't working, so I used
-:command:`virsh`, but it immediately came back with an error saying it
-was unable to find the backing disk. In this case, the backing disk is
-the Glance image that is copied to ``/var/lib/nova/instances/_base``
-when the image is used for the first time. Why couldn't it find it? I
-checked the directory and sure enough it was gone.
-
-I reviewed the ``nova`` database and saw the instance's entry in the
-``nova.instances`` table. The image that the instance was using matched
-what virsh was reporting, so no inconsistency there.
-
-I checked Glance and noticed that this image was a snapshot that the
-user created. At least that was good news—this user would have been the
-only user affected.
-
-Finally, I checked StackTach and reviewed the user's events. They had
-created and deleted several snapshots—most likely experimenting.
-Although the timestamps didn't match up, my conclusion was that they
-launched their instance and then deleted the snapshot and it was somehow
-removed from ``/var/lib/nova/instances/_base``. None of that made sense,
-but it was the best I could come up with.
-
-It turns out the reason that this compute node locked up was a hardware
-issue. We removed it from the DAIR cloud and called Dell to have it
-serviced. Dell arrived and began working. Somehow or another (or a fat
-finger), a different compute node was bumped and rebooted. Great.
-
-When this node fully booted, I ran through the same scenario of seeing
-what instances were running so I could turn them back on. There were a
-total of four. Three booted and one gave an error. It was the same error
-as before: unable to find the backing disk. Seriously, what?
-
-Again, it turns out that the image was a snapshot. The three other
-instances that successfully started were standard cloud images. Was it a
-problem with snapshots?
That didn't make sense.
-
-A note about DAIR's architecture: ``/var/lib/nova/instances`` is a
-shared NFS mount. This means that all compute nodes have access to it,
-which includes the ``_base`` directory. Another centralized area is
-``/var/log/rsyslog`` on the cloud controller. This directory collects
-all OpenStack logs from all compute nodes. I wondered if there were any
-entries for the file that :command:`virsh` was reporting:
-
-.. code-block:: console
-
-   dair-ua-c03/nova.log:Dec 19 12:10:59 dair-ua-c03
-   2012-12-19 12:10:59 INFO nova.virt.libvirt.imagecache
-   [-] Removing base file:
-   /var/lib/nova/instances/_base/7b4783508212f5d242cbf9ff56fb8d33b4ce6166_10
-
-Ah-hah! So OpenStack was deleting it. But why?
-
-A feature was introduced in Essex to periodically check and see if there
-were any ``_base`` files not in use. If there were, OpenStack Compute
-would delete them. This idea sounds innocent enough and has some good
-qualities to it. But how did this feature end up turned on? It was
-disabled by default in Essex. As it should be. It was `decided to be
-turned on in Folsom `_
-(https://bugs.launchpad.net/nova/+bug/1029674). I cannot emphasize
-enough that:
-
-*Actions which delete things should not be enabled by default.*
-
-Disk space is cheap these days. Data recovery is not.
-
-Secondly, DAIR's shared ``/var/lib/nova/instances`` directory
-contributed to the problem. Since all compute nodes have access to this
-directory, all compute nodes periodically review the ``_base``
-directory. If there is only one instance using an image, and the node
-that the instance is on is down for a few minutes, it won't be able to
-mark the image as still in use. Therefore, the image seems like it's
-not in use and is deleted. When the compute node comes back online, the
-instance hosted on that node is unable to start.
-
-The Valentine's Day Compute Node Massacre
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Although the title of this story is much more dramatic than the actual
-event, I don't think, or hope, that I'll have the opportunity to use
-"Valentine's Day Massacre" again in a title.
-
-This past Valentine's Day, I received an alert that a compute node was
-no longer available in the cloud—meaning,
-
-.. code-block:: console
-
-   $ nova service-list
-
-showed this particular node in a down state.
-
-I logged into the cloud controller and was able to both ``ping`` and SSH
-into the problematic compute node, which seemed very odd. Usually if I
-receive this type of alert, the compute node has totally locked up and
-would be inaccessible.
-
-After a few minutes of troubleshooting, I saw the following details:
-
-- A user recently tried launching a CentOS instance on that node
-
-- This user was the only user on the node (new node)
-
-- The load shot up to 8 right before I received the alert
-
-- The bonded 10gb network device (bond0) was in a DOWN state
-
-- The 1gb NIC was still alive and active
-
-I looked at the status of both NICs in the bonded pair and saw that
-neither was able to communicate with the switch port. Seeing as how each
-NIC in the bond is connected to a separate switch, I thought that the
-chance of a switch port dying on each switch at the same time was quite
-improbable. I concluded that the 10gb dual-port NIC had died and needed
-to be replaced. I created a ticket for the hardware support department
-at the data center where the node was hosted. I felt lucky that this
-was a new node and no one else was hosted on it yet.
-
-An hour later I received the same alert, but for another compute node.
-Crap. OK, now there's definitely a problem going on. Just like the
-original node, I was able to log in by SSH. The bond0 NIC was DOWN but
-the 1gb NIC was active.
-
-And the best part: the same user had just tried creating a CentOS
-instance. What?
-
-I was totally confused at this point, so I texted our network admin to
-see if he was available to help. He logged in to both switches and
-immediately saw the problem: the switches detected spanning tree packets
-coming from the two compute nodes and immediately shut the ports down to
-prevent spanning tree loops:
-
-.. code-block:: console
-
-   Feb 15 01:40:18 SW-1 Stp: %SPANTREE-4-BLOCK_BPDUGUARD: Received BPDU packet on Port-Channel35 with BPDU guard enabled. Disabling interface. (source mac fa:16:3e:24:e7:22)
-   Feb 15 01:40:18 SW-1 Ebra: %ETH-4-ERRDISABLE: bpduguard error detected on Port-Channel35.
-   Feb 15 01:40:18 SW-1 Mlag: %MLAG-4-INTF_INACTIVE_LOCAL: Local interface Port-Channel35 is link down. MLAG 35 is inactive.
-   Feb 15 01:40:18 SW-1 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Port-Channel35 (Server35), changed state to down
-   Feb 15 01:40:19 SW-1 Stp: %SPANTREE-6-INTERFACE_DEL: Interface Port-Channel35 has been removed from instance MST0
-   Feb 15 01:40:19 SW-1 Ebra: %LINEPROTO-5-UPDOWN: Line protocol on Interface Ethernet35 (Server35), changed state to down
-
-He re-enabled the switch ports and the two compute nodes immediately
-came back to life.
-
-Unfortunately, this story has an open ending... we're still looking into
-why the CentOS image was sending out spanning tree packets. Further,
-we're researching ways to prevent this from happening. It's a bigger
-issue than one might think. While it's extremely important for switches
-to prevent spanning tree loops, it's very problematic to have an entire
-compute node be cut off from the network when this happens. If a compute
-node is hosting 100 instances and one of them sends a spanning tree
-packet, that instance has effectively DDOS'd the other 99 instances.
-
-This is an ongoing and hot topic in networking circles—especially with
-the rise of virtualization and virtual switches.
-
-Down the Rabbit Hole
-~~~~~~~~~~~~~~~~~~~~
-
-Users being able to retrieve console logs from running instances is a
-boon for support—many times they can figure out what's going on inside
-their instance and fix it without bothering you. Unfortunately,
-sometimes overzealous logging of failures can cause problems of its own.
-
-A report came in: VMs were launching slowly, or not at all. Cue the
-standard checks—nothing in Nagios, but there was a spike in network
-traffic toward the current master of our RabbitMQ cluster. Investigation
-started, but soon the other parts of the queue cluster were leaking
-memory like a sieve. Then the alert came in—the master Rabbit server
-went down and connections failed over to the slave.
-
-At that time, our control services were hosted by another team and we
-didn't have much debugging information to determine what was going on
-with the master, and we could not reboot it. That team noted that it
-failed without alert, but managed to reboot it. After an hour, the
-cluster had returned to its normal state and we went home for the day.
-
-Continuing the diagnosis the next morning was kick-started by another
-identical failure. We quickly got the message queue running again, and
-tried to work out why Rabbit was suffering from so much network
-traffic.
-Enabling debug logging on nova-api quickly brought understanding. A -``tail -f /var/log/nova/nova-api.log`` was scrolling by faster -than we'd ever seen before. CTRL+C on that and we could plainly see the -contents of a system log spewing failures over and over again - a system -log from one of our users' instances. - -After finding the instance ID we headed over to -``/var/lib/nova/instances`` to find the ``console.log``: - -.. code-block:: console - - adm@cc12:/var/lib/nova/instances/instance-00000e05# wc -l console.log - 92890453 console.log - adm@cc12:/var/lib/nova/instances/instance-00000e05# ls -sh console.log - 5.5G console.log - -Sure enough, the user had been periodically refreshing the console log -page on the dashboard and the 5G file was traversing the Rabbit cluster -to get to the dashboard. - -We called them and asked them to stop for a while, and they were happy -to abandon the horribly broken VM. After that, we started monitoring the -size of console logs. - -To this day, `the issue `__ -(https://bugs.launchpad.net/nova/+bug/832507) doesn't have a permanent -resolution, but we look forward to the discussion at the next summit. - -Havana Haunted by the Dead -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Felix Lee of Academia Sinica Grid Computing Centre in Taiwan contributed -this story. - -I just upgraded OpenStack from Grizzly to Havana 2013.2-2 using the RDO -repository and everything was running pretty well—except the EC2 API. - -I noticed that the API would suffer from a heavy load and respond slowly -to particular EC2 requests such as ``RunInstances``. - -Output from ``/var/log/nova/nova-api.log`` on :term:`Havana`: - -.. code-block:: console - - 2014-01-10 09:11:45.072 129745 INFO nova.ec2.wsgi.server - [req-84d16d16-3808-426b-b7af-3b90a11b83b0 - 0c6e7dba03c24c6a9bce299747499e8a 7052bd6714e7460caeb16242e68124f9] - 117.103.103.29 "GET - /services/Cloud?AWSAccessKeyId=[something]&Action=RunInstances&ClientToken=[something]&ImageId=ami-00000001&InstanceInitiatedShutdownBehavior=terminate... - HTTP/1.1" status: 200 len: 1109 time: 138.5970151 - -This request took over two minutes to process, but executed quickly on -another co-existing Grizzly deployment using the same hardware and -system configuration. - -Output from ``/var/log/nova/nova-api.log`` on :term:`Grizzly`: - -.. code-block:: console - - 2014-01-08 11:15:15.704 INFO nova.ec2.wsgi.server - [req-ccac9790-3357-4aa8-84bd-cdaab1aa394e - ebbd729575cb404081a45c9ada0849b7 8175953c209044358ab5e0ec19d52c37] - 117.103.103.29 "GET - /services/Cloud?AWSAccessKeyId=[something]&Action=RunInstances&ClientToken=[something]&ImageId=ami-00000007&InstanceInitiatedShutdownBehavior=terminate... - HTTP/1.1" status: 200 len: 931 time: 3.9426181 - -While monitoring system resources, I noticed a significant increase in -memory consumption while the EC2 API processed this request. I thought -it wasn't handling memory properly—possibly not releasing memory. If the -API received several of these requests, memory consumption quickly grew -until the system ran out of RAM and began using swap. Each node has 48 -GB of RAM and the "nova-api" process would consume all of it within -minutes. Once this happened, the entire system would become unusably -slow until I restarted the nova-api service. - -So, I found myself wondering what changed in the EC2 API on Havana that -might cause this to happen. Was it a bug or a normal behavior that I now -need to work around? 
- -After digging into the nova (OpenStack Compute) code, I noticed two -areas in ``api/ec2/cloud.py`` potentially impacting my system: - -.. code-block:: python - - instances = self.compute_api.get_all(context, - search_opts=search_opts, - sort_dir='asc') - - sys_metas = self.compute_api.get_all_system_metadata( - context, search_filts=[{'key': ['EC2_client_token']}, - {'value': [client_token]}]) - -Since my database contained many records—over 1 million metadata records -and over 300,000 instance records in "deleted" or "errored" states—each -search took a long time. I decided to clean up the database by first -archiving a copy for backup and then performing some deletions using the -MySQL client. For example, I ran the following SQL command to remove -rows of instances deleted for over a year: - -.. code-block:: console - - mysql> delete from nova.instances where deleted=1 and terminated_at < (NOW() - INTERVAL 1 YEAR); - -Performance increased greatly after deleting the old records and my new -deployment continues to behave well. diff --git a/doc/ops-guide/source/app_resources.rst b/doc/ops-guide/source/app_resources.rst deleted file mode 100644 index 4a4df2df..00000000 --- a/doc/ops-guide/source/app_resources.rst +++ /dev/null @@ -1,62 +0,0 @@ -========= -Resources -========= - -OpenStack -~~~~~~~~~ - -- `Installation Guide for openSUSE 13.2 and SUSE Linux Enterprise - Server 12 `_ - -- `Installation Guide for Red Hat Enterprise Linux 7, CentOS 7, and - Fedora 22 `_ - -- `Installation Guide for Ubuntu 14.04 (LTS) - Server `_ - -- `OpenStack Administrator Guide `_ - -- `OpenStack Cloud Computing Cookbook (Packt - Publishing) `_ - -Cloud (General) -~~~~~~~~~~~~~~~ - -- `“The NIST Definition of Cloud - Computing” `_ - -Python -~~~~~~ - -- `Dive Into Python (Apress) `_ - -Networking -~~~~~~~~~~ - -- `TCP/IP Illustrated, Volume 1: The Protocols, 2/E - (Pearson) `_ - -- `The TCP/IP Guide (No Starch - Press) `_ - -- `“A tcpdump Tutorial and - Primer” `_ - -Systems Administration -~~~~~~~~~~~~~~~~~~~~~~ - -- `UNIX and Linux Systems Administration Handbook (Prentice - Hall) `_ - -Virtualization -~~~~~~~~~~~~~~ - -- `The Book of Xen (No Starch - Press) `_ - -Configuration Management -~~~~~~~~~~~~~~~~~~~~~~~~ - -- `Puppet Labs Documentation `_ - -- `Pro Puppet (Apress) `_ diff --git a/doc/ops-guide/source/app_roadmaps.rst b/doc/ops-guide/source/app_roadmaps.rst deleted file mode 100644 index 3e9da61b..00000000 --- a/doc/ops-guide/source/app_roadmaps.rst +++ /dev/null @@ -1,435 +0,0 @@ -===================== -Working with Roadmaps -===================== - -The good news: OpenStack has unprecedented transparency when it comes to -providing information about what's coming up. The bad news: each release -moves very quickly. The purpose of this appendix is to highlight some of -the useful pages to track, and take an educated guess at what is coming -up in the next release and perhaps further afield. - -OpenStack follows a six month release cycle, typically releasing in -April/May and October/November each year. At the start of each cycle, -the community gathers in a single location for a design summit. At the -summit, the features for the coming releases are discussed, prioritized, -and planned. The below figure shows an example release cycle, with dates -showing milestone releases, code freeze, and string freeze dates, along -with an example of when the summit occurs. Milestones are interim releases -within the cycle that are available as packages for download and -testing. 
Code freeze is putting a stop to adding new features to the release.
-String freeze is putting a stop to changing any strings within the
-source code.
-
-.. image:: figures/osog_ac01.png
-   :width: 100%
-
-
-Information Available to You
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-There are several good sources of information available that you can
-use to track your OpenStack development desires.
-
-Release notes are maintained on the OpenStack wiki, and also shown here:
-
-.. list-table::
-   :widths: 25 25 25 25
-   :header-rows: 1
-
-   * - Series
-     - Status
-     - Releases
-     - Date
-   * - Liberty
-     - `Under Development
-       `_
-     - 2015.2
-     - Oct 2015
-   * - Kilo
-     - `Current stable release, security-supported
-       `_
-     - `2015.1 `_
-     - Apr 30, 2015
-   * - Juno
-     - `Security-supported
-       `_
-     - `2014.2 `_
-     - Oct 16, 2014
-   * - Icehouse
-     - `End-of-life
-       `_
-     - `2014.1 `_
-     - Apr 17, 2014
-   * -
-     -
-     - `2014.1.1 `_
-     - Jun 9, 2014
-   * -
-     -
-     - `2014.1.2 `_
-     - Aug 8, 2014
-   * -
-     -
-     - `2014.1.3 `_
-     - Oct 2, 2014
-   * - Havana
-     - End-of-life
-     - `2013.2 `_
-     - Oct 17, 2013
-   * -
-     -
-     - `2013.2.1 `_
-     - Dec 16, 2013
-   * -
-     -
-     - `2013.2.2 `_
-     - Feb 13, 2014
-   * -
-     -
-     - `2013.2.3 `_
-     - Apr 3, 2014
-   * -
-     -
-     - `2013.2.4 `_
-     - Sep 22, 2014
-   * - Grizzly
-     - End-of-life
-     - `2013.1 `_
-     - Apr 4, 2013
-   * -
-     -
-     - `2013.1.1 `_
-     - May 9, 2013
-   * -
-     -
-     - `2013.1.2 `_
-     - Jun 6, 2013
-   * -
-     -
-     - `2013.1.3 `_
-     - Aug 8, 2013
-   * -
-     -
-     - `2013.1.4 `_
-     - Oct 17, 2013
-   * -
-     -
-     - `2013.1.5 `_
-     - Mar 20, 2015
-   * - Folsom
-     - End-of-life
-     - `2012.2 `_
-     - Sep 27, 2012
-   * -
-     -
-     - `2012.2.1 `_
-     - Nov 29, 2012
-   * -
-     -
-     - `2012.2.2 `_
-     - Dec 13, 2012
-   * -
-     -
-     - `2012.2.3 `_
-     - Jan 31, 2013
-   * -
-     -
-     - `2012.2.4 `_
-     - Apr 11, 2013
-   * - Essex
-     - End-of-life
-     - `2012.1 `_
-     - Apr 5, 2012
-   * -
-     -
-     - `2012.1.1 `_
-     - Jun 22, 2012
-   * -
-     -
-     - `2012.1.2 `_
-     - Aug 10, 2012
-   * -
-     -
-     - `2012.1.3 `_
-     - Oct 12, 2012
-   * - Diablo
-     - Deprecated
-     - `2011.3 `_
-     - Sep 22, 2011
-   * -
-     -
-     - `2011.3.1 `_
-     - Jan 19, 2012
-   * - Cactus
-     - Deprecated
-     - `2011.2 `_
-     - Apr 15, 2011
-   * - Bexar
-     - Deprecated
-     - `2011.1 `_
-     - Feb 3, 2011
-   * - Austin
-     - Deprecated
-     - `2010.1 `_
-     - Oct 21, 2010
-
-Here are some other resources:
-
-- `A breakdown of current features under development, with their target
-  milestone `_
-
-- `A list of all features, including those not yet under
-  development `_
-
-- `Rough-draft design discussions ("etherpads") from the last design
-  summit `_
-
-- `List of individual code changes under
-  review `_
-
-Influencing the Roadmap
-~~~~~~~~~~~~~~~~~~~~~~~
-
-OpenStack truly welcomes your ideas (and contributions) and highly
-values feedback from real-world users of the software. By learning a
-little about the process that drives feature development, you can
-participate and perhaps get the additions you desire.
-
-Feature requests typically start their life in Etherpad, a collaborative
-editing tool, which is used to take coordinating notes at a design
-summit session specific to the feature. This then leads to the creation
-of a blueprint on the Launchpad site for the particular project, which
-is used to describe the feature more formally. Blueprints are then
-approved by project team members, and development can begin.
-
-Therefore, the fastest way to get your feature request up for
-consideration is to create an Etherpad with your ideas and propose a
-session to the design summit. If the design summit has already passed,
-you may also create a blueprint directly. Read this `blog post about how
-to work with blueprints
-`_
-from the perspective of Victoria Martínez, a developer intern.
-
-The roadmap for the next release as it is developed can be seen at
-`Releases `_.
-
-To determine the potential features going in to future releases, or to
-look at features implemented previously, take a look at the existing
-blueprints such as `OpenStack Compute (nova)
-Blueprints `_, `OpenStack
-Identity (keystone)
-Blueprints `_, and release
-notes.
-
-Aside from the direct-to-blueprint pathway, there is another very
-well-regarded mechanism to influence the development roadmap: the user
-survey. Found at http://openstack.org/user-survey, it allows you to
-provide details of your deployments and needs, anonymously by default.
-Each cycle, the user committee analyzes the results and produces a
-report, including providing specific information to the technical
-committee and project team leads.
-
-Aspects to Watch
-~~~~~~~~~~~~~~~~
-
-You want to keep an eye on the areas improving within OpenStack. The
-best way to "watch" roadmaps for each project is to look at the
-blueprints that are being approved for work on milestone releases. You
-can also learn from PTL webinars that follow the OpenStack summits twice
-a year.
-
-Driver Quality Improvements
----------------------------
-
-A major quality push has occurred across drivers and plug-ins in Block
-Storage, Compute, and Networking. Particularly, developers of Compute
-and Networking drivers that require proprietary or hardware products
-are now required to provide an automated external testing system for
-use during the development process.
-
-Easier Upgrades
----------------
-
-One of the most requested features since OpenStack began (for components
-other than Object Storage, which tends to "just work"): easier upgrades.
-In all recent releases, internal messaging communication is versioned,
-meaning services can theoretically drop back to backward-compatible
-behavior. This allows you to run later versions of some components,
-while keeping older versions of others.
-
-In addition, database migrations are now tested with the Turbo Hipster
-tool. This tool tests database migration performance on copies of
-real-world user databases.
-
-These changes have facilitated the first proper OpenStack upgrade guide,
-found in :doc:`ops_upgrades`, and will continue to improve in the next
-release.
-
-Deprecation of Nova Network
----------------------------
-
-With the introduction of the full software-defined networking stack
-provided by OpenStack Networking (neutron) in the Folsom release,
-development effort on the initial networking code that remains part of
-the Compute component has gradually lessened. While many still use
-``nova-network`` in production, there has been a long-term plan to
-remove the code in favor of the more flexible and full-featured
-OpenStack Networking.
-
-An attempt was made to deprecate ``nova-network`` during the Havana
-release, but it was aborted due to the lack of equivalent functionality
-(such as the FlatDHCP multi-host high-availability mode mentioned in
-this guide), the lack of a migration path between versions, insufficient
-testing, and the simplicity ``nova-network`` offers for the more
-straightforward use cases it has traditionally supported.
Though significant effort has been made to address these concerns,
-``nova-network`` was not deprecated in the Juno release. In addition,
-to a limited degree, patches to ``nova-network`` have again begun to be
-accepted, such as adding a per-network settings feature and SR-IOV
-support in Juno.
-
-This leaves you with an important point of decision when designing your
-cloud. OpenStack Networking is robust enough to use with a small number
-of limitations (performance issues in some scenarios, only basic high
-availability of layer 3 systems) and provides many more features than
-``nova-network``. However, if you do not have the more complex use cases
-that can benefit from fuller software-defined networking capabilities,
-or are uncomfortable with the new concepts introduced, ``nova-network``
-may continue to be a viable option for the next 12 months.
-
-Similarly, if you have an existing cloud and are looking to upgrade from
-``nova-network`` to OpenStack Networking, you should have the option to
-delay the upgrade for this period of time. However, each release of
-OpenStack brings significant new innovation, and regardless of your use
-of networking methodology, it is likely best to begin planning for an
-upgrade within a reasonable timeframe of each release.
-
-As mentioned, there's currently no way to cleanly migrate from
-``nova-network`` to neutron. We recommend that you keep a migration in
-mind, and consider what that process might involve, for when a proper
-migration path is released.
-
-Distributed Virtual Router
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-One of the long-time complaints surrounding OpenStack Networking was the
-lack of high availability for the layer 3 components. The Juno release
-introduced Distributed Virtual Router (DVR), which aims to solve this
-problem.
-
-Early indications are that it does this well for a base set of
-scenarios, such as using the ML2 plug-in with Open vSwitch, one flat
-external network, and VXLAN tenant networks. However, it does appear
-that there are problems with the use of VLANs, IPv6, floating IPs, high
-north-south traffic scenarios, and large numbers of compute nodes. These
-are expected to improve significantly with the next release, but bug
-reports on specific issues are highly desirable.
-
-Replacement of Open vSwitch Plug-in with Modular Layer 2
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The Modular Layer 2 plug-in is a framework allowing OpenStack Networking
-to simultaneously utilize the variety of layer-2 networking technologies
-found in complex real-world data centers. It currently works with the
-existing Open vSwitch, Linux Bridge, and Hyper-V L2 agents and is
-intended to replace and deprecate the monolithic plug-ins associated
-with those L2 agents.
-
-New API Versions
-~~~~~~~~~~~~~~~~
-
-The third version of the Compute API was broadly discussed and worked on
-during the Havana and Icehouse release cycles. Current discussions
-indicate that the V2 API will remain for many releases, and the next
-iteration of the API will be denoted v2.1 and have similar properties to
-the existing v2.0, rather than being an entirely new v3 API. This is a
-great time to evaluate all the APIs and provide comments while the next
-generation APIs are being defined. A new working group was formed
-specifically to `improve OpenStack APIs `_
-and create design guidelines, which you are welcome to join.
-
-OpenStack on OpenStack (TripleO)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This project continues to improve, and you may consider using it for
-greenfield deployments, though according to the latest user survey
-results it has yet to see widespread uptake.
-
-Data processing service for OpenStack (sahara)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-A dedicated team has been making solid progress on a
-Hadoop-as-a-Service project, a much-requested answer to big data
-problems.
-
-Bare metal Deployment (ironic)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The bare-metal deployment project has been widely lauded, and
-development continues. The Juno release brought the OpenStack Bare metal
-driver into the Compute project, with the aim of deprecating the
-existing bare-metal driver in Kilo. If you are a current user of the
-bare metal driver, a particular blueprint to follow is `Deprecate the
-bare metal driver
-`_.
-
-Database as a Service (trove)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The OpenStack community has had a database-as-a-service tool in
-development for some time, and we saw the first integrated release of it
-in Icehouse. From its release it was able to deploy database servers out
-of the box in a highly available way, initially supporting only MySQL.
-Juno introduced support for Mongo (including clustering), PostgreSQL,
-and Couchbase, in addition to replication functionality for MySQL. In
-Kilo, more advanced clustering capability was delivered, in addition to
-better integration with other OpenStack components such as Networking.
-
-Message Service (zaqar)
-~~~~~~~~~~~~~~~~~~~~~~~
-
-A service to provide queues of messages and notifications was released.
-
-DNS service (designate)
-~~~~~~~~~~~~~~~~~~~~~~~
-
-A long-requested service that provides the ability to manipulate DNS
-entries associated with OpenStack resources has gathered a following.
-The designate project was also released.
-
-Scheduler Improvements
-~~~~~~~~~~~~~~~~~~~~~~
-
-Both Compute and Block Storage rely on schedulers to determine where to
-place virtual machines or volumes. In Havana, the Compute scheduler
-underwent significant improvement, while in Icehouse it was the
-scheduler in Block Storage that received a boost. Further down the
-track, an effort that started this cycle aims to create a holistic
-scheduler covering both. Some of the work that was done in Kilo can be
-found under the `Gantt project `_.
-
-Block Storage Improvements
---------------------------
-
-Block Storage is considered a stable project, with wide uptake and a
-long track record of quality drivers. The team has discussed many areas
-of work at the summits, including better error reporting, automated
-discovery, and thin provisioning features.
-
-Toward a Python SDK
--------------------
-
-Though many successfully use the various python-\*client code as an
-effective SDK for interacting with OpenStack, consistency between the
-projects and documentation availability waxes and wanes. To combat this,
-an `effort to improve the experience `_ has
-started. Cross-project development efforts in OpenStack have a checkered
-history, such as the `unified client
-project `_ having several false starts.
-However, the early signs for the SDK project are promising, and we
-expect to see results during the Juno cycle.
diff --git a/doc/ops-guide/source/app_usecases.rst b/doc/ops-guide/source/app_usecases.rst
deleted file mode 100644
index a10da456..00000000
--- a/doc/ops-guide/source/app_usecases.rst
+++ /dev/null
@@ -1,192 +0,0 @@
-=========
-Use Cases
-=========
-
-This appendix contains a small selection of use cases from the
-community, with more technical detail than usual. Further examples can
-be found on the `OpenStack website `_.
-
-NeCTAR
-~~~~~~
-
-Who uses it: researchers from the Australian publicly funded research
-sector. Use is across a wide variety of disciplines, with the purpose of
-instances ranging from running simple web servers to using hundreds of
-cores for high-throughput computing.
-
-Deployment
-----------
-
-Using OpenStack Compute cells, the NeCTAR Research Cloud spans eight
-sites with approximately 4,000 cores per site.
-
-Each site runs a different configuration, as a resource cell in an
-OpenStack Compute cells setup. Some sites span multiple data centers,
-some use off-compute-node storage with a shared file system, and some
-use on-compute-node storage with a non-shared file system. Each site
-deploys the Image service with an Object Storage back end. A central
-Identity, dashboard, and Compute API service are used. A login to the
-dashboard triggers a SAML login with Shibboleth, which creates an
-account in the Identity service with an SQL back end. An Object Storage
-Global Cluster is used across several sites.
-
-Compute nodes have 24 to 48 cores, with at least 4 GB of RAM per core
-and approximately 40 GB of ephemeral storage per core.
-
-All sites are based on Ubuntu 14.04, with KVM as the hypervisor. The
-OpenStack version in use is typically the current stable version, with 5
-to 10 percent back-ported code from trunk and modifications.
-
-Resources
----------
-
-- `OpenStack.org case
-  study `_
-
-- `NeCTAR-RC GitHub `_
-
-- `NeCTAR website `_
-
-MIT CSAIL
-~~~~~~~~~
-
-Who uses it: researchers from the MIT Computer Science and Artificial
-Intelligence Lab.
-
-Deployment
-----------
-
-The CSAIL cloud is currently 64 physical nodes with a total of 768
-physical cores and 3,456 GB of RAM. Persistent data storage is largely
-outside the cloud on NFS, with cloud resources focused on compute
-resources. There are more than 130 users in more than 40 projects,
-typically running 2,000–2,500 vCPUs in 300 to 400 instances.
-
-We initially deployed on Ubuntu 12.04 with the Essex release of
-OpenStack using FlatDHCP multi-host networking.
-
-The software stack is still Ubuntu 12.04 LTS, but now with OpenStack
-Havana from the Ubuntu Cloud Archive. KVM is the hypervisor, deployed
-using `FAI `_ and Puppet for configuration
-management. The FAI and Puppet combination is used lab-wide, not only
-for OpenStack. There is a single cloud controller node, which also acts
-as network controller, with the remainder of the server hardware
-dedicated to compute nodes.
-
-Host aggregates and instance-type extra specs are used to provide two
-different resource allocation ratios. The default resource allocation
-ratios we use are 4:1 CPU and 1.5:1 RAM. Compute-intensive workloads use
-instance types that require non-oversubscribed hosts where ``cpu_ratio``
-and ``ram_ratio`` are both set to 1.0. Since we have hyper-threading
-enabled on our compute nodes, this provides one vCPU per CPU thread, or
-two vCPUs per physical core.
-
-With our upgrade to Grizzly in August 2013, we moved to OpenStack
-Networking, neutron (quantum at the time).
Compute nodes have two gigabit network interfaces and a separate
-management card for IPMI management. One network interface is used for
-node-to-node communications. The other is used as a trunk port for
-OpenStack managed VLANs. The controller node uses two bonded 10g network
-interfaces for its public IP communications. Big pipes are used here
-because images are served over this port, and it is also used to connect
-to iSCSI storage, back-ending the image storage and database. The
-controller node also has a gigabit interface that is used in trunk mode
-for OpenStack managed VLAN traffic. This port handles traffic to the
-dhcp-agent and metadata-proxy.
-
-We approximate the older ``nova-network`` multi-host HA setup by using
-"provider VLAN networks" that connect instances directly to existing
-publicly addressable networks and use existing physical routers as their
-default gateway. This means that if our network controller goes down,
-running instances still have their network available, and no single
-Linux host becomes a traffic bottleneck. We are able to do this because
-we have a sufficient supply of IPv4 addresses to cover all of our
-instances and thus don't need NAT and don't use floating IP addresses.
-We provide a single generic public network to all projects and
-additional existing VLANs on a project-by-project basis as needed.
-Individual projects are also allowed to create their own private
-GRE-based networks.
-
-Resources
----------
-
-- `CSAIL homepage `_
-
-DAIR
-~~~~
-
-Who uses it: DAIR is an integrated virtual environment that leverages
-the CANARIE network to develop and test new information communication
-technology (ICT) and other digital technologies. It combines digital
-infrastructure such as advanced networking, cloud computing, and
-storage to create an environment for developing and testing innovative
-ICT applications, protocols, and services; performing at-scale
-experimentation for deployment; and facilitating a faster time to
-market.
-
-Deployment
-----------
-
-DAIR is hosted at two different data centers across Canada: one in
-Alberta and the other in Quebec. It consists of a cloud controller at
-each location, although one is designated the "master" controller that
-is in charge of central authentication and quotas. This is done through
-custom scripts and light modifications to OpenStack. DAIR is currently
-running Havana.
-
-For Object Storage, each region has a swift environment.
-
-A NetApp appliance is used in each region for both block storage and
-instance storage. There are future plans to move the instances off the
-NetApp appliance and onto a distributed file system such as :term:`Ceph` or
-GlusterFS.
-
-VlanManager is used extensively for network management. All servers have
-two bonded 10GbE NICs that are connected to two redundant switches. DAIR
-is set up to use single-node networking where the cloud controller is
-the gateway for all instances on all compute nodes. Internal OpenStack
-traffic (for example, storage traffic) does not go through the cloud
-controller.
-
-Resources
----------
-
-- `DAIR homepage `__
-
-CERN
-~~~~
-
-Who uses it: researchers at CERN (European Organization for Nuclear
-Research) conducting high-energy physics research.
-
-Deployment
-----------
-
-The environment is largely based on Scientific Linux 6, which is Red Hat
-compatible. We use KVM as our primary hypervisor, although tests are
-ongoing with Hyper-V on Windows Server 2008.
- -We use the Puppet Labs OpenStack modules to configure Compute, Image -service, Identity, and dashboard. Puppet is used widely for instance -configuration, and Foreman is used as a GUI for reporting and instance -provisioning. - -Users and groups are managed through Active Directory and imported into -the Identity service using LDAP. CLIs are available for nova and -Euca2ools to do this. - -There are three clouds currently running at CERN, totaling about 4,700 -compute nodes, with approximately 120,000 cores. The CERN IT cloud aims -to expand to 300,000 cores by 2015. - -Resources ---------- - -- `“OpenStack in Production: A tale of 3 OpenStack - Clouds” `_ - -- `“Review of CERN Data Centre - Infrastructure” `_ - -- `“CERN Cloud Infrastructure User - Guide” `_ diff --git a/doc/ops-guide/source/arch_cloud_controller.rst b/doc/ops-guide/source/arch_cloud_controller.rst deleted file mode 100644 index ad9ab206..00000000 --- a/doc/ops-guide/source/arch_cloud_controller.rst +++ /dev/null @@ -1,403 +0,0 @@ -==================================================== -Designing for Cloud Controllers and Cloud Management -==================================================== - -OpenStack is designed to be massively horizontally scalable, which -allows all services to be distributed widely. However, to simplify this -guide, we have decided to discuss services of a more central nature, -using the concept of a *cloud controller*. A cloud controller is just a -conceptual simplification. In the real world, you design an architecture -for your cloud controller that enables high availability so that if any -node fails, another can take over the required tasks. In reality, cloud -controller tasks are spread out across more than a single node. - -The cloud controller provides the central management system for -OpenStack deployments. Typically, the cloud controller manages -authentication and sends messaging to all the systems through a message -queue. - -For many deployments, the cloud controller is a single node. However, to -have high availability, you have to take a few considerations into -account, which we'll cover in this chapter. 
-
-The cloud controller manages the following services for the cloud:
-
-Databases
-   Tracks current information about users and instances, for example,
-   in a database, typically one database instance managed per service
-
-Message queue services
-   All :term:`Advanced Message Queuing Protocol (AMQP)` messages for
-   services are received and sent according to the queue broker
-
-Conductor services
-   Proxy requests to a database
-
-Authentication and authorization for identity management
-   Indicates which users can do what actions on certain cloud
-   resources; quota management is spread out among services, however
-
-Image-management services
-   Stores and serves images with metadata on each, for launching in the
-   cloud
-
-Scheduling services
-   Indicates which resources to use first; for example, spreading out
-   where instances are launched based on an algorithm
-
-User dashboard
-   Provides a web-based front end for users to consume OpenStack cloud
-   services
-
-API endpoints
-   Offers each service's REST API access, where the API endpoint
-   catalog is managed by the Identity service
-
-For our example, the cloud controller has a collection of ``nova-*``
-components that represent the global state of the cloud; talks to
-services such as authentication; maintains information about the cloud
-in a database; communicates to all compute nodes and storage
-:term:`workers ` through a queue; and provides API access.
-Each service running on a designated cloud controller may be broken out
-into separate nodes for scalability or availability.
-
-As another example, you could use pairs of servers for a collective
-cloud controller—one active, one standby—for redundant nodes providing a
-given set of related services, such as:
-
-- Front end web for API requests, the scheduler for choosing which
-  compute node to boot an instance on, Identity services, and the
-  dashboard
-
-- Database and message queue server (such as MySQL, RabbitMQ)
-
-- Image service for the image management
-
-Now that you see the myriad designs for controlling your cloud, read
-more about the further considerations to help with your design
-decisions.
-
-Hardware Considerations
-~~~~~~~~~~~~~~~~~~~~~~~
-
-A cloud controller's hardware can be the same as a compute node, though
-you may want to further specify based on the size and type of cloud that
-you run.
-
-It's also possible to use virtual machines for all or some of the
-services that the cloud controller manages, such as the message queuing.
-In this guide, we assume that all services are running directly on the
-cloud controller.
-
-The table below contains common considerations to review when sizing
-hardware for the cloud controller design.
-
-.. list-table:: Cloud controller hardware sizing considerations
-   :widths: 50 50
-   :header-rows: 1
-
-   * - Consideration
-     - Ramification
-   * - How many instances will run at once?
-     - Size your database server accordingly, and scale out beyond one cloud
-       controller if many instances will report status at the same time and
-       scheduling where a new instance starts up needs computing power.
-   * - How many compute nodes will run at once?
-     - Ensure that your messaging queue handles requests successfully and size
-       accordingly.
-   * - How many users will access the API?
-     - If many users will make multiple requests, make sure that the CPU load
-       for the cloud controller can handle it.
-   * - How many users will access the dashboard versus the REST API directly?
- - The dashboard makes many requests, even more than the API access, so - add even more CPU if your dashboard is the main interface for your users. - * - How many ``nova-api`` services do you run at once for your cloud? - - You need to size the controller with a core per service. - * - How long does a single instance run? - - Starting instances and deleting instances is demanding on the compute - node but also demanding on the controller node because of all the API - queries and scheduling needs. - * - Does your authentication system also verify externally? - - External systems such as LDAP or Active Directory require network - connectivity between the cloud controller and an external authentication - system. Also ensure that the cloud controller has the CPU power to keep - up with requests. - - -Separation of Services -~~~~~~~~~~~~~~~~~~~~~~ - -While our example contains all central services in a single location, it -is possible and indeed often a good idea to separate services onto -different physical servers. The table below is a list of deployment -scenarios we've seen and their justifications. - -.. list-table:: Deployment scenarios - :widths: 50 50 - :header-rows: 1 - - * - Scenario - - Justification - * - Run ``glance-*`` servers on the ``swift-proxy`` server. - - This deployment felt that the spare I/O on the Object Storage proxy - server was sufficient and that the Image Delivery portion of glance - benefited from being on physical hardware and having good connectivity - to the Object Storage back end it was using. - * - Run a central dedicated database server. - - This deployment used a central dedicated server to provide the databases - for all services. This approach simplified operations by isolating - database server updates and allowed for the simple creation of slave - database servers for failover. - * - Run one VM per service. - - This deployment ran central services on a set of servers running KVM. - A dedicated VM was created for each service (``nova-scheduler``, - rabbitmq, database, etc). This assisted the deployment with scaling - because administrators could tune the resources given to each virtual - machine based on the load it received (something that was not well - understood during installation). - * - Use an external load balancer. - - This deployment had an expensive hardware load balancer in its - organization. It ran multiple ``nova-api`` and ``swift-proxy`` - servers on different physical servers and used the load balancer - to switch between them. - -One choice that always comes up is whether to virtualize. Some services, -such as ``nova-compute``, ``swift-proxy`` and ``swift-object`` servers, -should not be virtualized. However, control servers can often be happily -virtualized—the performance penalty can usually be offset by simply -running more of the service. - -Database -~~~~~~~~ - -OpenStack Compute uses an SQL database to store and retrieve stateful -information. MySQL is the popular database choice in the OpenStack -community. - -Loss of the database leads to errors. As a result, we recommend that you -cluster your database to make it failure tolerant. Configuring and -maintaining a database cluster is done outside OpenStack and is -determined by the database software you choose to use in your cloud -environment. MySQL/Galera is a popular option for MySQL-based databases. 
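-
-As a rough sketch of how services consume a clustered database (the
-hostname, virtual IP, and password below are hypothetical placeholders,
-not values from any particular deployment), each OpenStack service
-points at the database through an SQLAlchemy connection URL in its
-configuration file, and a MySQL/Galera cluster is commonly fronted by a
-single virtual IP so that the services need no cluster-specific
-settings. For example, in ``nova.conf``:
-
-.. code-block:: ini
-
-   [database]
-   # db-vip.example.com is a hypothetical virtual IP that floats across
-   # the Galera cluster members; NOVA_DBPASS is a placeholder password.
-   connection = mysql://nova:NOVA_DBPASS@db-vip.example.com/nova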
- -Message Queue -~~~~~~~~~~~~~ - -Most OpenStack services communicate with each other using the *message -queue*. For example, Compute communicates to block storage services and -networking services through the message queue. Also, you can optionally -enable notifications for any service. RabbitMQ, Qpid, and ZeroMQ are all -popular choices for a message-queue service. In general, if the message -queue fails or becomes inaccessible, the cluster grinds to a halt and -ends up in a read-only state, with information stuck at the point where -the last message was sent. Accordingly, we recommend that you cluster -the message queue. Be aware that clustered message queues can be a pain -point for many OpenStack deployments. While RabbitMQ has native -clustering support, there have been reports of issues when running it at -a large scale. Other queuing solutions are available, such as ZeroMQ -and Qpid, but ZeroMQ does not offer stateful queues. Qpid is the messaging -system of choice for Red Hat and its derivatives; however, it does not have -native clustering capabilities and requires a supplemental service, such -as Pacemaker or Corosync. For your message queue, you need to determine -what level of data loss you are comfortable with, and whether to use an -OpenStack project's ability to retry multiple MQ hosts in the event of a -failure, as Compute can do. - -Conductor Services -~~~~~~~~~~~~~~~~~~ - -In previous versions of OpenStack, all ``nova-compute`` services -required direct access to the database hosted on the cloud controller. -This was problematic for two reasons: security and performance. With -regard to security, if a compute node is compromised, the attacker -inherently has access to the database. With regard to performance, -``nova-compute`` calls to the database are single-threaded and blocking. -This creates a performance bottleneck because database requests are -fulfilled serially rather than in parallel. - -The conductor service resolves both of these issues by acting as a proxy -for the ``nova-compute`` service. Now, instead of ``nova-compute`` -directly accessing the database, it contacts the ``nova-conductor`` -service, and ``nova-conductor`` accesses the database on -``nova-compute``'s behalf. Since ``nova-compute`` no longer has direct -access to the database, the security issue is resolved. Additionally, -``nova-conductor`` is a nonblocking service, so requests from all -compute nodes are fulfilled in parallel. - -.. note:: - - If you are using ``nova-network`` and multi-host networking in your - cloud environment, ``nova-compute`` still requires direct access to - the database. - -The ``nova-conductor`` service is horizontally scalable. To make -``nova-conductor`` highly available and fault tolerant, just launch more -instances of the ``nova-conductor`` process, either on the same server -or across multiple servers. - -Application Programming Interface (API) -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -All public access, whether direct, through a command-line client, or -through the web-based dashboard, uses the API service. Find the API -reference at http://api.openstack.org/. - -You must choose whether you want to support the Amazon EC2 compatibility -APIs, or just the OpenStack APIs. One issue you might encounter when -running both APIs is an inconsistent experience when referring to images -and instances. 
- -For example, the EC2 API refers to instances using IDs that contain -hexadecimal, whereas the OpenStack API uses names and digits. Similarly, -the EC2 API tends to rely on DNS aliases for contacting virtual -machines, as opposed to OpenStack, which typically lists IP -addresses. - -If OpenStack is not set up correctly, users can easily find themselves -unable to contact their instances because the only handle they have is an -incorrect DNS alias. Despite this, EC2 compatibility can -assist users migrating to your cloud. - -As with databases and message queues, having more than one :term:`API server` -is a good thing. Traditional HTTP load-balancing techniques can be used to -achieve a highly available ``nova-api`` service; a brief HAProxy sketch -appears at the end of this chapter. - -Extensions -~~~~~~~~~~ - -The `API -Specifications `_ define -the core actions, capabilities, and media types of the OpenStack API. A -client can always depend on the availability of this core API, and -implementers are always required to support it in its entirety. -Requiring strict adherence to the core API allows clients to rely upon a -minimal level of functionality when interacting with multiple -implementations of the same API. - -The OpenStack Compute API is extensible. An extension adds capabilities -to an API beyond those defined in the core. The introduction of new -features, MIME types, actions, states, headers, parameters, and -resources can all be accomplished by means of extensions to the core -API. This allows the introduction of new features in the API without -requiring a version change and allows the introduction of -vendor-specific niche functionality. - -Scheduling -~~~~~~~~~~ - -The scheduling services are responsible for determining the compute or -storage node where a virtual machine or block storage volume should be -created. The scheduling services receive creation requests for these -resources from the message queue and then begin the process of -determining the appropriate node where the resource should reside. This -process is done by applying a series of user-configurable filters -against the available collection of nodes. - -There are currently two schedulers: ``nova-scheduler`` for virtual -machines and ``cinder-scheduler`` for block storage volumes. Both -schedulers are able to scale horizontally, so for high-availability -purposes, or for very large or high-schedule-frequency installations, -you should consider running multiple instances of each scheduler. The -schedulers all listen to the shared message queue, so no special load -balancing is required. - -Images -~~~~~~ - -The OpenStack Image service consists of two parts: ``glance-api`` and -``glance-registry``. The former is responsible for the delivery of -images; the compute node uses it to download images from the back end. -The latter maintains the metadata information associated with virtual -machine images and requires a database. - -The ``glance-api`` part is an abstraction layer that allows a choice of -back end. Currently, it supports: - -OpenStack Object Storage - Allows you to store images as objects. - -File system - Uses any traditional file system to store the images as files. - -S3 - Allows you to fetch images from Amazon S3. - -HTTP - Allows you to fetch images from a web server. You cannot write - images by using this mode. - -If you have an OpenStack Object Storage service, we recommend using this -as a scalable place to store your images. 
You can also use a file system -with sufficient performance or, if you do not need the ability to upload -new images through OpenStack, Amazon S3. - -Dashboard -~~~~~~~~~ - -The OpenStack dashboard (horizon) provides a web-based user interface to -the various OpenStack components. The dashboard includes an end-user -area for users to manage their virtual infrastructure and an admin area -for cloud operators to manage the OpenStack environment as a -whole. - -The dashboard is implemented as a Python web application that normally -runs in :term:`Apache` ``httpd``. Therefore, you may treat it the same as any -other web application, provided it can reach the API servers (including -their admin endpoints) over the network. - -Authentication and Authorization -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The concepts supporting OpenStack's authentication and authorization are -derived from well-understood and widely used systems of a similar -nature. Users have credentials they can use to authenticate, and they -can be a member of one or more groups (known as projects or tenants, -interchangeably). - -For example, a cloud administrator might be able to list all instances -in the cloud, whereas a user can see only those in their current group. -Resource quotas, such as the number of cores that can be used, disk -space, and so on, are associated with a project. - -OpenStack Identity provides authentication decisions and user attribute -information, which is then used by the other OpenStack services to -perform authorization. The policy is set in the ``policy.json`` file. -For information on how to configure these, see :doc:`ops_projects_users`. - -OpenStack Identity supports different plug-ins for authentication -decisions and identity storage. Examples of these plug-ins include: - -- In-memory key-value store (a simplified internal storage structure) - -- SQL database (such as MySQL or PostgreSQL) - -- Memcached (a distributed memory object caching system) - -- LDAP (such as OpenLDAP or Microsoft's Active Directory) - -Many deployments use the SQL database; however, LDAP is also a popular -choice for those with existing authentication infrastructure that needs -to be integrated. - -Network Considerations -~~~~~~~~~~~~~~~~~~~~~~ - -Because the cloud controller handles so many different services, it must -be able to handle the amount of traffic that hits it. For example, if -you choose to host the OpenStack Image service on the cloud controller, -the cloud controller must be able to transfer the images at an -acceptable speed. - -As another example, if you choose to use single-host networking where -the cloud controller is the network gateway for all instances, then the -cloud controller must support the total amount of traffic that travels -between your cloud and the public Internet. - -We recommend that you use a fast NIC, such as 10 Gbps. You can also choose -to use two 10 Gbps NICs and bond them together. While you might not be -able to get a full bonded 20 Gbps speed, different transmission streams -use different NICs. For example, if the cloud controller transfers two -images, each image uses a different NIC and gets a full 10 Gbps of -bandwidth. 
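As an illustration of the bonding approach just described, the sketch below shows a bonded pair in ``/etc/network/interfaces`` on Ubuntu using the ``ifenslave`` package. The interface names, addresses, and bond mode are assumptions for illustration, and 802.3ad (LACP) bonding also requires matching configuration on the switch::

    # the two 10 Gbps ports to aggregate
    auto eth2
    iface eth2 inet manual
        bond-master bond0

    auto eth3
    iface eth3 inet manual
        bond-master bond0

    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-mode 802.3ad
        bond-miimon 100
        bond-slaves none

With 802.3ad, each flow hashes onto one member link, which matches the behavior described above: a single transfer is limited to 10 Gbps, but concurrent transfers can use both NICs.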
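Finally, the earlier point about load balancing the API service can be sketched with a minimal HAProxy fragment. The virtual IP, back-end addresses, and health-check timings are assumptions for illustration; port 8774 is the Compute API port::

    listen nova_api_cluster
        bind 192.168.1.100:8774
        balance roundrobin
        server api1 192.168.1.11:8774 check inter 2000 rise 2 fall 3
        server api2 192.168.1.12:8774 check inter 2000 rise 2 fall 3

The same pattern applies to the other HTTP-based OpenStack endpoints, with one ``listen`` section per service port.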
diff --git a/doc/ops-guide/source/arch_compute_nodes.rst b/doc/ops-guide/source/arch_compute_nodes.rst deleted file mode 100644 index 3494da2a..00000000 --- a/doc/ops-guide/source/arch_compute_nodes.rst +++ /dev/null @@ -1,331 +0,0 @@ -============= -Compute Nodes -============= - -In this chapter, we discuss some of the choices you need to consider -when building out your compute nodes. Compute nodes form the resource -core of the OpenStack Compute cloud, providing the processing, memory, -network and storage resources to run instances. - -Choosing a CPU -~~~~~~~~~~~~~~ - -The type of CPU in your compute node is a very important choice. First, -ensure that the CPU supports virtualization by way of *VT-x* for Intel -chips and *AMD-v* for AMD chips. - -.. note:: - - Consult the vendor documentation to check for virtualization - support. For Intel, read `“Does my processor support Intel® Virtualization - Technology?” `_. - For AMD, read `AMD Virtualization - `_. - Note that your CPU may support virtualization but it may be - disabled. Consult your BIOS documentation for how to enable CPU - features. - -The number of cores that the CPU has also affects the decision. It's -common for current CPUs to have up to 12 cores. Additionally, if an -Intel CPU supports hyperthreading, those 12 cores appear as 24 logical -cores to the operating system. If you purchase a server that supports -multiple CPUs, the number of cores is further multiplied. - -**Multithread Considerations** - -Hyper-Threading is Intel's proprietary simultaneous multithreading -implementation used to improve parallelization on their CPUs. You might -consider enabling Hyper-Threading to improve the performance of -multithreaded applications. - -Whether you should enable Hyper-Threading on your CPUs depends upon your -use case. For example, disabling Hyper-Threading can be beneficial in -intense computing environments. We recommend that you do performance -testing with your local workload with both Hyper-Threading on and off to -determine what is more appropriate in your case. - -Choosing a Hypervisor -~~~~~~~~~~~~~~~~~~~~~ - -A hypervisor provides software to manage virtual machine access to the -underlying hardware. The hypervisor creates, manages, and monitors -virtual machines. OpenStack Compute supports many hypervisors to various -degrees, including: - -- `KVM `_ - -- `LXC `_ - -- `QEMU `_ - -- `VMware - ESX/ESXi `_ - -- `Xen `_ - -- `Hyper-V `_ - -- `Docker `_ - -Probably the most important factor in your choice of hypervisor is your -current usage or experience. Aside from that, there are practical -concerns to do with feature parity, documentation, and the level of -community experience. - -For example, KVM is the most widely adopted hypervisor in the OpenStack -community. Besides KVM, more deployments run Xen, LXC, VMware, and -Hyper-V than the others listed. However, each of these lacks some -feature support, or the documentation on how to use it with OpenStack -is out of date. - -The best information available to support your choice is found on the -`Hypervisor Support Matrix -`_ -and in the `configuration reference -`_. - -.. note:: - - It is also possible to run multiple hypervisors in a single - deployment using host aggregates or cells. However, an individual - compute node can run only a single hypervisor at a time. - -Instance Storage Solutions -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -As part of the procurement for a compute cluster, you must specify some -storage for the disk on which the instantiated instance runs. 
There are -three main approaches to providing this temporary-style storage, and it -is important to understand the implications of the choice. - -They are: - -- Off compute node storage—shared file system - -- On compute node storage—shared file system - -- On compute node storage—nonshared file system - -In general, the questions you should ask when selecting storage are as -follows: - -- What is the platter count you can achieve? - -- Do more spindles result in better I/O despite network access? - -- Which one results in the best cost-performance scenario you're aiming - for? - -- How do you manage the storage operationally? - -Many operators use separate compute and storage hosts. Compute services -and storage services have different requirements, and compute hosts -typically require more CPU and RAM than storage hosts. Therefore, for a -fixed budget, it makes sense to have different configurations for your -compute nodes and your storage nodes: invest the compute node budget in -CPU and RAM, and the storage node budget in block storage. - -However, if you are more restricted in the number of physical hosts you -have available for creating your cloud and you want to be able to -dedicate as many of your hosts as possible to running instances, it -makes sense to run compute and storage on the same machines. - -We'll discuss the three main approaches to instance storage in the next -few sections. - -Off Compute Node Storage—Shared File System -------------------------------------------- - -In this option, the disks storing the running instances are hosted in -servers outside of the compute nodes. - -If you use separate compute and storage hosts, you can treat your -compute hosts as "stateless." As long as you don't have any instances -currently running on a compute host, you can take it offline or wipe it -completely without having any effect on the rest of your cloud. This -simplifies maintenance for the compute hosts. - -There are several advantages to this approach: - -- If a compute node fails, instances are usually easily recoverable. - -- Running a dedicated storage system can be operationally simpler. - -- You can scale to any number of spindles. - -- It may be possible to share the external storage for other purposes. - -The main downsides to this approach are: - -- Depending on design, heavy I/O usage from some instances can affect - unrelated instances. - -- Use of the network can decrease performance. - -On Compute Node Storage—Shared File System ------------------------------------------- - -In this option, each compute node is specified with a significant amount -of disk space, but a distributed file system ties the disks from each -compute node into a single mount. - -The main advantage of this option is that it scales to external storage -when you require additional storage. - -However, this option has several downsides: - -- Running a distributed file system can make you lose your data - locality compared with nonshared storage. - -- Recovery of instances is complicated by depending on multiple hosts. - -- The chassis size of the compute node can limit the number of spindles - able to be used in a compute node. - -- Use of the network can decrease performance. - -On Compute Node Storage—Nonshared File System ---------------------------------------------- - -In this option, each compute node is specified with enough disks to -store the instances it hosts. 
- -There are two main reasons why this is a good idea: - -- Heavy I/O usage on one compute node does not affect instances on - other compute nodes. - -- Direct I/O access can increase performance. - -This has several downsides: - -- If a compute node fails, the instances running on that node are lost. - -- The chassis size of the compute node can limit the number of spindles - able to be used in a compute node. - -- Migrations of instances from one node to another are more complicated - and rely on features that may not continue to be developed. - -- If additional storage is required, this option does not scale. - -Running a shared file system on a storage system apart from the compute -nodes is ideal for clouds where reliability and scalability are the most -important factors. Running a shared file system on the compute nodes -themselves may be best in a scenario where you have to deploy to -preexisting servers over whose specifications you have little to no -control. Running a nonshared file system on the compute nodes -themselves is a good option for clouds with high I/O requirements and -low concern for reliability. - -Issues with Live Migration --------------------------- - -We consider live migration an integral part of the operations of the -cloud. This feature provides the ability to seamlessly move instances -from one physical host to another, a necessity for performing upgrades -that require reboots of the compute hosts, but it only works well with -shared storage. - -Live migration can also be done with nonshared storage, using a feature -known as *KVM live block migration*. While an earlier implementation of -block-based migration in KVM and QEMU was considered unreliable, there -is a newer, more reliable implementation of block-based live migration -as of QEMU 1.4 and libvirt 1.0.2 that is also compatible with OpenStack. -However, none of the authors of this guide have first-hand experience -using live block migration. - -Choice of File System ---------------------- - -If you want to support shared-storage live migration, you need to -configure a distributed file system. - -Possible options include: - -- NFS (default for Linux) - -- GlusterFS - -- MooseFS - -- Lustre - -We've seen deployments with all of these, and we recommend that you -choose the one you are most familiar with operating. If you are not -familiar with any of these, choose NFS, as it is the easiest to set up -and there is extensive community knowledge about it. - -Overcommitting -~~~~~~~~~~~~~~ - -OpenStack allows you to overcommit CPU and RAM on compute nodes. This -allows you to increase the number of instances you can have running on -your cloud, at the cost of reducing the performance of the instances. -OpenStack Compute uses the following ratios by default: - -- CPU allocation ratio: 16:1 - -- RAM allocation ratio: 1.5:1 - -The default CPU allocation ratio of 16:1 means that the scheduler -allocates up to 16 virtual cores per physical core. For example, if a -physical node has 12 cores, the scheduler sees 192 available virtual -cores. With typical flavor definitions of 4 virtual cores per instance, -this ratio would provide 48 instances on a physical node. 
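To make that arithmetic explicit, here is a small Python sketch of the calculation walked through above. It is illustrative only and is not how the scheduler is implemented::

    def max_instances(physical_cores, cpu_ratio, vcores_per_instance):
        """Upper bound on instances a node can host under CPU overcommit."""
        virtual_cores = physical_cores * cpu_ratio  # cores the scheduler "sees"
        return virtual_cores // vcores_per_instance

    # 12 physical cores, 16:1 overcommit, 4 vCPUs per flavor -> 48 instances
    print(max_instances(12, 16, 4))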
- -In general, the formula for the number of virtual instances on a compute node is -(OR × PC) / VC, where: - -*OR* - CPU overcommit ratio (virtual cores per physical core) - -*PC* - Number of physical cores - -*VC* - Number of virtual cores per instance - -Similarly, the default RAM allocation ratio of 1.5:1 means that the -scheduler allocates instances to a physical node as long as the total -amount of RAM associated with the instances is less than 1.5 times the -amount of RAM available on the physical node. - -For example, if a physical node has 48 GB of RAM, the scheduler -allocates instances to that node until the sum of the RAM associated -with the instances reaches 72 GB (such as nine instances, in the case -where each instance has 8 GB of RAM). - -.. note:: - Regardless of the overcommit ratio, an instance cannot be placed - on any physical node with fewer raw (pre-overcommit) resources than - the instance flavor requires. - -You must select the appropriate CPU and RAM allocation ratio for your -particular use case. - -Logging -~~~~~~~ - -Logging is detailed more fully in :doc:`ops_logging_monitoring`. However, -it is an important design consideration to take into account before -commencing operations of your cloud. - -OpenStack produces a great deal of useful logging information; however, -for the information to be useful for operations purposes, you should -consider having a central logging server to send logs to, and a log -parsing/analysis system (such as Logstash). - -Networking -~~~~~~~~~~ - -Networking in OpenStack is a complex, multifaceted challenge. See -:doc:`arch_network_design`. - -Conclusion -~~~~~~~~~~ - -Compute nodes are the workhorses of your cloud and the place where your -users' applications will run. They are likely to be affected by your -decisions on what to deploy and how you deploy it. Their requirements -should be reflected in the choices you make. diff --git a/doc/ops-guide/source/arch_example_neutron.rst b/doc/ops-guide/source/arch_example_neutron.rst deleted file mode 100644 index b134a98d..00000000 --- a/doc/ops-guide/source/arch_example_neutron.rst +++ /dev/null @@ -1,556 +0,0 @@ -=========================================== -Example Architecture — OpenStack Networking -=========================================== - -This chapter provides an example architecture using OpenStack -Networking, also known as the Neutron project, in a highly available -environment. - -Overview -~~~~~~~~ - -A highly-available environment can be put into place if you require an -environment that can scale horizontally, or want your cloud to continue -to be operational in case of node failure. This example architecture has -been written based on the current default feature set of OpenStack -Havana, with an emphasis on high availability. - -Components ---------- - -.. list-table:: - :widths: 50 50 - :header-rows: 1 - - * - Component - - Details - * - OpenStack release - - Havana - * - Host operating system - - Red Hat Enterprise Linux 6.5 - * - OpenStack package repository - - `Red Hat Distributed OpenStack (RDO) `_ - * - Hypervisor - - KVM - * - Database - - MySQL - * - Message queue - - Qpid - * - Networking service - - OpenStack Networking - * - Tenant Network Separation - - VLAN - * - Image service back end - - GlusterFS - * - Identity driver - - SQL - * - Block Storage back end - - GlusterFS - -Rationale --------- - -This example architecture has been selected based on the current default -feature set of OpenStack Havana, with an emphasis on high availability. 
-This architecture is currently being deployed in an internal Red Hat -OpenStack cloud and used to run hosted and shared services, which by -their nature must be highly available. - -This architecture's components have been selected for the following -reasons: - -Red Hat Enterprise Linux - You must choose an operating system that can run on all of the - physical nodes. This example architecture is based on Red Hat - Enterprise Linux, which offers reliability, long-term support, - certified testing, and is hardened. Enterprise customers, now moving - into OpenStack usage, typically require these advantages. - -RDO - The Red Hat Distributed OpenStack package offers an easy way to - download the most current OpenStack release that is built for the - Red Hat Enterprise Linux platform. - -KVM - KVM is the supported hypervisor of choice for Red Hat Enterprise - Linux (and included in distribution). It is feature complete and - free from licensing charges and restrictions. - -MySQL - MySQL is used as the database back end for all databases in the - OpenStack environment. MySQL is the supported database of choice for - Red Hat Enterprise Linux (and included in distribution); the - database is open source, scalable, and handles memory well. - -Qpid - Apache Qpid offers 100 percent compatibility with the - :term:`Advanced Message Queuing Protocol (AMQP)` Standard, and its - broker is available for both C++ and Java. - -OpenStack Networking - OpenStack Networking offers sophisticated networking functionality, - including Layer 2 (L2) network segregation and provider networks. - -VLAN - Using a virtual local area network offers broadcast control, - security, and physical layer transparency. If needed, use VXLAN to - extend your address space. - -GlusterFS - GlusterFS offers scalable storage. As your environment grows, you - can continue to add more storage nodes (instead of being restricted, - for example, by an expensive storage array). - -Detailed Description -~~~~~~~~~~~~~~~~~~~~ - -Node types ----------- - -This section gives you a breakdown of the different nodes that make up -the OpenStack environment. A node is a physical machine that is -provisioned with an operating system, and running a defined software -stack on top of it. The table below provides node descriptions and -specifications. - -.. list-table:: Node types - :widths: 33 33 33 - :header-rows: 1 - - * - Type - - Description - - Example hardware - * - Controller - - Controller nodes are responsible for running the management software - services needed for the OpenStack environment to function. - These nodes: - - * Provide the front door that people access as well as the API - services that all other components in the environment talk to. - * Run a number of services in a highly available fashion, - utilizing Pacemaker and HAProxy to provide a virtual IP and - load-balancing functions so all controller nodes are being used. - * Supply highly available "infrastructure" services, - such as MySQL and Qpid, that underpin all the services. - * Provide what is known as "persistent storage" through services - run on the host as well. This persistent storage is backed onto - the storage nodes for reliability. - - See :ref:`controller_node`. - - Model: Dell R620 - - CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz - - Memory: 32 GB - - Disk: two 300 GB 10000 RPM SAS Disks - - Network: two 10G network ports - * - Compute - - Compute nodes run the virtual machine instances in OpenStack. 
They: - - * Run the bare minimum of services needed to facilitate these - instances. - * Use local storage on the node for the virtual machines; as a - result, no VM migration or instance recovery at node failure is - possible. - - See :ref:`compute_node`. - - Model: Dell R620 - - CPU: 2x Intel® Xeon® CPU E5-2650 0 @ 2.00 GHz - - Memory: 128 GB - - Disk: two 600 GB 10000 RPM SAS Disks - - Network: four 10G network ports (for future-proofing expansion) - * - Storage - - Storage nodes store all the data required for the environment, - including disk images in the Image service library, and the - persistent storage volumes created by the Block Storage service. - Storage nodes use GlusterFS technology to keep the data highly - available and scalable. - - See :ref:`storage_node`. - - Model: Dell R720xd - - CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz - - Memory: 64 GB - - Disk: two 500 GB 7200 RPM SAS Disks and twenty-four 600 GB - 10000 RPM SAS Disks - - RAID controller: PERC H710P Integrated RAID Controller, 1 GB NV Cache - - Network: two 10G network ports - * - Network - - Network nodes are responsible for doing all the virtual networking - needed for people to create public or private networks and uplink - their virtual machines into external networks. Network nodes: - - * Form the only ingress and egress point for instances running - on top of OpenStack. - * Run all of the environment's networking services, with the - exception of the networking API service (which runs on the - controller node). - - See :ref:`network_node`. - - Model: Dell R620 - - CPU: 1x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz - - Memory: 32 GB - - Disk: two 300 GB 10000 RPM SAS Disks - - Network: five 10G network ports - * - Utility - - Utility nodes are used by internal administration staff only to - provide a number of basic system administration functions needed - to get the environment up and running and to maintain the hardware, - OS, and software on which it runs. - - These nodes run services such as provisioning, configuration - management, monitoring, or GlusterFS management software. - They are not required to scale, although these machines are - usually backed up. - - Model: Dell R620 - - CPU: 2x Intel® Xeon® CPU E5-2620 0 @ 2.00 GHz - - Memory: 32 GB - - Disk: two 500 GB 7200 RPM SAS Disks - - Network: two 10G network ports - - -.. _networking_layout: - -Networking layout ----------------- - -The network contains all the management devices for all hardware in the -environment (for example, by including Dell iDRAC7 devices for the -hardware nodes, and management interfaces for network switches). The -network is accessed by internal staff only when diagnosing or recovering -a hardware issue. - -OpenStack internal network -------------------------- - -This network is used for OpenStack management functions and traffic, -including services needed for the provisioning of physical nodes -(``pxe``, ``tftp``, ``kickstart``), traffic between various OpenStack -node types using OpenStack APIs and messages (for example, -``nova-compute`` talking to ``keystone`` or ``cinder-volume`` talking to -``nova-api``), and all traffic for storage data to the storage layer -underneath by the Gluster protocol. All physical nodes have at least one -network interface (typically ``eth0``) in this network. This network is -only accessible from other VLANs on port 22 (for ``ssh`` access to -manage machines). 
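As a purely illustrative sketch of the "port 22 only" policy just described, host firewall rules along the following lines could enforce it. The subnet address is an assumption, and in practice this filtering is more commonly applied at the switch or router than on each host::

    # allow ssh from the other internal VLANs, drop everything else from them
    iptables -A INPUT -s 172.16.0.0/16 -p tcp --dport 22 -j ACCEPT
    iptables -A INPUT -s 172.16.0.0/16 -j DROP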
- -Public Network -------------- - -This network is a combination of: - -- IP addresses for public-facing interfaces on the controller nodes - (through which end users will access the OpenStack services) - -- A range of publicly routable, IPv4 network addresses to be used by - OpenStack Networking for floating IPs. You may be restricted in your - access to IPv4 addresses; a large range of IPv4 addresses is not - necessary. - -- Routers for private networks created within OpenStack. - -This network is connected to the controller nodes so users can access -the OpenStack interfaces, and connected to the network nodes to provide -VMs with publicly routable traffic functionality. The network is also -connected to the utility machines so that any utility services that need -to be made public (such as system monitoring) can be accessed. - -VM traffic network ------------------ - -This is a closed network that is not publicly routable and is simply -used as a private, internal network for traffic between virtual machines -in OpenStack, and between the virtual machines and the network nodes -that provide L3 routes out to the public network (and floating IPs for -connections back in to the VMs). Because this is a closed network, we -are using a different address space from the others to clearly define the -separation. Only Compute and OpenStack Networking nodes need to be -connected to this network. - -Node connectivity -~~~~~~~~~~~~~~~~~ - -The following section details how the nodes are connected to the -different networks (see :ref:`networking_layout`) and -what other considerations apply (for example, bonding) when -connecting nodes to the networks. - -Initial deployment ------------------ - -Initially, the connection setup should revolve around keeping the -connectivity simple and straightforward in order to minimize deployment -complexity and time to deploy. The deployment shown below aims to have 1 × 10G -connectivity available to all compute nodes, while still leveraging bonding on -appropriate nodes for maximum performance. - -.. figure:: figures/osog_0101.png - :alt: Basic node deployment - :width: 100% - - Basic node deployment - - -Connectivity for maximum performance ------------------------------------ - -If the networking performance of the basic layout is not enough, you can -move to the design below, which provides 2 × 10G network -links to all instances in the environment as well as providing more -network bandwidth to the storage layer. - -.. figure:: figures/osog_0102.png - :alt: Performance node deployment - :width: 100% - - Performance node deployment - - -Node diagrams -~~~~~~~~~~~~~ - -The following diagrams include logical -information about the different types of nodes, indicating what services -will be running on top of them and how they interact with each other. -The diagrams also illustrate how the availability and scalability of -services are achieved. - -.. _controller_node: - -.. figure:: figures/osog_0103.png - :alt: Controller node - :width: 100% - - Controller node - -.. _compute_node: - -.. figure:: figures/osog_0104.png - :alt: Compute node - :width: 100% - - Compute node - -.. _network_node: - -.. figure:: figures/osog_0105.png - :alt: Network node - :width: 100% - - Network node - -.. _storage_node: - -.. 
figure:: figures/osog_0106.png - :alt: Storage node - :width: 100% - - Storage node - - -Example Component Configuration ------------------------------- - -The following tables include example configuration -and considerations for both third-party and OpenStack components: - -.. list-table:: Table: Third-party component configuration - :widths: 25 25 25 25 - :header-rows: 1 - - * - Component - - Tuning - - Availability - - Scalability - * - MySQL - - ``binlog-format = row`` - - Master/master replication. However, the two nodes are not used at the - same time. Replication keeps all nodes as close to being up to date - as possible (although the asynchronous nature of the replication means - a fully consistent state is not possible). Connections to the database - only happen through a Pacemaker virtual IP, ensuring that most problems - that occur with master-master replication can be avoided. - - Not heavily considered. Once load on the MySQL server increases enough - that scalability needs to be considered, multiple masters or a - master/slave setup can be used. - * - Qpid - - ``max-connections=1000`` ``worker-threads=20`` ``connection-backlog=10``, - sasl security enabled with SASL-BASIC authentication - - Qpid is added as a resource to the Pacemaker software that runs on - Controller nodes where Qpid is situated. This ensures only one Qpid - instance is running at one time, and the node with the Pacemaker - virtual IP will always be the node running Qpid. - - Not heavily considered. However, Qpid can be changed to run on all - controller nodes for scalability and availability purposes, - and removed from Pacemaker. - * - HAProxy - - ``maxconn 3000`` - - HAProxy is a software layer-7 load balancer used to front all - clustered OpenStack API components and to perform SSL termination. - HAProxy can be added as a resource to the Pacemaker software that - runs on the Controller nodes where HAProxy is situated. - This ensures that only one HAProxy instance is running at one time, - and the node with the Pacemaker virtual IP will always be the node - running HAProxy. - - Not considered. HAProxy has small enough performance overheads that - a single instance should scale enough for this level of workload. - If extra scalability is needed, ``keepalived`` or other Layer-4 - load balancing can be introduced to be placed in front of multiple - copies of HAProxy. - * - Memcached - - ``MAXCONN="8192" CACHESIZE="30457"`` - - Memcached is a fast in-memory key-value cache software that is used - by OpenStack components for caching data and increasing performance. - Memcached runs on all controller nodes, ensuring that should one go - down, another instance of Memcached is available. - - Not considered. A single instance of Memcached should be able to - scale to the desired workloads. If scalability is desired, HAProxy - can be placed in front of Memcached (in raw ``tcp`` mode) to utilize - multiple Memcached instances for scalability. However, this might - cause cache consistency issues. - * - Pacemaker - - Configured to use ``corosync`` and ``cman`` as a cluster communication - stack/quorum manager, and as a two-node cluster. - - Pacemaker is the clustering software used to ensure the availability - of services running on the controller and network nodes: - - * Because Pacemaker is cluster software, the software itself handles - its own availability, leveraging ``corosync`` and ``cman`` - underneath. 
- * If you use the GlusterFS native client, no virtual IP is needed, - since the client knows all about nodes after initial connection - and automatically routes around failures on the client side. - * If you use the NFS or SMB adaptor, you will need a virtual IP on - which to mount the GlusterFS volumes. - - If more nodes need to be made cluster-aware, Pacemaker can scale to - 64 nodes. - * - GlusterFS - - ``glusterfs`` performance profile "virt" enabled on all volumes. - Volumes are set up in two-node replication. - - GlusterFS is a clustered file system that is run on the storage - nodes to provide persistent scalable data storage in the environment. - Because all connections to gluster use the ``gluster`` native mount - points, the ``gluster`` instances themselves provide availability - and failover functionality. - - The scalability of GlusterFS storage can be achieved by adding in - more storage volumes. - -| - -.. list-table:: Table: OpenStack component configuration - :widths: 20 20 20 20 20 - :header-rows: 1 - - * - Component - - Node type - - Tuning - - Availability - - Scalability - * - Dashboard (horizon) - - Controller - - Configured to use Memcached as a session store, ``neutron`` - support is enabled, ``can_set_mount_point = False`` - - The dashboard is run on all controller nodes, ensuring at least one - instance will be available in case of node failure. - It also sits behind HAProxy, which detects when the software fails - and routes requests around the failing instance. - - The dashboard is run on all controller nodes, so scalability can be - achieved with additional controller nodes. HAProxy allows scalability - for the dashboard as more nodes are added. - * - Identity (keystone) - - Controller - - Configured to use Memcached for caching and PKI for tokens. - - Identity is run on all controller nodes, ensuring at least one - instance will be available in case of node failure. - Identity also sits behind HAProxy, which detects when the software - fails and routes requests around the failing instance. - - Identity is run on all controller nodes, so scalability can be - achieved with additional controller nodes. - HAProxy allows scalability for Identity as more nodes are added. - * - Image service (glance) - - Controller - - ``/var/lib/glance/images`` is a GlusterFS native mount to a Gluster - volume off the storage layer. - - The Image service is run on all controller nodes, ensuring at least - one instance will be available in case of node failure. - It also sits behind HAProxy, which detects when the software fails - and routes requests around the failing instance. - - The Image service is run on all controller nodes, so scalability - can be achieved with additional controller nodes. HAProxy allows - scalability for the Image service as more nodes are added. - * - Compute (nova) - - Controller, Compute - - Configured to use Qpid, ``qpid_heartbeat = 10``, configured to - use Memcached for caching, configured to use ``libvirt``, configured - to use ``neutron``. - - Configured ``nova-consoleauth`` to use Memcached for session - management (so that it can have multiple copies and run behind a - load balancer). - - The nova API, scheduler, objectstore, cert, consoleauth, conductor, - and vncproxy services are run on all controller nodes, ensuring at - least one instance will be available in case of node failure. - Compute is also behind HAProxy, which detects when the software - fails and routes requests around the failing instance. 
- - The ``nova-compute`` and ``nova-conductor`` services, which run on the - compute nodes, serve only the node they run on, so availability - of those services is coupled tightly to the nodes that are available. - As long as a compute node is up, it will have the needed services - running on top of it. - - The nova API, scheduler, objectstore, cert, consoleauth, conductor, - and vncproxy services are run on all controller nodes, so scalability - can be achieved with additional controller nodes. HAProxy allows - scalability for Compute as more nodes are added. The scalability - of services running on the compute nodes (compute, conductor) is - achieved linearly by adding in more compute nodes. - * - Block Storage (cinder) - - Controller - - Configured to use Qpid, ``qpid_heartbeat = 10``, configured to - use a Gluster volume from the storage layer as the back end for - Block Storage, using the Gluster native client. - - Block Storage API, scheduler, and volume services are run on all - controller nodes, ensuring at least one instance will be available - in case of node failure. Block Storage also sits behind HAProxy, - which detects if the software fails and routes requests around the - failing instance. - - Block Storage API, scheduler and volume services are run on all - controller nodes, so scalability can be achieved with additional - controller nodes. HAProxy allows scalability for Block Storage as - more nodes are added. - * - OpenStack Networking (neutron) - - Controller, Compute, Network - - Configured to use Qpid, ``qpid_heartbeat = 10``, kernel namespace - support enabled, ``tenant_network_type = vlan``, - ``allow_overlapping_ips = true``, - ``bridge_uplinks = br-ex:em2``, ``bridge_mappings = physnet1:br-ex`` - - The OpenStack Networking service is run on all controller nodes, - ensuring at least one instance will be available in case of node - failure. It also sits behind HAProxy, which detects if the software - fails and routes requests around the failing instance. - - The OpenStack Networking server service is run on all controller - nodes, so scalability can be achieved with additional controller - nodes. HAProxy allows scalability for OpenStack Networking as more - nodes are added. Scalability of services running on the network - nodes is not currently supported by OpenStack Networking, so they - are not considered. One copy of the services should be sufficient - to handle the workload. Scalability of the ``ovs-agent`` running on - compute nodes is achieved by adding in more compute nodes as - necessary. diff --git a/doc/ops-guide/source/arch_example_nova_network.rst b/doc/ops-guide/source/arch_example_nova_network.rst deleted file mode 100644 index a2517b3f..00000000 --- a/doc/ops-guide/source/arch_example_nova_network.rst +++ /dev/null @@ -1,259 +0,0 @@ -=============================================== -Example Architecture — Legacy Networking (nova) -=============================================== - -This particular example architecture has been upgraded from :term:`Grizzly` to -:term:`Havana` and tested in production environments where many public IP -addresses are available for assignment to multiple instances. You can -find a second example architecture that uses OpenStack Networking -(neutron) after this section. Each example offers high availability, -meaning that if a particular node goes down, another node with the same -configuration can take over the tasks so that the services continue to -be available. 
- -Overview -~~~~~~~~ - -The simplest architecture you can build upon for Compute has a single -cloud controller and multiple compute nodes. The simplest architecture -for Object Storage has five nodes: one for identifying users and -proxying requests to the API, then four for storage itself to provide -enough replication for eventual consistency. This example architecture -does not dictate a particular number of nodes, but shows the thinking -and considerations that went into choosing this architecture, including -the features offered. - -Components -~~~~~~~~~~ - -.. list-table:: - :widths: 50 50 - :header-rows: 1 - - * - Component - - Details - * - OpenStack release - - Havana - * - Host operating system - - Ubuntu 12.04 LTS or Red Hat Enterprise Linux 6.5, - including derivatives such as CentOS and Scientific Linux - * - OpenStack package repository - - `Ubuntu Cloud Archive `_ - or `RDO `_ - * - Hypervisor - - KVM - * - Database - - MySQL\* - * - Message queue - - RabbitMQ for Ubuntu; Qpid for Red Hat Enterprise Linux and derivatives - * - Networking service - - ``nova-network`` - * - Network manager - - FlatDHCP - * - Single ``nova-network`` or multi-host? - - multi-host\* - * - Image service (glance) back end - - file - * - Identity (keystone) driver - - SQL - * - Block Storage (cinder) back end - - LVM/iSCSI - * - Live Migration back end - - Shared storage using NFS\* - * - Object storage - - OpenStack Object Storage (swift) - -An asterisk (\*) indicates when the example architecture deviates from -the settings of a default installation. We'll offer explanations for -those deviations next. - -.. note:: - - The following features of OpenStack are supported by the example - architecture documented in this guide, but are optional: - - - :term:`Dashboard`: You probably want to offer a dashboard, but your - users may be more interested in API access only. - - - Block storage: You don't have to offer users block storage if - their use case only needs ephemeral storage on compute nodes, for - example. - - - Floating IP address: Floating IP addresses are public IP - addresses that you allocate from a predefined pool to assign to - virtual machines at launch. Floating IP addresses ensure that the - public IP address is available whenever an instance is booted. - Not every organization can offer thousands of public floating IP - addresses for thousands of instances, so this feature is - considered optional. - - - Live migration: If you need to move running virtual machine - instances from one host to another with little or no service - interruption, you would enable live migration, but it is - considered optional. - - - Object storage: You may choose to store machine images on a file - system rather than in object storage if you do not have the extra - hardware for the required replication and redundancy that - OpenStack Object Storage offers. - -Rationale -~~~~~~~~~ - -This example architecture has been selected based on the current default -feature set of OpenStack Havana, with an emphasis on stability. We -believe that many clouds that currently run OpenStack in production have -made similar choices. - -You must first choose the operating system that runs on all of the -physical nodes. While OpenStack is supported on several distributions of -Linux, we used *Ubuntu 12.04 LTS (Long Term Support)*, which is used by -the majority of the development community, has feature completeness -compared with other distributions, and has clear future support plans. 
- -We recommend that you do not use the default Ubuntu OpenStack install -packages and instead use the `Ubuntu Cloud -Archive `__. The Cloud -Archive is a package repository supported by Canonical that allows you -to upgrade to future OpenStack releases while remaining on Ubuntu 12.04. - -*KVM* as a :term:`hypervisor` complements the choice of Ubuntu: the two are -a matched pair in terms of support, and KVM garners a significant degree -of attention from the OpenStack development community (including -the authors, who mostly use KVM). It is also feature complete and free from -licensing charges and restrictions. - -*MySQL* follows a similar trend. Despite its recent change of ownership, -this database is the most tested for use with OpenStack and is heavily -documented. We deviate from the default database, *SQLite*, because -SQLite is not an appropriate database for production usage. - -The choice of *RabbitMQ* over other -:term:`AMQP ` compatible options -that are gaining support in OpenStack, such as ZeroMQ and Qpid, is due to its -ease of use and significant testing in production. It is also the only -option that supports features such as Compute cells. We recommend -clustering with RabbitMQ, as it is an integral component of the system -and fairly simple to implement because clustering support is built in. - -As discussed in previous chapters, there are several options for -networking in OpenStack Compute. We recommend *FlatDHCP* and the use of -*Multi-Host* networking mode for high availability, running one -``nova-network`` daemon per OpenStack compute host. This provides a -robust mechanism for ensuring network interruptions are isolated to -individual compute hosts, and allows for the direct use of hardware -network gateways. - -*Live Migration* is supported by way of shared storage, with *NFS* as -the distributed file system. - -Acknowledging that many small-scale deployments see running Object -Storage just for the storage of virtual machine images as too costly, we -opted for the file back end in the OpenStack :term:`Image service` (Glance). -If your cloud will include Object Storage, you can easily add it as a back -end. - -We chose the *SQL back end for Identity* over others, such as LDAP. This -back end is simple to install and is robust. The authors acknowledge -that many installations want to bind with existing directory services, -and caution that the `array of options available -`_ requires careful understanding. - -Block Storage (cinder) is installed natively on external storage nodes -and uses the *LVM/iSCSI plug-in*. Most Block Storage plug-ins are tied -to particular vendor products and implementations, limiting their use to -consumers of those hardware platforms, but LVM/iSCSI is robust and -stable on commodity hardware. - -While the cloud can be run without the *OpenStack Dashboard*, we -consider it to be indispensable, not just for user interaction with the -cloud, but also as a tool for operators. Additionally, the dashboard's -use of Django makes it a flexible framework for extension. - -Why not use OpenStack Networking? -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This example architecture does not use OpenStack Networking, because it -does not yet support multi-host networking and our organizations -(university, government) have access to a large range of -publicly-accessible IPv4 addresses. - -Why use multi-host networking? 
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In a default OpenStack deployment, there is a single ``nova-network`` -service that runs within the cloud (usually on the cloud controller) -that provides services such as -:term:`network address translation ` (NAT), :term:`DHCP`, -and :term:`DNS` to the guest instances. If the single node that runs the -``nova-network`` service goes down, you cannot access your instances, -and the instances cannot access the Internet. The single node that runs -the ``nova-network`` service can become a bottleneck if excessive -network traffic comes in and goes out of the cloud. - -.. note:: - - `Multi-host `_ - is a high-availability option for the network configuration, where - the ``nova-network`` service is run on every compute node instead of - running on only a single node. - -Detailed Description -------------------- - -The reference architecture consists of multiple compute nodes, a cloud -controller, an external NFS storage server for instance storage, and an -OpenStack Block Storage server for volume storage. A network time service -(:term:`Network Time Protocol `, or NTP) synchronizes time on all the -nodes. FlatDHCPManager in multi-host mode is used for the networking (a -minimal configuration sketch appears at the end of this chapter). A logical -diagram for this example architecture shows which services are running on -each node: - -.. image:: figures/osog_01in01.png - :width: 100% - -| - -The cloud controller runs the dashboard, the API services, the database -(MySQL), a message queue server (RabbitMQ), the scheduler for choosing -compute resources (``nova-scheduler``), Identity services (keystone, -``nova-consoleauth``), Image services (``glance-api``, -``glance-registry``), services for console access of guests, and Block -Storage services, including the scheduler for storage resources -(``cinder-api`` and ``cinder-scheduler``). - -Compute nodes are where the computing resources are held, and in our -example architecture, they run the hypervisor (KVM), libvirt (the driver -for the hypervisor, which enables live migration from node to node), -``nova-compute``, ``nova-api-metadata`` (generally only used when -running in multi-host mode, it retrieves instance-specific metadata), -``nova-vncproxy``, and ``nova-network``. - -The network consists of two switches, one for the management or private -traffic, and one that covers public access, including floating IPs. To -support this, the cloud controller and the compute nodes have two -network cards. The OpenStack Block Storage and NFS storage servers only -need to access the private network and therefore only need one network -card, but multiple cards run in a bonded configuration are recommended -if possible. Floating IP access is direct to the Internet, whereas Flat -IP access goes through a NAT. To envision the network traffic, use this -diagram: - -.. image:: figures/osog_01in02.png - :width: 100% - -| - -Optional Extensions ------------------- - -You can extend this reference architecture as -follows: - -- Add additional cloud controllers (see :doc:`ops_maintenance`). - -- Add an OpenStack Object Storage service (see the Object Storage chapter in - the *OpenStack Installation Guide* for your distribution). - -- Add additional OpenStack Block Storage hosts (see - :doc:`ops_maintenance`). 
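To make the multi-host FlatDHCP choice concrete, a ``nova.conf`` fragment along the following lines would be involved on each compute node. This is a sketch of the Havana-era options named in this chapter; the bridge and interface names are assumptions for illustration::

    [DEFAULT]
    network_manager = nova.network.manager.FlatDHCPManager
    # run nova-network on every compute node for high availability
    multi_host = True
    flat_network_bridge = br100
    flat_interface = eth1
    public_interface = eth0

With ``multi_host`` enabled, each compute node runs its own ``nova-network`` and ``nova-api-metadata`` services, which is what isolates network interruptions to individual compute hosts, as described above.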
diff --git a/doc/ops-guide/source/arch_example_thoughts.rst b/doc/ops-guide/source/arch_example_thoughts.rst deleted file mode 100644 index b9589d61..00000000 --- a/doc/ops-guide/source/arch_example_thoughts.rst +++ /dev/null @@ -1,11 +0,0 @@ -========================================= -Parting Thoughts on Architecture Examples -========================================= - -With so many considerations and options available, our hope is to -provide a few clearly-marked and tested paths for your OpenStack -exploration. If you're looking for additional ideas, check out -:doc:`app_usecases`, the -`OpenStack Installation Guides `_, or the -`OpenStack User Stories -page `_. diff --git a/doc/ops-guide/source/arch_examples.rst b/doc/ops-guide/source/arch_examples.rst deleted file mode 100644 index f91bcb5f..00000000 --- a/doc/ops-guide/source/arch_examples.rst +++ /dev/null @@ -1,30 +0,0 @@ -===================== -Architecture Examples -===================== - -To understand the possibilities that OpenStack offers, it's best to -start with basic architecture that has been tested in production -environments. We offer two examples with basic pivots on the base -operating system (Ubuntu and Red Hat Enterprise Linux) and the -networking architecture. There are other differences between these two -examples and this guide provides reasons for each choice made. - -Because OpenStack is highly configurable, with many different back ends -and network configuration options, it is difficult to write -documentation that covers all possible OpenStack deployments. Therefore, -this guide defines examples of architecture to simplify the task of -documenting, as well as to provide the scope for this guide. Both of the -offered architecture examples are currently running in production and -serving users. - -.. note:: - - As always, refer to the :doc:`common/glossary` if you are unclear - about any of the terminology mentioned in architecture examples. - -.. toctree:: - :maxdepth: 2 - - arch_example_nova_network.rst - arch_example_neutron.rst - arch_example_thoughts.rst diff --git a/doc/ops-guide/source/arch_network_design.rst b/doc/ops-guide/source/arch_network_design.rst deleted file mode 100644 index 5efc675b..00000000 --- a/doc/ops-guide/source/arch_network_design.rst +++ /dev/null @@ -1,290 +0,0 @@ -============== -Network Design -============== - -OpenStack provides a rich networking environment, and this chapter -details the requirements and options to deliberate when designing your -cloud. - -.. warning:: - - If this is the first time you are deploying a cloud infrastructure - in your organization, after reading this section, your first - conversations should be with your networking team. Network usage in - a running cloud is vastly different from traditional network - deployments and has the potential to be disruptive at both a - connectivity and a policy level. - -For example, you must plan the number of IP addresses that you need for -both your guest instances as well as management infrastructure. -Additionally, you must research and discuss cloud network connectivity -through proxy servers and firewalls. - -In this chapter, we'll give some examples of network implementations to -consider and provide information about some of the network layouts that -OpenStack uses. Finally, we have some brief notes on the networking -services that are essential for stable operation. 
-
-Management Network
-~~~~~~~~~~~~~~~~~~
-
-A :term:`management network` (a separate network for use by your cloud
-operators) typically consists of a separate switch and separate NICs
-(network interface cards), and is a recommended option. This segregation
-prevents system administration and the monitoring of system access from
-being disrupted by traffic generated by guests.
-
-Consider creating other private networks for communication between
-internal components of OpenStack, such as the message queue and
-OpenStack Compute. Using a virtual local area network (VLAN) works well
-for these scenarios because it provides a method for creating multiple
-virtual networks on a physical network.
-
-Public Addressing Options
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-There are two main types of IP addresses for guest virtual machines:
-fixed IPs and floating IPs. Fixed IPs are assigned to instances on boot,
-whereas floating IP addresses can change their association between
-instances by action of the user. Both types of IP addresses can be
-either public or private, depending on your use case.
-
-Fixed IP addresses are required, whereas it is possible to run OpenStack
-without floating IPs. One of the most common use cases for floating IPs
-is to provide public IP addresses to a private cloud, where there are a
-limited number of IP addresses available. Another is for a public cloud
-user to have a "static" IP address that can be reassigned when an
-instance is upgraded or moved.
-
-Fixed IP addresses can be private for private clouds, or public for
-public clouds. When an instance terminates, its fixed IP is lost, and
-newer users of cloud computing may find this ephemerality frustrating.
-
-IP Address Planning
-~~~~~~~~~~~~~~~~~~~
-
-An OpenStack installation can potentially have many subnets (ranges of
-IP addresses) and different types of services in each. An IP address
-plan can assist with a shared understanding of network partition
-purposes and scalability. Control services can have public and private
-IP addresses, and as noted above, there are a couple of options for an
-instance's public addresses.
-
-An IP address plan might be broken down into the following sections:
-
-Subnet router
-    Packets leaving the subnet go via this address, which could be a
-    dedicated router or a ``nova-network`` service.
-
-Control services public interfaces
-    Public access to ``swift-proxy``, ``nova-api``, ``glance-api``, and
-    horizon comes to these addresses, which could be on one side of a
-    load balancer or pointing at individual machines.
-
-Object Storage cluster internal communications
-    Traffic among object/account/container servers and between these and
-    the proxy server's internal interface uses this private network.
-
-Compute and storage communications
-    If ephemeral or block storage is external to the compute node, this
-    network is used.
-
-Out-of-band remote management
-    If a dedicated remote access controller chip is included in servers,
-    often these are on a separate network.
-
-In-band remote management
-    Often, an extra (such as 1 Gbps) interface on compute or storage
-    nodes is used for system administrators or monitoring tools to
-    access the host instead of going through the public interface.
-
-Spare space for future growth
-    Adding more public-facing control services or guest instance IPs
-    should always be part of your plan.
- -For example, take a deployment that has both OpenStack Compute and -Object Storage, with private ranges 172.22.42.0/24 and 172.22.87.0/26 -available. One way to segregate the space might be as follows: - -:: - - 172.22.42.0/24: - 172.22.42.1 - 172.22.42.3 - subnet routers - 172.22.42.4 - 172.22.42.20 - spare for networks - 172.22.42.21 - 172.22.42.104 - Compute node remote access controllers - (inc spare) - 172.22.42.105 - 172.22.42.188 - Compute node management interfaces (inc spare) - 172.22.42.189 - 172.22.42.208 - Swift proxy remote access controllers - (inc spare) - 172.22.42.209 - 172.22.42.228 - Swift proxy management interfaces (inc spare) - 172.22.42.229 - 172.22.42.252 - Swift storage servers remote access controllers - (inc spare) - 172.22.42.253 - 172.22.42.254 - spare - 172.22.87.0/26: - 172.22.87.1 - 172.22.87.3 - subnet routers - 172.22.87.4 - 172.22.87.24 - Swift proxy server internal interfaces - (inc spare) - 172.22.87.25 - 172.22.87.63 - Swift object server internal interfaces - (inc spare) - -A similar approach can be taken with public IP addresses, taking note -that large, flat ranges are preferred for use with guest instance IPs. -Take into account that for some OpenStack networking options, a public -IP address in the range of a guest instance public IP address is -assigned to the ``nova-compute`` host. - -Network Topology -~~~~~~~~~~~~~~~~ - -OpenStack Compute with ``nova-network`` provides predefined network -deployment models, each with its own strengths and weaknesses. The -selection of a network manager changes your network topology, so the -choice should be made carefully. You also have a choice between the -tried-and-true legacy ``nova-network`` settings or the neutron project -for OpenStack Networking. Both offer networking for launched instances -with different implementations and requirements. - -For OpenStack Networking with the neutron project, typical -configurations are documented with the idea that any setup you can -configure with real hardware you can re-create with a software-defined -equivalent. Each tenant can contain typical network elements such as -routers, and services such as :term:`DHCP`. - -The following table describes the networking deployment options for both -legacy ``nova-network`` options and an equivalent neutron -configuration. - -.. list-table:: Networking deployment options - :widths: 25 25 25 25 - :header-rows: 1 - - * - Network deployment model - - Strengths - - Weaknesses - - Neutron equivalent - * - Flat - - Extremely simple topology. No DHCP overhead. - - Requires file injection into the instance to configure network - interfaces. - - Configure a single bridge as the integration bridge (br-int) and - connect it to a physical network interface with the Modular Layer 2 - (ML2) plug-in, which uses Open vSwitch by default. - * - FlatDHCP - - Relatively simple to deploy. Standard networking. Works with all guest - operating systems. - - Requires its own DHCP broadcast domain. - - Configure DHCP agents and routing agents. Network Address Translation - (NAT) performed outside of compute nodes, typically on one or more - network nodes. - * - VlanManager - - Each tenant is isolated to its own VLANs. - - More complex to set up. Requires its own DHCP broadcast domain. - Requires many VLANs to be trunked onto a single port. Standard VLAN - number limitation. Switches must support 802.1q VLAN tagging. - - Isolated tenant networks implement some form of isolation of layer 2 - traffic between distinct networks. 
VLAN tagging is a key concept:
-     traffic is "tagged" with an ordinal identifier for the VLAN. Isolated
-     network implementations may or may not include additional services like
-     DHCP, NAT, and routing.
-   * - FlatDHCP Multi-host with high availability (HA)
-     - Networking failure is isolated to the VMs running on the affected
-       hypervisor. DHCP traffic can be isolated within an individual host.
-       Network traffic is distributed to the compute nodes.
-     - More complex to set up. Compute nodes typically need IP addresses
-       accessible by external networks. Options must be carefully configured
-       for live migration to work with networking services.
-     - Configure neutron with multiple DHCP and layer-3 agents. Network nodes
-       are not able to failover to each other, so the controller runs
-       networking services, such as DHCP. Compute nodes run the ML2 plug-in
-       with support for agents such as Open vSwitch or Linux Bridge.
-
-Both ``nova-network`` and neutron services provide similar capabilities,
-such as VLAN networking between VMs. You can also provide multiple NICs
-on VMs with either service. Further discussion follows.
-
-VLAN Configuration Within OpenStack VMs
----------------------------------------
-
-VLAN configuration can be as simple or as complicated as desired. The
-use of VLANs has the benefit of allowing each project its own subnet and
-broadcast segregation from other projects. To allow OpenStack to
-efficiently use VLANs, you must allocate a VLAN range (one for each
-project) and turn each compute node switch port into a trunk
-port.
-
-For example, if you estimate that your cloud must support a maximum of
-100 projects, pick a free VLAN range that your network infrastructure is
-currently not using (such as VLAN 200–299). You must configure OpenStack
-with this range and also configure your switch ports to allow VLAN
-traffic from that range (a minimal ``nova.conf`` sketch appears at the
-end of this section).
-
-Multi-NIC Provisioning
-----------------------
-
-OpenStack Networking with ``neutron`` and OpenStack Compute with
-``nova-network`` have the ability to assign multiple NICs to instances. For
-``nova-network`` this can be done on a per-request basis, with each
-additional NIC using up an entire subnet or VLAN, reducing the total
-number of supported projects.
-
-Multi-Host and Single-Host Networking
--------------------------------------
-
-The ``nova-network`` service has the ability to operate in a multi-host
-or single-host mode. Multi-host is when each compute node runs a copy of
-``nova-network`` and the instances on that compute node use the compute
-node as a gateway to the Internet. The compute nodes also host the
-floating IPs and security groups for instances on that node. Single-host
-is when a central server—for example, the cloud controller—runs the
-``nova-network`` service. All compute nodes forward traffic from the
-instances to the cloud controller. The cloud controller then forwards
-traffic to the Internet. The cloud controller hosts the floating IPs and
-security groups for all instances on all compute nodes in the
-cloud.
-
-There are benefits to both modes. Single-host has the downside of a
-single point of failure: if the cloud controller is not available,
-instances cannot communicate on the network. This is not true with
-multi-host, but multi-host requires that each compute node has a public
-IP address to communicate on the Internet. If you are not able to obtain
-a significant block of public IP addresses, multi-host might not be an
-option.
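-Returning to the VLAN range example above, the following is a minimal
-sketch of the legacy ``nova-network`` options that select VlanManager
-and anchor the allocated range. The interface name is an illustrative
-assumption, and a real deployment needs further options beyond these:
-
-.. code-block:: ini
-
-   [DEFAULT]
-   # Use per-project VLANs for tenant isolation
-   network_manager = nova.network.manager.VlanManager
-   # First VLAN ID handed out to projects (matches the 200-299 example)
-   vlan_start = 200
-   # Physical NIC carrying the trunked VLAN traffic -- illustrative name
-   vlan_interface = eth1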
- -Services for Networking -~~~~~~~~~~~~~~~~~~~~~~~ - -OpenStack, like any network application, has a number of standard -considerations to apply, such as NTP and DNS. - -NTP ---- - -Time synchronization is a critical element to ensure continued operation -of OpenStack components. Correct time is necessary to avoid errors in -instance scheduling, replication of objects in the object store, and -even matching log timestamps for debugging. - -All servers running OpenStack components should be able to access an -appropriate NTP server. You may decide to set up one locally or use the -public pools available from the `Network Time Protocol -project `_. - -DNS ---- - -OpenStack does not currently provide DNS services, aside from the -dnsmasq daemon, which resides on ``nova-network`` hosts. You could -consider providing a dynamic DNS service to allow instances to update a -DNS entry with new IP addresses. You can also consider making a generic -forward and reverse DNS mapping for instances' IP addresses, such as -vm-203-0-113-123.example.com. - -Conclusion -~~~~~~~~~~ - -Armed with your IP address layout and numbers and knowledge about the -topologies and services you can use, it's now time to prepare the -network for your installation. Be sure to also check out the `OpenStack -Security Guide `_ for tips on securing -your network. We wish you a good relationship with your networking team! diff --git a/doc/ops-guide/source/arch_provision.rst b/doc/ops-guide/source/arch_provision.rst deleted file mode 100644 index 0859b594..00000000 --- a/doc/ops-guide/source/arch_provision.rst +++ /dev/null @@ -1,252 +0,0 @@ -=========================== -Provisioning and Deployment -=========================== - -A critical part of a cloud's scalability is the amount of effort that it -takes to run your cloud. To minimize the operational cost of running -your cloud, set up and use an automated deployment and configuration -infrastructure with a configuration management system, such as :term:`Puppet` -or :term:`Chef`. Combined, these systems greatly reduce manual effort and the -chance for operator error. - -This infrastructure includes systems to automatically install the -operating system's initial configuration and later coordinate the -configuration of all services automatically and centrally, which reduces -both manual effort and the chance for error. Examples include Ansible, -CFEngine, Chef, Puppet, and Salt. You can even use OpenStack to deploy -OpenStack, named TripleO (OpenStack On OpenStack). - -Automated Deployment -~~~~~~~~~~~~~~~~~~~~ - -An automated deployment system installs and configures operating systems -on new servers, without intervention, after the absolute minimum amount -of manual work, including physical racking, MAC-to-IP assignment, and -power configuration. Typically, solutions rely on wrappers around PXE -boot and TFTP servers for the basic operating system install and then -hand off to an automated configuration management system. - -Both Ubuntu and Red Hat Enterprise Linux include mechanisms for -configuring the operating system, including preseed and kickstart, that -you can use after a network boot. Typically, these are used to bootstrap -an automated configuration system. Alternatively, you can use an -image-based approach for deploying the operating system, such as -systemimager. You can use both approaches with a virtualized -infrastructure, such as when you run VMs to separate your control -services and physical infrastructure. 
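-As a concrete illustration of the PXE/TFTP layer that these deployment
-systems typically wrap, the following is a minimal ``dnsmasq``
-configuration sketch that answers DHCP requests on the provisioning
-network and serves a boot loader over TFTP. The address range, lease
-time, and paths are illustrative assumptions:
-
-.. code-block:: ini
-
-   # /etc/dnsmasq.conf -- minimal PXE boot service (illustrative values)
-   # Hand out leases on the provisioning network
-   dhcp-range=10.0.0.100,10.0.0.200,12h
-   # Point PXE clients at the boot loader
-   dhcp-boot=pxelinux.0
-   # Serve files over TFTP from this directory
-   enable-tftp
-   tftp-root=/var/lib/tftpboot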
-
-When you create a deployment plan, focus on a few vital areas because
-they are very hard to modify after deployment. The next two sections talk
-about configurations for:
-
-- Disk partitioning and disk array setup for scalability
-
-- Networking configuration just for PXE booting
-
-Disk Partitioning and RAID
---------------------------
-
-At the very base of any operating system are the hard drives on which
-the operating system (OS) is installed.
-
-You must complete the following configurations on the server's hard
-drives:
-
-- Partitioning, which provides greater flexibility for layout of
-  operating system and swap space, as described below.
-
-- Adding to a RAID array (RAID stands for redundant array of
-  independent disks), based on the number of disks you have available,
-  so that you can add capacity as your cloud grows. Some options are
-  described in more detail below.
-
-The simplest option to get started is to use one hard drive with two
-partitions:
-
-- File system to store files and directories, where all the data lives,
-  including the root partition that starts and runs the system.
-
-- Swap space to free up memory for processes, as an independent area of
-  the physical disk used only for swapping and nothing else.
-
-RAID is not used in this simplistic one-drive setup because generally
-for production clouds, you want to ensure that if one disk fails,
-another can take its place. Instead, for production, use more than one
-disk. The number of disks determines what types of RAID arrays to
-build.
-
-We recommend that you choose one of the following multiple disk options:
-
-Option 1
-    Partition all drives in the same way in a horizontal fashion, as
-    shown in :ref:`partition_setup`.
-
-    With this option, you can assign different partitions to different
-    RAID arrays. You can allocate partition 1 of disk one and two to the
-    ``/boot`` partition mirror. You can make partition 2 of all disks
-    the root partition mirror. You can use partition 3 of all disks for
-    a ``cinder-volumes`` LVM partition running on a RAID 10 array.
-
-    .. _partition_setup:
-
-    .. figure:: figures/osog_0201.png
-
-       Figure. Partition setup of drives
-
-    While you might end up with unused partitions, such as partition 1
-    in disk three and four of this example, this option allows for
-    maximum utilization of disk space. I/O performance might be an issue
-    as a result of all disks being used for all tasks.
-
-Option 2
-    Add all raw disks to one large RAID array, either hardware or
-    software based. You can partition this large array with the boot,
-    root, swap, and LVM areas. This option is simple to implement and
-    uses all partitions. However, disk I/O might suffer.
-
-Option 3
-    Dedicate entire disks to certain partitions. For example, you could
-    allocate disk one and two entirely to the boot, root, and swap
-    partitions under a RAID 1 mirror. Then, allocate disk three and four
-    entirely to the LVM partition, also under a RAID 1 mirror. Disk I/O
-    should be better because I/O is focused on dedicated tasks. However,
-    the LVM partition is much smaller.
-
-.. note::
-
-   You may find that you can automate the partitioning itself. For
-   example, MIT uses `Fully Automatic Installation
-   (FAI) `_ to do the initial PXE-based
-   partition and then install using a combination of min/max and
-   percentage-based partitioning.
-
-As with most architecture choices, the right answer depends on your
-environment.
If you are using existing hardware, you know the disk
-density of your servers and can base some decisions on the
-options above. If you are going through a procurement process, your
-users' requirements also help you determine hardware purchases. Here are
-some examples from a private cloud providing web developers custom
-environments at AT&T. This example is from a specific deployment, so
-your existing hardware or procurement opportunity may vary from this.
-AT&T uses three types of hardware in its deployment:
-
-- Hardware for controller nodes, used for all stateless OpenStack API
-  services. About 32–64 GB memory, small attached disk, one processor,
-  varied number of cores, such as 6–12.
-
-- Hardware for compute nodes. Typically 256 or 144 GB memory, two
-  processors, 24 cores. 4–6 TB direct attached storage, typically in a
-  RAID 5 configuration.
-
-- Hardware for storage nodes. Typically for these, the disk space is
-  optimized for the lowest cost per GB of storage while maintaining
-  rack-space efficiency.
-
-Again, the right answer depends on your environment. You have to make
-your decision based on the trade-offs between space utilization,
-simplicity, and I/O performance.
-
-Network Configuration
----------------------
-
-Network configuration is a very large topic that spans multiple areas of
-this book. For now, make sure that your servers can PXE boot and
-successfully communicate with the deployment server.
-
-For example, you usually cannot configure NICs for VLANs when PXE
-booting. Additionally, you usually cannot PXE boot with bonded NICs. If
-you run into this scenario, consider using a simple 1 Gbps switch in a
-private network on which only your cloud communicates.
-
-Automated Configuration
-~~~~~~~~~~~~~~~~~~~~~~~
-
-The purpose of automatic configuration management is to establish and
-maintain the consistency of a system without using human intervention.
-You want to maintain consistency in your deployments so that you can
-have the same cloud every time, repeatably. Proper use of automatic
-configuration-management tools ensures that components of the cloud
-systems are in particular states, in addition to simplifying deployment
-and configuration change propagation.
-
-These tools also make it possible to test and roll back changes, as they
-are fully repeatable. Conveniently, a large body of work has been done
-by the OpenStack community in this space. Puppet, a configuration
-management tool, even provides official modules for OpenStack projects
-in an OpenStack infrastructure system known as `Puppet
-OpenStack `_. Chef
-configuration management is provided within
-https://git.openstack.org/cgit/openstack/openstack-chef-repo. Additional
-configuration management systems include Juju, Ansible, and Salt. Also,
-PackStack is a command-line utility for Red Hat Enterprise Linux and
-derivatives that uses Puppet modules to support rapid deployment of
-OpenStack on existing servers over an SSH connection.
-
-An integral part of a configuration-management system is the item that
-it controls. You should carefully consider all of the items that you
-want, or do not want, to be automatically managed. For example, you may
-not want to automatically format hard drives with user data.
-
-Remote Management
-~~~~~~~~~~~~~~~~~
-
-In our experience, most operators don't sit right next to the servers
-running the cloud, and many don't necessarily enjoy visiting the data
-center.
OpenStack should be entirely remotely configurable, but
-sometimes not everything goes according to plan.
-
-In this instance, having out-of-band access to nodes running
-OpenStack components is a boon. The IPMI protocol is the de facto
-standard here, and acquiring hardware that supports it is highly
-recommended to achieve that lights-out data center aim.
-
-In addition, consider remote power control as well. While IPMI usually
-controls the server's power state, having remote access to the PDU that
-the server is plugged into can really be useful for situations when
-everything seems wedged.
-
-Parting Thoughts for Provisioning and Deploying OpenStack
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You can save time by understanding the use cases for the cloud you want
-to create. Use cases for OpenStack are varied. Some include object
-storage only; others require preconfigured compute resources to speed
-development-environment setup; and others need fast provisioning of
-compute resources that are already secured per tenant with private
-networks. Your users may need highly redundant servers to make
-sure their legacy applications continue to run. Perhaps a goal would be
-to architect these legacy applications so that they run on multiple
-instances in a cloudy, fault-tolerant way, but not make it a goal to add
-to those clusters over time. Your users may indicate that they need
-scaling considerations because of heavy Windows server use.
-
-You can save resources by looking at the best fit for the hardware you
-have in place already. You might have some high-density storage hardware
-available. You could format and repurpose those servers for OpenStack
-Object Storage. All of these considerations and input from users help
-you build your use case and your deployment plan.
-
-.. note::
-
-   For further research about OpenStack deployment, investigate the
-   supported and documented preconfigured, prepackaged installers for
-   OpenStack from companies such as
-   `Canonical `_,
-   `Cisco `_,
-   `Cloudscaling `_,
-   `IBM `_,
-   `Metacloud `_,
-   `Mirantis `_,
-   `Piston `_,
-   `Rackspace `_, `Red
-   Hat `_,
-   `SUSE `_, and
-   `SwiftStack `_.
-
-Conclusion
-~~~~~~~~~~
-
-The decisions you make with respect to provisioning and deployment will
-affect your day-to-day, week-to-week, and month-to-month maintenance of
-the cloud. Your configuration management will be able to evolve over
-time. However, more thought and design need to go into upfront
-choices about deployment, disk partitioning, and network configuration.
diff --git a/doc/ops-guide/source/arch_scaling.rst b/doc/ops-guide/source/arch_scaling.rst
deleted file mode 100644
index 21c9c1ac..00000000
--- a/doc/ops-guide/source/arch_scaling.rst
+++ /dev/null
@@ -1,427 +0,0 @@
-=======
-Scaling
-=======
-
-Whereas traditional applications required larger hardware to scale
-("vertical scaling"), cloud-based applications typically request more
-discrete hardware ("horizontal scaling"). If your cloud is successful,
-eventually you must add resources to meet the increasing demand.
-
-To suit the cloud paradigm, OpenStack itself is designed to be
-horizontally scalable. Rather than switching to larger servers, you
-procure more servers and simply install identically configured services.
-Ideally, you scale out and load balance among groups of functionally
-identical services (for example, compute nodes or ``nova-api`` nodes)
-that communicate on a message bus.
-
-The Starting Point
-~~~~~~~~~~~~~~~~~~
-
-Determining the scalability of your cloud and how to improve it is an
-exercise with many variables to balance. No one solution meets
-everyone's scalability goals. However, it is helpful to track a number
-of metrics. Since you can define virtual hardware templates, called
-"flavors" in OpenStack, you can start to make scaling decisions based on
-the flavors you'll provide. These templates define, for starters, sizes
-for memory in RAM, root disk size, amount of ephemeral data disk space
-available, and number of cores.
-
-The default OpenStack flavors are shown in the following table.
-
-.. list-table:: OpenStack default flavors
-   :widths: 20 20 20 20 20
-   :header-rows: 1
-
-   * - Name
-     - Virtual cores
-     - Memory
-     - Disk
-     - Ephemeral
-   * - m1.tiny
-     - 1
-     - 512 MB
-     - 1 GB
-     - 0 GB
-   * - m1.small
-     - 1
-     - 2 GB
-     - 10 GB
-     - 20 GB
-   * - m1.medium
-     - 2
-     - 4 GB
-     - 10 GB
-     - 40 GB
-   * - m1.large
-     - 4
-     - 8 GB
-     - 10 GB
-     - 80 GB
-   * - m1.xlarge
-     - 8
-     - 16 GB
-     - 10 GB
-     - 160 GB
-
-The starting point for most is the core count of your cloud. By applying
-some ratios, you can gather information about:
-
-- The number of virtual machines (VMs) you expect to run,
-  ``((overcommit fraction × cores) / virtual cores per instance)``
-
-- How much storage is required ``(flavor disk size × number of instances)``
-
-You can use these ratios to determine how much additional infrastructure
-you need to support your cloud.
-
-Here is an example using the ratios for gathering scalability
-information for the number of VMs expected as well as the storage
-needed. The following numbers support (200 / 2) × 16 = 1600 VM instances
-and require 80 TB of storage for ``/var/lib/nova/instances``:
-
-- 200 physical cores.
-
-- Most instances are size m1.medium (two virtual cores, 50 GB of
-  storage).
-
-- Default CPU overcommit ratio (``cpu_allocation_ratio`` in nova.conf)
-  of 16:1.
-
-.. note::
-
-   Regardless of the overcommit ratio, an instance cannot be placed
-   on any physical node with fewer raw (pre-overcommit) resources than
-   the instance flavor requires.
-
-However, you need more than the core count alone to estimate the load
-that the API services, database servers, and queue servers are likely to
-encounter. You must also consider the usage patterns of your cloud.
-
-As a specific example, compare a cloud that supports a managed
-web-hosting platform with one running integration tests for a
-development project that creates one VM per code commit. In the former,
-the heavy work of creating a VM happens only every few months, whereas
-the latter puts constant heavy load on the cloud controller. You must
-consider your average VM lifetime, as a larger number generally means
-less load on the cloud controller.
-
-Aside from the creation and termination of VMs, you must consider the
-impact of users accessing the service—particularly on ``nova-api`` and
-its associated database. Listing instances garners a great deal of
-information and, given the frequency with which users run this
-operation, a cloud with a large number of users can increase the load
-significantly. This can occur even without their knowledge—leaving the
-OpenStack dashboard instances tab open in the browser refreshes the list
-of VMs every 30 seconds.
-
-After you consider these factors, you can determine how many cloud
-controller cores you require. A typical eight-core, 8 GB RAM server
-is sufficient for up to a rack of compute nodes — given the above
-caveats.
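-As a quick sanity check, the ratios above can be evaluated directly in
-a shell. This sketch simply plugs in the example numbers (200 physical
-cores, 16:1 overcommit, m1.medium with 2 virtual cores and 50 GB of
-storage); it is plain arithmetic, not an OpenStack command:
-
-.. code-block:: console
-
-   $ # VMs = (overcommit ratio × physical cores) / virtual cores per VM
-   $ echo $(( (16 * 200) / 2 ))
-   1600
-   $ # Storage in GB = flavor disk size × number of instances
-   $ echo $(( 50 * 1600 ))
-   80000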
-
-You must also consider key hardware specifications for the performance
-of user VMs, as well as budget and performance needs, including storage
-performance (spindles/core), memory availability (RAM/core), network
-bandwidth (Gbps/core), and overall CPU performance (CPU/core).
-
-.. note::
-
-   For a discussion of metric tracking, including how to extract
-   metrics from your cloud, see :doc:`ops_logging_monitoring`.
-
-Adding Cloud Controller Nodes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You can facilitate the horizontal expansion of your cloud by adding
-nodes. Adding compute nodes is straightforward—they are easily picked up
-by the existing installation. However, you must consider some important
-points when you design your cluster to be highly available.
-
-Recall that a cloud controller node runs several different services. You
-can install services that communicate only using the message queue
-internally—\ ``nova-scheduler`` and ``nova-console``—on a new server for
-expansion. However, other integral parts require more care.
-
-You should load balance user-facing services such as dashboard,
-``nova-api``, or the Object Storage proxy. Use any standard HTTP
-load-balancing method (DNS round robin, hardware load balancer, or
-software such as Pound or HAProxy). One caveat with dashboard is the VNC
-proxy, which uses the WebSocket protocol—something that an L7 load
-balancer might struggle with. See also `Horizon session storage
-`_.
-
-You can configure some services, such as ``nova-api`` and
-``glance-api``, to use multiple processes by changing a flag in their
-configuration file—allowing them to share work between multiple cores on
-one machine.
-
-.. note::
-
-   Several options are available for MySQL load balancing, and the
-   supported AMQP brokers have built-in clustering support. Information
-   on how to configure these and many of the other services can be
-   found in :doc:`operations`.
-
-Segregating Your Cloud
-~~~~~~~~~~~~~~~~~~~~~~
-
-Segregate your cloud when you need to offer users different regions,
-whether to meet legal requirements for data storage, to provide
-redundancy across earthquake fault lines, or to serve low-latency API
-calls. Use one of the following OpenStack methods to segregate your
-cloud: *cells*, *regions*, *availability zones*, or *host aggregates*.
-
-Each method provides different functionality and can be best divided
-into two groups:
-
-- Cells and regions, which segregate an entire cloud and result in
-  running separate Compute deployments.
-
-- :term:`Availability zones ` and host aggregates, which
-  merely divide a single Compute deployment.
-
-The table below provides a comparison view of each segregation method
-currently provided by OpenStack Compute.
-
-.. list-table:: OpenStack segregation methods
-   :widths: 20 20 20 20 20
-   :header-rows: 1
-
-   * -
-     - Cells
-     - Regions
-     - Availability zones
-     - Host aggregates
-   * - **Use when you need**
-     - A single :term:`API endpoint` for compute, or you require a second
-       level of scheduling.
-     - Discrete regions with separate API endpoints and no coordination
-       between regions.
-     - Logical separation within your nova deployment for physical isolation
-       or redundancy.
-     - To schedule a group of hosts with common features.
-   * - **Example**
-     - A cloud with multiple sites where you can schedule VMs "anywhere" or on
-       a particular site.
-     - A cloud with multiple sites, where you schedule VMs to a particular
-       site and you want a shared infrastructure.
-     - A single-site cloud with equipment fed by separate power supplies.
-     - Scheduling to hosts with trusted hardware support.
-   * - **Overhead**
-     - Considered experimental. A new service, nova-cells. Each cell has a
-       full nova installation except nova-api.
-     - A different API endpoint for every region. Each region has a full
-       nova installation.
-     - Configuration changes to ``nova.conf``.
-     - Configuration changes to ``nova.conf``.
-   * - **Shared services**
-     - Keystone, ``nova-api``
-     - Keystone
-     - Keystone, all nova services
-     - Keystone, all nova services
-
-Cells and Regions
------------------
-
-OpenStack Compute cells are designed to allow running the cloud in a
-distributed fashion without having to use more complicated technologies,
-or be invasive to existing nova installations. Hosts in a cloud are
-partitioned into groups called *cells*. Cells are configured in a tree.
-The top-level cell ("API cell") has a host that runs the ``nova-api``
-service, but no ``nova-compute`` services. Each child cell runs all of
-the other typical ``nova-*`` services found in a regular installation,
-except for the ``nova-api`` service. Each cell has its own message queue
-and database service and also runs ``nova-cells``, which manages the
-communication between the API cell and child cells.
-
-This allows a single API server to be used to control access to
-multiple cloud installations. Introducing a second level of scheduling
-(the cell selection), in addition to the regular ``nova-scheduler``
-selection of hosts, provides greater flexibility to control where
-virtual machines are run.
-
-Unlike having a single API endpoint, regions have a separate API
-endpoint per installation, allowing for a more discrete separation.
-Users wanting to run instances across sites have to explicitly select a
-region. However, the additional complexity of running a new service is
-not required.
-
-The OpenStack dashboard (horizon) can be configured to use multiple
-regions. This can be configured through the ``AVAILABLE_REGIONS``
-parameter.
-
-Availability Zones and Host Aggregates
---------------------------------------
-
-You can use availability zones, host aggregates, or both to partition a
-nova deployment. Availability zones are implemented through, and
-configured in a similar way to, host aggregates. However, you use them
-for different reasons.
-
-Availability zone
-~~~~~~~~~~~~~~~~~
-
-This enables you to arrange OpenStack compute hosts into logical groups
-and provides a form of physical isolation and redundancy from other
-availability zones, such as by using a separate power supply or network
-equipment.
-
-You define the availability zone in which a specified compute host
-resides locally on each server. An availability zone is commonly used to
-identify a set of servers that have a common attribute. For instance, if
-some of the racks in your data center are on a separate power source,
-you can put servers in those racks in their own availability zone.
-Availability zones can also help separate different classes of hardware.
-
-When users provision resources, they can specify from which availability
-zone they want their instance to be built. This allows cloud consumers
-to ensure that their application resources are spread across disparate
-machines to achieve high availability in the event of hardware failure.
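-For example, a user can target a specific zone at boot time. The
-following is a minimal sketch using the legacy nova CLI; the zone,
-flavor, image, and instance names are illustrative assumptions:
-
-.. code-block:: console
-
-   $ # Boot an instance into a particular availability zone
-   $ nova boot --flavor m1.medium --image ubuntu-trusty \
-       --availability-zone rack-a-zone web01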
-
-Host aggregates
-~~~~~~~~~~~~~~~
-
-This enables you to partition OpenStack Compute deployments into logical
-groups for load balancing and instance distribution. You can use host
-aggregates to further partition an availability zone. For example, you
-might use host aggregates to partition an availability zone into groups
-of hosts that either share common resources, such as storage and
-network, or have a special property, such as trusted computing
-hardware.
-
-A common use of host aggregates is to provide information for use with
-the ``nova-scheduler``. For example, you might use a host aggregate to
-group a set of hosts that share specific flavors or images.
-
-The general case for this is setting key-value pairs in the aggregate
-metadata and matching key-value pairs in the flavor's ``extra_specs``
-metadata. The ``AggregateInstanceExtraSpecsFilter`` in the filter
-scheduler will enforce that instances be scheduled only on hosts in
-aggregates that define the same key to the same value.
-
-An advanced use of this general concept allows different flavor types to
-run with different CPU and RAM allocation ratios so that high-intensity
-computing loads and low-intensity development and testing systems can
-share the same cloud without either starving the high-use systems or
-wasting resources on low-utilization systems. This works by setting
-``metadata`` in your host aggregates and matching ``extra_specs`` in
-your flavor types.
-
-The first step is setting the aggregate metadata keys
-``cpu_allocation_ratio`` and ``ram_allocation_ratio`` to a
-floating-point value. The ``AggregateCoreFilter`` and
-``AggregateRamFilter`` scheduler filters will use those values rather
-than the global defaults in ``nova.conf`` when scheduling to hosts in
-the aggregate. It is important to be cautious when using this feature,
-since each host can be in multiple aggregates but should have only one
-allocation ratio for each resource. It is up to you to avoid putting a
-host in multiple aggregates that define different values for the same
-resource.
-
-This is the first half of the equation. To get flavor types that are
-guaranteed a particular ratio, you must set the ``extra_specs`` in the
-flavor type to the key-value pair you want to match in the aggregate.
-For example, if you define ``extra_specs`` ``cpu_allocation_ratio`` to
-"1.0", then instances of that type will run in aggregates only where the
-metadata key ``cpu_allocation_ratio`` is also defined as "1.0." In
-practice, it is better to define an additional key-value pair in the
-aggregate metadata to match on rather than match directly on
-``cpu_allocation_ratio`` or ``ram_allocation_ratio``. This allows
-better abstraction. For example, by defining a key ``overcommit`` and
-setting a value of "high," "medium," or "low," you could then tune the
-numeric allocation ratios in the aggregates without also needing to
-change all flavor types relating to them.
-
-.. note::
-
-   Previously, all services had an availability zone. Currently, only
-   the ``nova-compute`` service has its own availability zone. Services
-   such as ``nova-scheduler``, ``nova-network``, and ``nova-conductor``
-   have always spanned all availability zones.
-
-   When you run any of the following operations, the services appear in
-   their own internal availability zone
-   (CONF.internal_service_availability_zone):
-
-   - :command:`nova host-list` (os-hosts)
-
-   - :command:`euca-describe-availability-zones verbose`
-
-   - :command:`nova service-list`
-
-   The internal availability zone is hidden in
-   :command:`euca-describe-availability-zones` (nonverbose).
-
-   CONF.node_availability_zone has been renamed to
-   CONF.default_availability_zone and is used only by the
-   ``nova-api`` and ``nova-scheduler`` services.
-
-   CONF.node_availability_zone still works but is deprecated.
-
-Scalable Hardware
-~~~~~~~~~~~~~~~~~
-
-While several resources already exist to help with deploying and
-installing OpenStack, it's very important to make sure that you have
-your deployment planned out ahead of time. This guide presumes that you
-have at least set aside a rack for the OpenStack cloud but also offers
-suggestions for when and what to scale.
-
-Hardware Procurement
---------------------
-
-"The Cloud" has been described as a volatile environment where servers
-can be created and terminated at will. While this may be true, it does
-not mean that your servers must be volatile. Ensuring that your cloud's
-hardware is stable and configured correctly means that your cloud
-environment remains up and running. Basically, put effort into creating
-a stable hardware environment so that you can host a cloud that users
-may treat as unstable and volatile.
-
-OpenStack can be deployed on any hardware supported by an
-OpenStack-compatible Linux distribution.
-
-Hardware does not have to be consistent, but it should at least have the
-same type of CPU to support instance migration.
-
-The typical hardware recommended for use with OpenStack is the standard
-value-for-money offerings that most hardware vendors stock. It should be
-straightforward to divide your procurement into building blocks such as
-"compute," "object storage," and "cloud controller," and request as many
-of these as you need. Alternatively, if you are unable to purchase new
-hardware, your existing servers are quite likely to be able to support
-OpenStack, provided they meet your performance requirements and support
-your virtualization technology.
-
-Capacity Planning
------------------
-
-OpenStack is designed to increase in size in a straightforward manner.
-Taking into account the considerations that we've mentioned in this
-chapter—particularly on the sizing of the cloud controller—it should be
-possible to procure additional compute or object storage nodes as
-needed. New nodes do not need to be the same specification, or even
-vendor, as existing nodes.
-
-For compute nodes, ``nova-scheduler`` will take care of differences in
-sizing having to do with core count and RAM amounts; however, you should
-consider that the user experience changes with differing CPU speeds.
-When adding object storage nodes, a weight should be specified that
-reflects the capability of the node.
-
-Monitoring the resource usage and user growth will enable you to know
-when to procure. :doc:`ops_logging_monitoring` details some useful metrics.
-
-Burn-in Testing
----------------
-
-The chances of failure for the server's hardware are high at the start
-and the end of its life. As a result, dealing with hardware failures
-while in production can be avoided by appropriate burn-in testing to
-attempt to trigger the early-stage failures. The general principle is to
-stress the hardware to its limits.
Examples of burn-in tests include -running a CPU or disk benchmark for several days. - diff --git a/doc/ops-guide/source/arch_storage.rst b/doc/ops-guide/source/arch_storage.rst deleted file mode 100644 index 1c83db98..00000000 --- a/doc/ops-guide/source/arch_storage.rst +++ /dev/null @@ -1,521 +0,0 @@ -================= -Storage Decisions -================= - -Storage is found in many parts of the OpenStack stack, and the differing -types can cause confusion to even experienced cloud engineers. This -section focuses on persistent storage options you can configure with -your cloud. It's important to understand the distinction between -:term:`ephemeral ` storage and -:term:`persistent ` storage. - -Ephemeral Storage -~~~~~~~~~~~~~~~~~ - -If you deploy only the OpenStack :term:`Compute service` (nova), your users do -not have access to any form of persistent storage by default. The disks -associated with VMs are "ephemeral," meaning that (from the user's point -of view) they effectively disappear when a virtual machine is -terminated. - -Persistent Storage -~~~~~~~~~~~~~~~~~~ - -Persistent storage means that the storage resource outlives any other -resource and is always available, regardless of the state of a running -instance. - -Today, OpenStack clouds explicitly support three types of persistent -storage: *object storage*, *block storage*, and *file system storage*. - -Object Storage --------------- - -With object storage, users access binary objects through a REST API. You -may be familiar with Amazon S3, which is a well-known example of an -object storage system. Object storage is implemented in OpenStack by the -OpenStack Object Storage (swift) project. If your intended users need to -archive or manage large datasets, you want to provide them with object -storage. In addition, OpenStack can store your virtual machine (VM) -images inside of an object storage system, as an alternative to storing -the images on a file system. - -OpenStack Object Storage provides a highly scalable, highly available -storage solution by relaxing some of the constraints of traditional file -systems. In designing and procuring for such a cluster, it is important -to understand some key concepts about its operation. Essentially, this -type of storage is built on the idea that all storage hardware fails, at -every level, at some point. Infrequently encountered failures that would -hamstring other storage systems, such as issues taking down RAID cards -or entire servers, are handled gracefully with OpenStack Object -Storage. - -A good document describing the Object Storage architecture is found -within the `developer -documentation `_ -— read this first. Once you understand the architecture, you should know what a -proxy server does and how zones work. However, some important points are -often missed at first glance. - -When designing your cluster, you must consider durability and -availability. Understand that the predominant source of these is the -spread and placement of your data, rather than the reliability of the -hardware. Consider the default value of the number of replicas, which is -three. This means that before an object is marked as having been -written, at least two copies exist—in case a single server fails to -write, the third copy may or may not yet exist when the write operation -initially returns. Altering this number increases the robustness of your -data, but reduces the amount of storage you have available. Next, look -at the placement of your servers. 
Consider spreading them widely
-throughout your data center's network and power-failure zones. Is a zone
-a rack, a server, or a disk?
-
-Object Storage's network patterns might seem unfamiliar at first.
-Consider these main traffic flows:
-
-- Among :term:`object`, :term:`container`, and
-  :term:`account servers `
-
-- Between those servers and the proxies
-
-- Between the proxies and your users
-
-Object Storage is very "chatty" among servers hosting data—even a small
-cluster does megabytes/second of traffic, which is predominantly, "Do
-you have the object?"/"Yes I have the object!" Of course, if the answer
-to the aforementioned question is negative or the request times out,
-replication of the object begins.
-
-Consider the scenario where an entire server fails and 24 TB of data
-needs to be transferred "immediately" to remain at three copies—this can
-put significant load on the network.
-
-Another fact that's often forgotten is that when a new file is being
-uploaded, the proxy server must write out as many streams as there are
-replicas—giving a multiple of network traffic. For a three-replica
-cluster, 10 Gbps in means 30 Gbps out. Combining this with the high
-bandwidth demands of replication is what results in the recommendation
-that your private network be of significantly higher bandwidth than your
-public network needs to be. Oh, and OpenStack Object Storage
-communicates internally with unencrypted, unauthenticated rsync for
-performance—you do want the private network to be private.
-
-The remaining point on bandwidth is the public-facing portion. The
-``swift-proxy`` service is stateless, which means that you can easily
-add more and use HTTP load-balancing methods to share bandwidth and
-availability between them.
-
-More proxies means more bandwidth, if your storage can keep up.
-
-Block Storage
--------------
-
-Block storage (sometimes referred to as volume storage) provides users
-with access to block-storage devices. Users interact with block storage
-by attaching volumes to their running VM instances.
-
-These volumes are persistent: they can be detached from one instance and
-re-attached to another, and the data remains intact. Block storage is
-implemented in OpenStack by the OpenStack Block Storage (cinder)
-project, which supports multiple back ends in the form of drivers. Your
-choice of a storage back end must be supported by a Block Storage
-driver.
-
-Most block storage drivers allow the instance to have direct access to
-the underlying storage hardware's block device. This helps increase the
-overall read/write IO. However, support for utilizing files as volumes
-is also well established, with full support for NFS, GlusterFS, and
-others.
-
-These drivers work a little differently than a traditional "block"
-storage driver. On an NFS or GlusterFS file system, a single file is
-created and then mapped as a "virtual" volume into the instance. This
-mapping/translation is similar to how OpenStack utilizes QEMU's
-file-based virtual machines stored in ``/var/lib/nova/instances``.
-
-Shared File Systems Service
----------------------------
-
-The Shared File Systems service provides a set of services for
-management of shared file systems in a multi-tenant cloud environment.
-Users interact with the Shared File Systems service by mounting remote
-file systems on their instances and then using those systems for file
-storage and exchange. The Shared File Systems service provides you with
-shares.
A share is a remote, mountable file system. You can mount a
-share to, and access a share from, several hosts by several users at a
-time. With shares, users can also:
-
-- Create a share, specifying its size, shared file system protocol, and
-  visibility level.
-
-- Create a share on either a share server or standalone, depending on
-  the selected back-end mode, with or without using a share network.
-
-- Specify access rules and security services for existing shares.
-
-- Combine several shares in groups to keep data consistency inside the
-  groups, so that group operations can be performed safely.
-
-- Create a snapshot of a selected share or a share group, either to
-  store the existing shares consistently or to create new shares from
-  that snapshot in a consistent way.
-
-- Create a share from a snapshot.
-
-- Set rate limits and quotas for specific shares and snapshots.
-
-- View usage of share resources.
-
-- Remove shares.
-
-Like Block Storage, the Shared File Systems service is persistent. It
-can be:
-
-- Mounted to any number of client machines.
-
-- Detached from one instance and attached to another without data loss.
-  During this process the data are safe unless the Shared File Systems
-  service itself is changed or removed.
-
-Shares are provided by the Shared File Systems service. In OpenStack,
-the Shared File Systems service is implemented by the Shared File
-System (manila) project, which supports multiple back ends in the form
-of drivers. The Shared File Systems service can be configured to
-provision shares from one or more back ends. Share servers are, mostly,
-virtual machines that export file shares via different protocols such
-as NFS, CIFS, GlusterFS, or HDFS.
-
-OpenStack Storage Concepts
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The table below explains the different storage concepts provided by
-OpenStack.
-
-.. list-table:: OpenStack storage
-   :widths: 20 20 20 20 20
-   :header-rows: 1
-
-   * -
-     - Ephemeral storage
-     - Block storage
-     - Object storage
-     - Shared File System storage
-   * - Used to…
-     - Run operating system and scratch space
-     - Add additional persistent storage to a virtual machine (VM)
-     - Store data, including VM images
-     - Add additional persistent storage to a virtual machine
-   * - Accessed through…
-     - A file system
-     - A block device that can be partitioned, formatted, and mounted
-       (such as /dev/vdc)
-     - The REST API
-     - A Shared File Systems service share (either manila managed or an
-       external one registered in manila) that can be partitioned,
-       formatted, and mounted (such as /dev/vdc)
-   * - Accessible from…
-     - Within a VM
-     - Within a VM
-     - Anywhere
-     - Within a VM
-   * - Managed by…
-     - OpenStack Compute (nova)
-     - OpenStack Block Storage (cinder)
-     - OpenStack Object Storage (swift)
-     - OpenStack Shared File System Storage (manila)
-   * - Persists until…
-     - VM is terminated
-     - Deleted by user
-     - Deleted by user
-     - Deleted by user
-   * - Sizing determined by…
-     - Administrator configuration of size settings, known as *flavors*
-     - User specification in initial request
-     - Amount of available physical storage
-     - * User specification in initial request
-       * Requests for extension
-       * Available user-level quotas
-       * Limitations applied by Administrator
-   * - Encryption set by…
-     - Parameter in nova.conf
-     - Admin establishing `encrypted volume type
-       `_,
-       then user selecting encrypted volume
-     - Not yet available
-     - Shared File Systems service does not apply any additional encryption
-       above what the share's back-end storage provides
-   * - Example of typical usage…
-     - 10 GB first disk, 30 GB second disk
-     - 1 TB disk
-     - 10s of TBs of dataset storage
-     - Depends completely on the size of back-end storage specified when
-       a share was being created. In case of thin provisioning it can be
-       partial space reservation (for more details see
-       `Capabilities and Extra-Specs
-       `_
-       specification)
-
-With file-level storage, users access stored data using the operating
-system's file system interface. Most users who have used a network
-storage solution before have encountered this form of networked
-storage. In the Unix world, the most common form of this is NFS. In the
-Windows world, the most common form is called CIFS (previously,
-SMB).
-
-OpenStack clouds do not present file-level storage to end users.
-However, it is important to consider file-level storage for storing
-instances under ``/var/lib/nova/instances`` when designing your cloud,
-since you must have a shared file system if you want to support live
-migration.
-
-Choosing Storage Back Ends
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Users will indicate different needs for their cloud use cases. Some may
-need fast access to many objects that do not change often, or want to
-set a time-to-live (TTL) value on a file. Others may access only storage
-that is mounted with the file system itself, but want it to be
-replicated instantly when starting a new instance. For other systems,
-ephemeral storage—storage that is released when a VM attached to it is
-shut down—is the preferred way. When you select
-:term:`storage back ends `,
-ask the following questions on behalf of your users:
-
-- Do my users need block storage?
-
-- Do my users need object storage?
-
-- Do I need to support live migration?
-
-- Should my persistent storage drives be contained in my compute nodes,
-  or should I use external storage?
- -- What is the platter count I can achieve? Do more spindles result in - better I/O despite network access? - -- Which one results in the best cost-performance scenario I'm aiming - for? - -- How do I manage the storage operationally? - -- How redundant and distributed is the storage? What happens if a - storage node fails? To what extent can it mitigate my data-loss - disaster scenarios? - -To deploy your storage by using only commodity hardware, you can use a -number of open-source packages, as shown in the following table. - -.. list-table:: Persistent file-based storage support - :widths: 25 25 25 25 - :header-rows: 1 - - * -   - - Object - - Block - - File-level - * - Swift - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - - - -   - * - LVM - - - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - -   - * - Ceph - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - - Experimental - * - Gluster - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - * - NFS - - - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - * - ZFS - - - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - -   - * - Sheepdog - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - - .. image:: figures/Check_mark_23x20_02.png - :width: 30% - - - - -.. note:: - - This list of open source file-level shared storage solutions is not - exhaustive; other open source solutions exist (MooseFS). Your - organization may already have deployed a file-level shared storage - solution that you can use. - -**Storage Driver Support** - -In addition to the open source technologies, there are a number of -proprietary solutions that are officially supported by OpenStack Block -Storage. They are offered by the following vendors: - -- IBM (Storwize family/SVC, XIV) - -- NetApp - -- Nexenta - -- SolidFire - -You can find a matrix of the functionality provided by all of the -supported Block Storage drivers on the `OpenStack -wiki `_. - -Also, you need to decide whether you want to support object storage in -your cloud. The two common use cases for providing object storage in a -compute cloud are: - -- To provide users with a persistent storage mechanism - -- As a scalable, reliable data store for virtual machine images - -Commodity Storage Back-end Technologies ---------------------------------------- - -This section provides a high-level overview of the differences among the -different commodity storage back end technologies. Depending on your -cloud user's needs, you can implement one or many of these technologies -in different combinations: - -OpenStack Object Storage (swift) - The official OpenStack Object Store implementation. It is a mature - technology that has been used for several years in production by - Rackspace as the technology behind Rackspace Cloud Files. As it is - highly scalable, it is well-suited to managing petabytes of storage. - OpenStack Object Storage's advantages are better integration with - OpenStack (integrates with OpenStack Identity, works with the - OpenStack dashboard interface) and better support for multiple data - center deployment through support of asynchronous eventual - consistency replication. 
-
-    Therefore, if you eventually plan on distributing your storage
-    cluster across multiple data centers, if you need unified accounts
-    for your users for both compute and object storage, or if you want
-    to control your object storage with the OpenStack dashboard, you
-    should consider OpenStack Object Storage. More detail about
-    OpenStack Object Storage can be found in the section below.
-
-Ceph
-    A scalable storage solution that replicates data across commodity
-    storage nodes. Ceph was originally developed by one of the founders
-    of DreamHost and is currently used in production there.
-
-    Ceph was designed to expose different types of storage interfaces to
-    the end user: it supports object storage, block storage, and
-    file-system interfaces, although the file-system interface is not
-    yet considered production-ready. Ceph supports the same API as swift
-    for object storage, can be used as a back end for cinder block
-    storage, and can serve as back-end storage for glance images. Ceph
-    supports "thin provisioning," implemented using copy-on-write.
-
-    This can be useful when booting from volume because a new volume can
-    be provisioned very quickly. Ceph also supports keystone-based
-    authentication (as of version 0.56), so it can be a seamless,
-    drop-in replacement for the default OpenStack swift implementation.
-
-    Ceph's advantages are that it gives the administrator more
-    fine-grained control over data distribution and replication
-    strategies, enables you to consolidate your object and block
-    storage, enables very fast provisioning of boot-from-volume
-    instances using thin provisioning, and supports a distributed
-    file-system interface, though this interface is `not yet
-    recommended `_ for use in
-    production deployment by the Ceph project.
-
-    If you want to manage your object and block storage within a single
-    system, or if you want to support fast boot-from-volume, you should
-    consider Ceph.
-
-Gluster
-    A distributed, shared file system. As of Gluster version 3.3, you
-    can use Gluster to consolidate your object storage and file storage
-    into one unified file and object storage solution, which is called
-    Gluster For OpenStack (GFO). GFO uses a customized version of swift
-    that enables Gluster to be used as the back-end storage.
-
-    The main reason to use GFO rather than regular swift is if you also
-    want to support a distributed file system, either to support shared
-    storage live migration or to provide it as a separate service to
-    your end users. If you want to manage your object and file storage
-    within a single system, you should consider GFO.
-
-LVM
-    The Logical Volume Manager (LVM) is a Linux-based system that
-    provides an abstraction layer on top of physical disks to expose
-    logical volumes to the operating system. The LVM back end implements
-    block storage as LVM logical volumes.
-
-    On each host that will house block storage, an administrator must
-    initially create a volume group dedicated to Block Storage volumes;
-    block devices are then created from LVM logical volumes in that
-    group.
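-
-    To make that initial setup concrete, here is a minimal sketch of
-    dedicating a disk to the Block Storage volume group. It assumes
-    ``/dev/sdb`` is a spare disk and that cinder is configured with its
-    default volume group name, ``cinder-volumes``; both values are
-    illustrative:
-
-    .. code-block:: console
-
-       # pvcreate /dev/sdb
-       # vgcreate cinder-volumes /dev/sdb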
-
-    .. note::
-
-        LVM does *not* provide any replication. Typically,
-        administrators configure RAID on nodes that use LVM as block
-        storage to protect against failures of individual hard drives.
-        However, RAID does not protect against a failure of the entire
-        host.
-
-ZFS
-    The Solaris iSCSI driver for OpenStack Block Storage implements
-    blocks as ZFS entities. ZFS is a file system that also has the
-    functionality of a volume manager. This is unlike a Linux system,
-    where there is a separation between the volume manager (LVM) and the
-    file system (such as ext3, ext4, xfs, or btrfs). ZFS has a number of
-    advantages over ext4, including improved data-integrity checking.
-
-    The ZFS back end for OpenStack Block Storage supports only
-    Solaris-based systems, such as Illumos. While there is a Linux port
-    of ZFS, it is not included in any of the standard Linux
-    distributions, and it has not been tested with OpenStack Block
-    Storage. As with LVM, ZFS does not provide replication across hosts
-    on its own; you need to add a replication solution on top of ZFS if
-    your cloud needs to be able to handle storage-node failures.
-
-    We don't recommend ZFS unless you have previous experience with
-    deploying it, since the ZFS back end for Block Storage requires a
-    Solaris-based operating system, and we assume that your experience
-    is primarily with Linux-based systems.
-
-Sheepdog
-    Sheepdog is a userspace distributed storage system. Sheepdog scales
-    to several hundred nodes and has powerful virtual disk management
-    features such as snapshotting, cloning, rollback, and thin
-    provisioning.
-
-    It is essentially an object storage system that manages disks and
-    aggregates their space and performance linearly at hyperscale on
-    commodity hardware. On top of its object store, Sheepdog provides an
-    elastic volume service and an HTTP service. Sheepdog makes no
-    assumptions about the kernel version and works with any file system
-    that supports extended attributes (xattr).
-
-Conclusion
-~~~~~~~~~~
-
-We hope that you now have some considerations in mind and questions to
-ask your future cloud users about their storage use cases. As you can
-see, your storage decisions will also influence your network design for
-performance and security needs. Continue with us to make more informed
-decisions about your OpenStack cloud design.
-
diff --git a/doc/ops-guide/source/architecture.rst b/doc/ops-guide/source/architecture.rst
deleted file mode 100644
index 475db031..00000000
--- a/doc/ops-guide/source/architecture.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-============
-Architecture
-============
-
-Designing an OpenStack cloud is a great achievement. It requires a
-robust understanding of the requirements and needs of the cloud's users
-to determine the best possible configuration to meet them. OpenStack
-provides a great deal of flexibility to achieve your needs, and this
-part of the book aims to shed light on many of the decisions you need
-to make during the process.
-
-To design, deploy, and configure OpenStack, administrators must
-understand the logical architecture. A diagram can help you envision all
-the integrated services within OpenStack and how they interact with each
-other.
-
-OpenStack modules are one of the following types:
-
-Daemon
-    Runs as a background process. On Linux platforms, a daemon is usually
-    installed as a service.
-
-Script
-    Installs a virtual environment and runs tests.
-
-Command-line interface (CLI)
-    Enables users to submit API calls to OpenStack services through commands.
-
-As shown, end users can interact through the dashboard, CLIs, and APIs.
-All services authenticate through a common Identity service, and
-individual services interact with each other through public APIs, except
-where privileged administrator commands are necessary.
-:ref:`logical_architecture` shows the most common, but not the only,
-logical architecture for an OpenStack cloud.
-
-.. _logical_architecture:
-
-.. 
figure:: figures/osog_0001.png - :width: 100% - - Figure. OpenStack Logical Architecture - -.. toctree:: - :maxdepth: 2 - - arch_examples.rst - arch_provision.rst - arch_cloud_controller.rst - arch_compute_nodes.rst - arch_scaling.rst - arch_storage.rst - arch_network_design.rst diff --git a/doc/ops-guide/source/common b/doc/ops-guide/source/common deleted file mode 120000 index dc879abe..00000000 --- a/doc/ops-guide/source/common +++ /dev/null @@ -1 +0,0 @@ -../../common \ No newline at end of file diff --git a/doc/ops-guide/source/conf.py b/doc/ops-guide/source/conf.py deleted file mode 100644 index ced05372..00000000 --- a/doc/ops-guide/source/conf.py +++ /dev/null @@ -1,289 +0,0 @@ -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or -# implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# This file is execfile()d with the current directory set to its -# containing dir. -# -# Note that not all possible configuration values are present in this -# autogenerated file. -# -# All configuration values have a default; values that are commented out -# serve to show the default. - -import os -# import sys - -import openstackdocstheme - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# sys.path.insert(0, os.path.abspath('.')) - -# -- General configuration ------------------------------------------------ - -# If your documentation needs a minimal Sphinx version, state it here. -# needs_sphinx = '1.0' - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. -extensions = [] - -# Add any paths that contain templates here, relative to this directory. -# templates_path = ['_templates'] - -# The suffix of source filenames. -source_suffix = '.rst' - -# The encoding of source files. -# source_encoding = 'utf-8-sig' - -# The master toctree document. -master_doc = 'index' - -# General information about the project. -project = u'Operations Guide' -bug_tag = u'ops-guide' -copyright = u'2016, OpenStack contributors' - -# The version info for the project you're documenting, acts as replacement for -# |version| and |release|, also used in various other places throughout the -# built documents. -# -# The short X.Y version. -version = '0.0.1' -# The full version, including alpha/beta/rc tags. -release = '0.0.1' - -# A few variables have to be set for the log-a-bug feature. -# giturl: The location of conf.py on Git. Must be set manually. -# gitsha: The SHA checksum of the bug description. Automatically extracted from git log. -# bug_tag: Tag for categorizing the bug. Must be set manually. -# These variables are passed to the logabug code via html_context. 
-giturl = u'http://git.openstack.org/cgit/openstack/operations-guide/tree/doc/ops-guide/source' -git_cmd = "/usr/bin/git log | head -n1 | cut -f2 -d' '" -gitsha = os.popen(git_cmd).read().strip('\n') -html_context = {"gitsha": gitsha, "bug_tag": bug_tag, - "giturl": giturl} - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# language = None - -# There are two options for replacing |today|: either, you set today to some -# non-false value, then it is used: -# today = '' -# Else, today_fmt is used as the format for a strftime call. -# today_fmt = '%B %d, %Y' - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -exclude_patterns = [] - -# The reST default role (used for this markup: `text`) to use for all -# documents. -# default_role = None - -# If true, '()' will be appended to :func: etc. cross-reference text. -# add_function_parentheses = True - -# If true, the current module name will be prepended to all description -# unit titles (such as .. function::). -# add_module_names = True - -# If true, sectionauthor and moduleauthor directives will be shown in the -# output. They are ignored by default. -# show_authors = False - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = 'sphinx' - -# A list of ignored prefixes for module index sorting. -# modindex_common_prefix = [] - -# If true, keep warnings as "system message" paragraphs in the built documents. -# keep_warnings = False - - -# -- Options for HTML output ---------------------------------------------- - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -html_theme = 'openstackdocs' - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. -# html_theme_options = {} - -# Add any paths that contain custom themes here, relative to this directory. -html_theme_path = [openstackdocstheme.get_html_theme_path()] - -# The name for this set of Sphinx documents. If None, it defaults to -# " v documentation". -# html_title = None - -# A shorter title for the navigation bar. Default is the same as html_title. -# html_short_title = None - -# The name of an image file (relative to this directory) to place at the top -# of the sidebar. -# html_logo = None - -# The name of an image file (within the static path) to use as favicon of the -# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 -# pixels large. -# html_favicon = None - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -# html_static_path = [] - -# Add any extra paths that contain custom files (such as robots.txt or -# .htaccess) here, relative to this directory. These files are copied -# directly to the root of the documentation. -# html_extra_path = [] - -# If not '', a 'Last updated on:' timestamp is inserted at every page bottom, -# using the given strftime format. -# So that we can enable "log-a-bug" links from each output HTML page, this -# variable must be set to a format that includes year, month, day, hours and -# minutes. 
-html_last_updated_fmt = '%Y-%m-%d %H:%M' - -# If true, SmartyPants will be used to convert quotes and dashes to -# typographically correct entities. -# html_use_smartypants = True - -# Custom sidebar templates, maps document names to template names. -# html_sidebars = {} - -# Additional templates that should be rendered to pages, maps page names to -# template names. -# html_additional_pages = {} - -# If false, no module index is generated. -# html_domain_indices = True - -# If false, no index is generated. -html_use_index = False - -# If true, the index is split into individual pages for each letter. -# html_split_index = False - -# If true, links to the reST sources are added to the pages. -html_show_sourcelink = False - -# If true, "Created using Sphinx" is shown in the HTML footer. Default is True. -# html_show_sphinx = True - -# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. -# html_show_copyright = True - -# If true, an OpenSearch description file will be output, and all pages will -# contain a tag referring to it. The value of this option must be the -# base URL from which the finished HTML is served. -# html_use_opensearch = '' - -# This is the file name suffix for HTML files (e.g. ".xhtml"). -# html_file_suffix = None - -# Output file base name for HTML help builder. -htmlhelp_basename = 'ops-guide' - -# If true, publish source files -html_copy_source = False - -# -- Options for LaTeX output --------------------------------------------- - -latex_elements = { - # The paper size ('letterpaper' or 'a4paper'). - # 'papersize': 'letterpaper', - - # The font size ('10pt', '11pt' or '12pt'). - # 'pointsize': '10pt', - - # Additional stuff for the LaTeX preamble. - # 'preamble': '', -} - -# Grouping the document tree into LaTeX files. List of tuples -# (source start file, target name, title, -# author, documentclass [howto, manual, or own class]). -latex_documents = [ - ('index', 'OpsGuide.tex', u'Operations Guide', - u'OpenStack contributors', 'manual'), -] - -# The name of an image file (relative to this directory) to place at the top of -# the title page. -# latex_logo = None - -# For "manual" documents, if this is true, then toplevel headings are parts, -# not chapters. -# latex_use_parts = False - -# If true, show page references after internal links. -# latex_show_pagerefs = False - -# If true, show URL addresses after external links. -# latex_show_urls = False - -# Documents to append as an appendix to all manuals. -# latex_appendices = [] - -# If false, no module index is generated. -# latex_domain_indices = True - - -# -- Options for manual page output --------------------------------------- - -# One entry per manual page. List of tuples -# (source start file, name, description, authors, manual section). -man_pages = [ - ('index', 'opsguide', u'Operations Guide', - [u'OpenStack contributors'], 1) -] - -# If true, show URL addresses after external links. -# man_show_urls = False - - -# -- Options for Texinfo output ------------------------------------------- - -# Grouping the document tree into Texinfo files. List of tuples -# (source start file, target name, title, author, -# dir menu entry, description, category) -texinfo_documents = [ - ('index', 'OpsGuide', u'Operations Guide', - u'OpenStack contributors', 'OpsGuide', - 'This book provides information about designing and operating ' - 'OpenStack clouds.', 'Miscellaneous'), -] - -# Documents to append as an appendix to all manuals. 
-# texinfo_appendices = [] - -# If false, no module index is generated. -# texinfo_domain_indices = True - -# How to display URL addresses: 'footnote', 'no', or 'inline'. -# texinfo_show_urls = 'footnote' - -# If true, do not generate a @detailmenu in the "Top" node's menu. -# texinfo_no_detailmenu = False - -# -- Options for Internationalization output ------------------------------ -locale_dirs = ['locale/'] diff --git a/doc/ops-guide/source/figures b/doc/ops-guide/source/figures deleted file mode 120000 index 7a7db2ad..00000000 --- a/doc/ops-guide/source/figures +++ /dev/null @@ -1 +0,0 @@ -../../figures/ \ No newline at end of file diff --git a/doc/ops-guide/source/index.rst b/doc/ops-guide/source/index.rst deleted file mode 100644 index bd33cc91..00000000 --- a/doc/ops-guide/source/index.rst +++ /dev/null @@ -1,26 +0,0 @@ -========================== -OpenStack Operations Guide -========================== - -Abstract -~~~~~~~~ - -This book provides information about designing and operating OpenStack clouds. - - -Contents -~~~~~~~~ - -.. toctree:: - :maxdepth: 2 - - acknowledgements.rst - preface_ops.rst - architecture.rst - operations.rst - app_usecases.rst - app_crypt.rst - app_roadmaps.rst - app_resources.rst - common/app_support.rst - common/glossary.rst diff --git a/doc/ops-guide/source/operations.rst b/doc/ops-guide/source/operations.rst deleted file mode 100644 index 4372eba3..00000000 --- a/doc/ops-guide/source/operations.rst +++ /dev/null @@ -1,41 +0,0 @@ -========== -Operations -========== - -Congratulations! By now, you should have a solid design for your cloud. -We now recommend that you turn to the `OpenStack Installation Guides -`_, which contains a -step-by-step guide on how to manually install the OpenStack packages and -dependencies on your cloud. - -While it is important for an operator to be familiar with the steps -involved in deploying OpenStack, we also strongly encourage you to -evaluate configuration-management tools, such as :term:`Puppet` or -:term:`Chef`, which can help automate this deployment process. - -In the remainder of this guide, we assume that you have successfully -deployed an OpenStack cloud and are able to perform basic operations -such as adding images, booting instances, and attaching volumes. - -As your focus turns to stable operations, we recommend that you do skim -the remainder of this book to get a sense of the content. Some of this -content is useful to read in advance so that you can put best practices -into effect to simplify your life in the long run. Other content is more -useful as a reference that you might turn to when an unexpected event -occurs (such as a power failure), or to troubleshoot a particular -problem. - -.. toctree:: - :maxdepth: 2 - - ops_lay_of_the_land.rst - ops_projects_users.rst - ops_user_facing_operations.rst - ops_maintenance.rst - ops_network_troubleshooting.rst - ops_logging_monitoring.rst - ops_backup_recovery.rst - ops_customize.rst - ops_upstream.rst - ops_advanced_configuration.rst - ops_upgrades.rst diff --git a/doc/ops-guide/source/ops_advanced_configuration.rst b/doc/ops-guide/source/ops_advanced_configuration.rst deleted file mode 100644 index 8c75ea67..00000000 --- a/doc/ops-guide/source/ops_advanced_configuration.rst +++ /dev/null @@ -1,163 +0,0 @@ -====================== -Advanced Configuration -====================== - -OpenStack is intended to work well across a variety of installation -flavors, from very small private clouds to large public clouds. 
To achieve this, the developers add configuration options to their code
-that allow the behavior of the various components to be tweaked
-depending on your needs. Unfortunately, it is not possible to cover all
-possible deployments with the default configuration values.
-
-At the time of writing, OpenStack has more than 3,000 configuration
-options. You can see them documented at the
-`OpenStack configuration reference
-guide `_.
-This chapter cannot hope to document all of them, but we do try to
-introduce the important concepts so that you know where to go digging
-for more information.
-
-Differences Between Various Drivers
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Many OpenStack projects implement a driver layer, and each of these
-drivers will implement its own configuration options. For example, in
-OpenStack Compute (nova), there are various hypervisor drivers
-implemented, such as libvirt, xenserver, hyper-v, and vmware. Not
-all of these hypervisor drivers have the same features, and each has
-different tuning requirements.
-
-.. note::
-
-   The currently implemented hypervisors are listed on the `OpenStack
-   documentation
-   website `_.
-   You can see a matrix of the various features in OpenStack Compute
-   (nova) hypervisor drivers on the OpenStack wiki at the `Hypervisor
-   support matrix
-   page `_.
-
-The point we are trying to make here is that just because an option
-exists doesn't mean that option is relevant to your driver choices.
-Normally, the documentation notes which drivers the configuration
-applies to.
-
-Implementing Periodic Tasks
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Another common concept across various OpenStack projects is that of
-periodic tasks. Periodic tasks are much like cron jobs on traditional
-Unix systems, but they are run inside an OpenStack process. For example,
-when OpenStack Compute (nova) needs to work out what images it can
-remove from its local cache, it runs a periodic task to do this.
-
-Periodic tasks are important to understand because of limitations in the
-threading model that OpenStack uses. OpenStack uses cooperative
-threading in Python, which means that if something long and complicated
-is running, it will block other tasks inside that process from running
-unless it voluntarily yields execution to another cooperative thread.
-
-A tangible example of this is the ``nova-compute`` process. In order to
-manage the image cache with libvirt, ``nova-compute`` has a periodic
-process that scans the contents of the image cache. Part of this scan is
-calculating a checksum for each of the images and making sure that
-checksum matches what ``nova-compute`` expects it to be. However, images
-can be very large, and these checksums can take a long time to generate.
-At one point, before it was reported as a bug and fixed,
-``nova-compute`` would block on this task and stop responding to RPC
-requests. This was visible to users as failure of operations such as
-spawning or deleting instances.
-
-The takeaway from this is that if you observe an OpenStack process that
-appears to "stop" for a while and then continues to process normally,
-you should check whether periodic tasks are the problem. One way to do
-this is to disable the periodic tasks by setting their interval to zero.
-Additionally, you can configure how often these periodic tasks run—in
-some cases, it might make sense to run them at a different frequency
-from the default.
-
-The frequency is defined separately for each periodic task. Therefore,
-to disable every periodic task in OpenStack Compute (nova), you would
-need to set a number of configuration options to zero. The current list
-of configuration options you would need to set to zero is:
-
-- ``bandwidth_poll_interval``
-
-- ``sync_power_state_interval``
-
-- ``heal_instance_info_cache_interval``
-
-- ``host_state_interval``
-
-- ``image_cache_manager_interval``
-
-- ``reclaim_instance_interval``
-
-- ``volume_usage_poll_interval``
-
-- ``shelved_poll_interval``
-
-- ``shelved_offload_time``
-
-- ``instance_delete_interval``
-
-To set a configuration option to zero, include a line such as
-``image_cache_manager_interval=0`` in your ``nova.conf`` file.
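-
-As a concrete illustration, the following ``nova.conf`` snippet disables
-a few of the periodic tasks listed above. It assumes these options live
-in the ``[DEFAULT]`` section, as they did at the time of writing; verify
-the option names against your release's configuration reference:
-
-.. code-block:: ini
-
-   [DEFAULT]
-   # A value of zero disables the corresponding periodic task
-   bandwidth_poll_interval = 0
-   image_cache_manager_interval = 0
-   reclaim_instance_interval = 0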
-
-This list will change between releases, so please refer to your
-configuration guide for up-to-date information.
-
-Specific Configuration Topics
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-This section covers specific examples of configuration options you might
-consider tuning. It is by no means an exhaustive list.
-
-Security Configuration for Compute, Networking, and Storage
------------------------------------------------------------
-
-The `OpenStack Security Guide `_
-provides a deep dive into securing an OpenStack cloud, including
-SSL/TLS, key management, PKI and certificate management, data transport
-and privacy concerns, and compliance.
-
-High Availability
------------------
-
-The `OpenStack High Availability
-Guide `_ offers
-suggestions for elimination of a single point of failure that could
-cause system downtime. While it is not a completely prescriptive
-document, it offers methods and techniques for avoiding downtime and
-data loss.
-
-Enabling IPv6 Support
----------------------
-
-You can follow the progress being made on IPv6 support by watching the
-`neutron IPv6 Subteam at
-work `_.
-
-By modifying your configuration setup, you can set up IPv6 when using
-``nova-network`` for networking, and a tested setup is documented for
-FlatDHCP and a multi-host configuration. The key is to make
-``nova-network`` think a ``radvd`` command ran successfully. The entire
-configuration is detailed in a Cybera blog post, `“An IPv6 enabled
-cloud” `_.
-
-Geographical Considerations for Object Storage
-----------------------------------------------
-
-Support for global clustering of object storage servers is available for
-all supported releases. You would implement these global clusters to
-ensure replication across geographic areas in case of a natural disaster
-and also to ensure that users can write or access their objects more
-quickly from the closest data center. You configure a default region
-with one zone for each cluster. As you add zones and build rings that
-span them, be sure your wide area network (WAN) can handle the
-additional request and response load between zones. Refer to
-`Geographically Distributed
-Clusters `_
-in the documentation for additional information.
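-
-To make the region-and-zone layout concrete, here is a hedged sketch of
-creating an object ring that spans two regions with
-``swift-ring-builder``; the part power, replica count, IP addresses, and
-device names are illustrative only:
-
-.. code-block:: console
-
-   $ swift-ring-builder object.builder create 10 3 1
-   $ swift-ring-builder object.builder add r1z1-10.0.0.1:6000/sda1 100
-   $ swift-ring-builder object.builder add r2z1-10.1.0.1:6000/sda1 100
-   $ swift-ring-builder object.builder rebalance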
-
diff --git a/doc/ops-guide/source/ops_backup_recovery.rst b/doc/ops-guide/source/ops_backup_recovery.rst
deleted file mode 100644
index 02f19071..00000000
--- a/doc/ops-guide/source/ops_backup_recovery.rst
+++ /dev/null
@@ -1,203 +0,0 @@
-===================
-Backup and Recovery
-===================
-
-Standard backup best practices apply when creating your OpenStack backup
-policy. For example, how often to back up your data is closely related
-to how quickly you need to recover from data loss.
-
-.. note::
-
-   If you cannot have any data loss at all, you should also focus on a
-   highly available deployment. The `OpenStack High Availability
-   Guide `_ offers
-   suggestions for elimination of a single point of failure that could
-   cause system downtime. While it is not a completely prescriptive
-   document, it offers methods and techniques for avoiding downtime and
-   data loss.
-
-Other backup considerations include:
-
-- How many backups should be kept?
-
-- Should backups be kept off-site?
-
-- How often should backups be tested?
-
-Just as important as a backup policy is a recovery policy (or at least
-recovery testing).
-
-What to Back Up
-~~~~~~~~~~~~~~~
-
-While OpenStack is composed of many components and moving parts, backing
-up the critical data is quite simple.
-
-This chapter describes only how to back up configuration files and
-databases that the various OpenStack components need to run. This
-chapter does not describe how to back up objects inside Object Storage
-or data contained inside Block Storage. Generally, these areas are left
-for users to back up on their own.
-
-Database Backups
-~~~~~~~~~~~~~~~~
-
-The example OpenStack architecture designates the cloud controller as
-the MySQL server. This MySQL server hosts the databases for nova,
-glance, cinder, and keystone. With all of these databases in one place,
-it's very easy to create a database backup:
-
-.. code-block:: console
-
-   # mysqldump --opt --all-databases > openstack.sql
-
-If you want to back up only a single database, you can instead run:
-
-.. code-block:: console
-
-   # mysqldump --opt nova > nova.sql
-
-where ``nova`` is the database you want to back up.
-
-You can easily automate this process by creating a cron job that runs
-the following script once per day:
-
-.. code-block:: bash
-
-   #!/bin/bash
-   backup_dir="/var/lib/backups/mysql"
-   filename="${backup_dir}/mysql-`hostname`-`date +%Y%m%d`.sql.gz"
-   # Dump the entire MySQL database
-   /usr/bin/mysqldump --opt --all-databases | gzip > $filename
-   # Delete backups older than 7 days
-   find $backup_dir -ctime +7 -type f -delete
-
-This script dumps the entire MySQL database and deletes any backups
-older than seven days.
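-
-For example, if the script above is saved as
-``/usr/local/bin/mysql-backup.sh`` (an illustrative path), a root
-crontab entry such as the following runs it every night at 02:00:
-
-.. code-block:: console
-
-   0 2 * * * /usr/local/bin/mysql-backup.sh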
-
-File System Backups
-~~~~~~~~~~~~~~~~~~~
-
-This section discusses which files and directories should be backed up
-regularly, organized by service.
-
-Compute
--------
-
-The ``/etc/nova`` directory on both the cloud controller and compute
-nodes should be regularly backed up.
-
-``/var/log/nova`` does not need to be backed up if you have all logs
-going to a central area. It is highly recommended to use a central
-logging server or to back up the log directory.
-
-``/var/lib/nova`` is another important directory to back up. The
-exception to this is the ``/var/lib/nova/instances`` subdirectory on
-compute nodes. This subdirectory contains the KVM images of running
-instances. You would want to back up this directory only if you need to
-maintain backup copies of all instances. Under most circumstances, you
-do not need to do this, but this can vary from cloud to cloud and your
-service levels. Also be aware that making a backup of a live KVM
-instance can cause that instance to not boot properly if it is ever
-restored from a backup.
-
-Image Catalog and Delivery
---------------------------
-
-``/etc/glance`` and ``/var/log/glance`` follow the same rules as their
-nova counterparts.
-
-``/var/lib/glance`` should also be backed up. Take special notice of
-``/var/lib/glance/images``. If you are using a file-based back end for
-glance, ``/var/lib/glance/images`` is where the images are stored, so
-handle this directory with particular care.
-
-There are two ways to ensure the safety of this directory. The first is
-to make sure it resides on a RAID array, so that the directory survives
-the failure of a single disk. The second is to use a tool such as rsync
-to replicate the images to another server:
-
-.. code-block:: console
-
-   # rsync -az --progress /var/lib/glance/images \
-     backup-server:/var/lib/glance/images/
-
-Identity
---------
-
-``/etc/keystone`` and ``/var/log/keystone`` follow the same rules as
-other components.
-
-``/var/lib/keystone``, although it should not contain any data being
-used, can also be backed up just in case.
-
-Block Storage
--------------
-
-``/etc/cinder`` and ``/var/log/cinder`` follow the same rules as other
-components.
-
-``/var/lib/cinder`` should also be backed up.
-
-Object Storage
---------------
-
-``/etc/swift`` is very important to have backed up. This directory
-contains the swift configuration files as well as the ring files and
-ring :term:`builder files <builder file>`, which, if lost, render the
-data on your cluster inaccessible. A best practice is to copy the
-builder files to all storage nodes along with the ring files, so that
-multiple backup copies are spread throughout your storage cluster.
-
-Recovering Backups
-~~~~~~~~~~~~~~~~~~
-
-Recovering backups is a fairly simple process. To begin, first ensure
-that the service you are recovering is not running. For example, to do a
-full recovery of ``nova`` on the cloud controller, first stop all
-``nova`` services:
-
-.. code-block:: console
-
-   # stop nova-api
-   # stop nova-cert
-   # stop nova-consoleauth
-   # stop nova-novncproxy
-   # stop nova-objectstore
-   # stop nova-scheduler
-
-Now you can import a previously backed-up database:
-
-.. code-block:: console
-
-   # mysql nova < nova.sql
-
-You can also restore backed-up nova directories:
-
-.. code-block:: console
-
-   # mv /etc/nova{,.orig}
-   # cp -a /path/to/backup/nova /etc/
-
-Once the files are restored, start everything back up:
-
-.. code-block:: console
-
-   # start mysql
-   # for i in nova-api nova-cert nova-consoleauth nova-novncproxy \
-   > nova-objectstore nova-scheduler
-   > do
-   > start $i
-   > done
-
-Other services follow the same process, with their respective
-directories and databases.
-
-Summary
-~~~~~~~
-
-Backup and subsequent recovery is one of the first tasks system
-administrators learn. However, each system has different items that need
-attention. By taking care of your database, image service, and
-appropriate file system locations, you can be assured that you can
-handle any event requiring recovery.
diff --git a/doc/ops-guide/source/ops_customize.rst b/doc/ops-guide/source/ops_customize.rst
deleted file mode 100644
index 028498a1..00000000
--- a/doc/ops-guide/source/ops_customize.rst
+++ /dev/null
@@ -1,850 +0,0 @@
-=============
-Customization
-=============
-
-OpenStack might not do everything you need it to do out of the box. To
-add a new feature, you can follow different paths.
-
-To take the first path, you can modify the OpenStack code directly.
-Learn `how to
-contribute `_,
-follow the `code review
-workflow `_, make your
-changes, and contribute them back to the upstream OpenStack project.
-This path is recommended if the feature you need requires deep
-integration with an existing project. The community is always open to
-contributions and welcomes new functionality that follows the
-feature-development guidelines. 
This path still requires you to use -DevStack for testing your feature additions, so this chapter walks you -through the DevStack environment. - -For the second path, you can write new features and plug them in using -changes to a configuration file. If the project where your feature would -need to reside uses the Python Paste framework, you can create -middleware for it and plug it in through configuration. There may also -be specific ways of customizing a project, such as creating a new -scheduler driver for Compute or a custom tab for the dashboard. - -This chapter focuses on the second path for customizing OpenStack by -providing two examples for writing new features. The first example shows -how to modify Object Storage (swift) middleware to add a new feature, -and the second example provides a new scheduler feature for OpenStack -Compute (nova). To customize OpenStack this way you need a development -environment. The best way to get an environment up and running quickly -is to run DevStack within your cloud. - -Create an OpenStack Development Environment -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -To create a development environment, you can use DevStack. DevStack is -essentially a collection of shell scripts and configuration files that -builds an OpenStack development environment for you. You use it to -create such an environment for developing a new feature. - -You can find all of the documentation at the -`DevStack `_ website. - -**To run DevStack on an instance in your OpenStack cloud:** - -#. Boot an instance from the dashboard or the nova command-line interface - (CLI) with the following parameters: - - - Name: devstack - - - Image: Ubuntu 14.04 LTS - - - Memory Size: 4 GB RAM - - - Disk Size: minimum 5 GB - - If you are using the ``nova`` client, specify :option:`--flavor 3` for the - :command:`nova boot` command to get adequate memory and disk sizes. - -#. Log in and set up DevStack. Here's an example of the commands you can - use to set up DevStack on a virtual machine: - - #. Log in to the instance: - - .. code-block:: console - - $ ssh username@my.instance.ip.address - - #. Update the virtual machine's operating system: - - .. code-block:: console - - # apt-get -y update - - #. Install git: - - .. code-block:: console - - # apt-get -y install git - - #. Clone the ``devstack`` repository: - - .. code-block:: console - - $ git clone https://git.openstack.org/openstack-dev/devstack - - #. Change to the ``devstack`` repository: - - .. code-block:: console - - $ cd devstack - -#. (Optional) If you've logged in to your instance as the root user, you - must create a "stack" user; otherwise you'll run into permission issues. - If you've logged in as a user other than root, you can skip these steps: - - #. Run the DevStack script to create the stack user: - - .. code-block:: console - - # tools/create-stack-user.sh - - #. Give ownership of the ``devstack`` directory to the stack user: - - .. code-block:: console - - # chown -R stack:stack /root/devstack - - #. Set some permissions you can use to view the DevStack screen later: - - .. code-block:: console - - # chmod o+rwx /dev/pts/0 - - #. Switch to the stack user: - - .. code-block:: console - - $ su stack - -#. Edit the ``local.conf`` configuration file that controls what DevStack - will deploy. Copy the example ``local.conf`` file at the end of this - section (:ref:`local.conf`): - - .. code-block:: console - - $ vim local.conf - -#. Run the stack script that will install OpenStack: - - .. 
code-block:: console
-
-      $ ./stack.sh
-
-#. When the stack script is done, you can open the screen session it
-   started to view all of the running OpenStack services:
-
-   .. code-block:: console
-
-      $ screen -r stack
-
-#. Press ``Ctrl+A`` followed by 0 to go to the first ``screen`` window.
-
-.. note::
-
-   - The ``stack.sh`` script takes a while to run. Perhaps you can
-     take this opportunity to `join the OpenStack
-     Foundation `__.
-
-   - ``Screen`` is a useful program for viewing many related services
-     at once. For more information, see the `GNU screen quick
-     reference `__.
-
-Now that you have an OpenStack development environment, you're free to
-hack around without worrying about damaging your production deployment.
-:ref:`local.conf` provides a working environment for
-running OpenStack Identity, Compute, Block Storage, Image service, the
-OpenStack dashboard, and Object Storage as the starting point.
-
-.. _local.conf:
-
-local.conf
-----------
-
-.. code-block:: bash
-
-   [[local|localrc]]
-   FLOATING_RANGE=192.168.1.224/27
-   FIXED_RANGE=10.11.12.0/24
-   FIXED_NETWORK_SIZE=256
-   FLAT_INTERFACE=eth0
-   ADMIN_PASSWORD=supersecret
-   DATABASE_PASSWORD=iheartdatabases
-   RABBIT_PASSWORD=flopsymopsy
-   SERVICE_PASSWORD=iheartksl
-   SERVICE_TOKEN=xyzpdqlazydog
-
-Customizing Object Storage (Swift) Middleware
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-OpenStack Object Storage, known as swift when reading the code, is based
-on the Python `Paste `_ framework. The best
-introduction to its architecture is `A Do-It-Yourself
-Framework `_.
-Because the swift project uses this framework, you can add features to
-the project by placing custom code in its pipeline without having to
-change any of the core code.
-
-Imagine a scenario where you have public access to one of your
-containers, but you really want to restrict access to it to a set of IP
-addresses based on a whitelist. In this example, we'll create a piece
-of middleware for swift that allows access to a container from only a
-set of IP addresses, as determined by the container's metadata items.
-Only those IP addresses that you explicitly whitelist using the
-container's metadata will be able to access the container.
-
-.. warning::
-
-   This example is for illustrative purposes only. It should not be
-   used as a container IP whitelist solution without further
-   development and extensive security testing.
-
-When you join the screen session that ``stack.sh`` starts with
-``screen -r stack``, you see a screen for each service running, which
-can be a few or several, depending on how many services you configured
-DevStack to run.
-
-The asterisk (*) indicates which screen window you are viewing. This
-example shows we are viewing the key (for keystone) screen window:
-
-
-.. code-block:: console
-
-   0$ shell  1$ key*  2$ horizon  3$ s-proxy  4$ s-object  5$ s-container  6$ s-account
-
-The purpose of each screen window is as follows:
-
-
-``shell``
-    A shell where you can get some work done
-
-``key*``
-    The keystone service
-
-``horizon``
-    The horizon dashboard web application
-
-``s-{name}``
-    The swift services
-
-**To create the middleware and plug it in through Paste configuration:**
-
-All of the code for OpenStack lives in ``/opt/stack``. Go to the swift
-directory in the ``shell`` screen and edit your middleware module.
-
-#. Change to the directory where Object Storage is installed:
-
-   .. code-block:: console
-
-      $ cd /opt/stack/swift
-
-#. Create the ``ip_whitelist.py`` Python source code file:
-
-   .. 
code-block:: console - - $ vim swift/common/middleware/ip_whitelist.py - -#. Copy the code as shown below into ``ip_whitelist.py``. - The following code is a middleware example that - restricts access to a container based on IP address as explained at the - beginning of the section. Middleware passes the request on to another - application. This example uses the swift "swob" library to wrap Web - Server Gateway Interface (WSGI) requests and responses into objects for - swift to interact with. When you're done, save and close the file. - - .. code-block:: python - - # vim: tabstop=4 shiftwidth=4 softtabstop=4 - # Copyright (c) 2014 OpenStack Foundation - # All Rights Reserved. - # - # Licensed under the Apache License, Version 2.0 (the "License"); you may - # not use this file except in compliance with the License. You may obtain - # a copy of the License at - # - # http://www.apache.org/licenses/LICENSE-2.0 - # - # Unless required by applicable law or agreed to in writing, software - # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - # License for the specific language governing permissions and limitations - # under the License. - - import socket - - from swift.common.utils import get_logger - from swift.proxy.controllers.base import get_container_info - from swift.common.swob import Request, Response - - class IPWhitelistMiddleware(object): - """ - IP Whitelist Middleware - - Middleware that allows access to a container from only a set of IP - addresses as determined by the container's metadata items that start - with the prefix 'allow'. E.G. allow-dev=192.168.0.20 - """ - - def __init__(self, app, conf, logger=None): - self.app = app - - if logger: - self.logger = logger - else: - self.logger = get_logger(conf, log_route='ip_whitelist') - - self.deny_message = conf.get('deny_message', "IP Denied") - self.local_ip = socket.gethostbyname(socket.gethostname()) - - def __call__(self, env, start_response): - """ - WSGI entry point. - Wraps env in swob.Request object and passes it down. - - :param env: WSGI environment dictionary - :param start_response: WSGI callable - """ - req = Request(env) - - try: - version, account, container, obj = req.split_path(1, 4, True) - except ValueError: - return self.app(env, start_response) - - container_info = get_container_info( - req.environ, self.app, swift_source='IPWhitelistMiddleware') - - remote_ip = env['REMOTE_ADDR'] - self.logger.debug("Remote IP: %(remote_ip)s", - {'remote_ip': remote_ip}) - - meta = container_info['meta'] - allow = {k:v for k,v in meta.iteritems() if k.startswith('allow')} - allow_ips = set(allow.values()) - allow_ips.add(self.local_ip) - self.logger.debug("Allow IPs: %(allow_ips)s", - {'allow_ips': allow_ips}) - - if remote_ip in allow_ips: - return self.app(env, start_response) - else: - self.logger.debug( - "IP %(remote_ip)s denied access to Account=%(account)s " - "Container=%(container)s. Not in %(allow_ips)s", locals()) - return Response( - status=403, - body=self.deny_message, - request=req)(env, start_response) - - - def filter_factory(global_conf, **local_conf): - """ - paste.deploy app factory for creating WSGI proxy apps. - """ - conf = global_conf.copy() - conf.update(local_conf) - - def ip_whitelist(app): - return IPWhitelistMiddleware(app, conf) - return ip_whitelist - - - There is a lot of useful information in ``env`` and ``conf`` that you - can use to decide what to do with the request. 
To find out more about - what properties are available, you can insert the following log - statement into the ``__init__`` method: - - .. code-block:: python - - self.logger.debug("conf = %(conf)s", locals()) - - - and the following log statement into the ``__call__`` method: - - .. code-block:: python - - self.logger.debug("env = %(env)s", locals()) - -#. To plug this middleware into the swift Paste pipeline, you edit one - configuration file, ``/etc/swift/proxy-server.conf``: - - .. code-block:: console - - $ vim /etc/swift/proxy-server.conf - -#. Find the ``[filter:ratelimit]`` section in - ``/etc/swift/proxy-server.conf``, and copy in the following - configuration section after it: - - .. code-block:: ini - - [filter:ip_whitelist] - paste.filter_factory = swift.common.middleware.ip_whitelist:filter_factory - # You can override the default log routing for this filter here: - # set log_name = ratelimit - # set log_facility = LOG_LOCAL0 - # set log_level = INFO - # set log_headers = False - # set log_address = /dev/log - deny_message = You shall not pass! - -#. Find the ``[pipeline:main]`` section in - ``/etc/swift/proxy-server.conf``, and add ``ip_whitelist`` after - ratelimit to the list like so. When you're done, save and close the - file: - - .. code-block:: ini - - [pipeline:main] - pipeline = catch_errors gatekeeper healthcheck proxy-logging cache bulk tempurl ratelimit ip_whitelist ... - -#. Restart the ``swift proxy`` service to make swift use your middleware. - Start by switching to the ``swift-proxy`` screen: - - #. Press **Ctrl+A** followed by 3. - - #. Press **Ctrl+C** to kill the service. - - #. Press Up Arrow to bring up the last command. - - #. Press Enter to run it. - -#. Test your middleware with the ``swift`` CLI. Start by switching to the - shell screen and finish by switching back to the ``swift-proxy`` screen - to check the log output: - - #. Press  **Ctrl+A** followed by 0. - - #. Make sure you're in the ``devstack`` directory: - - .. code-block:: console - - $ cd /root/devstack - - #. Source openrc to set up your environment variables for the CLI: - - .. code-block:: console - - $ source openrc - - #. Create a container called ``middleware-test``: - - .. code-block:: console - - $ swift post middleware-test - - #. Press **Ctrl+A** followed by 3 to check the log output. - -#. Among the log statements you'll see the lines: - - .. code-block:: ini - - proxy-server Remote IP: my.instance.ip.address (txn: ...) - proxy-server Allow IPs: set(['my.instance.ip.address']) (txn: ...) - - These two statements are produced by our middleware and show that the - request was sent from our DevStack instance and was allowed. - -#. Test the middleware from outside DevStack on a remote machine that has - access to your DevStack instance: - - #. Install the ``keystone`` and ``swift`` clients on your local machine: - - .. code-block:: console - - # pip install python-keystoneclient python-swiftclient - - #. Attempt to list the objects in the ``middleware-test`` container: - - .. code-block:: console - - $ swift --os-auth-url=http://my.instance.ip.address:5000/v2.0/ \ - --os-region-name=RegionOne --os-username=demo:demo \ - --os-password=devstack list middleware-test - Container GET failed: http://my.instance.ip.address:8080/v1/AUTH_.../ - middleware-test?format=json 403 Forbidden   You shall not pass! - -#. Press **Ctrl+A** followed by 3 to check the log output. Look at the - swift log statements again, and among the log statements, you'll see the - lines: - - .. 
code-block:: console
-
-      proxy-server Authorizing from an overriding middleware (i.e: tempurl) (txn: ...)
-      proxy-server ... IPWhitelistMiddleware
-      proxy-server Remote IP: my.local.ip.address (txn: ...)
-      proxy-server Allow IPs: set(['my.instance.ip.address']) (txn: ...)
-      proxy-server IP my.local.ip.address denied access to Account=AUTH_... \
-      Container=None. Not in set(['my.instance.ip.address']) (txn: ...)
-
-   Here we can see that the request was denied because the remote IP
-   address wasn't in the set of allowed IPs.
-
-#. Back in your DevStack instance on the shell screen, add some metadata to
-   your container to allow the request from the remote machine:
-
-   #. Press **Ctrl+A** followed by 0.
-
-   #. Add metadata to the container to allow the IP:
-
-      .. code-block:: console
-
-         $ swift post --meta allow-dev:my.local.ip.address middleware-test
-
-   #. Now try the command from Step 10 again and it succeeds. There are no
-      objects in the container, so there is nothing to list; however, there is
-      also no error to report.
-
-.. warning::
-
-   Functional testing like this is not a replacement for proper unit
-   and integration testing, but it serves to get you started.
-
-You can follow a similar pattern in other projects that use the Python
-Paste framework. Simply create a middleware module and plug it in
-through configuration. The middleware runs in sequence as part of that
-project's pipeline and can call out to other services as necessary. No
-project core code is touched. Look for a ``pipeline`` value in the
-project's ``conf`` or ``ini`` configuration files in ``/etc/``
-to identify projects that use Paste.
-
-When your middleware is done, we encourage you to open source it and let
-the community know on the OpenStack mailing list. Perhaps others need
-the same functionality. They can use your code, provide feedback, and
-possibly contribute. If enough support exists for it, perhaps you can
-propose that it be added to the official swift
-`middleware `_.
-
-Customizing the OpenStack Compute (nova) Scheduler
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Many OpenStack projects allow for customization of specific features
-using a driver architecture. You can write a driver that conforms to a
-particular interface and plug it in through configuration. For example,
-you can easily plug in a new scheduler for Compute. The existing
-schedulers for Compute are full featured and well documented at
-`Scheduling `_.
-However, depending on your users' use cases, the existing schedulers
-might not meet your requirements. You might need to create a new
-scheduler.
-
-To create a scheduler, you must inherit from the class
-``nova.scheduler.driver.Scheduler``. Of the five methods that you can
-override, you *must* override the two methods marked with an asterisk
-(\*) below:
-
-- ``update_service_capabilities``
-
-- ``hosts_up``
-
-- ``group_hosts``
-
-- \* ``schedule_run_instance``
-
-- \* ``select_destinations``
-
-To demonstrate customizing OpenStack, we'll create an example of a
-Compute scheduler that randomly places an instance on a subset of hosts,
-depending on the originating IP address of the request and the prefix of
-the hostname. Such an example could be useful when you have a group of
-users on a subnet and you want all of their instances to start within
-some subset of your hosts.
-
-.. warning::
-
-   This example is for illustrative purposes only. It should not be
-   used as a scheduler for Compute without further development and
-   testing.
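-
-Before walking through the full example, the following minimal sketch
-shows the shape such a driver takes. The method signatures mirror those
-used in the example below; the bodies are placeholders until real
-placement logic is added:
-
-.. code-block:: python
-
-   from nova.scheduler import driver
-
-   class MinimalScheduler(driver.Scheduler):
-       """Skeleton scheduler: the two required methods are overridden."""
-
-       def schedule_run_instance(self, context, request_spec,
-                                 admin_password, injected_files,
-                                 requested_networks, is_first_time,
-                                 filter_properties, legacy_bdm_in_spec):
-           # Create and run the requested instance(s) on hosts you select.
-           raise NotImplementedError()
-
-       def select_destinations(self, context, request_spec,
-                               filter_properties):
-           # Return a list of dicts with 'host', 'nodename', and 'limits'
-           # keys for each requested instance.
-           raise NotImplementedError()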
- -When you join the screen session that ``stack.sh`` starts with -``screen -r stack``, you are greeted with many screen windows: - -.. code-block:: console - - 0$ shell*  1$ key  2$ horizon  ...  9$ n-api  ...  14$ n-sch ... - - -``shell`` - A shell where you can get some work done - -``key`` - The keystone service - -``horizon`` - The horizon dashboard web application - -``n-{name}`` - The nova services - -``n-sch`` - The nova scheduler service - -**To create the scheduler and plug it in through configuration** - -#. The code for OpenStack lives in ``/opt/stack``, so go to the ``nova`` - directory and edit your scheduler module. Change to the directory where - ``nova`` is installed: - - .. code-block:: console - - $ cd /opt/stack/nova - -#. Create the ``ip_scheduler.py`` Python source code file: - - .. code-block:: console - - $ vim nova/scheduler/ip_scheduler.py - -#. The code shown below is a driver that will - schedule servers to hosts based on IP address as explained at the - beginning of the section. Copy the code into ``ip_scheduler.py``. When - you're done, save and close the file. - - .. code-block:: python - - # vim: tabstop=4 shiftwidth=4 softtabstop=4 - # Copyright (c) 2014 OpenStack Foundation - # All Rights Reserved. - # - # Licensed under the Apache License, Version 2.0 (the "License"); you may - # not use this file except in compliance with the License. You may obtain - # a copy of the License at - # - # http://www.apache.org/licenses/LICENSE-2.0 - # - # Unless required by applicable law or agreed to in writing, software - # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - # License for the specific language governing permissions and limitations - # under the License. - - """ - IP Scheduler implementation - """ - - import random - - from oslo.config import cfg - - from nova.compute import rpcapi as compute_rpcapi - from nova import exception - from nova.openstack.common import log as logging - from nova.openstack.common.gettextutils import _ - from nova.scheduler import driver - - CONF = cfg.CONF - CONF.import_opt('compute_topic', 'nova.compute.rpcapi') - LOG = logging.getLogger(__name__) - - class IPScheduler(driver.Scheduler): - """ - Implements Scheduler as a random node selector based on - IP address and hostname prefix. 
- """ - - def __init__(self, *args, **kwargs): - super(IPScheduler, self).__init__(*args, **kwargs) - self.compute_rpcapi = compute_rpcapi.ComputeAPI() - - def _filter_hosts(self, request_spec, hosts, filter_properties, - hostname_prefix): - """Filter a list of hosts based on hostname prefix.""" - - hosts = [host for host in hosts if host.startswith(hostname_prefix)] - return hosts - - def _schedule(self, context, topic, request_spec, filter_properties): - """Picks a host that is up at random.""" - - elevated = context.elevated() - hosts = self.hosts_up(elevated, topic) - if not hosts: - msg = _("Is the appropriate service running?") - raise exception.NoValidHost(reason=msg) - - remote_ip = context.remote_address - - if remote_ip.startswith('10.1'): - hostname_prefix = 'doc' - elif remote_ip.startswith('10.2'): - hostname_prefix = 'ops' - else: - hostname_prefix = 'dev' - - hosts = self._filter_hosts(request_spec, hosts, filter_properties, - hostname_prefix) - if not hosts: - msg = _("Could not find another compute") - raise exception.NoValidHost(reason=msg) - - host = random.choice(hosts) - LOG.debug("Request from %(remote_ip)s scheduled to %(host)s" % locals()) - - return host - - def select_destinations(self, context, request_spec, filter_properties): - """Selects random destinations.""" - num_instances = request_spec['num_instances'] - # NOTE(timello): Returns a list of dicts with 'host', 'nodename' and - # 'limits' as keys for compatibility with filter_scheduler. - dests = [] - for i in range(num_instances): - host = self._schedule(context, CONF.compute_topic, - request_spec, filter_properties) - host_state = dict(host=host, nodename=None, limits=None) - dests.append(host_state) - - if len(dests) < num_instances: - raise exception.NoValidHost(reason='') - return dests - - def schedule_run_instance(self, context, request_spec, - admin_password, injected_files, - requested_networks, is_first_time, - filter_properties, legacy_bdm_in_spec): - """Create and run an instance or instances.""" - instance_uuids = request_spec.get('instance_uuids') - for num, instance_uuid in enumerate(instance_uuids): - request_spec['instance_properties']['launch_index'] = num - try: - host = self._schedule(context, CONF.compute_topic, - request_spec, filter_properties) - updated_instance = driver.instance_update_db(context, - instance_uuid) - self.compute_rpcapi.run_instance(context, - instance=updated_instance, host=host, - requested_networks=requested_networks, - injected_files=injected_files, - admin_password=admin_password, - is_first_time=is_first_time, - request_spec=request_spec, - filter_properties=filter_properties, - legacy_bdm_in_spec=legacy_bdm_in_spec) - except Exception as ex: - # NOTE(vish): we don't reraise the exception here to make sure - # that all instances in the request get set to - # error properly - driver.handle_schedule_error(context, ex, instance_uuid, - request_spec) - - - There is a lot of useful information in ``context``, ``request_spec``, - and ``filter_properties`` that you can use to decide where to schedule - the instance. To find out more about what properties are available, you - can insert the following log statements into the - ``schedule_run_instance`` method of the scheduler above: - - .. code-block:: python - - LOG.debug("context = %(context)s" % {'context': context.__dict__}) - LOG.debug("request_spec = %(request_spec)s" % locals()) - LOG.debug("filter_properties = %(filter_properties)s" % locals()) - -#. 
To plug this scheduler into nova, edit one configuration file,
-   ``/etc/nova/nova.conf``:
-
-   .. code-block:: console
-
-      $ vim /etc/nova/nova.conf
-
-#. Find the ``scheduler_driver`` configuration option and change it like so:
-
-   .. code-block:: ini
-
-      scheduler_driver=nova.scheduler.ip_scheduler.IPScheduler
-
-#. Restart the nova scheduler service to make nova use your scheduler.
-   Start by switching to the ``n-sch`` screen:
-
-   #. Press **Ctrl+A** followed by 9.
-
-   #. Press **Ctrl+A** followed by N until you reach the ``n-sch`` screen.
-
-   #. Press **Ctrl+C** to kill the service.
-
-   #. Press Up Arrow to bring up the last command.
-
-   #. Press Enter to run it.
-
-#. Test your scheduler with the nova CLI. Start by switching to the
-   ``shell`` screen and finish by switching back to the ``n-sch`` screen to
-   check the log output:
-
-   #. Press **Ctrl+A** followed by 0.
-
-   #. Make sure you're in the ``devstack`` directory:
-
-      .. code-block:: console
-
-         $ cd /root/devstack
-
-   #. Source ``openrc`` to set up your environment variables for the CLI:
-
-      .. code-block:: console
-
-         $ source openrc
-
-   #. Put the image ID for the only installed image into an environment
-      variable:
-
-      .. code-block:: console
-
-         $ IMAGE_ID=`nova image-list | egrep cirros | egrep -v "kernel|ramdisk" | awk '{print $2}'`
-
-   #. Boot a test server:
-
-      .. code-block:: console
-
-         $ nova boot --flavor 1 --image $IMAGE_ID scheduler-test
-
-#. Switch back to the ``n-sch`` screen. Among the log statements, you'll
-   see the line:
-
-   .. code-block:: console
-
-      2014-01-23 19:57:47.262 DEBUG nova.scheduler.ip_scheduler \
-      [req-... demo demo] Request from 162.242.221.84 \
-      scheduled to devstack-havana \
-      _schedule /opt/stack/nova/nova/scheduler/ip_scheduler.py:76
-
-.. warning::
-
-   Functional testing like this is not a replacement for proper unit
-   and integration testing, but it serves to get you started. (A
-   minimal unit-test sketch for this scheduler appears at the end of
-   this chapter.)
-
-A similar pattern can be followed in other projects that use the driver
-architecture. Simply create a module and class that conform to the
-driver interface and plug it in through configuration. Your code runs
-when that feature is used and can call out to other services as
-necessary. No project core code is touched. Look for a "driver" value in
-the project's ``.conf`` configuration files in ``/etc/`` to
-identify projects that use a driver architecture.
-
-When your scheduler is done, we encourage you to open source it and let
-the community know on the OpenStack mailing list. Perhaps others need
-the same functionality. They can use your code, provide feedback, and
-possibly contribute. If enough support exists for it, perhaps you can
-propose that it be added to the official Compute
-`schedulers `_.
-
-Customizing the Dashboard (Horizon)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The dashboard is based on the Python
-`Django `_ web application framework.
-The best guide to customizing it has already been written and can be
-found at `Building on
-Horizon `_.
-
-Conclusion
-~~~~~~~~~~
-
-When operating an OpenStack cloud, you may discover that your users can
-be quite demanding. If OpenStack doesn't do what your users need, it may
-be up to you to fulfill those requirements. This chapter provided you
-with some options for customization and gave you the tools you need to
-get started.
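-
-As a coda to the scheduler walkthrough: the warning above noted that
-functional testing is no substitute for unit tests. A minimal,
-self-contained sketch of such a test follows. It stubs the
-hostname-prefix filter from ``ip_scheduler.py`` rather than importing
-nova, and the file name and class names are hypothetical:
-
-.. code-block:: python
-
-   # test_ip_scheduler.py -- hypothetical standalone test; stubs the
-   # hostname-prefix filter so it can be checked without a nova install.
-   import unittest
-
-
-   class StubIPScheduler(object):
-       """Stand-in exposing only the _filter_hosts logic."""
-
-       def _filter_hosts(self, request_spec, hosts, filter_properties,
-                         hostname_prefix):
-           # Keep only the hosts whose names start with the prefix.
-           return [host for host in hosts if host.startswith(hostname_prefix)]
-
-
-   class IPSchedulerFilterTest(unittest.TestCase):
-
-       def test_filter_keeps_only_matching_prefix(self):
-           hosts = ['doc01', 'doc02', 'ops01', 'dev01']
-           result = StubIPScheduler()._filter_hosts(None, hosts, None, 'doc')
-           self.assertEqual(['doc01', 'doc02'], result)
-
-       def test_filter_returns_empty_when_nothing_matches(self):
-           result = StubIPScheduler()._filter_hosts(None, ['dev01'], None, 'ops')
-           self.assertEqual([], result)
-
-
-   if __name__ == '__main__':
-       unittest.main()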
diff --git a/doc/ops-guide/source/ops_lay_of_the_land.rst b/doc/ops-guide/source/ops_lay_of_the_land.rst
deleted file mode 100644
index 352448a4..00000000
--- a/doc/ops-guide/source/ops_lay_of_the_land.rst
+++ /dev/null
@@ -1,602 +0,0 @@
-===============
-Lay of the Land
-===============
-
-This chapter helps you set up your working environment and use it to
-take a look around your cloud.
-
-Using the OpenStack Dashboard for Administration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As a cloud administrative user, you can use the OpenStack dashboard to
-create and manage projects, users, images, and flavors. Users are
-allowed to create and manage images within specified projects and to
-share images, depending on the Image service configuration. Typically,
-the policy configuration allows admin users only to set quotas and
-create and manage services. The dashboard provides an :guilabel:`Admin`
-tab with a :guilabel:`System Panel` and an :guilabel:`Identity` tab.
-These interfaces give you access to system information and usage as
-well as to settings for configuring what
-end users can do. Refer to the `OpenStack Administrator
-Guide `_ for
-detailed how-to information about using the dashboard as an admin user.
-
-Command-Line Tools
-~~~~~~~~~~~~~~~~~~
-
-We recommend using a combination of the OpenStack command-line interface
-(CLI) tools and the OpenStack dashboard for administration. Some users
-with a background in other cloud technologies may be using the EC2
-Compatibility API, which uses naming conventions somewhat different from
-the native API. We highlight those differences.
-
-We strongly suggest that you install the command-line clients from the
-`Python Package Index `_ (PyPI) instead
-of from the distribution packages. The clients are under heavy
-development, and it is very likely at any given time that the versions
-of the packages distributed by your operating-system vendor are out of
-date.
-
-The pip utility is used to manage package installation from the PyPI
-archive and is available in the python-pip package in most Linux
-distributions. Each OpenStack project has its own client, so depending
-on which services your site runs, install some or all of the
-following packages:
-
-* python-novaclient (:term:`nova` CLI)
-* python-glanceclient (:term:`glance` CLI)
-* python-keystoneclient (:term:`keystone` CLI)
-* python-cinderclient (:term:`cinder` CLI)
-* python-swiftclient (:term:`swift` CLI)
-* python-neutronclient (:term:`neutron` CLI)
-
-Installing the Tools
---------------------
-
-To install (or upgrade) a package from the PyPI archive with pip, run
-the following command as root:
-
-.. code-block:: console
-
-   # pip install [--upgrade] <package-name>
-
-To remove the package:
-
-.. code-block:: console
-
-   # pip uninstall <package-name>
-
-If you need even newer versions of the clients, pip can install directly
-from the upstream git repository using the :option:`-e` flag. You must specify
-a name for the Python egg that is installed. For example:
-
-.. code-block:: console
-
-   # pip install -e git+https://git.openstack.org/openstack/python-novaclient#egg=python-novaclient
-
-If you support the EC2 API on your cloud, you should also install the
-euca2ools package or some other EC2 API tool so that you can get the
-same view your users have. Using EC2 API-based tools is mostly out of
-the scope of this guide, though we discuss getting credentials for use
-with it.
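-
-The same packages that provide these CLIs can also be imported as
-Python libraries, which is convenient for scripting. Here is a minimal
-sketch using python-novaclient as a library; the credentials and auth
-URL are placeholder values, and the exact constructor arguments vary
-between client releases:
-
-.. code-block:: python
-
-   # List servers through python-novaclient used as a library rather
-   # than through the nova CLI. All credentials below are placeholders.
-   from novaclient import client
-
-   nova = client.Client('2', 'demo', 'secret', 'demo-project',
-                        'http://203.0.113.10:5000/v2.0')
-
-   # Print the name and status of every server visible to this user.
-   for server in nova.servers.list():
-       print(server.name, server.status)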
-
-Administrative Command-Line Tools
----------------------------------
-
-There are also several :command:`*-manage` command-line tools. These are
-installed with the project's services on the cloud controller and do not
-need to be installed separately:
-
-* :command:`glance-manage`
-* :command:`keystone-manage`
-* :command:`cinder-manage`
-
-Unlike the CLI tools mentioned above, the :command:`*-manage` tools must
-be run from the cloud controller, as root, because they need read access
-to the config files such as ``/etc/nova/nova.conf`` and to make queries
-directly against the database rather than against the OpenStack
-:term:`API endpoints `.
-
-.. warning::
-
-   The existence of the ``*-manage`` tools is a legacy issue. It is a
-   goal of the OpenStack project to eventually migrate all of the
-   remaining functionality in the ``*-manage`` tools into the API-based
-   tools. Until that day, you need to SSH into the
-   :term:`cloud controller node` to perform some maintenance operations
-   that require one of the ``*-manage`` tools.
-
-Getting Credentials
--------------------
-
-You must have the appropriate credentials if you want to use the
-command-line tools to make queries against your OpenStack cloud. By far,
-the easiest way to obtain :term:`authentication` credentials to use with
-command-line clients is to use the OpenStack dashboard. Select
-:guilabel:`Project`, click the :guilabel:`Project` tab, and click
-:guilabel:`Access & Security` on the :guilabel:`Compute` category.
-On the :guilabel:`Access & Security` page, click the :guilabel:`API Access`
-tab to display two buttons, :guilabel:`Download OpenStack RC File` and
-:guilabel:`Download EC2 Credentials`, which let you generate files that
-you can source in your shell to populate the environment variables the
-command-line tools require to know where your service endpoints and your
-authentication information are. The user you are logged in to the
-dashboard as dictates the filename of the openrc file, such as
-``demo-openrc.sh``. When logged in as admin, the file is named
-``admin-openrc.sh``.
-
-The generated file looks something like this:
-
-.. code-block:: bash
-
-   #!/bin/bash
-
-   # With the addition of Keystone, to use an openstack cloud you should
-   # authenticate against keystone, which returns a **Token** and **Service
-   # Catalog**. The catalog contains the endpoint for all services the
-   # user/tenant has access to--including nova, glance, keystone, swift.
-   #
-   # *NOTE*: Using the 2.0 *auth api* does not mean that compute api is 2.0.
-   # We use the 1.1 *compute api*
-   export OS_AUTH_URL=http://203.0.113.10:5000/v2.0
-
-   # With the addition of Keystone we have standardized on the term **tenant**
-   # as the entity that owns the resources.
-   export OS_TENANT_ID=98333aba48e756fa8f629c83a818ad57
-   export OS_TENANT_NAME="test-project"
-
-   # In addition to the owning entity (tenant), openstack stores the entity
-   # performing the action as the **user**.
-   export OS_USERNAME=demo
-
-   # With Keystone you pass the keystone password.
-   echo "Please enter your OpenStack Password: "
-   read -s OS_PASSWORD_INPUT
-   export OS_PASSWORD=$OS_PASSWORD_INPUT
-
-.. warning::
-
-   This does not save your password in plain text, which is a good
-   thing. But when you source or run the script, it prompts you for
-   your password and then stores your response in the environment
-   variable ``OS_PASSWORD``. It is important to note that this does
-   require interactivity. It is possible to store a value directly in
-   the script if you require a noninteractive operation, but you then
-   need to be extremely cautious with the security and permissions of
-   this file.
-
-EC2 compatibility credentials can be downloaded by selecting
-:guilabel:`Project`, then :guilabel:`Compute`, then
-:guilabel:`Access & Security`, then :guilabel:`API Access` to display the
-:guilabel:`Download EC2 Credentials` button. Click the button to generate
-a ZIP file with server x509 certificates and a shell script fragment.
-Create a new directory in a secure location because these are live credentials
-containing all the authentication information required to access your
-cloud identity, unlike the default ``user-openrc``. Extract the ZIP file
-here. You should have ``cacert.pem``, ``cert.pem``, ``ec2rc.sh``, and
-``pk.pem``. The ``ec2rc.sh`` is similar to this:
-
-.. code-block:: bash
-
-   #!/bin/bash
-
-   NOVARC=$(readlink -f "${BASH_SOURCE:-${0}}" 2>/dev/null) ||\
-   NOVARC=$(python -c 'import os,sys; \
-   print os.path.abspath(os.path.realpath(sys.argv[1]))' "${BASH_SOURCE:-${0}}")
-   NOVA_KEY_DIR=${NOVARC%/*}
-   export EC2_ACCESS_KEY=df7f93ec47e84ef8a347bbb3d598449a
-   export EC2_SECRET_KEY=ead2fff9f8a344e489956deacd47e818
-   export EC2_URL=http://203.0.113.10:8773/services/Cloud
-   export EC2_USER_ID=42 # nova does not use user id, but bundling requires it
-   export EC2_PRIVATE_KEY=${NOVA_KEY_DIR}/pk.pem
-   export EC2_CERT=${NOVA_KEY_DIR}/cert.pem
-   export NOVA_CERT=${NOVA_KEY_DIR}/cacert.pem
-   export EUCALYPTUS_CERT=${NOVA_CERT} # euca-bundle-image seems to require this
-
-   alias ec2-bundle-image="ec2-bundle-image --cert $EC2_CERT --privatekey \
-   $EC2_PRIVATE_KEY --user 42 --ec2cert $NOVA_CERT"
-   alias ec2-upload-bundle="ec2-upload-bundle -a $EC2_ACCESS_KEY -s \
-   $EC2_SECRET_KEY --url $S3_URL --ec2cert $NOVA_CERT"
-
-To put the EC2 credentials into your environment, source the
-``ec2rc.sh`` file.
-
-Inspecting API Calls
---------------------
-
-The command-line tools can be made to show the OpenStack API calls they
-make by passing the :option:`--debug` flag to them. For example:
-
-.. code-block:: console
-
-   # nova --debug list
-
-This example shows the HTTP requests from the client and the responses
-from the endpoints, which can be helpful in creating custom tools
-written to the OpenStack API.
-
-Using cURL for further inspection
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Underlying the use of the command-line tools is the OpenStack API, which
-is a RESTful API that runs over HTTP. There may be cases where you want
-to interact with the API directly or need to use it because of a
-suspected bug in one of the CLI tools. The best way to do this is to use
-a combination of `cURL `_ and another tool,
-such as `jq `_, to parse the JSON from
-the responses.
-
-The first thing you must do is authenticate with the cloud using your
-credentials to get an authentication token.
-
-Your credentials are a combination of username, password, and tenant
-(project). You can extract these values from the ``openrc.sh`` discussed
-above. The token allows you to interact with your other service
-endpoints without needing to reauthenticate for every request. Tokens
-are typically good for 24 hours, and when the token expires, you are
-alerted with a 401 (Unauthorized) response and you can request another
-token.
-
-#. Look at your OpenStack service catalog:
-
-   .. code-block:: console
-
-      $ curl -s -X POST http://203.0.113.10:35357/v2.0/tokens \
-        -d '{"auth": {"passwordCredentials": {"username":"test-user", \
-        "password":"test-password"}, \
-        "tenantName":"test-project"}}' \
-        -H "Content-type: application/json" | jq .
-
-#. Read through the JSON response to get a feel for how the catalog is
-   laid out.
-
-   To make working with subsequent requests easier, store the token in
-   an environment variable:
-
-   .. code-block:: console
-
-      $ TOKEN=`curl -s -X POST http://203.0.113.10:35357/v2.0/tokens \
-        -d '{"auth": {"passwordCredentials": {"username":"test-user", \
-        "password":"test-password"}, \
-        "tenantName":"test-project"}}' \
-        -H "Content-type: application/json" | jq -r .access.token.id`
-
-   Now you can refer to your token on the command line as ``$TOKEN``.
-
-#. Pick a service endpoint from your service catalog, such as compute.
-   Try a request, for example, listing instances (servers):
-
-   .. code-block:: console
-
-      $ curl -s \
-        -H "X-Auth-Token: $TOKEN" \
-        http://203.0.113.10:8774/v2/98333aba48e756fa8f629c83a818ad57/servers | jq .
-
-To discover how API requests should be structured, read the `OpenStack
-API Reference `_. To chew
-through the responses using jq, see the `jq
-Manual `_.
-
-The ``-s`` flag used in the cURL commands above prevents the progress
-meter from being shown. If you are having trouble running cURL
-commands, remove it. Likewise, to help you troubleshoot cURL commands,
-you can include the ``-v`` flag to show you the verbose output. There
-are many more extremely useful features in cURL; refer to the man page
-for all the options.
-
-Servers and Services
---------------------
-
-As an administrator, you have a few ways to discover what your OpenStack
-cloud looks like simply by using the OpenStack tools available. This
-section gives you an idea of how to get an overview of your cloud, its
-shape, size, and current state.
-
-First, you can discover what servers belong to your OpenStack cloud by
-running:
-
-.. code-block:: console
-
-   # nova service-list
-
-The output looks like the following:
-
-.. code-block:: console
-
-   +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
-   | Id | Binary           | Host              | Zone | Status  | State | Updated_at                 | Disabled Reason |
-   +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
-   | 1  | nova-cert        | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
-   | 2  | nova-compute     | c01.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
-   | 3  | nova-compute     | c02.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
-   | 4  | nova-compute     | c03.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
-   | 5  | nova-compute     | c04.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
-   | 6  | nova-compute     | c05.example.com   | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
-   | 7  | nova-conductor   | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
-   | 8  | nova-cert        | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:42.000000 | -               |
-   | 9  | nova-scheduler   | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:38.000000 | -               |
-   | 10 | nova-consoleauth | cloud.example.com | nova | enabled | up    | 2016-01-05T17:20:35.000000 | -               |
-   +----+------------------+-------------------+------+---------+-------+----------------------------+-----------------+
-
-The output shows that there are five compute nodes and one cloud
-controller. You see all the services in the up state, which indicates that
-the services are up and running. If a service is in a down state, it is
-no longer available. This is an indication that you
-should troubleshoot why the service is down.
-
-If you are using cinder, run the following command to see a similar
-listing:
-
-.. code-block:: console
-
-   # cinder-manage host list | sort
-   host              zone
-   c01.example.com   nova
-   c02.example.com   nova
-   c03.example.com   nova
-   c04.example.com   nova
-   c05.example.com   nova
-   cloud.example.com nova
-
-With these two tables, you now have a good overview of what servers and
-services make up your cloud.
-
-You can also use the Identity service (keystone) to see what services
-are available in your cloud as well as what endpoints have been
-configured for the services.
-
-The following command requires you to have your shell environment
-configured with the proper administrative variables:
-
-.. code-block:: console
-
-   $ openstack catalog list
-   +----------+------------+---------------------------------------------------------------------------------+
-   | Name     | Type       | Endpoints                                                                       |
-   +----------+------------+---------------------------------------------------------------------------------+
-   | nova     | compute    | RegionOne                                                                       |
-   |          |            |   publicURL: http://192.168.122.10:8774/v2/9faa845768224258808fc17a1bb27e5e     |
-   |          |            |   internalURL: http://192.168.122.10:8774/v2/9faa845768224258808fc17a1bb27e5e   |
-   |          |            |   adminURL: http://192.168.122.10:8774/v2/9faa845768224258808fc17a1bb27e5e      |
-   |          |            |                                                                                 |
-   | cinderv2 | volumev2   | RegionOne                                                                       |
-   |          |            |   publicURL: http://192.168.122.10:8776/v2/9faa845768224258808fc17a1bb27e5e     |
-   |          |            |   internalURL: http://192.168.122.10:8776/v2/9faa845768224258808fc17a1bb27e5e   |
-   |          |            |   adminURL: http://192.168.122.10:8776/v2/9faa845768224258808fc17a1bb27e5e      |
-   |          |            |                                                                                 |
-
-The preceding output has been truncated to show only two services. You
-will see one service entry for each service that your cloud provides.
-Note how the endpoint domain can be different depending on the endpoint
-type. Different endpoint domains per type are not required, but this can
-be done for different reasons, such as endpoint privacy or network
-traffic segregation.
-
-You can find the version of the Compute installation by using the
-nova client command:
-
-.. code-block:: console
-
-   # nova version-list
-
-Diagnose Your Compute Nodes
----------------------------
-
-You can obtain extra information about virtual machines that are
-running—their CPU usage, the memory, the disk I/O or network I/O—per
-instance, by running the :command:`nova diagnostics` command with a server ID:
-
-.. code-block:: console
-
-   $ nova diagnostics <serverID>
-
-The output of this command varies depending on the hypervisor because
-hypervisors support different attributes. The following demonstrates
-the difference between the two most popular hypervisors.
-Here is example output when the hypervisor is Xen:
-
-.. code-block:: console
-
-   +----------------+-----------------+
-   | Property       | Value           |
-   +----------------+-----------------+
-   | cpu0           | 4.3627          |
-   | memory         | 1171088064.0000 |
-   | memory_target  | 1171088064.0000 |
-   | vbd_xvda_read  | 0.0             |
-   | vbd_xvda_write | 0.0             |
-   | vif_0_rx       | 3223.6870       |
-   | vif_0_tx       | 0.0             |
-   | vif_1_rx       | 104.4955        |
-   | vif_1_tx       | 0.0             |
-   +----------------+-----------------+
-
-While the command should work with any hypervisor that is controlled
-through libvirt (KVM, QEMU, or LXC), it has been tested only with KVM.
-Here is the example output when the hypervisor is KVM:
-
-.. code-block:: console
-
-   +------------------+------------+
-   | Property         | Value      |
-   +------------------+------------+
-   | cpu0_time        | 2870000000 |
-   | memory           | 524288     |
-   | vda_errors       | -1         |
-   | vda_read         | 262144     |
-   | vda_read_req     | 112        |
-   | vda_write        | 5606400    |
-   | vda_write_req    | 376        |
-   | vnet0_rx         | 63343      |
-   | vnet0_rx_drop    | 0          |
-   | vnet0_rx_errors  | 0          |
-   | vnet0_rx_packets | 431        |
-   | vnet0_tx         | 4905       |
-   | vnet0_tx_drop    | 0          |
-   | vnet0_tx_errors  | 0          |
-   | vnet0_tx_packets | 45         |
-   +------------------+------------+
-
-Network Inspection
-~~~~~~~~~~~~~~~~~~
-
-To see which fixed IP networks are configured in your cloud, you can use
-the :command:`nova` command-line client to get the IP ranges:
-
-.. code-block:: console
-
-   $ nova network-list
-   +--------------------------------------+--------+--------------+
-   | ID                                   | Label  | Cidr         |
-   +--------------------------------------+--------+--------------+
-   | 3df67919-9600-4ea8-952e-2a7be6f70774 | test01 | 10.1.0.0/24  |
-   | 8283efb2-e53d-46e1-a6bd-bb2bdef9cb9a | test02 | 10.1.1.0/24  |
-   +--------------------------------------+--------+--------------+
-
-The :command:`nova-manage` tool can provide some additional details:
-
-.. code-block:: console
-
-   # nova-manage network list
-   id IPv4        IPv6 start address DNS1 DNS2 VlanID project uuid
-   1  10.1.0.0/24 None 10.1.0.3      None None 300    2725bbd beacb3f2
-   2  10.1.1.0/24 None 10.1.1.3      None None 301    none    d0b1a796
-
-This output shows that two networks are configured, each network
-containing 256 addresses (a /24 subnet). The first network has been
-assigned to a certain project, while the second network is still open
-for assignment. You can assign this network manually; otherwise, it is
-automatically assigned when a project launches its first instance.
-
-To find out whether any floating IPs are available in your cloud, run:
-
-.. code-block:: console
-
-   # nova floating-ip-list
-   2725bb...59f43f 1.2.3.4 None            nova vlan20
-   None            1.2.3.5 48a415...b010ff nova vlan20
-
-Here, two floating IPs are available. The first has been allocated to a
-project, while the other is unallocated.
-
-Users and Projects
-~~~~~~~~~~~~~~~~~~
-
-To see a list of projects that have been added to the cloud, run:
-
-.. code-block:: console
-
-   $ openstack project list
-   +----------------------------------+--------------------+
-   | ID                               | Name               |
-   +----------------------------------+--------------------+
-   | 422c17c0b26f4fbe9449f37a5621a5e6 | alt_demo           |
-   | 5dc65773519248f3a580cfe28ba7fa3f | demo               |
-   | 9faa845768224258808fc17a1bb27e5e | admin              |
-   | a733070a420c4b509784d7ea8f6884f7 | invisible_to_admin |
-   | aeb3e976e7794f3f89e4a7965db46c1e | service            |
-   +----------------------------------+--------------------+
-
-To see a list of users, run:
-
-.. code-block:: console
-
-   $ openstack user list
-   +----------------------------------+----------+
-   | ID                               | Name     |
-   +----------------------------------+----------+
-   | 5837063598694771aedd66aa4cddf0b8 | demo     |
-   | 58efd9d852b74b87acc6efafaf31b30e | cinder   |
-   | 6845d995a57a441f890abc8f55da8dfb | glance   |
-   | ac2d15a1205f46d4837d5336cd4c5f5a | alt_demo |
-   | d8f593c3ae2b47289221f17a776a218b | admin    |
-   | d959ec0a99e24df0b7cb106ff940df20 | nova     |
-   +----------------------------------+----------+
-
-.. note::
-
-   Sometimes a user and a group have a one-to-one mapping. This happens
-   for standard system accounts, such as cinder, glance, nova, and
-   swift, or when only one user is part of a group.
-
-Running Instances
-~~~~~~~~~~~~~~~~~
-
-To see a list of running instances, run:
-
-.. code-block:: console
-
-   $ nova list --all-tenants
-   +-----+------------------+--------+-------------------------------------------+
-   | ID  | Name             | Status | Networks                                  |
-   +-----+------------------+--------+-------------------------------------------+
-   | ... | Windows          | ACTIVE | novanetwork_1=10.1.1.3, 199.116.232.39    |
-   | ... | cloud controller | ACTIVE | novanetwork_0=10.1.0.6; jtopjian=10.1.2.3 |
-   | ... | compute node 1   | ACTIVE | novanetwork_0=10.1.0.4; jtopjian=10.1.2.4 |
-   | ... | devbox           | ACTIVE | novanetwork_0=10.1.0.3                    |
-   | ... | devstack         | ACTIVE | novanetwork_0=10.1.0.5                    |
-   | ... | initial          | ACTIVE | nova_network=10.1.7.4, 10.1.8.4           |
-   | ... | lorin-head       | ACTIVE | nova_network=10.1.7.3, 10.1.8.3           |
-   +-----+------------------+--------+-------------------------------------------+
-
-Unfortunately, this command does not tell you various details about the
-running instances, such as what compute node the instance is running on,
-what flavor the instance is, and so on. You can use the following
-command to view details about individual instances:
-
-.. code-block:: console
-
-   $ nova show <uuid>
-
-For example:
-
-.. code-block:: console
-
-   # nova show 81db556b-8aa5-427d-a95c-2a9a6972f630
-   +-------------------------------------+-----------------------------------+
-   | Property                            | Value                             |
-   +-------------------------------------+-----------------------------------+
-   | OS-DCF:diskConfig                   | MANUAL                            |
-   | OS-EXT-SRV-ATTR:host                | c02.example.com                   |
-   | OS-EXT-SRV-ATTR:hypervisor_hostname | c02.example.com                   |
-   | OS-EXT-SRV-ATTR:instance_name       | instance-00000029                 |
-   | OS-EXT-STS:power_state              | 1                                 |
-   | OS-EXT-STS:task_state               | None                              |
-   | OS-EXT-STS:vm_state                 | active                            |
-   | accessIPv4                          |                                   |
-   | accessIPv6                          |                                   |
-   | config_drive                        |                                   |
-   | created                             | 2013-02-13T20:08:36Z              |
-   | flavor                              | m1.small (6)                      |
-   | hostId                              | ...                               |
-   | id                                  | ...                               |
-   | image                               | Ubuntu 12.04 cloudimg amd64 (...) |
-   | key_name                            | jtopjian-sandbox                  |
-   | metadata                            | {}                                |
-   | name                                | devstack                          |
-   | novanetwork_0 network               | 10.1.0.5                          |
-   | progress                            | 0                                 |
-   | security_groups                     | [{u'name': u'default'}]           |
-   | status                              | ACTIVE                            |
-   | tenant_id                           | ...                               |
-   | updated                             | 2013-02-13T20:08:59Z              |
-   | user_id                             | ...                               |
-   +-------------------------------------+-----------------------------------+
-
-This output shows that an instance named ``devstack`` was created from
-an Ubuntu 12.04 image using a flavor of ``m1.small`` and is hosted on
-the compute node ``c02.example.com``.
-
-Summary
-~~~~~~~
-
-We hope you have enjoyed this quick tour of your working environment,
-including how to interact with your cloud and extract useful
-information. From here, you can use the `Administrator
-Guide `_ as your
-reference for all of the command-line functionality in your cloud.
diff --git a/doc/ops-guide/source/ops_logging_monitoring.rst b/doc/ops-guide/source/ops_logging_monitoring.rst
deleted file mode 100644
index 0e8deb61..00000000
--- a/doc/ops-guide/source/ops_logging_monitoring.rst
+++ /dev/null
@@ -1,777 +0,0 @@
-======================
-Logging and Monitoring
-======================
-
-As an OpenStack cloud is composed of so many different services, there
-are a large number of log files. This chapter aims to assist you in
-locating and working with them and describes other ways to track the
-status of your deployment.
-
-Where Are the Logs?
-~~~~~~~~~~~~~~~~~~~
-
-Most services use the convention of writing their log files to
-subdirectories of the ``/var/log`` directory, as listed in the
-below table.
-
-.. list-table:: OpenStack log locations
-   :widths: 33 33 33
-   :header-rows: 1
-
-   * - Node type
-     - Service
-     - Log location
-   * - Cloud controller
-     - ``nova-*``
-     - ``/var/log/nova``
-   * - Cloud controller
-     - ``glance-*``
-     - ``/var/log/glance``
-   * - Cloud controller
-     - ``cinder-*``
-     - ``/var/log/cinder``
-   * - Cloud controller
-     - ``keystone-*``
-     - ``/var/log/keystone``
-   * - Cloud controller
-     - ``neutron-*``
-     - ``/var/log/neutron``
-   * - Cloud controller
-     - horizon
-     - ``/var/log/apache2/``
-   * - All nodes
-     - misc (swift, dnsmasq)
-     - ``/var/log/syslog``
-   * - Compute nodes
-     - libvirt
-     - ``/var/log/libvirt/libvirtd.log``
-   * - Compute nodes
-     - Console (boot up messages) for VM instances:
-     - ``/var/lib/nova/instances/instance-<instance id>/console.log``
-   * - Block Storage nodes
-     - cinder-volume
-     - ``/var/log/cinder/cinder-volume.log``
-
-
-Reading the Logs
-~~~~~~~~~~~~~~~~
-
-OpenStack services use the standard logging levels, at increasing
-severity: DEBUG, INFO, AUDIT, WARNING, ERROR, CRITICAL, and TRACE. That
-is, messages only appear in the logs if they are more "severe" than the
-particular log level, with DEBUG allowing all log statements through.
-For example, TRACE is logged only if the software has a stack trace,
-while INFO is logged for every message including those that are only for
-information.
-
-To disable DEBUG-level logging, edit the ``/etc/nova/nova.conf`` file as
-follows:
-
-.. code-block:: ini
-
-   debug=false
-
-Keystone is handled a little differently. To modify the logging level,
-edit the ``/etc/keystone/logging.conf`` file and look at the
-``logger_root`` and ``handler_file`` sections.
-
-Logging for horizon is configured in
-``/etc/openstack_dashboard/local_settings.py``. Because horizon is
-a Django web application, it follows the `Django Logging framework
-conventions `_.
-
-The first step in finding the source of an error is typically to search
-for a CRITICAL, TRACE, or ERROR message in the log starting at the
-bottom of the log file.
-
-Here is an example of a CRITICAL log message, with the corresponding
-TRACE (Python traceback) immediately following:
-
-.. code-block:: console
-
-   2013-02-25 21:05:51 17409 CRITICAL cinder [-] Bad or unexpected response from the storage volume backend API: volume group
-    cinder-volumes doesn't exist
-   2013-02-25 21:05:51 17409 TRACE cinder Traceback (most recent call last):
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/bin/cinder-volume", line 48, in <module>
-   2013-02-25 21:05:51 17409 TRACE cinder service.wait()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 422, in wait
-   2013-02-25 21:05:51 17409 TRACE cinder _launcher.wait()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 127, in wait
-   2013-02-25 21:05:51 17409 TRACE cinder service.wait()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 166, in wait
-   2013-02-25 21:05:51 17409 TRACE cinder return self._exit_event.wait()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/event.py", line 116, in wait
-   2013-02-25 21:05:51 17409 TRACE cinder return hubs.get_hub().switch()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/hubs/hub.py", line 177, in switch
-   2013-02-25 21:05:51 17409 TRACE cinder return self.greenlet.switch()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main
-   2013-02-25 21:05:51 17409 TRACE cinder result = function(*args, **kwargs)
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 88, in run_server
-   2013-02-25 21:05:51 17409 TRACE cinder server.start()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 159, in start
-   2013-02-25 21:05:51 17409 TRACE cinder self.manager.init_host()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 95,
-    in init_host
-   2013-02-25 21:05:51 17409 TRACE cinder self.driver.check_for_setup_error()
-   2013-02-25 21:05:51 17409 TRACE cinder File "/usr/lib/python2.7/dist-packages/cinder/volume/driver.py", line 116,
-    in check_for_setup_error
-   2013-02-25 21:05:51 17409 TRACE cinder raise exception.VolumeBackendAPIException(data=exception_message)
-   2013-02-25 21:05:51 17409 TRACE cinder VolumeBackendAPIException: Bad or unexpected response from the storage volume
-    backend API: volume group cinder-volumes doesn't exist
-   2013-02-25 21:05:51 17409 TRACE cinder
-
-In this example, ``cinder-volumes`` failed to start and has provided a
-stack trace, since its volume back end has been unable to set up the
-storage volume—probably because the LVM volume that is expected from the
-configuration does not exist.
-
-Here is an example error log:
-
-.. code-block:: console
-
-   2013-02-25 20:26:33 6619 ERROR nova.openstack.common.rpc.common [-] AMQP server on localhost:5672 is unreachable:
-    [Errno 111] ECONNREFUSED. Trying again in 23 seconds.
-
-In this error, a nova service has failed to connect to the RabbitMQ
-server because it got a connection refused error.
-
-Tracing Instance Requests
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-When an instance fails to behave properly, you will often have to trace
-activity associated with that instance across the log files of various
-``nova-*`` services and across both the cloud controller and compute
-nodes.
-
-The typical way is to trace the UUID associated with an instance across
-the service logs.
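-
-If there are many log files to search, a small helper script can save
-time. Here is a minimal sketch; the file name ``trace_uuid.py``, the
-log glob, and the output format are assumptions, not part of nova. The
-example that follows walks through the manual approach:
-
-.. code-block:: python
-
-   #!/usr/bin/env python
-   # trace_uuid.py -- hypothetical helper: print every log line that
-   # mentions a given instance UUID across the nova log files.
-   import glob
-   import sys
-
-
-   def trace(uuid, pattern='/var/log/nova/*.log'):
-       """Scan each nova log file and print the lines containing uuid."""
-       for path in sorted(glob.glob(pattern)):
-           with open(path) as handle:
-               for line in handle:
-                   if uuid in line:
-                       print('%s: %s' % (path, line.rstrip()))
-
-
-   if __name__ == '__main__':
-       trace(sys.argv[1])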
-
-Consider the following example:
-
-.. code-block:: console
-
-   $ nova list
-   +--------------------------------------+--------+--------+---------------------------+
-   | ID                                   | Name   | Status | Networks                  |
-   +--------------------------------------+--------+--------+---------------------------+
-   | faf7ded8-4a46-413b-b113-f19590746ffe | cirros | ACTIVE | novanetwork=192.168.100.3 |
-   +--------------------------------------+--------+--------+---------------------------+
-
-Here, the ID associated with the instance is
-``faf7ded8-4a46-413b-b113-f19590746ffe``. If you search for this string
-on the cloud controller in the ``/var/log/nova-*.log`` files, it appears
-in ``nova-api.log`` and ``nova-scheduler.log``. If you search for this
-on the compute nodes in ``/var/log/nova-*.log``, it appears in
-``nova-network.log`` and ``nova-compute.log``. If no ERROR or CRITICAL
-messages appear, the most recent log entry that reports this may provide
-a hint about what has gone wrong.
-
-Adding Custom Logging Statements
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If there is not enough information in the existing logs, you may need to
-add your own custom logging statements to the ``nova-*``
-services.
-
-The source files are located in
-``/usr/lib/python2.7/dist-packages/nova``.
-
-To add logging statements, the following line should be near the top of
-the file. For most files, these should already be there:
-
-.. code-block:: python
-
-   from nova.openstack.common import log as logging
-   LOG = logging.getLogger(__name__)
-
-To add a DEBUG logging statement, you would do:
-
-.. code-block:: python
-
-   LOG.debug("This is a custom debugging statement")
-
-You may notice that all the existing logging messages are preceded by an
-underscore and surrounded by parentheses, for example:
-
-.. code-block:: python
-
-   LOG.debug(_("Logging statement appears here"))
-
-This formatting is used to support translation of logging messages into
-different languages using the
-`gettext `_
-internationalization library. You don't need to do this for your own
-custom log messages. However, if you want to contribute the code back to
-the OpenStack project that includes logging statements, you must
-surround your log messages with underscores and parentheses.
-
-RabbitMQ Web Management Interface or rabbitmqctl
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Aside from connection failures, RabbitMQ log files are generally not
-useful for debugging OpenStack related issues. Instead, we recommend you
-use the RabbitMQ web management interface. Enable it on your cloud
-controller:
-
-.. code-block:: console
-
-   # /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management
-
-.. code-block:: console
-
-   # service rabbitmq-server restart
-
-The RabbitMQ web management interface is accessible on your cloud
-controller at *http://localhost:55672*.
-
-.. note::
-
-   Ubuntu 12.04 installs RabbitMQ version 2.7.1, which uses port 55672.
-   RabbitMQ versions 3.0 and above use port 15672 instead. You can
-   check which version of RabbitMQ you have running on your local
-   Ubuntu machine by doing:
-
-   .. code-block:: console
-
-      $ dpkg -s rabbitmq-server | grep "Version:"
-      Version: 2.7.1-0ubuntu4
-
-An alternative to enabling the RabbitMQ web management interface is to
-use the ``rabbitmqctl`` commands. For example,
-:command:`rabbitmqctl list_queues | grep cinder` displays any messages left in
-the queue. If there are messages, it's a possible sign that cinder
-services didn't connect properly to rabbitmq and might have to be
-restarted.
-
-Items to monitor for RabbitMQ include the number of items in each of the
-queues and the processing time statistics for the server.
-
-Centrally Managing Logs
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Because your cloud is most likely composed of many servers, you must
-check logs on each of those servers to properly piece an event together.
-A better solution is to send the logs of all servers to a central
-location so that they can all be accessed from the same
-area.
-
-Ubuntu uses rsyslog as the default logging service. Since it is natively
-able to send logs to a remote location, you don't have to install
-anything extra to enable this feature; just modify the configuration
-file. In doing this, consider running your logging over a management
-network or using an encrypted VPN to avoid interception.
-
-rsyslog Client Configuration
-----------------------------
-
-To begin, configure all OpenStack components to log to syslog in
-addition to their standard log file location. Also configure each
-component to log to a different syslog facility. This makes it easier to
-split the logs into individual components on the central server:
-
-``nova.conf``:
-
-.. code-block:: ini
-
-   use_syslog=True
-   syslog_log_facility=LOG_LOCAL0
-
-``glance-api.conf`` and ``glance-registry.conf``:
-
-.. code-block:: ini
-
-   use_syslog=True
-   syslog_log_facility=LOG_LOCAL1
-
-``cinder.conf``:
-
-.. code-block:: ini
-
-   use_syslog=True
-   syslog_log_facility=LOG_LOCAL2
-
-``keystone.conf``:
-
-.. code-block:: ini
-
-   use_syslog=True
-   syslog_log_facility=LOG_LOCAL3
-
-By default, Object Storage logs to syslog.
-
-Next, create ``/etc/rsyslog.d/client.conf`` with the following line:
-
-.. code-block:: ini
-
-   *.* @192.168.1.10
-
-This instructs rsyslog to send all logs to the IP listed. In this
-example, the IP points to the cloud controller.
-
-rsyslog Server Configuration
-----------------------------
-
-Designate a server as the central logging server. The best practice is
-to choose a server that is solely dedicated to this purpose. Create a
-file called ``/etc/rsyslog.d/server.conf`` with the following contents:
-
-.. code-block:: ini
-
-   # Enable UDP
-   $ModLoad imudp
-   # Listen on 192.168.1.10 only
-   $UDPServerAddress 192.168.1.10
-   # Port 514
-   $UDPServerRun 514
-
-   # Create logging templates for nova
-   $template NovaFile,"/var/log/rsyslog/%HOSTNAME%/nova.log"
-   $template NovaAll,"/var/log/rsyslog/nova.log"
-
-   # Log everything else to syslog.log
-   $template DynFile,"/var/log/rsyslog/%HOSTNAME%/syslog.log"
-   *.* ?DynFile
-
-   # Log various openstack components to their own individual file
-   local0.* ?NovaFile
-   local0.* ?NovaAll
-   & ~
-
-This example configuration handles the nova service only. It first
-configures rsyslog to act as a server that runs on port 514. Next, it
-creates a series of logging templates. Logging templates control where
-received logs are stored. Using the last example, a nova log from
-c01.example.com goes to the following locations:
-
-- ``/var/log/rsyslog/c01.example.com/nova.log``
-
-- ``/var/log/rsyslog/nova.log``
-
-This is useful, as logs from c02.example.com go to:
-
-- ``/var/log/rsyslog/c02.example.com/nova.log``
-
-- ``/var/log/rsyslog/nova.log``
-
-You have an individual log file for each compute node as well as an
-aggregated log that contains nova logs from all nodes.
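-
-With all logs collected in one place, simple scripts can answer
-questions that would otherwise require logging in to every node. As a
-minimal sketch, the path layout below matches the rsyslog templates
-above; the script name and the ``' ERROR '`` match are assumptions:
-
-.. code-block:: python
-
-   #!/usr/bin/env python
-   # error_summary.py -- hypothetical helper: count ERROR lines per host
-   # in the per-host nova logs written by the rsyslog server above.
-   import collections
-   import glob
-
-
-   def summarize(pattern='/var/log/rsyslog/*/nova.log'):
-       """Return a Counter mapping hostname to its number of ERROR lines."""
-       counts = collections.Counter()
-       for path in glob.glob(pattern):
-           host = path.split('/')[-2]  # directory name is %HOSTNAME%
-           with open(path) as handle:
-               counts[host] += sum(1 for line in handle if ' ERROR ' in line)
-       return counts
-
-
-   if __name__ == '__main__':
-       for host, errors in summarize().most_common():
-           print('%-30s %d' % (host, errors))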
-
-Monitoring
-~~~~~~~~~~
-
-There are two types of monitoring: watching for problems and watching
-usage trends. The former ensures that all services are up and running,
-creating a functional cloud. The latter involves monitoring resource
-usage over time in order to make informed decisions about potential
-bottlenecks and upgrades.
-
-**Nagios** is an open source monitoring service. It's capable of executing
-arbitrary commands to check the status of server and network services,
-remotely executing arbitrary commands directly on servers, and allowing
-servers to push notifications back in the form of passive monitoring.
-Nagios has been around since 1999. Although newer monitoring services
-are available, Nagios is a tried-and-true systems administration
-staple.
-
-Process Monitoring
-------------------
-
-A basic type of alert monitoring is to simply check and see whether a
-required process is running. For example, ensure that
-the ``nova-api`` service is running on the cloud controller:
-
-.. code-block:: console
-
-   # ps aux | grep nova-api
-   nova 12786 0.0 0.0 37952 1312 ? Ss Feb11 0:00 su -s /bin/sh -c exec nova-api
-   --config-file=/etc/nova/nova.conf nova
-   nova 12787 0.0 0.1 135764 57400 ? S Feb11 0:01 /usr/bin/python
-   /usr/bin/nova-api --config-file=/etc/nova/nova.conf
-   nova 12792 0.0 0.0 96052 22856 ? S Feb11 0:01 /usr/bin/python
-   /usr/bin/nova-api --config-file=/etc/nova/nova.conf
-   nova 12793 0.0 0.3 290688 115516 ? S Feb11 1:23 /usr/bin/python
-   /usr/bin/nova-api --config-file=/etc/nova/nova.conf
-   nova 12794 0.0 0.2 248636 77068 ? S Feb11 0:04 /usr/bin/python
-   /usr/bin/nova-api --config-file=/etc/nova/nova.conf
-   root 24121 0.0 0.0 11688 912 pts/5 S+ 13:07 0:00 grep nova-api
-
-You can create automated alerts for critical processes by using Nagios
-and NRPE. For example, to ensure that the ``nova-compute`` process is
-running on compute nodes, create an alert on your Nagios server that
-looks like this:
-
-.. code-block:: none
-
-   define service {
-       host_name c01.example.com
-       check_command check_nrpe_1arg!check_nova-compute
-       use generic-service
-       notification_period 24x7
-       contact_groups sysadmins
-       service_description nova-compute
-   }
-
-Then on the actual compute node, create the following NRPE
-configuration:
-
-.. code-block:: none
-
-   command[check_nova-compute]=/usr/lib/nagios/plugins/check_procs -c 1: \
-   -a nova-compute
-
-Nagios checks that at least one ``nova-compute`` service is running at
-all times.
-
-Resource Alerting
------------------
-
-Resource alerting provides notifications when one or more resources are
-critically low. While the monitoring thresholds should be tuned to your
-specific OpenStack environment, monitoring resource usage is not
-specific to OpenStack at all—any generic type of alert will work
-fine.
-
-Some of the resources that you want to monitor include:
-
-- Disk usage
-
-- Server load
-
-- Memory usage
-
-- Network I/O
-
-- Available vCPUs
-
-For example, to monitor disk capacity on a compute node with Nagios, add
-the following to your Nagios configuration:
-
-.. code-block:: none
-
-   define service {
-       host_name c01.example.com
-       check_command check_nrpe!check_all_disks!20% 10%
-       use generic-service
-       contact_groups sysadmins
-       service_description Disk
-   }
-
-On the compute node, add the following to your NRPE configuration:
-
-.. code-block:: none
-
-   command[check_all_disks]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c \
-   $ARG2$ -e
-
-Nagios alerts you with a WARNING when any disk on the compute node is 80
-percent full and CRITICAL when 90 percent is full.
-
-StackTach
----------
-
-StackTach is a tool that collects and reports the notifications sent by
-``nova``. Notifications are essentially the same as logs but can be much
-more detailed. Nearly all OpenStack components are capable of generating
-notifications when significant events occur. Notifications are messages
-placed on the OpenStack queue (generally RabbitMQ) for consumption by
-downstream systems. An overview of notifications can be found at `System
-Usage
-Data `_.
-
-To enable ``nova`` to send notifications, add the following to
-``nova.conf``:
-
-.. code-block:: ini
-
-   notification_topics=monitor
-   notification_driver=messagingv2
-
-Once ``nova`` is sending notifications, install and configure StackTach.
-StackTach workers for queue consumption and pipeline processing are
-configured to read these notifications from RabbitMQ servers and store
-them in a database. Users can query instances, requests, and servers
-by using the browser interface or the command-line tool,
-`Stacky `_. Since StackTach is
-relatively new and constantly changing, installation instructions
-quickly become outdated. Please refer to the `StackTach Git
-repo `_ for
-instructions as well as a demo video. Additional details on the latest
-developments can be found at the `official
-page `_.
-
-Logstash
---------
-
-Logstash is a high performance indexing and search engine for logs. Logs
-from Jenkins test runs are sent to logstash where they are indexed and
-stored. Logstash facilitates reviewing logs from multiple sources in a
-single test run, searching for errors or particular events within a test
-run, and searching for log event trends across test runs.
-
-There are four major layers in a Logstash setup:
-
-- Log Pusher
-
-- Log Indexer
-
-- ElasticSearch
-
-- Kibana
-
-Each layer scales horizontally. As the number of logs grows you can add
-more log pushers, more Logstash indexers, and more ElasticSearch nodes.
-
-Logpusher is a pair of Python scripts that first listen to Jenkins
-build events and convert them into Gearman jobs. Gearman provides a
-generic application framework to farm out work to other machines or
-processes that are better suited to do the work. It allows you to do
-work in parallel, to load balance processing, and to call functions
-between languages. Later, Logpusher performs Gearman jobs to push log
-files into Logstash. The Logstash indexer reads these log events, filters
-them to remove unwanted lines, collapses multiple events together, and
-parses useful information before shipping them to ElasticSearch for
-storage and indexing. Kibana is a Logstash-oriented web client for
-ElasticSearch.
-
-OpenStack Telemetry
--------------------
-
-An integrated OpenStack project (code-named :term:`ceilometer`) collects
-metering and event data relating to OpenStack services. Data collected
-by the Telemetry service could be used for billing. Depending on the
-deployment configuration, collected data may be accessible to users.
-The Telemetry service provides a
-REST API documented at
-http://developer.openstack.org/api-ref-telemetry-v2.html. You can read
-more about the module in the `OpenStack Administrator
-Guide `_ or
-in the `developer
-documentation `_.
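-
-If Telemetry is deployed, you can also query it programmatically
-instead of calling the REST API by hand. Here is a minimal sketch
-using python-ceilometerclient; the credentials and auth URL are
-placeholder values:
-
-.. code-block:: python
-
-   # List the meters Telemetry has collected, using the v2 client API.
-   # All credentials below are placeholders for your own environment.
-   from ceilometerclient import client
-
-   cclient = client.get_client(
-       2,
-       os_username='admin',
-       os_password='secret',
-       os_tenant_name='admin',
-       os_auth_url='http://203.0.113.10:5000/v2.0')
-
-   # Each meter has a name (for example, 'cpu_util') and a unit.
-   for meter in cclient.meters.list():
-       print(meter.name, meter.unit)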
- -OpenStack-Specific Resources ----------------------------- - -Resources such as memory, disk, and CPU are generic resources that all -servers (even non-OpenStack servers) have and are important to the -overall health of the server. When dealing with OpenStack specifically, -these resources are important for a second reason: ensuring that enough -are available to launch instances. There are a few ways you can see -OpenStack resource usage.monitoring OpenStack-specific -resourcesresources generic vs. OpenStack-specificlogging/monitoring -OpenStack-specific resources The first is through the :command:`nova` command: - -.. code-block:: console - - # nova usage-list - -This command displays a list of how many instances a tenant has running -and some light usage statistics about the combined instances. This -command is useful for a quick overview of your cloud, but it doesn't -really get into a lot of details. - -Next, the ``nova`` database contains three tables that store usage -information. - -The ``nova.quotas`` and ``nova.quota_usages`` tables store quota -information. If a tenant's quota is different from the default quota -settings, its quota is stored in the ``nova.quotas`` table. For example: - -.. code-block:: mysql - - mysql> select project_id, resource, hard_limit from quotas; - +----------------------------------+-----------------------------+------------+ - | project_id | resource | hard_limit | - +----------------------------------+-----------------------------+------------+ - | 628df59f091142399e0689a2696f5baa | metadata_items | 128 | - | 628df59f091142399e0689a2696f5baa | injected_file_content_bytes | 10240 | - | 628df59f091142399e0689a2696f5baa | injected_files | 5 | - | 628df59f091142399e0689a2696f5baa | gigabytes | 1000 | - | 628df59f091142399e0689a2696f5baa | ram | 51200 | - | 628df59f091142399e0689a2696f5baa | floating_ips | 10 | - | 628df59f091142399e0689a2696f5baa | instances | 10 | - | 628df59f091142399e0689a2696f5baa | volumes | 10 | - | 628df59f091142399e0689a2696f5baa | cores | 20 | - +----------------------------------+-----------------------------+------------+ - -The ``nova.quota_usages`` table keeps track of how many resources the -tenant currently has in use: - -.. code-block:: mysql - - mysql> select project_id, resource, in_use from quota_usages where project_id like '628%'; - +----------------------------------+--------------+--------+ - | project_id | resource | in_use | - +----------------------------------+--------------+--------+ - | 628df59f091142399e0689a2696f5baa | instances | 1 | - | 628df59f091142399e0689a2696f5baa | ram | 512 | - | 628df59f091142399e0689a2696f5baa | cores | 1 | - | 628df59f091142399e0689a2696f5baa | floating_ips | 1 | - | 628df59f091142399e0689a2696f5baa | volumes | 2 | - | 628df59f091142399e0689a2696f5baa | gigabytes | 12 | - | 628df59f091142399e0689a2696f5baa | images | 1 | - +----------------------------------+--------------+--------+ - -By comparing a tenant's hard limit with their current resource usage, -you can see their usage percentage. For example, if this tenant is using -1 floating IP out of 10, then they are using 10 percent of their -floating IP quota. Rather than doing the calculation manually, you can -use SQL or the scripting language of your choice and create a formatted -report: - -.. 
code-block:: mysql - - +----------------------------------+------------+-------------+---------------+ - | some_tenant | - +-----------------------------------+------------+------------+---------------+ - | Resource | Used | Limit | | - +-----------------------------------+------------+------------+---------------+ - | cores | 1 | 20 | 5 % | - | floating_ips | 1 | 10 | 10 % | - | gigabytes | 12 | 1000 | 1 % | - | images | 1 | 4 | 25 % | - | injected_file_content_bytes | 0 | 10240 | 0 % | - | injected_file_path_bytes | 0 | 255 | 0 % | - | injected_files | 0 | 5 | 0 % | - | instances | 1 | 10 | 10 % | - | key_pairs | 0 | 100 | 0 % | - | metadata_items | 0 | 128 | 0 % | - | ram | 512 | 51200 | 1 % | - | reservation_expire | 0 | 86400 | 0 % | - | security_group_rules | 0 | 20 | 0 % | - | security_groups | 0 | 10 | 0 % | - | volumes | 2 | 10 | 20 % | - +-----------------------------------+------------+------------+---------------+ - -The preceding information was generated by using a custom script that -can be found on -`GitHub `_. - -.. note:: - - This script is specific to a certain OpenStack installation and must - be modified to fit your environment. However, the logic should - easily be transferable. - -Intelligent Alerting --------------------- - -Intelligent alerting can be thought of as a form of continuous -integration for operations. For example, you can easily check to see -whether the Image service is up and running by ensuring that -the ``glance-api`` and ``glance-registry`` processes are running or by -seeing whether ``glace-api`` is responding on port 9292. - -But how can you tell whether images are being successfully uploaded to -the Image service? Maybe the disk that Image service is storing the -images on is full or the S3 back end is down. You could naturally check -this by doing a quick image upload: - -.. code-block:: bash - - #!/bin/bash - # - # assumes that reasonable credentials have been stored at - # /root/auth - - - . /root/openrc - wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img - glance image-create --name='cirros image' --is-public=true - --container-format=bare --disk-format=qcow2 < cirros-0.3.4-x8 - 6_64-disk.img - -By taking this script and rolling it into an alert for your monitoring -system (such as Nagios), you now have an automated way of ensuring that -image uploads to the Image Catalog are working. - -.. note:: - - You must remove the image after each test. Even better, test whether - you can successfully delete an image from the Image service. - -Intelligent alerting takes considerably more time to plan and implement -than the other alerts described in this chapter. A good outline to -implement intelligent alerting is: - -- Review common actions in your cloud. - -- Create ways to automatically test these actions. - -- Roll these tests into an alerting system. - -Some other examples for Intelligent Alerting include: - -- Can instances launch and be destroyed? - -- Can users be created? - -- Can objects be stored and deleted? - -- Can volumes be created and destroyed? - -Trending --------- - -Trending can give you great insight into how your cloud is performing -day to day. You can learn, for example, if a busy day was simply a rare -occurrence or if you should start adding new compute nodes. - -Trending takes a slightly different approach than alerting. While -alerting is interested in a binary result (whether a check succeeds or -fails), trending records the current state of something at a certain -point in time. 
Once enough points in time have been recorded, you can -see how the value has changed over time. - -All of the alert types mentioned earlier can also be used for trend -reporting. Some other trend examples include: - -- The number of instances on each compute node - -- The types of flavors in use - -- The number of volumes in use - -- The number of Object Storage requests each hour - -- The number of ``nova-api`` requests each hour - -- The I/O statistics of your storage services - -As an example, recording ``nova-api`` usage can allow you to track the -need to scale your cloud controller. By keeping an eye on ``nova-api`` -requests, you can determine whether you need to spawn more ``nova-api`` -processes or go as far as introducing an entirely new server to run -``nova-api``. To get an approximate count of the requests, look for -standard INFO messages in ``/var/log/nova/nova-api.log``: - -.. code-block:: console - - # grep INFO /var/log/nova/nova-api.log | wc - -You can obtain further statistics by looking for the number of -successful requests: - -.. code-block:: console - - # grep " 200 " /var/log/nova/nova-api.log | wc - -By running this command periodically and keeping a record of the result, -you can create a trending report over time that shows whether your -``nova-api`` usage is increasing, decreasing, or keeping steady. - -A tool such as **collectd** can be used to store this information. While -collectd is out of the scope of this book, a good starting point would -be to use collectd to store the result as a COUNTER data type. More -information can be found in `collectd's -documentation `_. - -Summary -~~~~~~~ - -For stable operations, you want to detect failure promptly and determine -causes efficiently. With a distributed system, it's even more important -to track the right items to meet a service-level target. Learning where -these logs are located in the file system or API gives you an advantage. -This chapter also showed how to read, interpret, and manipulate -information from OpenStack services so that you can monitor effectively. diff --git a/doc/ops-guide/source/ops_maintenance.rst b/doc/ops-guide/source/ops_maintenance.rst deleted file mode 100644 index 18142830..00000000 --- a/doc/ops-guide/source/ops_maintenance.rst +++ /dev/null @@ -1,1050 +0,0 @@ -==================================== -Maintenance, Failures, and Debugging -==================================== - -Downtime, whether planned or unscheduled, is a certainty when running a -cloud. This chapter aims to provide useful information for dealing -proactively, or reactively, with these occurrences. - -Cloud Controller and Storage Proxy Failures and Maintenance -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The cloud controller and storage proxy are very similar to each other -when it comes to expected and unexpected downtime. One of each server -type typically runs in the cloud, which makes them very noticeable when -they are not running. - -For the cloud controller, the good news is if your cloud is using the -FlatDHCP multi-host HA network mode, existing instances and volumes -continue to operate while the cloud controller is offline. For the -storage proxy, however, no storage traffic is possible until it is back -up and running. - -Planned Maintenance -------------------- - -One way to plan for cloud controller or storage proxy maintenance is to -simply do it off-hours, such as at 1 a.m. or 2 a.m. This strategy -affects fewer users. 
If your cloud controller or storage proxy is too
important to have unavailable at any point in time, you must look into
high-availability options.

Rebooting a Cloud Controller or Storage Proxy
---------------------------------------------

All in all, just issue the :command:`reboot` command. The operating system
cleanly shuts down services and then automatically reboots. If you want
to be very thorough, run your backup jobs just before you
reboot.

After a cloud controller reboots, ensure that all required services were
successfully started. The following commands use :command:`ps` and
:command:`grep` to determine if nova, glance, keystone, and cinder are
currently running:

.. code-block:: console

   # ps aux | grep nova-
   # ps aux | grep glance-
   # ps aux | grep keystone
   # ps aux | grep cinder

Also check that all services are functioning. The following set of
commands sources the ``openrc`` file, then runs some basic glance, nova,
and openstack commands. If the commands work as expected, you can be
confident that those services are in working condition:

.. code-block:: console

   # source openrc
   # glance index
   # nova list
   # openstack project list

For the storage proxy, ensure that the :term:`Object Storage service` has
resumed:

.. code-block:: console

   # ps aux | grep swift

Also check that it is functioning:

.. code-block:: console

   # swift stat

Total Cloud Controller Failure
------------------------------

The cloud controller could completely fail if, for example, its
motherboard goes bad. Users will immediately notice the loss of a cloud
controller since it provides core functionality to your cloud
environment. If your infrastructure monitoring does not alert you that
your cloud controller has failed, your users definitely will.
Unfortunately, this is a rough situation. The cloud controller is an
integral part of your cloud. If you have only one controller, you will
have many missing services if it goes down.

To avoid this situation, create a highly available cloud controller
cluster. This is outside the scope of this document, but you can read
more in the `OpenStack High Availability
Guide `_.

The next best approach is to use a configuration-management tool, such
as Puppet, to automatically build a cloud controller. This should not
take more than 15 minutes if you have a spare server available. After
the controller rebuilds, restore any backups taken
(see :doc:`ops_backup_recovery`).

Also, in practice, the ``nova-compute`` services on the compute nodes do
not always reconnect cleanly to rabbitmq hosted on the controller when
it comes back up after a long reboot; a restart of the nova services on
the compute nodes is required.

Compute Node Failures and Maintenance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes a compute node either crashes unexpectedly or requires a
reboot for maintenance reasons.

If you need to reboot a compute node due to planned maintenance (such as
a software or hardware upgrade), first ensure that all hosted instances
have been moved off the node. If your cloud is utilizing shared storage,
use the :command:`nova live-migration` command. First, get a list of instances
that need to be moved:

.. code-block:: console

   # nova list --host c01.example.com --all-tenants

Next, migrate them one by one:

.. code-block:: console

   # nova live-migration <uuid> c02.example.com

If you are not using shared storage, you can use the
:option:`--block-migrate` option:

.. code-block:: console

   # nova live-migration --block-migrate <uuid> c02.example.com
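When a node hosts many instances, the listing and migration commands
above can be combined into a small loop. The following is a minimal
sketch, not part of the original procedure: it assumes admin credentials
are already sourced, that shared storage is in use (add
``--block-migrate`` otherwise), and that the host names match the
examples above:

.. code-block:: bash

   #!/bin/bash
   # Sketch: live-migrate every instance off one compute node.
   SRC_HOST=c01.example.com
   DST_HOST=c02.example.com

   # The second pipe-delimited column of `nova list` is the instance ID;
   # a 36-character value there is a UUID row (headers and borders are not).
   nova list --host "$SRC_HOST" --all-tenants \
       | awk -F'|' '{ gsub(/ /, "", $2); if (length($2) == 36) print $2 }' \
       | while read -r uuid; do
           echo "Live migrating ${uuid} to ${DST_HOST}"
           nova live-migration "${uuid}" "${DST_HOST}"
         done
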
After you have migrated all instances, ensure that the ``nova-compute``
service has stopped:

.. code-block:: console

   # stop nova-compute

If you use a configuration-management system, such as Puppet, that
ensures the ``nova-compute`` service is always running, you can
temporarily move the ``init`` files:

.. code-block:: console

   # mkdir /root/tmp
   # mv /etc/init/nova-compute.conf /root/tmp
   # mv /etc/init.d/nova-compute /root/tmp

Next, shut down your compute node, perform your maintenance, and turn
the node back on. You can reenable the ``nova-compute`` service by
undoing the previous commands:

.. code-block:: console

   # mv /root/tmp/nova-compute.conf /etc/init
   # mv /root/tmp/nova-compute /etc/init.d/

Then start the ``nova-compute`` service:

.. code-block:: console

   # start nova-compute

You can now optionally migrate the instances back to their original
compute node.

After a Compute Node Reboots
----------------------------

When you reboot a compute node, first verify that it booted
successfully. This includes ensuring that the ``nova-compute`` service
is running:

.. code-block:: console

   # ps aux | grep nova-compute
   # status nova-compute

Also ensure that it has successfully connected to the AMQP server:

.. code-block:: console

   # grep AMQP /var/log/nova/nova-compute.log
   2013-02-26 09:51:31 12427 INFO nova.openstack.common.rpc.common [-] Connected to AMQP server on 199.116.232.36:5672

After the compute node is successfully running, you must deal with the
instances that are hosted on that compute node because none of them are
running. Depending on your SLA with your users or customers, you might
have to start each instance and ensure that they start correctly.

Instances
---------

You can create a list of instances that are hosted on the compute node
by performing the following command:

.. code-block:: console

   # nova list --host c01.example.com --all-tenants

After you have the list, you can use the :command:`nova` command to start each
instance:

.. code-block:: console

   # nova reboot <uuid>

.. note::

   Any time an instance shuts down unexpectedly, it might have problems
   on boot. For example, the instance might require an ``fsck`` on the
   root partition. If this happens, the user can use the dashboard VNC
   console to fix this.

If an instance does not boot, meaning ``virsh list`` never shows the
instance as even attempting to boot, do the following on the compute
node:

.. code-block:: console

   # tail -f /var/log/nova/nova-compute.log

Try executing the :command:`nova reboot` command again. You should see an
error message about why the instance was not able to boot.

In most cases, the error is the result of something in libvirt's XML
file (``/etc/libvirt/qemu/instance-xxxxxxxx.xml``) that no longer
exists. You can enforce re-creation of the XML file as well as rebooting
the instance by running the following command:

.. code-block:: console

   # nova reboot --hard <uuid>

Inspecting and Recovering Data from Failed Instances
----------------------------------------------------

In some scenarios, instances are running but are inaccessible through
SSH and do not respond to any command. The VNC console could be
displaying a boot failure or kernel panic error messages. This could be
an indication of file system corruption on the VM itself. If you need to
recover files or inspect the content of the instance, qemu-nbd can be
used to mount the disk.

.. warning::

   If you access or view the user's content and data, get approval
   first!

To access the instance's disk
(``/var/lib/nova/instances/instance-xxxxxx/disk``), use the following
steps:

#. Suspend the instance using the ``virsh`` command.

#. Connect the qemu-nbd device to the disk.

#. Mount the qemu-nbd device.

#. Unmount the device after inspecting.

#. Disconnect the qemu-nbd device.

#. Resume the instance.

If you do not follow the last three steps, OpenStack Compute cannot manage
the instance any longer. It fails to respond to any command issued by
OpenStack Compute, and it is marked as shut down.

Once you mount the disk file, you should be able to access it and treat
it as a collection of normal directories with files and a directory
structure. However, we do not recommend that you edit or touch any files
because this could change the
:term:`access control lists (ACLs) <access control list (ACL)>` that are used
to determine which accounts can perform what operations on files and
directories. Changing ACLs can make the instance unbootable if it is not
already.

#. Suspend the instance using the :command:`virsh` command, taking note of the
   internal ID:

   .. code-block:: console

      # virsh list
       Id Name                 State
      ----------------------------------
        1 instance-00000981    running
        2 instance-000009f5    running
       30 instance-0000274a    running

      # virsh suspend 30
      Domain 30 suspended

#. Connect the qemu-nbd device to the disk:

   .. code-block:: console

      # cd /var/lib/nova/instances/instance-0000274a
      # ls -lh
      total 33M
      -rw-rw---- 1 libvirt-qemu kvm  6.3K Oct 15 11:31 console.log
      -rw-r--r-- 1 libvirt-qemu kvm   33M Oct 15 22:06 disk
      -rw-r--r-- 1 libvirt-qemu kvm  384K Oct 15 22:06 disk.local
      -rw-rw-r-- 1 nova         nova 1.7K Oct 15 11:30 libvirt.xml
      # qemu-nbd -c /dev/nbd0 `pwd`/disk

#. Mount the qemu-nbd device.

   The qemu-nbd device tries to export the instance disk's different
   partitions as separate devices. For example, if vda is the disk and
   vda1 is the root partition, qemu-nbd exports the device as
   ``/dev/nbd0`` and ``/dev/nbd0p1``, respectively:

   .. code-block:: console

      # mount /dev/nbd0p1 /mnt/

   You can now access the contents of ``/mnt``, which correspond to the
   first partition of the instance's disk.

   To examine the secondary or ephemeral disk, use an alternate mount
   point if you want both primary and secondary drives mounted at the
   same time:

   .. code-block:: console

      # umount /mnt
      # qemu-nbd -c /dev/nbd1 `pwd`/disk.local
      # mount /dev/nbd1 /mnt/

   .. code-block:: console

      # ls -lh /mnt/
      total 76K
      lrwxrwxrwx.  1 root root    7 Oct 15 00:44 bin -> usr/bin
      dr-xr-xr-x.  4 root root 4.0K Oct 15 01:07 boot
      drwxr-xr-x.  2 root root 4.0K Oct 15 00:42 dev
      drwxr-xr-x. 70 root root 4.0K Oct 15 11:31 etc
      drwxr-xr-x.  3 root root 4.0K Oct 15 01:07 home
      lrwxrwxrwx.  1 root root    7 Oct 15 00:44 lib -> usr/lib
      lrwxrwxrwx.  1 root root    9 Oct 15 00:44 lib64 -> usr/lib64
      drwx------.  2 root root  16K Oct 15 00:42 lost+found
      drwxr-xr-x.  2 root root 4.0K Feb  3  2012 media
      drwxr-xr-x.  2 root root 4.0K Feb  3  2012 mnt
      drwxr-xr-x.  2 root root 4.0K Feb  3  2012 opt
      drwxr-xr-x.  2 root root 4.0K Oct 15 00:42 proc
      dr-xr-x---.  3 root root 4.0K Oct 15 21:56 root
      drwxr-xr-x. 14 root root 4.0K Oct 15 01:07 run
      lrwxrwxrwx.  1 root root    8 Oct 15 00:44 sbin -> usr/sbin
      drwxr-xr-x.  2 root root 4.0K Feb  3  2012 srv
      drwxr-xr-x.  2 root root 4.0K Oct 15 00:42 sys
      drwxrwxrwt.  9 root root 4.0K Oct 15 16:29 tmp
      drwxr-xr-x. 13 root root 4.0K Oct 15 00:44 usr
      drwxr-xr-x. 17 root root 4.0K Oct 15 00:44 var

#. Once you have completed the inspection, unmount the mount point and
   release the qemu-nbd device:

   .. code-block:: console

      # umount /mnt
      # qemu-nbd -d /dev/nbd0
      /dev/nbd0 disconnected

#. Resume the instance using :command:`virsh`:

   .. code-block:: console

      # virsh list
       Id Name                 State
      ----------------------------------
        1 instance-00000981    running
        2 instance-000009f5    running
       30 instance-0000274a    paused

      # virsh resume 30
      Domain 30 resumed

.. _volumes:

Volumes
-------

If the affected instances also had attached volumes, first generate a
list of instance and volume UUIDs:

.. code-block:: console

   mysql> select nova.instances.uuid as instance_uuid,
          cinder.volumes.id as volume_uuid, cinder.volumes.status,
          cinder.volumes.attach_status, cinder.volumes.mountpoint,
          cinder.volumes.display_name from cinder.volumes
          inner join nova.instances on cinder.volumes.instance_uuid=nova.instances.uuid
          where nova.instances.host = 'c01.example.com';

You should see a result similar to the following:

.. code-block:: console

   +--------------+------------+-------+--------------+-----------+--------------+
   |instance_uuid |volume_uuid |status |attach_status |mountpoint | display_name |
   +--------------+------------+-------+--------------+-----------+--------------+
   |9b969a05      |1f0fbf36    |in-use |attached      |/dev/vdc   | test         |
   +--------------+------------+-------+--------------+-----------+--------------+
   1 row in set (0.00 sec)

Next, manually detach and reattach the volumes, where X is the proper
mount point:

.. code-block:: console

   # nova volume-detach <instance_uuid> <volume_uuid>
   # nova volume-attach <instance_uuid> <volume_uuid> /dev/vdX

Be sure that the instance has successfully booted and is at a login
screen before doing the above.

Total Compute Node Failure
--------------------------

Compute nodes can fail the same way a cloud controller can fail. A
motherboard failure or some other type of hardware failure can cause an
entire compute node to go offline. When this happens, all instances
running on that compute node will not be available. Just like with a
cloud controller failure, if your infrastructure monitoring does not
detect a failed compute node, your users will notify you because of
their lost instances.

If a compute node fails and won't be fixed for a few hours (or at all),
you can relaunch all instances that are hosted on the failed node if you
use shared storage for ``/var/lib/nova/instances``.

To do this, generate a list of instance UUIDs that are hosted on the
failed node by running the following query on the nova database:

.. code-block:: console

   mysql> select uuid from instances where host = \
          'c01.example.com' and deleted = 0;

Next, update the nova database to indicate that all instances that used
to be hosted on c01.example.com are now hosted on c02.example.com:

.. code-block:: console

   mysql> update instances set host = 'c02.example.com' where host = \
          'c01.example.com' and deleted = 0;
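Before moving on, it can be worth confirming that the relocation took
effect. This quick check is not part of the original procedure; it is a
minimal sketch that simply counts non-deleted instances per host, so the
failed host should report no rows:

.. code-block:: console

   mysql> select host, count(*) from instances where deleted = 0 \
          group by host;
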
If you're using the Networking service ML2 plug-in, update the
Networking service database to indicate that all ports that used to be
hosted on c01.example.com are now hosted on c02.example.com:

.. code-block:: console

   mysql> update ml2_port_bindings set host = 'c02.example.com' where host = \
          'c01.example.com';

.. code-block:: console

   mysql> update ml2_port_binding_levels set host = 'c02.example.com' where host = \
          'c01.example.com';

After that, use the :command:`nova` command to reboot all instances that were
on c01.example.com while regenerating their XML files at the same time:

.. code-block:: console

   # nova reboot --hard <uuid>

Finally, reattach volumes using the same method described in the section
:ref:`volumes`.

/var/lib/nova/instances
-----------------------

It's worth mentioning this directory in the context of failed compute
nodes. This directory contains the libvirt KVM file-based disk images
for the instances that are hosted on that compute node. If you are not
running your cloud in a shared storage environment, this directory is
unique across all compute nodes.

``/var/lib/nova/instances`` contains two types of directories.

The first is the ``_base`` directory. This contains all the cached base
images from glance for each unique image that has been launched on that
compute node. Files ending in ``_20`` (or a different number) are the
ephemeral base images.

The other directories are titled ``instance-xxxxxxxx``. These
directories correspond to instances running on that compute node. The
files inside are related to one of the files in the ``_base`` directory.
They're essentially differential-based files containing only the changes
made from the original ``_base`` directory.

All files and directories in ``/var/lib/nova/instances`` are uniquely
named. The files in ``_base`` are uniquely titled for the glance image that
they are based on, and the directory names ``instance-xxxxxxxx`` are
uniquely titled for that particular instance. For example, if you copy
all data from ``/var/lib/nova/instances`` on one compute node to
another, you do not overwrite any files or cause any damage to images
that have the same unique name, because they are essentially the same
file.

Although this method is not documented or supported, you can use it when
your compute node is permanently offline but you have instances locally
stored on it.

Storage Node Failures and Maintenance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Because of the high redundancy of Object Storage, dealing with object
storage node issues is a lot easier than dealing with compute node
issues.

Rebooting a Storage Node
------------------------

If a storage node requires a reboot, simply reboot it. Requests for data
hosted on that node are redirected to other copies while the server is
rebooting.

If you need to shut down a storage node for an extended period of time
(one or more days), consider removing the node from the storage ring.
For example:

.. code-block:: console

   # swift-ring-builder account.builder remove <ip address of storage node>
   # swift-ring-builder container.builder remove <ip address of storage node>
   # swift-ring-builder object.builder remove <ip address of storage node>
   # swift-ring-builder account.builder rebalance
   # swift-ring-builder container.builder rebalance
   # swift-ring-builder object.builder rebalance

Next, redistribute the ring files to the other nodes:

.. code-block:: console

   # for i in s01.example.com s02.example.com s03.example.com
   > do
   > scp *.ring.gz $i:/etc/swift
   > done

These actions effectively take the storage node out of the storage
cluster.

When the node is able to rejoin the cluster, just add it back to the
ring. The exact syntax you use to add a node to your swift cluster with
``swift-ring-builder`` depends heavily on the options used when you
originally created your cluster. Please refer back to those commands.
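If the original commands are no longer at hand, the general shape of the
add operation looks like the following sketch. The region and zone
numbers, port, device name, and weight shown here are purely
illustrative and must match the values your ring was built with; the
same add and rebalance must also be run for the account and container
builders:

.. code-block:: console

   # swift-ring-builder object.builder add r1z1-<ip address of storage node>:6000/sdb 100
   # swift-ring-builder object.builder rebalance
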
Replacing a Swift Disk
----------------------

If a hard drive fails in an Object Storage node, replacing it is
relatively easy. This assumes that your Object Storage environment is
configured correctly, where the data that is stored on the failed drive
is also replicated to other drives in the Object Storage environment.

This example assumes that ``/dev/sdb`` has failed.

First, unmount the disk:

.. code-block:: console

   # umount /dev/sdb

Next, physically remove the disk from the server and replace it with a
working disk.

Ensure that the operating system has recognized the new disk:

.. code-block:: console

   # dmesg | tail

You should see a message about ``/dev/sdb``.

Because it is recommended not to use partitions on a swift disk, simply
format the disk as a whole:

.. code-block:: console

   # mkfs.xfs /dev/sdb

Finally, mount the disk:

.. code-block:: console

   # mount -a

Swift should notice the new disk and that no data exists. It then begins
replicating the data to the disk from the other existing replicas.

Handling a Complete Failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~

A common way of dealing with the recovery from a full system failure,
such as a power outage of a data center, is to assign each service a
priority and restore in order. The following table shows an example.

.. list-table:: Example service restoration priority list
   :widths: 50 50
   :header-rows: 1

   * - Priority
     - Services
   * - 1
     - Internal network connectivity
   * - 2
     - Backing storage services
   * - 3
     - Public network connectivity for user virtual machines
   * - 4
     - ``nova-compute``, ``nova-network``, cinder hosts
   * - 5
     - User virtual machines
   * - 10
     - Message queue and database services
   * - 15
     - Keystone services
   * - 20
     - ``cinder-scheduler``
   * - 21
     - Image Catalog and Delivery services
   * - 22
     - ``nova-scheduler`` services
   * - 98
     - ``cinder-api``
   * - 99
     - ``nova-api`` services
   * - 100
     - Dashboard node

Use this example priority list to ensure that user-affected services are
restored as soon as possible, but not before a stable environment is in
place. Of course, despite being listed as a single-line item, each step
requires significant work. For example, just after starting the
database, you should check its integrity, or, after starting the nova
services, you should verify that the hypervisor matches the database and
fix any mismatches.

Configuration Management
~~~~~~~~~~~~~~~~~~~~~~~~

Maintaining an OpenStack cloud requires that you manage multiple
physical servers, and this number might grow over time. Because managing
nodes manually is error prone, we strongly recommend that you use a
configuration-management tool. These tools automate the process of
ensuring that all your nodes are configured properly and encourage you
to maintain your configuration information (such as packages and
configuration options) in a version-controlled repository.

.. note::

   Several configuration-management tools are available, and this guide
   does not recommend a specific one. The two most popular ones in the
   OpenStack community are `Puppet `_, with
   available `OpenStack Puppet
   modules `_; and
   `Chef `_, with available `OpenStack
   Chef recipes `_.
   Other newer configuration tools include
   `Juju `_,
   `Ansible `_, and
   `Salt `_; and more mature configuration
   management tools include `CFEngine `_ and
   `Bcfg2 `_.

Working with Hardware
~~~~~~~~~~~~~~~~~~~~~

As with your initial deployment, you should ensure that all hardware is
appropriately burned in before adding it to production. Run software
that uses the hardware to its limits: maxing out RAM, CPU, disk, and
network. Many options are available, and normally double as benchmark
software, so you also get a good idea of the performance of your
system.

Adding a Compute Node
---------------------

If you find that you have reached or are reaching the capacity limit of
your computing resources, you should plan to add additional compute
nodes. Adding more nodes is quite easy. The process for adding compute
nodes is the same as when the initial compute nodes were deployed to
your cloud: use an automated deployment system to bootstrap the
bare-metal server with the operating system and then have a
configuration-management system install and configure OpenStack Compute.
Once the Compute service has been installed and configured in the same
way as the other compute nodes, it automatically attaches itself to the
cloud. The cloud controller notices the new node(s) and begins
scheduling instances to launch there.

If your OpenStack Block Storage nodes are separate from your compute
nodes, the same procedure still applies because the same queuing and
polling system is used in both services.

We recommend that you use the same hardware for new compute and block
storage nodes. At the very least, ensure that the CPUs in your compute
nodes are similar, so as not to break live migration.

Adding an Object Storage Node
-----------------------------

Adding a new object storage node is different from adding compute or
block storage nodes. You still want to initially configure the server by
using your automated deployment and configuration-management systems.
After that is done, you need to add the local disks of the object
storage node into the object storage ring. The exact command to do this
is the same command that was used to add the initial disks to the ring.
Simply rerun this command on the object storage proxy server for all
disks on the new object storage node. Once this has been done, rebalance
the ring and copy the resulting ring files to the other storage nodes.

.. note::

   If your new object storage node has a different number of disks than
   the original nodes have, the command to add the new node is
   different from the original commands. These parameters vary from
   environment to environment.

Replacing Components
--------------------

Hardware failures are common in large-scale deployments such as an
infrastructure cloud. Consider your processes and balance time saving
against availability. For example, an Object Storage cluster can easily
live with dead disks in it for some period of time if it has sufficient
capacity. Or, if your compute installation is not full, you could
consider live migrating instances off a host with a RAM failure until
you have time to deal with the problem.

Databases
~~~~~~~~~

Almost all OpenStack components have an underlying database to store
persistent information. Usually this database is MySQL. Normal MySQL
administration is applicable to these databases. OpenStack does not
configure the databases out of the ordinary.
Basic administration
includes performance tweaking, high availability, backup, recovery, and
repairing. For more information, see a standard MySQL administration guide.

You can perform a couple of tricks with the database to either more
quickly retrieve information or fix a data inconsistency error; for
example, an instance was terminated, but the status was not updated in
the database. These tricks are discussed throughout this book.

Database Connectivity
---------------------

Review the component's configuration file to see how each OpenStack
component accesses its corresponding database. Look for either
``sql_connection`` or simply ``connection``. The following command uses
``grep`` to display the SQL connection string for nova, glance, cinder,
and keystone:

.. code-block:: console

   # grep -hE "connection ?=" /etc/nova/nova.conf /etc/glance/glance-*.conf \
     /etc/cinder/cinder.conf /etc/keystone/keystone.conf
   sql_connection = mysql+pymysql://nova:nova@cloud.alberta.sandbox.cybera.ca/nova
   sql_connection = mysql+pymysql://glance:password@cloud.example.com/glance
   sql_connection = mysql+pymysql://glance:password@cloud.example.com/glance
   sql_connection = mysql+pymysql://cinder:password@cloud.example.com/cinder
   connection = mysql+pymysql://keystone_admin:password@cloud.example.com/keystone

The connection strings take this format:

.. code-block:: console

   mysql+pymysql://<username>:<password>@<hostname>/<database name>

Performance and Optimizing
--------------------------

As your cloud grows, MySQL is utilized more and more. If you suspect
that MySQL might be becoming a bottleneck, you should start researching
MySQL optimization. The MySQL manual has an entire section dedicated to
this topic: `Optimization
Overview `_.

HDWMY
~~~~~

Here's a quick list of various to-do items for each hour, day, week,
month, and year. Please note that these tasks are neither required nor
definitive but helpful ideas:

Hourly
------

- Check your monitoring system for alerts and act on them.

- Check your ticket queue for new tickets.

Daily
-----

- Check for instances in a failed or weird state and investigate why.

- Check for security patches and apply them as needed.

Weekly
------

- Check cloud usage:

  - User quotas

  - Disk space

  - Image usage

  - Large instances

  - Network usage (bandwidth and IP usage)

- Verify your alert mechanisms are still working.

Monthly
-------

- Check usage and trends over the past month.

- Check for user accounts that should be removed.

- Check for operator accounts that should be removed.

Quarterly
---------

- Review usage and trends over the past quarter.

- Prepare any quarterly reports on usage and statistics.

- Review and plan any necessary cloud additions.

- Review and plan any major OpenStack upgrades.

Semiannually
------------

- Upgrade OpenStack.

- Clean up after an OpenStack upgrade (any unused or new services to be
  aware of?).

Determining Which Component Is Broken
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack's collection of different components interact with each other
strongly. For example, uploading an image requires interaction from
``nova-api``, ``glance-api``, ``glance-registry``, keystone, and
potentially ``swift-proxy``. As a result, it is sometimes difficult to
determine exactly where problems lie. Assisting in this is the purpose
of this section.
- -Tailing Logs ------------- - -The first place to look is the log file related to the command you are -trying to run. For example, if ``nova list`` is failing, try tailing a -nova log file and running the command again: - -Terminal 1: - -.. code-block:: console - - # tail -f /var/log/nova/nova-api.log - -Terminal 2: - -.. code-block:: console - - # nova list - -Look for any errors or traces in the log file. For more information, see -:doc:`ops_logging_monitoring`. - -If the error indicates that the problem is with another component, -switch to tailing that component's log file. For example, if nova cannot -access glance, look at the ``glance-api`` log: - -Terminal 1: - -.. code-block:: console - - # tail -f /var/log/glance/api.log - -Terminal 2: - -.. code-block:: console - - # nova list - -Wash, rinse, and repeat until you find the core cause of the problem. - -Running Daemons on the CLI --------------------------- - -Unfortunately, sometimes the error is not apparent from the log files. -In this case, switch tactics and use a different command; maybe run the -service directly on the command line. For example, if the ``glance-api`` -service refuses to start and stay running, try launching the daemon from -the command line: - -.. code-block:: console - - # sudo -u glance -H glance-api - -This might print the error and cause of the problem. - -.. note:: - - The ``-H`` flag is required when running the daemons with sudo - because some daemons will write files relative to the user's home - directory, and this write may fail if ``-H`` is left off. - -**Example of Complexity** - -One morning, a compute node failed to run any instances. The log files -were a bit vague, claiming that a certain instance was unable to be -started. This ended up being a red herring because the instance was -simply the first instance in alphabetical order, so it was the first -instance that ``nova-compute`` would touch. - -Further troubleshooting showed that libvirt was not running at all. This -made more sense. If libvirt wasn't running, then no instance could be -virtualized through KVM. Upon trying to start libvirt, it would silently -die immediately. The libvirt logs did not explain why. - -Next, the ``libvirtd`` daemon was run on the command line. Finally a -helpful error message: it could not connect to d-bus. As ridiculous as -it sounds, libvirt, and thus ``nova-compute``, relies on d-bus and -somehow d-bus crashed. Simply starting d-bus set the entire chain back -on track, and soon everything was back up and running. - -What to do when things are running slowly -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -When you are getting slow responses from various services, it can be -hard to know where to start looking. The first thing to check is the -extent of the slowness: is it specific to a single service, or varied -among different services? If your problem is isolated to a specific -service, it can temporarily be fixed by restarting the service, but that -is often only a fix for the symptom and not the actual problem. - -This is a collection of ideas from experienced operators on common -things to look at that may be the cause of slowness. It is not, however, -designed to be an exhaustive list. - -OpenStack Identity service --------------------------- - -If OpenStack :term:`Identity service` is responding slowly, it could be due -to the token table getting large. This can be fixed by running the -:command:`keystone-manage token_flush` command. 
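Because the token table grows continuously, flushing is usually worth
automating rather than running by hand. The following crontab entry is a
minimal sketch, not from the original text; the schedule, user, and log
path are illustrative:

.. code-block:: console

   # crontab -e -u keystone
   @hourly /usr/bin/keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1
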
Additionally, for Identity-related issues, try the tips
in :ref:`sql_backend`.

OpenStack Image service
-----------------------

OpenStack :term:`Image service` can be slowed down by things related to the
Identity service, but the Image service itself can be slowed down if
connectivity to the back-end storage in use is slow or otherwise
problematic. For example, your back-end NFS server might have gone down.

OpenStack Block Storage service
-------------------------------

OpenStack :term:`Block Storage service` is similar to the Image service, so
start by checking Identity-related services, and the back-end storage.
Additionally, both the Block Storage and Image services rely on AMQP and
SQL functionality, so consider these when debugging.

OpenStack Compute service
-------------------------

Services related to OpenStack Compute are normally fairly fast and rely
on a couple of back-end services: Identity (for authentication and
authorization) and AMQP (for interoperability). Any slowness related to
services is normally related to one of these. Also, as with all other
services, SQL is used extensively.

OpenStack Networking service
----------------------------

Slowness in the OpenStack :term:`Networking service` can be caused by services
that it relies upon, but it can also be related to either physical or
virtual networking. For example: network namespaces that do not exist or
are not tied to interfaces correctly; DHCP daemons that have hung or are
not running; a cable being physically disconnected; a switch not being
configured correctly. When debugging Networking service problems, begin
by verifying all physical networking functionality (switch
configuration, physical cabling, etc.). After the physical networking is
verified, check to be sure all of the Networking services are running
(neutron-server, neutron-dhcp-agent, etc.), then check on AMQP and SQL
back ends.

AMQP broker
-----------

Regardless of which AMQP broker you use, such as RabbitMQ, there are
common issues which not only slow down operations, but can also cause
real problems. Sometimes messages queued for services stay on the queues
and are not consumed. This can be due to dead or stagnant services and
can commonly be cleared up by either restarting the AMQP-related
services or the OpenStack service in question.

.. _sql_backend:

SQL back end
------------

Whether you use SQLite or an RDBMS (such as MySQL), SQL interoperability
is essential to a functioning OpenStack environment. A large or
fragmented SQLite file can cause slowness when using files as a back
end. A locked or long-running query can cause delays for most RDBMS
services. In this case, do not kill the query immediately, but look into
it to see if it is a problem with something that is hung, or something
that is just taking a long time to run and needs to finish on its own.
The administration of an RDBMS is outside the scope of this document,
but it should be noted that a properly functioning RDBMS is essential to
most OpenStack services.

Uninstalling
~~~~~~~~~~~~

While we'd always recommend using your automated deployment system to
reinstall systems from scratch, sometimes you do need to remove
OpenStack from a system the hard way. Here's how:

- Remove all packages.

- Remove remaining files.

- Remove databases.
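As a minimal sketch of those three steps on an apt-based system (the
package names, file locations, and database names here are illustrative
and vary by deployment):

.. code-block:: console

   # aptitude purge nova-api nova-compute glance keystone   # package names vary
   # aptitude purge '~c'                                    # purge config of removed packages
   # rm -rf /etc/nova /var/lib/nova /var/log/nova           # repeat per service
   # mysql -u root -p -e "DROP DATABASE nova;"              # repeat per service database
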
These steps depend on your underlying distribution, but in general you
should be looking for :command:`purge` commands in your package manager, like
:command:`aptitude purge ~c $package`. Following this, you can look for
orphaned files in the directories referenced throughout this guide. To
uninstall the database properly, refer to the manual appropriate for the
product in use.

diff --git a/doc/ops-guide/source/ops_network_troubleshooting.rst b/doc/ops-guide/source/ops_network_troubleshooting.rst
deleted file mode 100644
index ccbe5cea..00000000
--- a/doc/ops-guide/source/ops_network_troubleshooting.rst
+++ /dev/null
@@ -1,1088 +0,0 @@

=======================
Network Troubleshooting
=======================

Network troubleshooting can unfortunately be a very difficult and
confusing procedure. A network issue can cause a problem at several
points in the cloud. Using a logical troubleshooting procedure can help
mitigate the confusion and more quickly isolate where exactly the
network issue is. This chapter aims to give you the information you need
to identify any issues for either ``nova-network`` or OpenStack
Networking (neutron) with Linux Bridge or Open vSwitch.

Using "ip a" to Check Interface States
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On compute nodes and nodes running ``nova-network``, use the following
command to see information about interfaces, including information about
IPs, VLANs, and whether your interfaces are up:

.. code-block:: console

   # ip a

If you're encountering any sort of networking difficulty, one good
initial sanity check is to make sure that your interfaces are up. For
example:

.. code-block:: console

   $ ip a | grep state
   1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
   2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
      qlen 1000
   3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
      master br100 state UP qlen 1000
   4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
   5: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP

You can safely ignore the state of ``virbr0``, which is a default bridge
created by libvirt and not used by OpenStack.

Visualizing nova-network Traffic in the Cloud
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you are logged in to an instance and ping an external host, for
example, Google, the ping packet takes the route shown in
the following figure.

.. figure:: figures/osog_1201.png
   :alt: Traffic route for ping packet
   :width: 100%

#. The instance generates a packet and places it on the virtual Network
   Interface Card (NIC) inside the instance, such as ``eth0``.

#. The packet transfers to the virtual NIC of the compute host, such as
   ``vnet1``. You can find out what vnet NIC is being used by looking at
   the ``/etc/libvirt/qemu/instance-xxxxxxxx.xml`` file.

#. From the vnet NIC, the packet transfers to a bridge on the compute
   node, such as ``br100``.

   If you run FlatDHCPManager, one bridge is on the compute node. If you
   run VlanManager, one bridge exists for each VLAN.

   To see which bridge the packet will use, run the command:

   .. code-block:: console

      $ brctl show

   Look for the vnet NIC. You can also reference ``nova.conf`` and look
   for the ``flat_interface_bridge`` option.

#. The packet transfers to the main NIC of the compute node. You can
   also see this NIC in the :command:`brctl` output, or you can find it by
   referencing the ``flat_interface`` option in ``nova.conf``.

#. After the packet is on this NIC, it transfers to the compute node's
   default gateway.
The packet is now most likely out of your control at - this point. The diagram depicts an external gateway. However, in the - default configuration with multi-host, the compute host is the - gateway. - -Reverse the direction to see the path of a ping reply. From this path, -you can see that a single packet travels across four different NICs. If -a problem occurs with any of these NICs, a network issue occurs. - -Visualizing OpenStack Networking Service Traffic in the Cloud -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -OpenStack Networking has many more degrees of freedom than -``nova-network`` does because of its pluggable back end. It can be -configured with open source or vendor proprietary plug-ins that control -software defined networking (SDN) hardware or plug-ins that use Linux -native facilities on your hosts, such as Open vSwitch or Linux Bridge. - -The networking chapter of the `OpenStack Administrator -Guide `_ -shows a variety of networking scenarios and their connection paths. The -purpose of this section is to give you the tools to troubleshoot the -various components involved however they are plumbed together in your -environment. - -For this example, we will use the Open vSwitch (OVS) back end. Other -back-end plug-ins will have very different flow paths. OVS is the most -popularly deployed network driver, according to the October 2015 -OpenStack User Survey, with 41 percent more sites using it than the -Linux Bridge driver. We'll describe each step in turn, with -:ref:`network_paths` for reference. - -#. The instance generates a packet and places it on the virtual NIC - inside the instance, such as eth0. - -#. The packet transfers to a Test Access Point (TAP) device on the - compute host, such as tap690466bc-92. You can find out what TAP is - being used by looking at the - ``/etc/libvirt/qemu/instance-xxxxxxxx.xml`` file. - - The TAP device name is constructed using the first 11 characters of - the port ID (10 hex digits plus an included '-'), so another means of - finding the device name is to use the :command:`neutron` command. This - returns a pipe-delimited list, the first item of which is the port - ID. For example, to get the port ID associated with IP address - 10.0.0.10, do this: - - .. code-block:: console - - # neutron port-list | grep 10.0.0.10 | cut -d \| -f 2 - ff387e54-9e54-442b-94a3-aa4481764f1d - - Taking the first 11 characters, we can construct a device name of - tapff387e54-9e from this output. - - .. _network_paths: - - .. figure:: figures/osog_1202.png - :alt: Neutron network paths - :width: 100% - - Figure. Neutron network paths - -#. The TAP device is connected to the integration bridge, ``br-int``. - This bridge connects all the instance TAP devices and any other - bridges on the system. In this example, we have ``int-br-eth1`` and - ``patch-tun``. ``int-br-eth1`` is one half of a veth pair connecting - to the bridge ``br-eth1``, which handles VLAN networks trunked over - the physical Ethernet device ``eth1``. ``patch-tun`` is an Open - vSwitch internal port that connects to the ``br-tun`` bridge for GRE - networks. - - The TAP devices and veth devices are normal Linux network devices and - may be inspected with the usual tools, such as :command:`ip` and - :command:`tcpdump`. Open vSwitch internal devices, such as ``patch-tun``, - are only visible within the Open vSwitch environment. If you try to - run :command:`tcpdump -i patch-tun`, it will raise an error, saying that - the device does not exist. 
   It is possible to watch packets on internal interfaces, but it does
   take a little bit of networking gymnastics. First you need to create
   a dummy network device that normal Linux tools can see. Then you need
   to add it to the bridge containing the internal interface you want to
   snoop on. Finally, you need to tell Open vSwitch to mirror all
   traffic to or from the internal port onto this dummy port. After all
   this, you can then run :command:`tcpdump` on the dummy interface and see
   the traffic on the internal port.

   **To capture packets from the patch-tun internal interface on integration
   bridge, ``br-int``:**

   #. Create and bring up a dummy interface, ``snooper0``:

      .. code-block:: console

         # ip link add name snooper0 type dummy

      .. code-block:: console

         # ip link set dev snooper0 up

   #. Add device ``snooper0`` to bridge ``br-int``:

      .. code-block:: console

         # ovs-vsctl add-port br-int snooper0

   #. Create mirror of ``patch-tun`` to ``snooper0`` (returns UUID of
      mirror port):

      .. code-block:: console

         # ovs-vsctl -- set Bridge br-int mirrors=@m -- --id=@snooper0 \
           get Port snooper0 -- --id=@patch-tun get Port patch-tun \
           -- --id=@m create Mirror name=mymirror select-dst-port=@patch-tun \
           select-src-port=@patch-tun output-port=@snooper0 select_all=1

   #. Profit. You can now see traffic on ``patch-tun`` by running
      :command:`tcpdump -i snooper0`.

   #. Clean up by clearing all mirrors on ``br-int`` and deleting the dummy
      interface:

      .. code-block:: console

         # ovs-vsctl clear Bridge br-int mirrors

      .. code-block:: console

         # ovs-vsctl del-port br-int snooper0

      .. code-block:: console

         # ip link delete dev snooper0

   On the integration bridge, networks are distinguished using internal
   VLANs regardless of how the networking service defines them. This
   allows instances on the same host to communicate directly without
   transiting the rest of the virtual, or physical, network. These
   internal VLAN IDs are based on the order in which they are created on
   the node and may vary between nodes. These IDs are in no way related
   to the segmentation IDs used in the network definition and on the
   physical wire.

   VLAN tags are translated between the external tag defined in the
   network settings, and internal tags in several places. On the
   ``br-int``, incoming packets from the ``int-br-eth1`` are translated
   from external tags to internal tags. Other translations also happen
   on the other bridges and will be discussed in those sections.

   **To discover which internal VLAN tag is in use for a given external VLAN
   by using the :command:`ovs-ofctl` command**

   #. Find the external VLAN tag of the network you're interested in. This
      is the ``provider:segmentation_id`` as returned by the networking
      service:

      .. code-block:: console

         # neutron net-show --fields provider:segmentation_id <network name>
         +--------------------------+--------------------------------------+
         | Field                    | Value                                |
         +--------------------------+--------------------------------------+
         | provider:network_type    | vlan                                 |
         | provider:segmentation_id | 2113                                 |
         +--------------------------+--------------------------------------+

   #. Grep for the ``provider:segmentation_id``, 2113 in this case, in the
      output of :command:`ovs-ofctl dump-flows br-int`:

      .. code-block:: console

         # ovs-ofctl dump-flows br-int|grep vlan=2113
         cookie=0x0, duration=173615.481s, table=0, n_packets=7676140,
         n_bytes=444818637, idle_age=0, hard_age=65534, priority=3,
         in_port=1,dl_vlan=2113 actions=mod_vlan_vid:7,NORMAL

      Here you can see packets received on port ID 1 with the VLAN tag 2113
      are modified to have the internal VLAN tag 7. Digging a little
      deeper, you can confirm that port 1 is in fact ``int-br-eth1``:

      .. code-block:: console

         # ovs-ofctl show br-int
         OFPT_FEATURES_REPLY (xid=0x2): dpid:000022bc45e1914b
         n_tables:254, n_buffers:256
         capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS
         ARP_MATCH_IP
         actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC
         SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC
         SET_TP_DST ENQUEUE
          1(int-br-eth1): addr:c2:72:74:7f:86:08
             config:     0
             state:      0
             current:    10GB-FD COPPER
             speed: 10000 Mbps now, 0 Mbps max
          2(patch-tun): addr:fa:24:73:75:ad:cd
             config:     0
             state:      0
             speed: 0 Mbps now, 0 Mbps max
          3(tap9be586e6-79): addr:fe:16:3e:e6:98:56
             config:     0
             state:      0
             current:    10MB-FD COPPER
             speed: 10 Mbps now, 0 Mbps max
          LOCAL(br-int): addr:22:bc:45:e1:91:4b
             config:     0
             state:      0
             speed: 0 Mbps now, 0 Mbps max
         OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

#. The next step depends on whether the virtual network is configured to
   use 802.1q VLAN tags or GRE:

   #. VLAN-based networks exit the integration bridge via veth interface
      ``int-br-eth1`` and arrive on the bridge ``br-eth1`` on the other
      member of the veth pair ``phy-br-eth1``. Packets on this interface
      arrive with internal VLAN tags and are translated to external tags
      in the reverse of the process described above:

      .. code-block:: console

         # ovs-ofctl dump-flows br-eth1|grep 2113
         cookie=0x0, duration=184168.225s, table=0, n_packets=0, n_bytes=0,
         idle_age=65534, hard_age=65534, priority=4,in_port=1,dl_vlan=7
         actions=mod_vlan_vid:2113,NORMAL

      Packets, now tagged with the external VLAN tag, then exit onto the
      physical network via ``eth1``. The layer-2 switch this interface is
      connected to must be configured to accept traffic with the VLAN ID
      used. The next hop for this packet must also be on the same
      layer-2 network.

   #. GRE-based networks are passed with ``patch-tun`` to the tunnel
      bridge ``br-tun`` on interface ``patch-int``. This bridge also
      contains one port for each GRE tunnel peer, so one for each
      compute node and network node in your network. The ports are named
      sequentially from ``gre-1`` onward.

      Matching ``gre-<n>`` interfaces to tunnel endpoints is possible by
      looking at the Open vSwitch state:

      .. code-block:: console

         # ovs-vsctl show |grep -A 3 -e Port\ \"gre-
             Port "gre-1"
                 Interface "gre-1"
                     type: gre
                     options: {in_key=flow, local_ip="10.10.128.21",
                      out_key=flow, remote_ip="10.10.128.16"}

      In this case, ``gre-1`` is a tunnel from IP 10.10.128.21, which
      should match a local interface on this node, to IP 10.10.128.16 on
      the remote side.

      These tunnels use the regular routing tables on the host to route
      the resulting GRE packet, so there is no requirement that GRE
      endpoints are all on the same layer-2 network, unlike VLAN
      encapsulation.

      All interfaces on the ``br-tun`` are internal to Open vSwitch. To
      monitor traffic on them, you need to set up a mirror port as
      described above for ``patch-tun`` in the ``br-int`` bridge.
      All translation of GRE tunnels to and from internal VLANs happens
      on this bridge.

      **To discover which internal VLAN tag is in use for a GRE tunnel by using
      the :command:`ovs-ofctl` command**

      #. Find the ``provider:segmentation_id`` of the network you're
         interested in. This is the same field used for the VLAN ID in
         VLAN-based networks:

         .. code-block:: console

            # neutron net-show --fields provider:segmentation_id <network name>
            +--------------------------+-------+
            | Field                    | Value |
            +--------------------------+-------+
            | provider:network_type    | gre   |
            | provider:segmentation_id | 3     |
            +--------------------------+-------+

      #. Grep for 0x<``provider:segmentation_id``>, 0x3 in this case, in the
         output of ``ovs-ofctl dump-flows br-tun``:

         .. code-block:: console

            # ovs-ofctl dump-flows br-tun|grep 0x3
            cookie=0x0, duration=380575.724s, table=2, n_packets=1800,
            n_bytes=286104, priority=1,tun_id=0x3
            actions=mod_vlan_vid:1,resubmit(,10)
             cookie=0x0, duration=715.529s, table=20, n_packets=5,
            n_bytes=830, hard_timeout=300,priority=1,
            vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:a6:48:24
            actions=load:0->NXM_OF_VLAN_TCI[],
            load:0x3->NXM_NX_TUN_ID[],output:53
             cookie=0x0, duration=193729.242s, table=21, n_packets=58761,
            n_bytes=2618498, dl_vlan=1 actions=strip_vlan,set_tunnel:0x3,
            output:4,output:58,output:56,output:11,output:12,output:47,
            output:13,output:48,output:49,output:44,output:43,output:45,
            output:46,output:30,output:31,output:29,output:28,output:26,
            output:27,output:24,output:25,output:32,output:19,output:21,
            output:59,output:60,output:57,output:6,output:5,output:20,
            output:18,output:17,output:16,output:15,output:14,output:7,
            output:9,output:8,output:53,output:10,output:3,output:2,
            output:38,output:37,output:39,output:40,output:34,output:23,
            output:36,output:35,output:22,output:42,output:41,output:54,
            output:52,output:51,output:50,output:55,output:33

         Here, you see three flows related to this GRE tunnel. The first is
         the translation from inbound packets with this tunnel ID to internal
         VLAN ID 1. The second shows a unicast flow to output port 53 for
         packets destined for MAC address fa:16:3e:a6:48:24. The third shows
         the translation from the internal VLAN representation to the GRE
         tunnel ID flooded to all output ports. For further details of the
         flow descriptions, see the man page for ``ovs-ofctl``. As in the
         previous VLAN example, numeric port IDs can be matched with their
         named representations by examining the output of ``ovs-ofctl show br-tun``.

#. The packet is then received on the network node. Note that any
   traffic to the l3-agent or dhcp-agent will be visible only within
   their network namespace. Watching any interfaces outside those
   namespaces, even those that carry the network traffic, will only show
   broadcast packets like Address Resolution Protocol (ARP) requests, but
   unicast traffic to the router or DHCP address will not be seen. See
   :ref:`dealing_with_network_namespaces`
   for details on how to run commands within these namespaces.

   Alternatively, it is possible to configure VLAN-based networks to use
   external routers rather than the l3-agent shown here, so long as the
   external router is on the same VLAN:

   #. VLAN-based networks are received as tagged packets on a physical
      network interface, ``eth1`` in this example. Just as on the
      compute node, this interface is a member of the ``br-eth1``
      bridge.
   #. GRE-based networks will be passed to the tunnel bridge ``br-tun``,
      which behaves just like the GRE interfaces on the compute node.

#. Next, the packets from either input go through the integration
   bridge, again just as on the compute node.

#. The packet then makes it to the l3-agent. This is actually another
   TAP device within the router's network namespace. Router namespaces
   are named in the form ``qrouter-<router-uuid>``. Running :command:`ip a`
   within the namespace will show the TAP device name,
   qr-e6256f7d-31 in this example:

   .. code-block:: console

      # ip netns exec qrouter-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5 ip a|grep state
      10: qr-e6256f7d-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
          state UNKNOWN
      11: qg-35916e1f-36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
          qdisc pfifo_fast state UNKNOWN qlen 500
      28: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

#. The ``qg-<n>`` interface in the l3-agent router namespace sends the
   packet on to its next hop through device ``eth2`` on the external
   bridge ``br-ex``. This bridge is constructed similarly to ``br-eth1``
   and may be inspected in the same way.

#. This external bridge also includes a physical network interface,
   ``eth2`` in this example, which finally lands the packet on the
   external network destined for an external router or destination.

#. DHCP agents running on OpenStack networks run in namespaces similar
   to the l3-agents. DHCP namespaces are named ``qdhcp-<uuid>`` and have
   a TAP device on the integration bridge. Debugging of DHCP issues
   usually involves working inside this network namespace.

Finding a Failure in the Path
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use ping to quickly find where a failure exists in the network path. In
an instance, first see whether you can ping an external host, such as
google.com. If you can, then there shouldn't be a network problem at
all.

If you can't, try pinging the IP address of the compute node where the
instance is hosted. If you can ping this IP, then the problem is
somewhere between the compute node and that compute node's gateway.

If you can't ping the IP address of the compute node, the problem is
between the instance and the compute node. This includes the bridge
connecting the compute node's main NIC with the vnet NIC of the
instance.

One last test is to launch a second instance and see whether the two
instances can ping each other. If they can, the issue might be related
to the firewall on the compute node.

tcpdump
~~~~~~~

One great, although very in-depth, way of troubleshooting network issues
is to use ``tcpdump``. We recommend using ``tcpdump`` at several
points along the network path to correlate where a problem might be. If
you prefer working with a GUI, either live or by using a ``tcpdump``
capture, do also check out
`Wireshark `_.

For example, run the following command:

.. code-block:: console

   tcpdump -i any -n -v \
     'icmp[icmptype] = icmp-echoreply or icmp[icmptype] = icmp-echo'

Run this on the command line of the following areas:

#. An external server outside of the cloud

#. A compute node

#. An instance running on that compute node

In this example, these locations have the following IP addresses:

.. code-block:: console

   Instance
       10.0.2.24
       203.0.113.30
   Compute Node
       10.0.0.42
       203.0.113.34
   External Server
       1.2.3.4

Next, open a new shell to the instance and then ping the external host
where ``tcpdump`` is running.
If the network path to the external server -and back is fully functional, you see something like the following: - -On the external server: - -.. code-block:: console - - 12:51:42.020227 IP (tos 0x0, ttl 61, id 0, offset 0, flags [DF], - proto ICMP (1), length 84) - 203.0.113.30 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64 - 12:51:42.020255 IP (tos 0x0, ttl 64, id 8137, offset 0, flags [none], - proto ICMP (1), length 84) - 1.2.3.4 > 203.0.113.30: ICMP echo reply, id 24895, seq 1, - length 64 - -On the compute node: - -.. code-block:: console - - 12:51:42.019519 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], - proto ICMP (1), length 84) - 10.0.2.24 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64 - 12:51:42.019519 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], - proto ICMP (1), length 84) - 10.0.2.24 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64 - 12:51:42.019545 IP (tos 0x0, ttl 63, id 0, offset 0, flags [DF], - proto ICMP (1), length 84) - 203.0.113.30 > 1.2.3.4: ICMP echo request, id 24895, seq 1, length 64 - 12:51:42.019780 IP (tos 0x0, ttl 62, id 8137, offset 0, flags [none], - proto ICMP (1), length 84) - 1.2.3.4 > 203.0.113.30: ICMP echo reply, id 24895, seq 1, length 64 - 12:51:42.019801 IP (tos 0x0, ttl 61, id 8137, offset 0, flags [none], - proto ICMP (1), length 84) - 1.2.3.4 > 10.0.2.24: ICMP echo reply, id 24895, seq 1, length 64 - 12:51:42.019807 IP (tos 0x0, ttl 61, id 8137, offset 0, flags [none], - proto ICMP (1), length 84) - 1.2.3.4 > 10.0.2.24: ICMP echo reply, id 24895, seq 1, length 64 - -On the instance: - -.. code-block:: console - - 12:51:42.020974 IP (tos 0x0, ttl 61, id 8137, offset 0, flags [none], - proto ICMP (1), length 84) - 1.2.3.4 > 10.0.2.24: ICMP echo reply, id 24895, seq 1, length 64 - -Here, the external server received the ping request and sent a ping -reply. On the compute node, you can see that both the ping and ping -reply successfully passed through. You might also see duplicate packets -on the compute node, as seen above, because ``tcpdump`` captured the -packet on both the bridge and outgoing interface. - -iptables -~~~~~~~~ - -Through ``nova-network`` or ``neutron``, OpenStack Compute automatically -manages iptables, including forwarding packets to and from instances on -a compute node, forwarding floating IP traffic, and managing security -group rules. In addition to managing the rules, comments (if supported) -will be inserted in the rules to help indicate the purpose of the rule. - -The following comments are added to the rule set as appropriate: - -- Perform source NAT on outgoing traffic. - -- Default drop rule for unmatched traffic. - -- Direct traffic from the VM interface to the security group chain. - -- Jump to the VM specific chain. - -- Direct incoming traffic from VM to the security group chain. - -- Allow traffic from defined IP/MAC pairs. - -- Drop traffic without an IP/MAC allow rule. - -- Allow DHCP client traffic. - -- Prevent DHCP Spoofing by VM. - -- Send unmatched traffic to the fallback chain. - -- Drop packets that are not associated with a state. - -- Direct packets associated with a known session to the RETURN chain. - -- Allow IPv6 ICMP traffic to allow RA packets. - -Run the following command to view the current iptables configuration: - -.. code-block:: console - - # iptables-save - -.. note:: - - If you modify the configuration, it reverts the next time you - restart ``nova-network`` or ``neutron-server``. You must use - OpenStack to manage iptables. 
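The full ``iptables-save`` output can run to hundreds of lines on a busy
compute node. As a small convenience (not from the original text), the
nova-managed chains can be isolated with an ordinary ``grep``; the
``nova-compute`` chain-name prefix shown is the conventional one, and the
``--`` simply protects the pattern that starts with a dash:

.. code-block:: console

   # iptables-save | grep -- '-A nova-compute'
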
-
-Network Configuration in the Database for nova-network
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-With ``nova-network``, the nova database contains a few tables
-with networking information:
-
-``fixed_ips``
-   Contains each possible IP address for the subnet(s) added to
-   Compute. This table is related to the ``instances`` table by way of
-   the ``fixed_ips.instance_uuid`` column.
-
-``floating_ips``
-   Contains each floating IP address that was added to Compute. This
-   table is related to the ``fixed_ips`` table by way of the
-   ``floating_ips.fixed_ip_id`` column.
-
-``instances``
-   Not entirely network specific, but it contains information about the
-   instance that is utilizing the ``fixed_ip`` and optional
-   ``floating_ip``.
-
-From these tables, you can see that a floating IP is technically never
-directly related to an instance; it must always go through a fixed IP.
-
-Manually Disassociating a Floating IP
--------------------------------------
-
-Sometimes an instance is terminated but the floating IP was not
-correctly disassociated from that instance. Because the database is in
-an inconsistent state, the usual tools to disassociate the IP no longer
-work. To fix this, you must manually update the database.
-
-First, find the UUID of the instance in question:
-
-.. code-block:: mysql
-
-   mysql> select uuid from instances where hostname = 'hostname';
-
-Next, find the fixed IP entry for that UUID:
-
-.. code-block:: mysql
-
-   mysql> select * from fixed_ips where instance_uuid = '<uuid>';
-
-You can now get the related floating IP entry:
-
-.. code-block:: mysql
-
-   mysql> select * from floating_ips where fixed_ip_id = '<fixed_ip_id>';
-
-And finally, you can disassociate the floating IP:
-
-.. code-block:: mysql
-
-   mysql> update floating_ips set fixed_ip_id = NULL, host = NULL where
-          fixed_ip_id = '<fixed_ip_id>';
-
-You can optionally also deallocate the IP from the user's pool:
-
-.. code-block:: mysql
-
-   mysql> update floating_ips set project_id = NULL where
-          fixed_ip_id = '<fixed_ip_id>';
-
-Debugging DHCP Issues with nova-network
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-One common networking problem is that an instance boots successfully but
-is not reachable because it failed to obtain an IP address from dnsmasq,
-which is the DHCP server that is launched by the ``nova-network``
-service.
-
-The simplest way to identify that this is the problem with your instance
-is to look at the console output of your instance. If DHCP failed, you
-can retrieve the console log by doing:
-
-.. code-block:: console
-
-   $ nova console-log <instance name or uuid>
-
-If your instance failed to obtain an IP through DHCP, some messages
-should appear in the console. For example, for the Cirros image, you see
-output that looks like the following:
-
-.. code-block:: console
-
-   udhcpc (v1.17.2) started
-   Sending discover...
-   Sending discover...
-   Sending discover...
-   No lease, forking to background
-   starting DHCP forEthernet interface eth0 [ OK ]
-   cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
-   wget: can't connect to remote host (169.254.169.254): Network is
-   unreachable
-
-After you establish that the instance booted properly, the task is to
-figure out where the failure is.
-
-A DHCP problem might be caused by a misbehaving dnsmasq process. First,
-debug by checking logs and then restart the dnsmasq processes only for
-that project (tenant). In VLAN mode, there is a dnsmasq process for each
-tenant.
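-
-For example, one way to target a single network's dnsmasq instance is to
-read its PID from the per-network PID file (the ``nova-br100.pid`` path
-and the PID below match the example output later in this section) and
-kill just that process so that it can be respawned cleanly:
-
-.. code-block:: console
-
-   # cat /var/lib/nova/networks/nova-br100.pid
-   2438
-   # kill 2438
-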
Once you have restarted targeted dnsmasq processes, the simplest -way to rule out dnsmasq causes is to kill all of the dnsmasq processes -on the machine and restart ``nova-network``. As a last resort, do this -as root: - -.. code-block:: console - - # killall dnsmasq - # restart nova-network - -.. note:: - - Use ``openstack-nova-network`` on RHEL/CentOS/Fedora but - ``nova-network`` on Ubuntu/Debian. - -Several minutes after ``nova-network`` is restarted, you should see new -dnsmasq processes running: - -.. code-block:: console - - # ps aux | grep dnsmasq - -.. code-block:: console - - nobody 3735 0.0 0.0 27540 1044 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order - --bind-interfaces --conf-file= - --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid - --listen-address=192.168.100.1 --except-interface=lo - --dhcp-range=set:'novanetwork',192.168.100.2,static,120s - --dhcp-lease-max=256 - --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf - --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro - root 3736 0.0 0.0 27512 444 ? S 15:40 0:00 /usr/sbin/dnsmasq --strict-order - --bind-interfaces --conf-file= - --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid - --listen-address=192.168.100.1 --except-interface=lo - --dhcp-range=set:'novanetwork',192.168.100.2,static,120s - --dhcp-lease-max=256 - --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf - --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro - -If your instances are still not able to obtain IP addresses, the next -thing to check is whether dnsmasq is seeing the DHCP requests from the -instance. On the machine that is running the dnsmasq process, which is -the compute host if running in multi-host mode, look at -``/var/log/syslog`` to see the dnsmasq output. If dnsmasq is seeing the -request properly and handing out an IP, the output looks like this: - -.. code-block:: console - - Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPDISCOVER(br100) fa:16:3e:56:0b:6f - Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPOFFER(br100) 192.168.100.3 - fa:16:3e:56:0b:6f - Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPREQUEST(br100) 192.168.100.3 - fa:16:3e:56:0b:6f - Feb 27 22:01:36 mynode dnsmasq-dhcp[2438]: DHCPACK(br100) 192.168.100.3 - fa:16:3e:56:0b:6f test - -If you do not see the ``DHCPDISCOVER``, a problem exists with the packet -getting from the instance to the machine running dnsmasq. If you see all -of the preceding output and your instances are still not able to obtain -IP addresses, then the packet is able to get from the instance to the -host running dnsmasq, but it is not able to make the return trip. - -You might also see a message such as this: - -.. code-block:: console - - Feb 27 22:01:36 mynode dnsmasq-dhcp[25435]: DHCPDISCOVER(br100) - fa:16:3e:78:44:84 no address available - -This may be a dnsmasq and/or ``nova-network`` related issue. (For the -preceding example, the problem happened to be that dnsmasq did not have -any more IP addresses to give away because there were no more fixed IPs -available in the OpenStack Compute database.) - -If there's a suspicious-looking dnsmasq log message, take a look at the -command-line arguments to the dnsmasq processes to see if they look -correct: - -.. code-block:: console - - $ ps aux | grep dnsmasq - -The output looks something like the following: - -.. code-block:: console - - 108 1695 0.0 0.0 25972 1000 ? 
S Feb26 0:00 /usr/sbin/dnsmasq
-   -u libvirt-dnsmasq
-   --strict-order --bind-interfaces
-   --pid-file=/var/run/libvirt/network/default.pid --conf-file=
-   --except-interface lo --listen-address 192.168.122.1
-   --dhcp-range 192.168.122.2,192.168.122.254
-   --dhcp-leasefile=/var/lib/libvirt/dnsmasq/default.leases
-   --dhcp-lease-max=253 --dhcp-no-override
-   nobody 2438 0.0 0.0 27540 1096 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order
-   --bind-interfaces --conf-file=
-   --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid
-   --listen-address=192.168.100.1
-   --except-interface=lo
-   --dhcp-range=set:'novanetwork',192.168.100.2,static,120s
-   --dhcp-lease-max=256
-   --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf
-   --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
-   root 2439 0.0 0.0 27512 472 ? S Feb26 0:00 /usr/sbin/dnsmasq --strict-order
-   --bind-interfaces --conf-file=
-   --domain=novalocal --pid-file=/var/lib/nova/networks/nova-br100.pid
-   --listen-address=192.168.100.1
-   --except-interface=lo
-   --dhcp-range=set:'novanetwork',192.168.100.2,static,120s
-   --dhcp-lease-max=256
-   --dhcp-hostsfile=/var/lib/nova/networks/nova-br100.conf
-   --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro
-
-The output shows three different dnsmasq processes. The dnsmasq process
-that has the DHCP subnet range of 192.168.122.0 belongs to libvirt and
-can be ignored. The other two dnsmasq processes belong to
-``nova-network``. The two processes are actually related—one is simply
-the parent process of the other. The arguments of the dnsmasq processes
-should correspond to the details you configured ``nova-network`` with.
-
-If the problem does not seem to be related to dnsmasq itself, at this
-point use ``tcpdump`` on the interfaces to determine where the packets
-are getting lost.
-
-DHCP traffic uses UDP. The client sends from port 68 to port 67 on the
-server. Try to boot a new instance and then systematically listen on the
-NICs until you identify the one that isn't seeing the traffic. To use
-``tcpdump`` to listen to ports 67 and 68 on br100, you would do:
-
-.. code-block:: console
-
-   # tcpdump -i br100 -n port 67 or port 68
-
-You should be doing sanity checks on the interfaces using commands such
-as :command:`ip a` and :command:`brctl show` to ensure that the interfaces are
-actually up and configured the way that you think that they are.
-
-Debugging DNS Issues
-~~~~~~~~~~~~~~~~~~~~
-
-If you are able to use :term:`SSH` to log into an
-instance, but it takes a very long time (on the order of a minute) to get
-a prompt, then you might have a DNS issue. The reason a DNS issue can cause
-this problem is that the SSH server does a reverse DNS lookup on the
-IP address that you are connecting from. If DNS lookup isn't working on your
-instances, then you must wait for the DNS reverse lookup timeout to occur for
-the SSH login process to complete.
-
-When debugging DNS issues, start by making sure that the host where the
-dnsmasq process for that instance runs is able to correctly resolve. If
-the host cannot resolve, then the instances won't be able to either.
-
-A quick way to check whether DNS is working is to resolve a hostname
-inside your instance by using the :command:`host` command. If DNS is working,
-you should see:
-
-.. code-block:: console
-
-   $ host openstack.org
-   openstack.org has address 174.143.194.225
-   openstack.org mail is handled by 10 mx1.emailsrvr.com.
-   openstack.org mail is handled by 20 mx2.emailsrvr.com.
-
-If you're running the Cirros image, it doesn't have the "host" program
-installed, in which case you can use ping to try to access a machine by
-hostname to see whether it resolves. If DNS is working, the first line
-of ping would be:
-
-.. code-block:: console
-
-   $ ping openstack.org
-   PING openstack.org (174.143.194.225): 56 data bytes
-
-If the instance fails to resolve the hostname, you have a DNS problem.
-For example:
-
-.. code-block:: console
-
-   $ ping openstack.org
-   ping: bad address 'openstack.org'
-
-In an OpenStack cloud, the dnsmasq process acts as the DNS server for
-the instances in addition to acting as the DHCP server. A misbehaving
-dnsmasq process may be the source of DNS-related issues inside the
-instance. As mentioned in the previous section, the simplest way to rule
-out a misbehaving dnsmasq process is to kill all the dnsmasq processes
-on the machine and restart ``nova-network``. However, be aware that this
-command affects everyone running instances on this node, including
-tenants that have not seen the issue. As a last resort, as root:
-
-.. code-block:: console
-
-   # killall dnsmasq
-   # restart nova-network
-
-After the dnsmasq processes start again, check whether DNS is working.
-
-If restarting the dnsmasq process doesn't fix the issue, you might need
-to use ``tcpdump`` to look at the packets to trace where the failure is.
-The DNS server listens on UDP port 53. You should see the DNS request on
-the bridge (such as br100) of your compute node. Let's say you start
-listening with ``tcpdump`` on the compute node:
-
-.. code-block:: console
-
-   # tcpdump -i br100 -n -v udp port 53
-   tcpdump: listening on br100, link-type EN10MB (Ethernet), capture size 65535
-   bytes
-
-Then, if you use SSH to log into your instance and try ``ping openstack.org``,
-you should see something like:
-
-.. code-block:: console
-
-   16:36:18.807518 IP (tos 0x0, ttl 64, id 56057, offset 0, flags [DF],
-    proto UDP (17), length 59)
-    192.168.100.4.54244 > 192.168.100.1.53: 2+ A? openstack.org. (31)
-   16:36:18.808285 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF],
-    proto UDP (17), length 75)
-    192.168.100.1.53 > 192.168.100.4.54244: 2 1/0/0 openstack.org. A
-    174.143.194.225 (47)
-
-Troubleshooting Open vSwitch
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Open vSwitch, as used in the previous OpenStack Networking examples, is a
-full-featured multilayer virtual switch licensed under the open source
-Apache 2.0 license. Full documentation can be found at `the project's
-website <http://openvswitch.org/>`_. In practice, given the preceding
-configuration, the most common issues involve making sure that the required
-bridges (``br-int``, ``br-tun``, and ``br-ex``) exist and have the
-proper ports connected to them.
-
-The Open vSwitch driver should and usually does manage this
-automatically, but it is useful to know how to do this by hand with the
-:command:`ovs-vsctl` command. This command has many more subcommands than we
-will use here; see the man page or use :command:`ovs-vsctl --help` for the
-full listing.
-
-To list the bridges on a system, use :command:`ovs-vsctl list-br`.
-This example shows a compute node that has an internal
-bridge and a tunnel bridge. VLAN networks are trunked through the
-``eth1`` network interface:
-
-.. code-block:: console
-
-   # ovs-vsctl list-br
-   br-int
-   br-tun
-   eth1-br
-
-Working from the physical interface inwards, we can see the chain of
-ports and bridges.
First, the bridge ``eth1-br``, which contains the
-physical network interface ``eth1`` and the virtual interface
-``phy-eth1-br``:
-
-.. code-block:: console
-
-   # ovs-vsctl list-ports eth1-br
-   eth1
-   phy-eth1-br
-
-Next, the internal bridge, ``br-int``, contains ``int-eth1-br``, which
-pairs with ``phy-eth1-br`` to connect to the physical network shown in
-the previous bridge; ``patch-tun``, which is used to connect to the GRE
-tunnel bridge; and the TAP devices that connect to the instances
-currently running on the system:
-
-.. code-block:: console
-
-   # ovs-vsctl list-ports br-int
-   int-eth1-br
-   patch-tun
-   tap2d782834-d1
-   tap690466bc-92
-   tap8a864970-2d
-
-The tunnel bridge, ``br-tun``, contains the ``patch-int`` interface and
-``gre-<N>`` interfaces for each peer it connects to via GRE, one for
-each compute and network node in your cluster:
-
-.. code-block:: console
-
-   # ovs-vsctl list-ports br-tun
-   patch-int
-   gre-1
-   .
-   .
-   .
-   gre-<N>
-
-If any of these links is missing or incorrect, it suggests a
-configuration error. Bridges can be added with ``ovs-vsctl add-br``,
-and ports can be added to bridges with
-``ovs-vsctl add-port``. While running these by hand can be useful for
-debugging, it is imperative that manual changes that you intend to keep
-be reflected back into your configuration files.
-
-.. _dealing_with_network_namespaces:
-
-Dealing with Network Namespaces
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Linux network namespaces are a kernel feature the networking service
-uses to support multiple isolated layer-2 networks with overlapping IP
-address ranges. The support may be disabled, but it is on by default. If
-it is enabled in your environment, your network nodes will run their
-dhcp-agents and l3-agents in isolated namespaces. Network interfaces and
-traffic on those interfaces will not be visible in the default
-namespace.
-
-To see whether you are using namespaces, run :command:`ip netns`:
-
-.. code-block:: console
-
-   # ip netns
-   qdhcp-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5
-   qdhcp-a4d00c60-f005-400e-a24c-1bf8b8308f98
-   qdhcp-fe178706-9942-4600-9224-b2ae7c61db71
-   qdhcp-0a1d0a27-cffa-4de3-92c5-9d3fd3f2e74d
-   qrouter-8a4ce760-ab55-4f2f-8ec5-a2e858ce0d39
-
-L3-agent router namespaces are named ``qrouter-<router-uuid>``, and
-dhcp-agent namespaces are named ``qdhcp-<network-uuid>``. This output
-shows a network node with four networks running dhcp-agents, one of
-which is also running an l3-agent router. It's important to know which
-network you need to be working in. A list of existing networks and their
-UUIDs can be obtained by running ``neutron net-list`` with administrative
-credentials.
-
-Once you've determined which namespace you need to work in, you can use
-any of the debugging tools mentioned earlier by prefixing the command with
-``ip netns exec <namespace>``. For example, to see what network
-interfaces exist in the first qdhcp namespace returned above, do this:
-
-.. code-block:: console
-
-   # ip netns exec qdhcp-e521f9d0-a1bd-4ff4-bc81-78a60dd88fe5 ip a
-   10: tape6256f7d-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
-       link/ether fa:16:3e:aa:f7:a1 brd ff:ff:ff:ff:ff:ff
-       inet 10.0.1.100/24 brd 10.0.1.255 scope global tape6256f7d-31
-       inet 169.254.169.254/16 brd 169.254.255.255 scope global tape6256f7d-31
-       inet6 fe80::f816:3eff:feaa:f7a1/64 scope link
-          valid_lft forever preferred_lft forever
-   28: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
-       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
-       inet 127.0.0.1/8 scope host lo
-       inet6 ::1/128 scope host
-          valid_lft forever preferred_lft forever
-
-From this you see that the DHCP server on that network is using the
-tape6256f7d-31 device and has an IP address of 10.0.1.100. Seeing the
-address 169.254.169.254, you can also see that the dhcp-agent is running
-a metadata-proxy service. Any of the commands mentioned previously in
-this chapter can be run in the same way. It is also possible to run a
-shell, such as ``bash``, and have an interactive session within the
-namespace. In the latter case, exiting the shell returns you to the
-top-level default namespace.
-
-Summary
-~~~~~~~
-
-The authors have spent too much time looking at packet dumps in order to
-distill this information for you. We trust that, following the methods
-outlined in this chapter, you will have an easier time! Aside from
-working with the tools and steps above, don't forget that sometimes an
-extra pair of eyes goes a long way to assist.
-
diff --git a/doc/ops-guide/source/ops_projects_users.rst b/doc/ops-guide/source/ops_projects_users.rst
deleted file mode 100644
index 253a5f8b..00000000
--- a/doc/ops-guide/source/ops_projects_users.rst
+++ /dev/null
@@ -1,778 +0,0 @@
-===========================
-Managing Projects and Users
-===========================
-
-An OpenStack cloud does not have much value without users. This chapter
-covers topics that relate to managing users, projects, and quotas. This
-chapter describes users and projects as described by version 2 of the
-OpenStack Identity API.
-
-.. warning::
-
-   While version 3 of the Identity API is available, the client tools
-   do not yet implement those calls, and most OpenStack clouds are
-   still implementing Identity API v2.0.
-
-Projects or Tenants?
-~~~~~~~~~~~~~~~~~~~~
-
-In OpenStack user interfaces and documentation, a group of users is
-referred to as a :term:`project` or :term:`tenant`.
-These terms are interchangeable.
-
-The initial implementation of OpenStack Compute had its own
-authentication system and used the term ``project``. When authentication
-moved into the OpenStack Identity (keystone) project, it used the term
-``tenant`` to refer to a group of users. Because of this legacy, some of
-the OpenStack tools refer to projects and some refer to tenants.
-
-.. note::
-
-   This guide uses the term ``project``, unless an example shows
-   interaction with a tool that uses the term ``tenant``.
-
-Managing Projects
-~~~~~~~~~~~~~~~~~
-
-Users must be associated with at least one project, though they may
-belong to many. Therefore, you should add at least one project before
-adding users.
-
-Adding Projects
----------------
-
-To create a project through the OpenStack dashboard:
-
-#. Log in as an administrative user.
-
-#. Select the :guilabel:`Identity` tab in the left navigation bar.
-
-#. Under the :guilabel:`Identity` tab, click :guilabel:`Projects`.
-
-#. Click the :guilabel:`Create Project` button.
-
-You are prompted for a project name and an optional, but recommended,
-description. Select the checkbox at the bottom of the form to enable
-this project. By default, it is enabled, as shown in
-:ref:`figure_create_project`.
-
-.. _figure_create_project:
-
-.. figure:: figures/osog_0901.png
-   :alt: Dashboard's Create Project form
-
-   Figure Dashboard's Create Project form
-
-It is also possible to add project members and adjust the project
-quotas. We'll discuss those actions later, but in practice, it can be
-quite convenient to deal with all these operations at one time.
-
-To add a project through the command line, you must use the OpenStack
-command line client.
-
-.. code-block:: console
-
-   # openstack project create demo
-
-This command creates a project named "demo." Optionally, you can add a
-description string by appending :option:`--description tenant-description`,
-which can be very useful. You can also
-create a project in a disabled state by appending :option:`--disable` to the
-command. By default, projects are created in an enabled state.
-
-Quotas
-~~~~~~
-
-To prevent system capacities from being exhausted without notification,
-you can set up :term:`quotas <quota>`. Quotas are operational limits. For
-example, the number of gigabytes allowed per tenant can be controlled to
-ensure that a single tenant cannot consume all of the disk space. Quotas
-are currently enforced at the tenant (or project) level, rather than the
-user level.
-
-.. warning::
-
-   Because without sensible quotas a single tenant could use up all the
-   available resources, default quotas are shipped with OpenStack. You
-   should pay attention to which quota settings make sense for your
-   hardware capabilities.
-
-Using the command-line interface, you can manage quotas for the
-OpenStack Compute service and the Block Storage service.
-
-Typically, default values are changed because a tenant requires more
-than the OpenStack default of 10 volumes per tenant, or more than the
-OpenStack default of 1 TB of disk space on a compute node.
-
-.. note::
-
-   To view all tenants, run:
-
-   .. code-block:: console
-
-      $ openstack project list
-      +---------------------------------+----------+
-      | ID                              | Name     |
-      +---------------------------------+----------+
-      | a981642d22c94e159a4a6540f70f9f8 | admin    |
-      | 934b662357674c7b9f5e4ec6ded4d0e | tenant01 |
-      | 7bc1dbfd7d284ec4a856ea1eb82dca8 | tenant02 |
-      | 9c554aaef7804ba49e1b21cbd97d218 | services |
-      +---------------------------------+----------+
-
-Set Image Quotas
-----------------
-
-You can restrict a project's image storage by total number of bytes.
-Currently, this quota is applied cloud-wide, so if you were to set an
-Image quota limit of 5 GB, then all projects in your cloud will be able
-to store only 5 GB of images and snapshots.
-
-To enable this feature, edit the ``/etc/glance/glance-api.conf`` file,
-and under the ``[DEFAULT]`` section, add:
-
-.. code-block:: ini
-
-   user_storage_quota = <bytes>
-
-For example, to restrict a project's image storage to 5 GB, do this:
-
-.. code-block:: ini
-
-   user_storage_quota = 5368709120
-
-.. note::
-
-   There is a configuration option in ``glance-api.conf`` that limits
-   the number of members allowed per image, called
-   ``image_member_quota``, set to 128 by default. That setting is a
-   different quota from the storage quota.
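-
-If you use both options, they live side by side in the same section. An
-illustrative ``glance-api.conf`` fragment (the 5 GB value repeats the
-example above; 128 is the stated default) looks like this:
-
-.. code-block:: ini
-
-   [DEFAULT]
-   # Cap image and snapshot storage at 5 GB (value is in bytes)
-   user_storage_quota = 5368709120
-   # Maximum number of members allowed per image
-   image_member_quota = 128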
-
-Set Compute Service Quotas
---------------------------
-
-As an administrative user, you can update the Compute service quotas for
-an existing tenant, as well as update the quota defaults for a new
-tenant. See :ref:`table_compute_quota`.
-
-.. _table_compute_quota:
-
-.. list-table:: Compute quota descriptions
-   :widths: 30 40 30
-   :header-rows: 1
-
-   * - Quota
-     - Description
-     - Property name
-   * - Fixed IPs
-     - Number of fixed IP addresses allowed per tenant.
-       This number must be equal to or greater than the number
-       of allowed instances.
-     - fixed-ips
-   * - Floating IPs
-     - Number of floating IP addresses allowed per tenant.
-     - floating-ips
-   * - Injected file content bytes
-     - Number of content bytes allowed per injected file.
-     - injected-file-content-bytes
-   * - Injected file path bytes
-     - Number of bytes allowed per injected file path.
-     - injected-file-path-bytes
-   * - Injected files
-     - Number of injected files allowed per tenant.
-     - injected-files
-   * - Instances
-     - Number of instances allowed per tenant.
-     - instances
-   * - Key pairs
-     - Number of key pairs allowed per user.
-     - key-pairs
-   * - Metadata items
-     - Number of metadata items allowed per instance.
-     - metadata-items
-   * - RAM
-     - Megabytes of instance RAM allowed per tenant.
-     - ram
-   * - Security group rules
-     - Number of rules per security group.
-     - security-group-rules
-   * - Security groups
-     - Number of security groups per tenant.
-     - security-groups
-   * - VCPUs
-     - Number of instance cores allowed per tenant.
-     - cores
-
-View and update compute quotas for a tenant (project)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-As an administrative user, you can use the :command:`nova quota-*`
-commands, which are provided by the
-``python-novaclient`` package, to view and update tenant quotas.
-
-**To view and update default quota values**
-
-#. List all default quotas for all tenants, as follows:
-
-   .. code-block:: console
-
-      $ nova quota-defaults
-
-   For example:
-
-   .. code-block:: console
-
-      $ nova quota-defaults
-      +-----------------------------+-------+
-      | Property                    | Value |
-      +-----------------------------+-------+
-      | metadata_items              | 128   |
-      | injected_file_content_bytes | 10240 |
-      | ram                         | 51200 |
-      | floating_ips                | 10    |
-      | key_pairs                   | 100   |
-      | instances                   | 10    |
-      | security_group_rules        | 20    |
-      | injected_files              | 5     |
-      | cores                       | 20    |
-      | fixed_ips                   | -1    |
-      | injected_file_path_bytes    | 255   |
-      | security_groups             | 10    |
-      +-----------------------------+-------+
-
-#. Update a default value for a new tenant, as follows:
-
-   .. code-block:: console
-
-      $ nova quota-class-update default key value
-
-   For example:
-
-   .. code-block:: console
-
-      $ nova quota-class-update default --instances 15
-
-**To view quota values for a tenant (project)**
-
-#. Place the tenant ID in a variable:
-
-   .. code-block:: console
-
-      $ tenant=$(openstack project list | awk '/tenantName/ {print $2}')
-
-#. List the currently set quota values for a tenant, as follows:
-
-   .. code-block:: console
-
-      $ nova quota-show --tenant $tenant
-
-   For example:
-
-   .. code-block:: console
-
-      $ nova quota-show --tenant $tenant
-      +-----------------------------+-------+
-      | Property                    | Value |
-      +-----------------------------+-------+
-      | metadata_items              | 128   |
-      | injected_file_content_bytes | 10240 |
-      | ram                         | 51200 |
-      | floating_ips                | 12    |
-      | key_pairs                   | 100   |
-      | instances                   | 10    |
-      | security_group_rules        | 20    |
-      | injected_files              | 5     |
-      | cores                       | 20    |
-      | fixed_ips                   | -1    |
-      | injected_file_path_bytes    | 255   |
-      | security_groups             | 10    |
-      +-----------------------------+-------+
-
-**To update quota values for a tenant (project)**
-
-#. Obtain the tenant ID, as follows:
-
-   .. code-block:: console
-
-      $ tenant=$(openstack project list | awk '/tenantName/ {print $2}')
-
-#. Update a particular quota value, as follows:
-
-   .. code-block:: console
-
-      # nova quota-update --quotaName quotaValue tenantID
-
-   For example:
-
-   .. code-block:: console
-
-      # nova quota-update --floating-ips 20 $tenant
-      # nova quota-show --tenant $tenant
-      +-----------------------------+-------+
-      | Property                    | Value |
-      +-----------------------------+-------+
-      | metadata_items              | 128   |
-      | injected_file_content_bytes | 10240 |
-      | ram                         | 51200 |
-      | floating_ips                | 20    |
-      | key_pairs                   | 100   |
-      | instances                   | 10    |
-      | security_group_rules        | 20    |
-      | injected_files              | 5     |
-      | cores                       | 20    |
-      | fixed_ips                   | -1    |
-      | injected_file_path_bytes    | 255   |
-      | security_groups             | 10    |
-      +-----------------------------+-------+
-
-   .. note::
-
-      To view a list of options for the ``quota-update`` command, run:
-
-      .. code-block:: console
-
-         $ nova help quota-update
-
-Set Object Storage Quotas
--------------------------
-
-There are currently two categories of quotas for Object Storage:
-
-Container quotas
-   Limit the total size (in bytes) or number of objects that can be
-   stored in a single container.
-
-Account quotas
-   Limit the total size (in bytes) that a user has available in the
-   Object Storage service.
-
-To take advantage of either container quotas or account quotas, your
-Object Storage proxy server must have ``container_quotas`` or
-``account_quotas`` (or both) added to the ``[pipeline:main]`` pipeline.
-Each quota type also requires its own section in the
-``proxy-server.conf`` file:
-
-.. code-block:: ini
-
-   [pipeline:main]
-   pipeline = catch_errors [...] slo dlo account_quotas proxy-server
-
-   [filter:account_quotas]
-   use = egg:swift#account_quotas
-
-   [filter:container_quotas]
-   use = egg:swift#container_quotas
-
-To view and update Object Storage quotas, use the :command:`swift` command
-provided by the ``python-swiftclient`` package. Any user included in the
-project can view the quotas placed on their project. To update Object
-Storage quotas on a project, you must have the role of ResellerAdmin in
-the project that the quota is being applied to.
-
-To view account quotas placed on a project:
-
-.. code-block:: console
-
-   $ swift stat
-   Account: AUTH_b36ed2d326034beba0a9dd1fb19b70f9
-   Containers: 0
-   Objects: 0
-   Bytes: 0
-   Meta Quota-Bytes: 214748364800
-   X-Timestamp: 1351050521.29419
-   Content-Type: text/plain; charset=utf-8
-   Accept-Ranges: bytes
-
-To apply or update account quotas on a project:
-
-.. code-block:: console
-
-   $ swift post -m quota-bytes:<bytes>
-
-For example, to place a 5 GB quota on an account:
-
-.. code-block:: console
-
-   $ swift post -m quota-bytes:5368709120
-
-To verify the quota, run the :command:`swift stat` command again:
-
-.. code-block:: console
-
-   $ swift stat
-   Account: AUTH_b36ed2d326034beba0a9dd1fb19b70f9
-   Containers: 0
-   Objects: 0
-   Bytes: 0
-   Meta Quota-Bytes: 5368709120
-   X-Timestamp: 1351541410.38328
-   Content-Type: text/plain; charset=utf-8
-   Accept-Ranges: bytes
-
-Set Block Storage Quotas
-------------------------
-
-As an administrative user, you can update the Block Storage service
-quotas for a tenant, as well as update the quota defaults for a new
-tenant. See :ref:`table_block_storage_quota`.
-
-.. _table_block_storage_quota:
-
-.. list-table:: Block Storage quota descriptions
-   :widths: 50 50
-   :header-rows: 1
-
-   * - Property name
-     - Description
-   * - gigabytes
-     - Number of volume gigabytes allowed per tenant.
-   * - snapshots
-     - Number of Block Storage snapshots allowed per tenant.
-   * - volumes
-     - Number of Block Storage volumes allowed per tenant.
-
-View and update Block Storage quotas for a tenant (project)
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-As an administrative user, you can use the :command:`cinder quota-*`
-commands, which are provided by the
-``python-cinderclient`` package, to view and update tenant quotas.
-
-**To view and update default Block Storage quota values**
-
-#. List all default quotas for all tenants, as follows:
-
-   .. code-block:: console
-
-      $ cinder quota-defaults
-
-   For example:
-
-   .. code-block:: console
-
-      $ cinder quota-defaults
-      +-----------+-------+
-      | Property  | Value |
-      +-----------+-------+
-      | gigabytes | 1000  |
-      | snapshots | 10    |
-      | volumes   | 10    |
-      +-----------+-------+
-
-#. To update a default value for a new tenant, update the property in the
-   ``/etc/cinder/cinder.conf`` file.
-
-**To view Block Storage quotas for a tenant (project)**
-
-#. View quotas for the tenant, as follows:
-
-   .. code-block:: console
-
-      # cinder quota-show tenantName
-
-   For example:
-
-   .. code-block:: console
-
-      # cinder quota-show tenant01
-      +-----------+-------+
-      | Property  | Value |
-      +-----------+-------+
-      | gigabytes | 1000  |
-      | snapshots | 10    |
-      | volumes   | 10    |
-      +-----------+-------+
-
-**To update Block Storage quotas for a tenant (project)**
-
-#. Place the tenant ID in a variable:
-
-   .. code-block:: console
-
-      $ tenant=$(openstack project list | awk '/tenantName/ {print $2}')
-
-#. Update a particular quota value, as follows:
-
-   .. code-block:: console
-
-      # cinder quota-update --quotaName NewValue tenantID
-
-   For example:
-
-   .. code-block:: console
-
-      # cinder quota-update --volumes 15 $tenant
-      # cinder quota-show tenant01
-      +-----------+-------+
-      | Property  | Value |
-      +-----------+-------+
-      | gigabytes | 1000  |
-      | snapshots | 10    |
-      | volumes   | 15    |
-      +-----------+-------+
-
-User Management
-~~~~~~~~~~~~~~~
-
-The command-line tools for managing users are inconvenient to use
-directly. They require issuing multiple commands to complete a single
-task, and they use UUIDs rather than symbolic names for many items. In
-practice, humans typically do not use these tools directly. Fortunately,
-the OpenStack dashboard provides a reasonable interface to this. In
-addition, many sites write custom tools for local needs to enforce local
-policies and provide levels of self-service to users that aren't
-currently available with packaged tools.
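-
-As a sketch of what such a custom tool typically wraps, the same core
-steps can be driven directly from the command line (the project, user,
-and role names here are illustrative; role names vary by deployment):
-
-.. code-block:: console
-
-   $ openstack project create --description "Demo project" demo
-   $ openstack user create --project demo --password-prompt alice
-   $ openstack role add --project demo --user alice member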
- -Creating New Users -~~~~~~~~~~~~~~~~~~ - -To create a user, you need the following information: - -* Username -* Email address -* Password -* Primary project -* Role -* Enabled - -Username and email address are self-explanatory, though your site may -have local conventions you should observe. The primary project is simply -the first project the user is associated with and must exist prior to -creating the user. Role is almost always going to be "member." Out of -the box, OpenStack comes with two roles defined: - -member - A typical user - -admin - An administrative super user, which has full permissions across all - projects and should be used with great care - -It is possible to define other roles, but doing so is uncommon. - -Once you've gathered this information, creating the user in the -dashboard is just another web form similar to what we've seen before and -can be found by clicking the Users link in the Identity navigation bar -and then clicking the Create User button at the top right. - -Modifying users is also done from this Users page. If you have a large -number of users, this page can get quite crowded. The Filter search box -at the top of the page can be used to limit the users listing. A form -very similar to the user creation dialog can be pulled up by selecting -Edit from the actions dropdown menu at the end of the line for the user -you are modifying. - -Associating Users with Projects -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Many sites run with users being associated with only one project. This -is a more conservative and simpler choice both for administration and -for users. Administratively, if a user reports a problem with an -instance or quota, it is obvious which project this relates to. Users -needn't worry about what project they are acting in if they are only in -one project. However, note that, by default, any user can affect the -resources of any other user within their project. It is also possible to -associate users with multiple projects if that makes sense for your -organization. - -Associating existing users with an additional project or removing them -from an older project is done from the Projects page of the dashboard by -selecting Modify Users from the Actions column, as shown in -:ref:`figure_edit_project_members`. - -From this view, you can do a number of useful things, as well as a few -dangerous ones. - -The first column of this form, named All Users, includes a list of all -the users in your cloud who are not already associated with this -project. The second column shows all the users who are. These lists can -be quite long, but they can be limited by typing a substring of the -username you are looking for in the filter field at the top of the -column. - -From here, click the :guilabel:`+` icon to add users to the project. -Click the :guilabel:`-` to remove them. - -.. _figure_edit_project_members: - -.. figure:: figures/osog_0902.png - :alt: Edit Project Members tab - - Edit Project Members tab - -The dangerous possibility comes with the ability to change member roles. -This is the dropdown list below the username in the -:guilabel:`Project Members` list. In virtually all cases, -this value should be set to Member. This example purposefully shows -an administrative user where this value is admin. - -.. warning:: - - The admin is global, not per project, so granting a user the admin - role in any project gives the user administrative rights across the - whole cloud. 
-
-Typical use is to only create administrative users in a single project,
-by convention the admin project, which is created by default during
-cloud setup. If your administrative users also use the cloud to launch
-and manage instances, it is strongly recommended that you use separate
-user accounts for administrative access and normal operations and that
-they be in distinct projects.
-
-Customizing Authorization
--------------------------
-
-The default :term:`authorization` settings allow only administrative
-users to create resources on behalf of a different project.
-OpenStack handles two kinds of authorization policies:
-
-Operation based
-   Policies specify access criteria for specific operations, possibly
-   with fine-grained control over specific attributes.
-
-Resource based
-   Whether access to a specific resource might be granted or not
-   according to the permissions configured for the resource (currently
-   available only for the network resource). The actual authorization
-   policies enforced in an OpenStack service vary from deployment to
-   deployment.
-
-The policy engine reads entries from the ``policy.json`` file. The
-actual location of this file might vary from distribution to
-distribution: for nova, it is typically in ``/etc/nova/policy.json``.
-You can update entries while the system is running, and you do not have
-to restart services. Currently, the only way to update such policies is
-to edit the policy file.
-
-The OpenStack service's policy engine matches a policy directly. A rule
-indicates evaluation of the elements of such policies. For instance, in
-a ``compute:create: [["rule:admin_or_owner"]]`` statement, the policy is
-``compute:create``, and the rule is ``admin_or_owner``.
-
-Policies are triggered by an OpenStack policy engine whenever one of
-them matches an OpenStack API operation or a specific attribute being
-used in a given operation. For instance, the engine tests the
-``compute:create`` policy every time a user sends a
-``POST /v2/{tenant_id}/servers`` request to the OpenStack Compute API
-server. Policies can also be related to specific :term:`API extensions
-<API extension>`. For instance, if a user needs an extension like
-``compute_extension:rescue``, the attributes defined by the provider
-extensions trigger the rule test for that operation.
-
-An authorization policy can be composed of one or more rules. If more
-rules are specified, evaluation of the policy succeeds if any of the
-rules evaluates successfully; if an API operation matches multiple
-policies, then all the policies must evaluate successfully. Also,
-authorization rules are recursive. Once a rule is matched, the rule(s)
-can be resolved to another rule, until a terminal rule is reached. The
-following rules are defined:
-
-Role-based rules
-   Evaluate successfully if the user submitting the request has the
-   specified role. For instance, ``"role:admin"`` is successful if the
-   user submitting the request is an administrator.
-
-Field-based rules
-   Evaluate successfully if a field of the resource specified in the
-   current request matches a specific value. For instance,
-   ``"field:networks:shared=True"`` is successful if the attribute
-   shared of the network resource is set to ``true``.
-
-Generic rules
-   Compare an attribute in the resource with an attribute extracted
-   from the user's security credentials and evaluates successfully if
-   the comparison is successful.
For instance,
-   ``"tenant_id:%(tenant_id)s"`` is successful if the tenant identifier
-   in the resource is equal to the tenant identifier of the user
-   submitting the request.
-
-Here are snippets of the default nova ``policy.json`` file:
-
-.. code-block:: json
-
-   {
-    "context_is_admin": [["role:admin"]],
-    "admin_or_owner": [["is_admin:True"], ["project_id:%(project_id)s"]], ~~~~(1)~~~~
-    "default": [["rule:admin_or_owner"]], ~~~~(2)~~~~
-    "compute:create": [ ],
-    "compute:create:attach_network": [ ],
-    "compute:create:attach_volume": [ ],
-    "compute:get_all": [ ],
-    "admin_api": [["is_admin:True"]],
-    "compute_extension:accounts": [["rule:admin_api"]],
-    "compute_extension:admin_actions": [["rule:admin_api"]],
-    "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
-    "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
-    ...
-    "compute_extension:admin_actions:migrate": [["rule:admin_api"]],
-    "compute_extension:aggregates": [["rule:admin_api"]],
-    "compute_extension:certificates": [ ],
-    ...
-    "compute_extension:flavorextraspecs": [ ],
-    "compute_extension:flavormanage": [["rule:admin_api"]], ~~~~(3)~~~~
-   }
-
-1. Shows a rule that evaluates successfully if the current user is an
-   administrator or the owner of the resource specified in the request
-   (tenant identifier is equal).
-
-2. Shows the default policy, which is always evaluated if an API
-   operation does not match any of the policies in ``policy.json``.
-
-3. Shows a policy restricting the ability to manipulate flavors to
-   administrators using the Admin API only.
-
-In some cases, some operations should be restricted to administrators
-only. Therefore, as a further example, let us consider how this sample
-policy file could be modified in a scenario where we enable users to
-create their own flavors:
-
-.. code-block:: console
-
-   "compute_extension:flavormanage": [ ],
-
-Users Who Disrupt Other Users
------------------------------
-
-Users on your cloud can disrupt other users, sometimes intentionally and
-maliciously and other times by accident. Understanding the situation
-allows you to make a better decision on how to handle the
-disruption.
-
-For example, a group of users have instances that are utilizing a large
-amount of compute resources for very compute-intensive tasks. This is
-driving the load up on compute nodes and affecting other users. In this
-situation, review your user use cases. You may find that high compute
-scenarios are common, and should then plan for proper segregation in
-your cloud, such as host aggregation or regions.
-
-Another example is a user consuming a very large amount of bandwidth.
-Again, the key is to understand what the user is doing. If she naturally
-needs a high amount of bandwidth, you might have to limit her
-transmission rate so as not to affect other users or move her to an area
-with more bandwidth available.
-On the other hand, maybe her instance has been hacked and is part of a
-botnet launching DDOS attacks. Resolution of this issue is the same as
-though any other server on your network has been hacked. Contact the
-user and give her time to respond. If she doesn't respond, shut down the
-instance.
-
-A final example is if a user is hammering cloud resources repeatedly.
-Contact the user and learn what he is trying to do. Maybe he doesn't
-understand that what he's doing is inappropriate, or maybe there is an
-issue with the resource he is trying to access that is causing his
-requests to queue or lag.
-
-Summary
-~~~~~~~
-
-One key element of systems administration that is often overlooked is
-that end users are the reason systems administrators exist. Don't go the
-BOFH route and terminate every user who causes an alert to go off. Work
-with users to understand what they're trying to accomplish and see how
-your environment can better assist them in achieving their goals. Meet
-your users' needs by organizing your users into projects, applying
-policies, managing quotas, and working with them.
diff --git a/doc/ops-guide/source/ops_upgrades.rst b/doc/ops-guide/source/ops_upgrades.rst
deleted file mode 100644
index 0e1f11f0..00000000
--- a/doc/ops-guide/source/ops_upgrades.rst
+++ /dev/null
@@ -1,550 +0,0 @@
-========
-Upgrades
-========
-
-With the exception of Object Storage, upgrading from one version of
-OpenStack to another can take a great deal of effort. This chapter
-provides some guidance on the operational aspects that you should
-consider for performing an upgrade for a basic architecture.
-
-Pre-upgrade considerations
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Upgrade planning
-----------------
-
-- Thoroughly review the `release
-  notes <https://wiki.openstack.org/wiki/ReleaseNotes>`_ to learn
-  about new, updated, and deprecated features. Find incompatibilities
-  between versions.
-
-- Consider the impact of an upgrade to users. The upgrade process
-  interrupts management of your environment including the dashboard. If
-  you properly prepare for the upgrade, existing instances, networking,
-  and storage should continue to operate. However, instances might
-  experience intermittent network interruptions.
-
-- Consider the approach to upgrading your environment. You can perform
-  an upgrade with operational instances, but this is a dangerous
-  approach. You might consider using live migration to temporarily
-  relocate instances to other compute nodes while performing upgrades.
-  However, you must ensure database consistency throughout the process;
-  otherwise your environment might become unstable. Also, don't forget
-  to provide sufficient notice to your users, including giving them
-  plenty of time to perform their own backups.
-
-- Consider adopting structure and options from the service
-  configuration files and merging them with existing configuration
-  files. The `OpenStack Configuration
-  Reference <http://docs.openstack.org/config-reference/>`_
-  contains new, updated, and deprecated options for most services.
-
-- Like all major system upgrades, your upgrade could fail for one or
-  more reasons. You should prepare for this situation by having the
-  ability to roll back your environment to the previous release,
-  including databases, configuration files, and packages. We provide an
-  example process for rolling back your environment in
-  :ref:`rolling_back_a_failed_upgrade`.
-
-- Develop an upgrade procedure and assess it thoroughly by using a test
-  environment similar to your production environment.
-
-Pre-upgrade testing environment
--------------------------------
-
-The most important step is the pre-upgrade testing. If you are upgrading
-immediately after release of a new version, undiscovered bugs might
-hinder your progress. Some deployers prefer to wait until the first
-point release is announced. However, if you have a significant
-deployment, you might follow the development and testing of the release
-to ensure that bugs for your use cases are fixed.
-
-Each OpenStack cloud is different even if you have a near-identical
-architecture as described in this guide.
As a result, you must still
-test upgrades between versions in your environment using an approximate
-clone of your environment.
-
-However, that is not to say that it needs to be the same size or use
-identical hardware as the production environment. It is important to
-consider the hardware and scale of the cloud that you are upgrading. The
-following tips can help you minimize the cost:
-
-Use your own cloud
-   The simplest place to start testing the next version of OpenStack is
-   by setting up a new environment inside your own cloud. This might
-   seem odd, especially the double virtualization used in running
-   compute nodes. But it is a sure way to very quickly test your
-   configuration.
-
-Use a public cloud
-   Consider using a public cloud to test the scalability limits of your
-   cloud controller configuration. Most public clouds bill by the hour,
-   which means it can be inexpensive to perform even a test with many
-   nodes.
-
-Make another storage endpoint on the same system
-   If you use an external storage plug-in or shared file system with
-   your cloud, you can test whether it works by creating a second share
-   or endpoint. This allows you to test the system before entrusting
-   your storage to the new version.
-
-Watch the network
-   Even at smaller-scale testing, look for excess network packets to
-   determine whether something is going horribly wrong in
-   inter-component communication.
-
-To set up the test environment, you can use one of several methods:
-
-- Do a full manual install by using the `OpenStack Installation
-  Guide <http://docs.openstack.org/#install-guides>`_ for
-  your platform. Review the final configuration files and installed
-  packages.
-
-- Create a clone of your automated configuration infrastructure with
-  changed package repository URLs.
-
-  Alter the configuration until it works.
-
-Either approach is valid. Use the approach that matches your experience.
-
-An upgrade pre-testing system is excellent for getting the configuration
-to work. However, it is important to note that the historical use of the
-system and differences in user interaction can affect the success of
-upgrades.
-
-If possible, we highly recommend that you dump your production database
-tables and test the upgrade in your development environment using this
-data. Several MySQL bugs have been uncovered during database migrations
-because of slight table differences between a fresh installation and
-tables that migrated from one version to another. This can have an
-impact on large real datasets, which you do not want to encounter during
-a production outage.
-
-Artificial scale testing can go only so far. After your cloud is
-upgraded, you must pay careful attention to the performance aspects of
-your cloud.
-
-Upgrade Levels
---------------
-
-Upgrade levels are a feature added to OpenStack Compute in the
-Grizzly release to provide version locking on the RPC (Message Queue)
-communications between the various Compute services.
-
-This functionality is an important piece of the puzzle when it comes to
-live upgrades and is conceptually similar to the existing API versioning
-that allows OpenStack services of different versions to communicate
-without issue.
-
-Without upgrade levels, an X+1 version Compute service can receive and
-understand X version RPC messages, but it can only send out X+1 version
-RPC messages.
For example, if a nova-conductor process has been upgraded
-to X+1 version, then the conductor service will be able to understand
-messages from X version nova-compute processes, but those compute
-services will not be able to understand messages sent by the conductor
-service.
-
-During an upgrade, operators can add configuration options to
-``nova.conf`` that lock the version of RPC messages and allow live
-upgrading of the services without interruption caused by version
-mismatch. The configuration options allow the specification of RPC
-version numbers if desired, but release name aliases are also supported.
-For example:
-
-.. code-block:: ini
-
-   [upgrade_levels]
-   compute=X+1
-   conductor=X+1
-   scheduler=X+1
-
-will keep the RPC version locked across the specified services to the
-RPC version used in X+1. As all instances of a particular service are
-upgraded to the newer version, the corresponding line can be removed
-from ``nova.conf``.
-
-Using this functionality, ideally one would lock the RPC version to the
-OpenStack version being upgraded from on nova-compute nodes, to ensure
-that, for example, X+1 version nova-compute processes will continue to
-work with X version nova-conductor processes while the upgrade
-completes. Once the upgrade of nova-compute processes is complete, the
-operator can move onto upgrading nova-conductor and remove the version
-locking for nova-compute in ``nova.conf``.
-
-General upgrade process
-~~~~~~~~~~~~~~~~~~~~~~~
-
-This section describes the process to upgrade a basic OpenStack
-deployment based on the basic two-node architecture in the `OpenStack
-Installation
-Guide <http://docs.openstack.org/#install-guides>`_. All
-nodes must run a supported distribution of Linux with a recent kernel
-and the current release packages.
-
-Service specific upgrade instructions
--------------------------------------
-
-* `Upgrading the Networking Service
-  <http://docs.openstack.org/networking-guide/>`_
-
-Prerequisites
--------------
-
-- Perform some cleaning of the environment prior to starting the
-  upgrade process to ensure a consistent state. For example, instances
-  not fully purged from the system after deletion might cause
-  indeterminate behavior.
-
-- For environments using the OpenStack Networking service (neutron),
-  verify the release version of the database. For example:
-
-  .. code-block:: console
-
-     # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
-       --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron
-
-Perform a backup
-----------------
-
-#. Save the configuration files on all nodes. For example:
-
-   .. code-block:: console
-
-      # for i in keystone glance nova neutron openstack-dashboard cinder heat ceilometer; \
-        do mkdir $i-kilo; \
-        done
-      # for i in keystone glance nova neutron openstack-dashboard cinder heat ceilometer; \
-        do cp -r /etc/$i/* $i-kilo/; \
-        done
-
-   .. note::
-
-      You can modify this example script on each node to handle different
-      services.
-
-#. Make a full database backup of your production data. As of Kilo,
-   database downgrades are not supported, and the only method available to
-   get back to a prior database version will be to restore from backup.
-
-   .. code-block:: console
-
-      # mysqldump -u root -p --opt --add-drop-database --all-databases > icehouse-db-backup.sql
-
-   .. note::
-
-      Consider updating your SQL server configuration as described in the
-      `OpenStack Installation
-      Guide <http://docs.openstack.org/#install-guides>`_.
-
-Manage repositories
--------------------
-
-On all nodes:
-
-#. Remove the repository for the previous release packages.
-
-#. Add the repository for the new release packages.
-
-#. Update the repository database.
-
-Upgrade packages on each node
------------------------------
-
-Depending on your specific configuration, upgrading all packages might
-restart or break services supplemental to your OpenStack environment.
-For example, if you use the TGT iSCSI framework for Block Storage
-volumes and the upgrade includes new packages for it, the package
-manager might restart the TGT iSCSI services and impact connectivity to
-volumes.
-
-If the package manager prompts you to update configuration files, reject
-the changes. The package manager appends a suffix to newer versions of
-configuration files. Consider reviewing and adopting content from these
-files.
-
-.. note::
-
-   You may need to explicitly install the ``ipset`` package if your
-   distribution does not install it as a dependency.
-
-Update services
----------------
-
-To update a service on each node, you generally modify one or more
-configuration files, stop the service, synchronize the database schema,
-and start the service. Some services require different steps. We
-recommend verifying operation of each service before proceeding to the
-next service.
-
-The order in which you should upgrade the services, and any changes from
-the general upgrade process, are described below:
-
-**Controller node**
-
-#. OpenStack Identity - Clear any expired tokens before synchronizing
-   the database.
-
-#. OpenStack Image service
-
-#. OpenStack Compute, including networking components.
-
-#. OpenStack Networking
-
-#. OpenStack Block Storage
-
-#. OpenStack dashboard - In typical environments, updating the
-   dashboard only requires restarting the Apache HTTP service.
-
-#. OpenStack Orchestration
-
-#. OpenStack Telemetry - In typical environments, updating the
-   Telemetry service only requires restarting the service.
-
-#. OpenStack Compute - Edit the configuration file and restart the
-   service.
-
-#. OpenStack Networking - Edit the configuration file and restart the
-   service.
-
-**Compute nodes**
-
-- OpenStack Block Storage - Updating the Block Storage service only
-  requires restarting the service.
-
-**Storage nodes**
-
-- OpenStack Networking - Edit the configuration file and restart the
-  service.
-
-Final steps
------------
-
-On all distributions, you must perform some final tasks to complete the
-upgrade process.
-
-#. Decrease DHCP timeouts by modifying ``/etc/nova/nova.conf`` on the
-   compute nodes back to the original value for your environment.
-
-#. Update all ``.ini`` files to match passwords and pipelines as required
-   for the OpenStack release in your environment.
-
-#. After migration, users see different results from
-   :command:`nova image-list` and :command:`glance image-list`. To ensure
-   users see the same images in the list
-   commands, edit the ``/etc/glance/policy.json`` and
-   ``/etc/nova/policy.json`` files to contain
-   ``"context_is_admin": "role:admin"``, which limits access to private
-   images for projects.
-
-#. Verify proper operation of your environment. Then, notify your users
-   that their cloud is operating normally again.
-
-.. _rolling_back_a_failed_upgrade:
-
-Rolling back a failed upgrade
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Upgrades involve complex operations and can fail. Before attempting any
-upgrade, you should make a full database backup of your production data.
-As of Kilo, database downgrades are not supported, and the only method
-available to get back to a prior database version will be to restore
-from backup.
.. _rolling_back_a_failed_upgrade:

Rolling back a failed upgrade
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Upgrades involve complex operations and can fail. Before attempting any
upgrade, you should make a full database backup of your production data.
As of Kilo, database downgrades are not supported, and the only method
available to get back to a prior database version will be to restore
from backup.

This section provides guidance for rolling back to a previous release of
OpenStack. All distributions follow a similar procedure.

A common scenario is to take down production management services in
preparation for an upgrade, complete part of the upgrade process, and
then discover one or more problems not encountered during testing. As a
consequence, you must roll back your environment to the original "known
good" state. You also made sure that you did not make any state changes
after attempting the upgrade process: no new instances, networks,
storage volumes, and so on. Any of these new resources will be in a
frozen state after the databases are restored from backup.

Within this scope, you must complete these steps to successfully roll
back your environment:

#. Roll back configuration files.

#. Restore databases from backup.

#. Roll back packages.

You should verify that you have the requisite backups to restore.
Rolling back upgrades is a tricky process because distributions tend to
put much more effort into testing upgrades than downgrades. Broken
downgrades take significantly more effort to troubleshoot and resolve
than broken upgrades. Only you can weigh the risks of trying to push a
failed upgrade forward versus rolling it back. Generally, consider
rolling back as the very last option.

The following steps described for Ubuntu have worked on at least one
production environment, but they might not work for all environments.

**To perform the rollback**

#. Stop all OpenStack services.

#. Copy the contents of the configuration backup directories that you
   created during the upgrade process back to the ``/etc/`` directory.

#. Restore databases from the ``RELEASE_NAME-db-backup.sql`` backup file
   that you created with the :command:`mysqldump` command during the upgrade
   process:

   .. code-block:: console

      # mysql -u root -p < RELEASE_NAME-db-backup.sql

#. Downgrade OpenStack packages.

   .. warning::

      Downgrading packages is by far the most complicated step; it is
      highly dependent on the distribution and the overall administration
      of the system.

   #. Determine which OpenStack packages are installed on your system. Use the
      :command:`dpkg --get-selections` command. Filter for OpenStack
      packages, filter again to omit packages explicitly marked in the
      ``deinstall`` state, and save the final output to a file. For example,
      the following command covers a controller node with keystone, glance,
      nova, neutron, and cinder:

      .. 
code-block:: console - - # dpkg --get-selections | grep -e keystone -e glance -e nova -e neutron \ - -e cinder | grep -v deinstall | tee openstack-selections - cinder-api install - cinder-common install - cinder-scheduler install - cinder-volume install - glance install - glance-api install - glance-common install - glance-registry install - neutron-common install - neutron-dhcp-agent install - neutron-l3-agent install - neutron-lbaas-agent install - neutron-metadata-agent install - neutron-plugin-openvswitch install - neutron-plugin-openvswitch-agent install - neutron-server install - nova-api install - nova-cert install - nova-common install - nova-conductor install - nova-consoleauth install - nova-novncproxy install - nova-objectstore install - nova-scheduler install - python-cinder install - python-cinderclient install - python-glance install - python-glanceclient install - python-keystone install - python-keystoneclient install - python-neutron install - python-neutronclient install - python-nova install - python-novaclient install - - .. note:: - - Depending on the type of server, the contents and order of your - package list might vary from this example. - - #. You can determine the package versions available for reversion by using - the ``apt-cache policy`` command. If you removed the Grizzly - repositories, you must first reinstall them and run ``apt-get update``: - - .. code-block:: console - - # apt-cache policy nova-common - nova-common: - Installed: 1:2013.2-0ubuntu1~cloud0 - Candidate: 1:2013.2-0ubuntu1~cloud0 - Version table: - *** 1:2013.2-0ubuntu1~cloud0 0 - 500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ - precise-updates/havana/main amd64 Packages - 100 /var/lib/dpkg/status - 1:2013.1.4-0ubuntu1~cloud0 0 - 500 http://ubuntu-cloud.archive.canonical.com/ubuntu/ - precise-updates/grizzly/main amd64 Packages - 2012.1.3+stable-20130423-e52e6912-0ubuntu1.2 0 - 500 http://us.archive.ubuntu.com/ubuntu/ - precise-updates/main amd64 Packages - 500 http://security.ubuntu.com/ubuntu/ - precise-security/main amd64 Packages - 2012.1-0ubuntu2 0 - 500 http://us.archive.ubuntu.com/ubuntu/ - precise/main amd64 Packages - - This tells us the currently installed version of the package, newest - candidate version, and all versions along with the repository that - contains each version. Look for the appropriate Grizzly - version— ``1:2013.1.4-0ubuntu1~cloud0`` in this case. The process of - manually picking through this list of packages is rather tedious and - prone to errors. You should consider using the following script to help - with this process: - - .. 
code-block:: console

         # for i in `cut -f 1 openstack-selections | sed 's/neutron/quantum/;'`;
           do echo -n $i ;apt-cache policy $i | grep -B 1 grizzly |
           grep -v Packages | awk '{print "="$1}';done | tr '\n' ' ' |
           tee openstack-grizzly-versions
         cinder-api=1:2013.1.4-0ubuntu1~cloud0
         cinder-common=1:2013.1.4-0ubuntu1~cloud0
         cinder-scheduler=1:2013.1.4-0ubuntu1~cloud0
         cinder-volume=1:2013.1.4-0ubuntu1~cloud0
         glance=1:2013.1.4-0ubuntu1~cloud0
         glance-api=1:2013.1.4-0ubuntu1~cloud0
         glance-common=1:2013.1.4-0ubuntu1~cloud0
         glance-registry=1:2013.1.4-0ubuntu1~cloud0
         quantum-common=1:2013.1.4-0ubuntu1~cloud0
         quantum-dhcp-agent=1:2013.1.4-0ubuntu1~cloud0
         quantum-l3-agent=1:2013.1.4-0ubuntu1~cloud0
         quantum-lbaas-agent=1:2013.1.4-0ubuntu1~cloud0
         quantum-metadata-agent=1:2013.1.4-0ubuntu1~cloud0
         quantum-plugin-openvswitch=1:2013.1.4-0ubuntu1~cloud0
         quantum-plugin-openvswitch-agent=1:2013.1.4-0ubuntu1~cloud0
         quantum-server=1:2013.1.4-0ubuntu1~cloud0
         nova-api=1:2013.1.4-0ubuntu1~cloud0
         nova-cert=1:2013.1.4-0ubuntu1~cloud0
         nova-common=1:2013.1.4-0ubuntu1~cloud0
         nova-conductor=1:2013.1.4-0ubuntu1~cloud0
         nova-consoleauth=1:2013.1.4-0ubuntu1~cloud0
         nova-novncproxy=1:2013.1.4-0ubuntu1~cloud0
         nova-objectstore=1:2013.1.4-0ubuntu1~cloud0
         nova-scheduler=1:2013.1.4-0ubuntu1~cloud0
         python-cinder=1:2013.1.4-0ubuntu1~cloud0
         python-cinderclient=1:1.0.3-0ubuntu1~cloud0
         python-glance=1:2013.1.4-0ubuntu1~cloud0
         python-glanceclient=1:0.9.0-0ubuntu1.2~cloud0
         python-quantum=1:2013.1.4-0ubuntu1~cloud0
         python-quantumclient=1:2.2.0-0ubuntu1~cloud0
         python-nova=1:2013.1.4-0ubuntu1~cloud0
         python-novaclient=1:2.13.0-0ubuntu1~cloud0

      .. note::

         If you decide to do this step manually, don't forget to change
         ``neutron`` to ``quantum`` where applicable.

   #. Use the :command:`apt-get install` command to install specific versions of each
      package by specifying ``package=version``. The script in the
      previous step conveniently created a list of ``package=version`` pairs
      for you:

      .. code-block:: console

         # apt-get install `cat openstack-grizzly-versions`

      This step completes the rollback procedure. You should remove the
      upgrade release repository and run :command:`apt-get update` to prevent
      accidental upgrades until you solve whatever issue caused you to roll
      back your environment.

diff --git a/doc/ops-guide/source/ops_upstream.rst b/doc/ops-guide/source/ops_upstream.rst deleted file mode 100644 index 55c6a9af..00000000 --- a/doc/ops-guide/source/ops_upstream.rst +++ /dev/null @@ -1,324 +0,0 @@

==================
Upstream OpenStack
==================

OpenStack is founded on a thriving community that is a source of help
and welcomes your contributions. This chapter details some of the ways
you can interact with the others involved.

Getting Help
~~~~~~~~~~~~

There are several avenues available for seeking assistance. The quickest
way is to help the community help you. Search the Q&A sites, mailing
list archives, and bug lists for issues similar to yours. If you can't
find anything, follow the directions for reporting bugs or use one of
the channels for support, which are listed below.

Your first port of call should be the official OpenStack documentation,
found on http://docs.openstack.org. You can get questions answered on
http://ask.openstack.org.

`Mailing lists `_ are
also a great place to get help. The wiki page has more information about
the various lists. 
As an operator, the main lists you should be aware of -are: - -`General list `_ - *openstack@lists.openstack.org*. The scope of this list is the - current state of OpenStack. This is a very high-traffic mailing - list, with many, many emails per day. - -`Operators list `_ - *openstack-operators@lists.openstack.org.* This list is intended for - discussion among existing OpenStack cloud operators, such as - yourself. Currently, this list is relatively low traffic, on the - order of one email a day. - -`Development list `_ - *openstack-dev@lists.openstack.org*. The scope of this list is the - future state of OpenStack. This is a high-traffic mailing list, with - multiple emails per day. - -We recommend that you subscribe to the general list and the operator -list, although you must set up filters to manage the volume for the -general list. You'll also find links to the mailing list archives on the -mailing list wiki page, where you can search through the discussions. - -`Multiple IRC channels `_ are -available for general questions and developer discussions. The general -discussion channel is #openstack on *irc.freenode.net*. - -Reporting Bugs -~~~~~~~~~~~~~~ - -As an operator, you are in a very good position to report unexpected -behavior with your cloud. Since OpenStack is flexible, you may be the -only individual to report a particular issue. Every issue is important -to fix, so it is essential to learn how to easily submit a bug -report. - -All OpenStack projects use `Launchpad `_ -for bug tracking. You'll need to create an account on Launchpad before you -can submit a bug report. - -Once you have a Launchpad account, reporting a bug is as simple as -identifying the project or projects that are causing the issue. -Sometimes this is more difficult than expected, but those working on the -bug triage are happy to help relocate issues if they are not in the -right place initially: - -- Report a bug in - `nova `_. - -- Report a bug in - `python-novaclient `_. - -- Report a bug in - `swift `_. - -- Report a bug in - `python-swiftclient `_. - -- Report a bug in - `glance `_. - -- Report a bug in - `python-glanceclient `_. - -- Report a bug in - `keystone `_. - -- Report a bug in - `python-keystoneclient `_. - -- Report a bug in - `neutron `_. - -- Report a bug in - `python-neutronclient `_. - -- Report a bug in - `cinder `_. - -- Report a bug in - `python-cinderclient `_. - -- Report a bug in - `manila `_. - -- Report a bug in - `python-manilaclient `_. - -- Report a bug in - `python-openstackclient `_. - -- Report a bug in - `horizon `_. - -- Report a bug with the - `documentation `_. - -- Report a bug with the `API - documentation `_. - -To write a good bug report, the following process is essential. First, -search for the bug to make sure there is no bug already filed for the -same issue. If you find one, be sure to click on "This bug affects X -people. Does this bug affect you?" If you can't find the issue, then -enter the details of your report. 
It should at least include: - -- The release, or milestone, or commit ID corresponding to the software - that you are running - -- The operating system and version where you've identified the bug - -- Steps to reproduce the bug, including what went wrong - -- Description of the expected results instead of what you saw - -- Portions of your log files so that you include only relevant excerpts - -When you do this, the bug is created with: - -- Status: *New* - -In the bug comments, you can contribute instructions on how to fix a -given bug, and set it to *Triaged*. Or you can directly fix it: assign -the bug to yourself, set it to *In progress*, branch the code, implement -the fix, and propose your change for merging. But let's not get ahead of -ourselves; there are bug triaging tasks as well. - -Confirming and Prioritizing ---------------------------- - -This stage is about checking that a bug is real and assessing its -impact. Some of these steps require bug supervisor rights (usually -limited to core teams). If the bug lacks information to properly -reproduce or assess the importance of the bug, the bug is set to: - -- Status: *Incomplete* - -Once you have reproduced the issue (or are 100 percent confident that -this is indeed a valid bug) and have permissions to do so, set: - -- Status: *Confirmed* - -Core developers also prioritize the bug, based on its impact: - -- Importance: - -The bug impacts are categorized as follows: - -#. *Critical* if the bug prevents a key feature from working properly - (regression) for all users (or without a simple workaround) or - results in data loss - -#. *High* if the bug prevents a key feature from working properly for - some users (or with a workaround) - -#. *Medium* if the bug prevents a secondary feature from working - properly - -#. *Low* if the bug is mostly cosmetic - -#. *Wishlist* if the bug is not really a bug but rather a welcome change - in behavior - -If the bug contains the solution, or a patch, set the bug status to -*Triaged*. - -Bug Fixing ----------- - -At this stage, a developer works on a fix. During that time, to avoid -duplicating the work, the developer should set: - -- Status: *In Progress* - -- Assignee: - -When the fix is ready, the developer proposes a change and gets the -change reviewed. - -After the Change Is Accepted ----------------------------- - -After the change is reviewed, accepted, and lands in master, it -automatically moves to: - -- Status: *Fix Committed* - -When the fix makes it into a milestone or release branch, it -automatically moves to: - -- Milestone: Milestone the bug was fixed in - -- Status: \ *Fix Released* - -Join the OpenStack Community -~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Since you've made it this far in the book, you should consider becoming -an official individual member of the community and `join the OpenStack -Foundation `_. The OpenStack -Foundation is an independent body providing shared resources to help -achieve the OpenStack mission by protecting, empowering, and promoting -OpenStack software and the community around it, including users, -developers, and the entire ecosystem. We all share the responsibility to -make this community the best it can possibly be, and signing up to be a -member is the first step to participating. Like the software, individual -membership within the OpenStack Foundation is free and accessible to -anyone. 
- -How to Contribute to the Documentation -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -OpenStack documentation efforts encompass operator and administrator -docs, API docs, and user docs. - -The genesis of this book was an in-person event, but now that the book -is in your hands, we want you to contribute to it. OpenStack -documentation follows the coding principles of iterative work, with bug -logging, investigating, and fixing. - -Just like the code, http://docs.openstack.org is updated constantly -using the Gerrit review system, with source stored in git.openstack.org -in the `openstack-manuals -repository `_ -and the `api-site -repository `_. - -To review the documentation before it's published, go to the OpenStack -Gerrit server at \ http://review.openstack.org and search for -`project:openstack/openstack-manuals `_ -or -`project:openstack/api-site `_. - -See the `How To Contribute page on the -wiki `_ for more -information on the steps you need to take to submit your first -documentation review or change. - -Security Information -~~~~~~~~~~~~~~~~~~~~ - -As a community, we take security very seriously and follow a specific -process for reporting potential issues. We vigilantly pursue fixes and -regularly eliminate exposures. You can report security issues you -discover through this specific process. The OpenStack Vulnerability -Management Team is a very small group of experts in vulnerability -management drawn from the OpenStack community. The team's job is -facilitating the reporting of vulnerabilities, coordinating security -fixes and handling progressive disclosure of the vulnerability -information. Specifically, the team is responsible for the following -functions: - -Vulnerability management - All vulnerabilities discovered by community members (or users) can - be reported to the team. - -Vulnerability tracking - The team will curate a set of vulnerability related issues in the - issue tracker. Some of these issues are private to the team and the - affected product leads, but once remediation is in place, all - vulnerabilities are public. - -Responsible disclosure - As part of our commitment to work with the security community, the - team ensures that proper credit is given to security researchers who - responsibly report issues in OpenStack. - -We provide two ways to report issues to the OpenStack Vulnerability -Management Team, depending on how sensitive the issue is: - -- Open a bug in Launchpad and mark it as a "security bug." This makes - the bug private and accessible to only the Vulnerability Management - Team. - -- If the issue is extremely sensitive, send an encrypted email to one - of the team's members. Find their GPG keys at `OpenStack - Security `_. - -You can find the full list of security-oriented teams you can join at -`Security Teams `_. The -vulnerability management process is fully documented at `Vulnerability -Management `_. - -Finding Additional Information -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -In addition to this book, there are many other sources of information -about OpenStack. The -`OpenStack website `_ -is a good starting point, with -`OpenStack Docs `_ and `OpenStack API -Docs `_ providing technical -documentation about OpenStack. The `OpenStack -wiki `_ contains a lot of -general information that cuts across the OpenStack projects, including a -list of `recommended -tools `_. 
Finally, there are a number of blogs aggregated at `Planet
OpenStack `_.

diff --git a/doc/ops-guide/source/ops_user_facing_operations.rst b/doc/ops-guide/source/ops_user_facing_operations.rst deleted file mode 100644 index ddf13c94..00000000 --- a/doc/ops-guide/source/ops_user_facing_operations.rst +++ /dev/null @@ -1,2322 +0,0 @@

======================
User-Facing Operations
======================

This guide is for OpenStack operators and does not seek to be an
exhaustive reference for users, but as an operator, you should have a
basic understanding of how to use the cloud facilities. This chapter
looks at OpenStack from a basic user perspective, which helps you
understand your users' needs and determine, when you get a trouble
ticket, whether it is a user issue or a service issue. The main concepts
covered are images, flavors, security groups, block storage, shared file
system storage, and instances.

Images
~~~~~~

OpenStack images can often be thought of as "virtual machine templates."
Images can also be standard installation media such as ISO images.
Essentially, they contain bootable file systems that are used to launch
instances.

Adding Images
-------------

Several pre-made images exist and can easily be imported into the Image
service. A common image to add is the CirrOS image, which is very small
and used for testing purposes. To add this image, simply do:

.. code-block:: console

   $ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
   $ glance image-create --name='cirros image' --is-public=true \
     --container-format=bare --disk-format=qcow2 < cirros-0.3.4-x86_64-disk.img

The :command:`glance image-create` command provides a large set of options for
working with your image. For example, the ``min-disk`` option is useful
for images that require root disks of a certain size (for example, large
Windows images). To view these options, do:

.. code-block:: console

   $ glance help image-create

The ``location`` option is important to note. It does not copy the
entire image into the Image service, but references an original location
where the image can be found. Upon launching an instance of that image,
the Image service accesses the image from the location specified.

The ``copy-from`` option copies the image from the location specified
into the ``/var/lib/glance/images`` directory. The same thing is done
when using the STDIN redirection with ``<``, as shown in the example.

Run the following command to view the properties of existing images:

.. code-block:: console

   $ glance image-show IMAGE_UUID

Adding Signed Images
--------------------

To provide a chain of trust from an end user to the Image service, and
the Image service to Compute, an end user can import signed images into
the Image service that can be verified in Compute. Appropriate Image
service properties need to be set to enable signature verification.
Currently, signature verification is provided in Compute only, but an
accompanying feature in the Image service is targeted for :term:`Mitaka`.

.. note::

   Prior to the steps below, an asymmetric keypair and certificate must
   be generated. In this example, these are called ``private_key.pem`` and
   ``new_cert.crt``, respectively, and both reside in the current
   directory. Also note that the image in this example is
   cirros-0.3.4-x86_64-disk.img, but any image can be used.
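If you do not yet have a key and certificate, one possible way to
generate a self-signed pair with OpenSSL is sketched below. The key
size, validity period, and subject name are illustrative; a certificate
issued by your own CA works equally well:

.. code-block:: console

   $ openssl genrsa -out private_key.pem 2048
   $ openssl req -new -x509 -key private_key.pem -out new_cert.crt \
     -days 365 -subj "/CN=image-signing-example"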
The following are the steps needed to create the signature used for the
signed images:

#. Retrieve the image for upload:

   .. code-block:: console

      $ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

#. Use the private key to create a signature of the image.

   .. note::

      The following implicit values are being used to create the signature
      in this example:

      - Signature hash method = SHA-256

      - Signature key type = RSA-PSS

   .. note::

      The following options are currently supported:

      - Signature hash methods: SHA-224, SHA-256, SHA-384, and SHA-512

      - Signature key types: DSA, ECC_SECT571K1, ECC_SECT409K1,
        ECC_SECT571R1, ECC_SECT409R1, ECC_SECP521R1, ECC_SECP384R1,
        and RSA-PSS

   Generate a signature of the image and convert it to a base64 representation:

   .. code-block:: console

      $ openssl dgst -sha256 -sign private_key.pem \
        -sigopt rsa_padding_mode:pss \
        -out image-file.signature cirros-0.3.4-x86_64-disk.img

   .. code-block:: console

      $ base64 image-file.signature > signature_64

   .. code-block:: console

      $ cat signature_64
      'c4br5f3FYQV6Nu20cRUSnx75R/VcW3diQdsUN2nhPw+UcQRDoGx92hwMgRxzFYeUyydRTWCcUS2ZLudPR9X7rM
      THFInA54Zj1TwEIbJTkHwlqbWBMU4+k5IUIjXxHO6RuH3Z5f/SlSt7ajsNVXaIclWqIw5YvEkgXTIEuDPE+C4='

#. Create the context:

   .. code-block:: python

      $ python
      >>> from keystoneclient.v3 import client
      >>> keystone_client = client.Client(username='demo',
      ...                                 user_domain_name='Default',
      ...                                 password='password',
      ...                                 project_name='demo',
      ...                                 auth_url='http://localhost:5000/v3')

      >>> from oslo_context import context
      >>> context = context.RequestContext(auth_token=keystone_client.auth_token,
      ...                                  tenant=keystone_client.project_id)

#. Encode the certificate in DER format:

   .. code-block:: python

      >>> from cryptography import x509 as cryptography_x509
      >>> from cryptography.hazmat import backends
      >>> from cryptography.hazmat.primitives import serialization
      >>> with open("new_cert.crt", "rb") as cert_file:
      ...     cert = cryptography_x509.load_pem_x509_certificate(
      ...         cert_file.read(), backend=backends.default_backend())
      ...
      >>> certificate_der = cert.public_bytes(encoding=serialization.Encoding.DER)

#. Upload the certificate in DER format to Castellan:

   .. code-block:: python

      >>> from castellan.common.objects import x_509
      >>> from castellan import key_manager
      >>> castellan_cert = x_509.X509(certificate_der)
      >>> key_API = key_manager.API()
      >>> cert_uuid = key_API.store(context, castellan_cert)
      >>> cert_uuid
      u'62a33f41-f061-44ba-9a69-4fc247d3bfce'

#. Upload the image to the Image service, with the signature metadata.

   .. note::

      The following signature properties are used:

      - img_signature uses the signature called signature_64

      - img_signature_certificate_uuid uses the value from cert_uuid
        in step 5 above

      - img_signature_hash_method matches 'SHA-256' in step 2 above

      - img_signature_key_type matches 'RSA-PSS' in step 2 above

   .. code-block:: console

      $ source openrc demo
      $ export OS_IMAGE_API_VERSION=2
      $ glance image-create \
        --property name=cirrosSignedImage_goodSignature \
        --property is-public=true \
        --container-format bare \
        --disk-format qcow2 \
        --property img_signature='c4br5f3FYQV6Nu20cRUSnx75R/VcW3diQdsUN2nhPw+UcQRDoGx92hwM
        gRxzFYeUyydRTWCcUS2ZLudPR9X7rMTHFInA54Zj1TwEIbJTkHwlqbWBMU4+k5IUIjXxHO6RuH3Z5f/
        SlSt7ajsNVXaIclWqIw5YvEkgXTIEuDPE+C4=' \
        --property img_signature_certificate_uuid='62a33f41-f061-44ba-9a69-4fc247d3bfce' \
        --property img_signature_hash_method='SHA-256' \
        --property img_signature_key_type='RSA-PSS' \
        < ~/cirros-0.3.4-x86_64-disk.img

#. Signature verification will occur when Compute boots the signed image.

   .. note::

      As of the Mitaka release, Compute supports instance signature
      validation. This is enabled by setting the
      ``verify_glance_signatures`` flag in ``nova.conf`` to ``True``. When
      enabled, Compute automatically validates the signed image prior to
      instance launch.
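To exercise the verification path end to end, you can boot an instance
from the image you just uploaded. This is a sketch only: the flavor and
the instance name are illustrative, and the image is referenced by the
name assigned during the upload step:

.. code-block:: console

   $ nova boot --flavor m1.tiny --image cirrosSignedImage_goodSignature \
     signed-image-test

If the signature or certificate is invalid, the boot request should fail
validation instead of starting the instance.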
Sharing Images Between Projects
-------------------------------

In a multi-tenant cloud environment, users sometimes want to share their
personal images or snapshots with other projects. This can be done on
the command line with the ``glance`` tool by the owner of the image.

To share an image or snapshot with another project, do the following:

#. Obtain the UUID of the image:

   .. code-block:: console

      $ glance image-list

#. Obtain the UUID of the project with which you want to share your image.
   Unfortunately, non-admin users are unable to use the :command:`keystone`
   command to do this. The easiest solution is to obtain the UUID either
   from an administrator of the cloud or from a user located in the
   project.

#. Once you have both pieces of information, run
   the :command:`glance` command:

   .. code-block:: console

      $ glance member-create IMAGE_UUID PROJECT_UUID

   For example:

   .. code-block:: console

      $ glance member-create 733d1c44-a2ea-414b-aca7-69decf20d810 \
        771ed149ef7e4b2b88665cc1c98f77ca

   Project 771ed149ef7e4b2b88665cc1c98f77ca will now have access to image
   733d1c44-a2ea-414b-aca7-69decf20d810.

Deleting Images
---------------

To delete an image, just execute:

.. code-block:: console

   $ glance image-delete IMAGE_UUID

.. note::

   Deleting an image does not affect instances or snapshots that were
   based on the image.

Other CLI Options
-----------------

A full set of options can be found using:

.. code-block:: console

   $ glance help

or the `Command-Line Interface
Reference `__.

The Image service and the Database
----------------------------------

The only thing that the Image service does not store in a database is
the image itself. The Image service database has two main
tables:

- ``images``

- ``image_properties``

Working directly with the database and SQL queries can provide you with
custom lists and reports of images. Technically, you can update
properties about images through the database, although this is not
generally recommended.
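As a starting point, a read-only query against the ``images`` table is a
safe way to explore what the service stores. This sketch assumes a MySQL
back end with the default ``glance`` database name and a ``glance``
database user; adjust the credentials to your deployment:

.. code-block:: console

   $ mysql -u glance -p glance -e "SELECT id, name, status, is_public \
     FROM images LIMIT 5;"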
Example Image service Database Queries
--------------------------------------

One interesting example is modifying the table of images and the owner
of that image. This can be easily done if you simply display the unique
ID of the owner. This example goes one step further and displays the
readable name of the owner:

.. code-block:: console

   mysql> select glance.images.id,
          glance.images.name, keystone.tenant.name, is_public from
          glance.images inner join keystone.tenant on
          glance.images.owner=keystone.tenant.id;

Another example is displaying all properties for a certain image:

.. code-block:: console

   mysql> select name, value from
          image_properties where id = 

Flavors
~~~~~~~

Virtual hardware templates are called "flavors" in OpenStack, defining
sizes for RAM, disk, number of cores, and so on. The default install
provides five flavors.

These are configurable by admin users (the rights may also be delegated
to other users by redefining the access controls for
``compute_extension:flavormanage`` in ``/etc/nova/policy.json`` on the
``nova-api`` server). To get the list of available flavors on your
system, run:

.. code-block:: console

   $ nova flavor-list
   +-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | ID  | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
   +-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
   | 1   | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
   | 2   | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
   | 3   | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
   | 4   | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
   | 5   | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
   +-----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

The :command:`nova flavor-create` command allows authorized users to create
new flavors. Additional flavor manipulation commands can be shown with the
command:

.. code-block:: console

   $ nova help | grep flavor

Flavors define a number of parameters, resulting in the user having a
choice of what type of virtual machine to run, just like they would have
if they were purchasing a physical server. The table below lists the elements
that can be set. Note in particular ``extra_specs``, which can be used to
define free-form characteristics, giving a lot of flexibility beyond just the
size of RAM, CPU, and Disk.

.. list-table:: Flavor parameters
   :widths: 50 50
   :header-rows: 1

   * - **Column**
     - **Description**
   * - ID
     - Unique ID (integer or UUID) for the flavor.
   * - Name
     - A descriptive name, such as ``xx.size_name``, is conventional but not required, though some third-party tools may rely on it.
   * - Memory_MB
     - Virtual machine memory in megabytes.
   * - Disk
     - Virtual root disk size in gigabytes. This is an ephemeral disk the base image is copied into. You don't use it when you boot from a persistent volume. The "0" size is a special case that uses the native base image size as the size of the ephemeral root volume.
   * - Ephemeral
     - Specifies the size of a secondary ephemeral data disk. This is an empty, unformatted disk and exists only for the life of the instance.
   * - Swap
     - Optional swap space allocation for the instance.
   * - VCPUs
     - Number of virtual CPUs presented to the instance.
   * - RXTX_Factor
     - Optional property that allows created servers to have a different
       bandwidth cap from that defined in the network they are attached
       to. This factor is multiplied by the ``rxtx_base`` property of the
       network. Default value is 1.0 (that is, the same as the attached
       network).
   * - Is_Public
     - Boolean value that indicates whether the flavor is available to
       all users or private.
Private flavors do not get the current
       tenant assigned to them. Defaults to ``True``.
   * - extra_specs
     - Additional optional restrictions on which compute nodes the
       flavor can run on. This is implemented as key-value pairs that must
       match against the corresponding key-value pairs on compute nodes.
       Can be used to implement things like special resources (such as
       flavors that can run only on compute nodes with GPU hardware).


Private Flavors
---------------

A user might need a custom flavor that is uniquely tuned for a project
she is working on. For example, the user might require 128 GB of memory.
If you create a new flavor as described above, the user would have
access to the custom flavor, but so would all other tenants in your
cloud. Sometimes this sharing isn't desirable. In this scenario,
allowing all users to have access to a flavor with 128 GB of memory
might cause your cloud to reach full capacity very quickly. To prevent
this, you can restrict access to the custom flavor using the
:command:`nova` command:

.. code-block:: console

   $ nova flavor-access-add FLAVOR_ID PROJECT_ID

To view a flavor's access list, do the following:

.. code-block:: console

   $ nova flavor-access-list --flavor FLAVOR_ID

.. note::

   Once access to a flavor has been restricted, no other projects
   besides the ones granted explicit access will be able to see the
   flavor. This includes the admin project. Make sure to add the admin
   project in addition to the original project.

   It's also helpful to allocate a specific numeric range for custom
   and private flavors. On UNIX-based systems, nonsystem accounts
   usually have a UID starting at 500. A similar approach can be taken
   with custom flavors. This helps you easily identify which flavors
   are custom, private, and public for the entire cloud.

**How Do I Modify an Existing Flavor?**

The OpenStack dashboard simulates the ability to modify a flavor by
deleting an existing flavor and creating a new one with the same name.

Security Groups
~~~~~~~~~~~~~~~

A common new-user issue with OpenStack is failing to set an appropriate
security group when launching an instance. As a result, the user is
unable to contact the instance on the network.

Security groups are sets of IP filter rules that are applied to an
instance's networking. They are project specific, and project members
can edit the default rules for their group and add new rule sets. All
projects have a "default" security group, which is applied to instances
that have no other security group defined. Unless changed, this security
group denies all incoming traffic.

General Security Groups Configuration
-------------------------------------

The ``nova.conf`` option ``allow_same_net_traffic`` (which defaults to
``true``) globally controls whether the rules apply to hosts that share
a network. When set to ``true``, hosts on the same subnet are not
filtered and are allowed to pass all types of traffic between them. On a
flat network, this allows all instances from all projects unfiltered
communication. With VLAN networking, this allows access between
instances within the same project. If ``allow_same_net_traffic`` is set
to ``false``, security groups are enforced for all connections. In this
case, it is possible for projects to simulate ``allow_same_net_traffic``
by configuring their default security group to allow all traffic from
their subnet.

.. 
note:: - - As noted in the previous chapter, the number of rules per security - group is controlled by the ``quota_security_group_rules``, and the - number of allowed security groups per project is controlled by the - ``quota_security_groups`` quota. - -End-User Configuration of Security Groups ------------------------------------------ - -Security groups for the current project can be found on the OpenStack -dashboard under :guilabel:`Access & Security`. To see details of an -existing group, select the :guilabel:`edit` action for that security group. -Obviously, modifying existing groups can be done from this edit interface. -There is a :guilabel:`Create Security Group` button on the main -**Access & Security** page for creating new groups. We discuss the terms -used in these fields when we explain the command-line equivalents. - -**Setting with nova command** - -From the command line, you can get a list of security groups for the -project you're acting in using the :command:`nova` command: - -.. code-block:: console - - $ nova secgroup-list - +---------+-------------+ - | Name | Description | - +---------+-------------+ - | default | default | - | open | all ports | - +---------+-------------+ - -To view the details of the "open" security group: - -.. code-block:: console - - $ nova secgroup-list-rules open - +-------------+-----------+---------+-----------+--------------+ - | IP Protocol | From Port | To Port | IP Range | Source Group | - +-------------+-----------+---------+-----------+--------------+ - | icmp | -1 | 255 | 0.0.0.0/0 | | - | tcp | 1 | 65535 | 0.0.0.0/0 | | - | udp | 1 | 65535 | 0.0.0.0/0 | | - +-------------+-----------+---------+-----------+--------------+ - -These rules are all "allow" type rules, as the default is deny. The -first column is the IP protocol (one of icmp, tcp, or udp), and the -second and third columns specify the affected port range. The fourth -column specifies the IP range in CIDR format. This example shows the -full port range for all protocols allowed from all IPs. - -When adding a new security group, you should pick a descriptive but -brief name. This name shows up in brief descriptions of the instances -that use it where the longer description field often does not. Seeing -that an instance is using security group ``http`` is much easier to -understand than ``bobs_group`` or ``secgrp1``. - -As an example, let's create a security group that allows web traffic -anywhere on the Internet. We'll call this group ``global_http``, which -is clear and reasonably concise, encapsulating what is allowed and from -where. From the command line, do: - -.. code-block:: console - - $ nova secgroup-create \ - global_http "allow web traffic from the Internet" - +-------------+-------------------------------------+ - | Name | Description | - +-------------+-------------------------------------+ - | global_http | allow web traffic from the Internet | - +-------------+-------------------------------------+ - -This creates the empty security group. To make it do what we want, we -need to add some rules: - -.. 
code-block:: console

   $ nova secgroup-add-rule SEC_GROUP IP_PROTO FROM_PORT TO_PORT CIDR
   $ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0
   +-------------+-----------+---------+-----------+--------------+
   | IP Protocol | From Port | To Port | IP Range  | Source Group |
   +-------------+-----------+---------+-----------+--------------+
   | tcp         | 80        | 80      | 0.0.0.0/0 |              |
   +-------------+-----------+---------+-----------+--------------+

Note that the arguments are positional, and the ``from-port`` and
``to-port`` arguments specify the allowed local port range; they do not
indicate the source and destination ports of the connection. More
complex rule sets can be built up through multiple invocations of
:command:`nova secgroup-add-rule`. For example, if you want to
pass both http and https traffic, do this:

.. code-block:: console

   $ nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0
   +-------------+-----------+---------+-----------+--------------+
   | IP Protocol | From Port | To Port | IP Range  | Source Group |
   +-------------+-----------+---------+-----------+--------------+
   | tcp         | 443       | 443     | 0.0.0.0/0 |              |
   +-------------+-----------+---------+-----------+--------------+

Despite only outputting the newly added rule, this operation is
additive:

.. code-block:: console

   $ nova secgroup-list-rules global_http
   +-------------+-----------+---------+-----------+--------------+
   | IP Protocol | From Port | To Port | IP Range  | Source Group |
   +-------------+-----------+---------+-----------+--------------+
   | tcp         | 80        | 80      | 0.0.0.0/0 |              |
   | tcp         | 443       | 443     | 0.0.0.0/0 |              |
   +-------------+-----------+---------+-----------+--------------+

The inverse operation is called :command:`secgroup-delete-rule`, using the
same format. Whole security groups can be removed with
:command:`secgroup-delete`.

To create security group rules for a cluster of instances, you want to
use SourceGroups.

SourceGroups are a special, dynamic way of defining the CIDR of allowed
sources. The user specifies a SourceGroup (security group name), and then
all of the user's other instances using the specified SourceGroup are
selected dynamically. This dynamic selection alleviates the need for
individual rules to allow each new member of the cluster.

The command syntax is as follows:

.. code-block:: console

   $ nova secgroup-add-group-rule SEC_GROUP SOURCE_GROUP IP_PROTO FROM_PORT TO_PORT

An example usage is shown here:

.. code-block:: console

   $ nova secgroup-add-group-rule cluster global_http tcp 22 22

The "cluster" rule allows SSH access from any other instance that uses
the ``global_http`` group.

**Setting with neutron command**

If your environment is using Neutron, you can configure security group
settings using the :command:`neutron` command. To get a list of security
groups for the project you are acting in, use the following command:

.. code-block:: console

   $ neutron security-group-list
   +--------------------------------------+---------+-------------+
   | id                                   | name    | description |
   +--------------------------------------+---------+-------------+
   | 6777138a-deb7-4f10-8236-6400e7aff5b0 | default | default     |
   | 750acb39-d69b-4ea0-a62d-b56101166b01 | open    | all ports   |
   +--------------------------------------+---------+-------------+

To view the details of the "open" security group:

.. 
code-block:: console - - $ neutron security-group-show open - +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Field | Value | - +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | description | all ports | - | id | 750acb39-d69b-4ea0-a62d-b56101166b01 | - | name | open | - | security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "607ec981611a4839b7b06f6dfa81317d", "port_range_max": null, "security_group_id": "750acb39-d69b-4e0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv4", "id": "361a1b62-95dd-46e1-8639-c3b2000aab60"} | - | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "udp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 65535, "security_group_id": "750acb9-d69b-4ea0-a62d-b56101166b01", "port_range_min": 1, "ethertype": "IPv4", "id": "496ba8b7-d96e-4655-920f-068a3d4ddc36"} | - | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "icmp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "750acb9-d69b-4ea0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv4", "id": "50642a56-3c4e-4b31-9293-0a636759a156"} | - | | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "607ec981611a4839b7b06f6dfa81317d", "port_range_max": null, "security_group_id": "750acb39-d69b-4e0-a62d-b56101166b01", "port_range_min": null, "ethertype": "IPv6", "id": "f46f35eb-8581-4ca1-bbc9-cf8d0614d067"} | - | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 65535, "security_group_id": "750acb9-d69b-4ea0-a62d-b56101166b01", "port_range_min": 1, "ethertype": "IPv4", "id": "fb6f2d5e-8290-4ed8-a23b-c6870813c921"} | - | tenant_id | 607ec981611a4839b7b06f6dfa81317d | - +----------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -These rules are all "allow" type rules, as the default is deny. This -example shows the full port range for all protocols allowed from all -IPs. This section describes the most common security-group-rule -parameters: - -direction - The direction in which the security group rule is applied. Valid - values are ``ingress`` or ``egress``. - -remote_ip_prefix - This attribute value matches the specified IP prefix as the source - IP address of the IP packet. - -protocol - The protocol that is matched by the security group rule. 
Valid - values are ``null``, ``tcp``, ``udp``, ``icmp``, and ``icmpv6``. - -port_range_min - The minimum port number in the range that is matched by the security - group rule. If the protocol is TCP or UDP, this value must be less - than or equal to the ``port_range_max`` attribute value. If the - protocol is ICMP or ICMPv6, this value must be an ICMP or ICMPv6 - type, respectively. - -port_range_max - The maximum port number in the range that is matched by the security - group rule. The ``port_range_min`` attribute constrains the - ``port_range_max`` attribute. If the protocol is ICMP or ICMPv6, - this value must be an ICMP or ICMPv6 type, respectively. - -ethertype - Must be ``IPv4`` or ``IPv6``, and addresses represented in CIDR must - match the ingress or egress rules. - -When adding a new security group, you should pick a descriptive but -brief name. This name shows up in brief descriptions of the instances -that use it where the longer description field often does not. Seeing -that an instance is using security group ``http`` is much easier to -understand than ``bobs_group`` or ``secgrp1``. - -This example creates a security group that allows web traffic anywhere -on the Internet. We'll call this group ``global_http``, which is clear -and reasonably concise, encapsulating what is allowed and from where. -From the command line, do: - -.. code-block:: console - - $ neutron security-group-create \ - global_http --description "allow web traffic from the Internet" - Created a new security_group: - +----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Field | Value | - +----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | description | allow web traffic from the Internet | - | id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 | - | name | global_http | - | security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv4", "id": "b2e56b3a-890b-48d3-9380-8a9f6f8b1b36"} | - | | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv6", "id": "153d84ba-651d-45fd-9015-58807749efc5"} | - | tenant_id | 341f49145ec7445192dc3c2abc33500d | - +----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -Immediately after create, the security group has only an allow egress -rule. 
To make it do what we want, we need to add some rules: - -.. code-block:: console - - $ neutron security-group-rule-create [-h] - [-f {html,json,json,shell,table,value,yaml,yaml}] - [-c COLUMN] [--max-width ] - [--noindent] [--prefix PREFIX] - [--request-format {json,xml}] - [--tenant-id TENANT_ID] - [--direction {ingress,egress}] - [--ethertype ETHERTYPE] - [--protocol PROTOCOL] - [--port-range-min PORT_RANGE_MIN] - [--port-range-max PORT_RANGE_MAX] - [--remote-ip-prefix REMOTE_IP_PREFIX] - [--remote-group-id REMOTE_GROUP] - SECURITY_GROUP - $ neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol tcp --port-range-min 80 --port-range-max 80 --remote-ip-prefix 0.0.0.0/0 global_http - Created a new security_group_rule: - +-------------------+--------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------+ - | direction | ingress | - | ethertype | IPv4 | - | id | 88ec4762-239e-492b-8583-e480e9734622 | - | port_range_max | 80 | - | port_range_min | 80 | - | protocol | tcp | - | remote_group_id | | - | remote_ip_prefix | 0.0.0.0/0 | - | security_group_id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 | - | tenant_id | 341f49145ec7445192dc3c2abc33500d | - +-------------------+--------------------------------------+ - -More complex rule sets can be built up through multiple invocations of -:command:`neutron security-group-rule-create`. For example, if you want to pass -both http and https traffic, do this: - -.. code-block:: console - - $ neutron security-group-rule-create --direction ingress --ethertype ipv4 --protocol tcp --port-range-min 443 --port-range-max 443 --remote-ip-prefix 0.0.0.0/0 global_http - Created a new security_group_rule: - +-------------------+--------------------------------------+ - | Field | Value | - +-------------------+--------------------------------------+ - | direction | ingress | - | ethertype | IPv4 | - | id | c50315e5-29f3-408e-ae15-50fdc03fb9af | - | port_range_max | 443 | - | port_range_min | 443 | - | protocol | tcp | - | remote_group_id | | - | remote_ip_prefix | 0.0.0.0/0 | - | security_group_id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 | - | tenant_id | 341f49145ec7445192dc3c2abc33500d | - +-------------------+--------------------------------------+ - -Despite only outputting the newly added rule, this operation is -additive: - -.. 
code-block:: console - - $ neutron security-group-show global_http - +----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | Field | Value | - +----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - | description | allow web traffic from the Internet | - | id | c6d78d56-7c56-4c82-abcb-05aa9839d1e7 | - | name | global_http | - | security_group_rules | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv6", "id": "153d84ba-651d-45fd-9015-58807749efc5"} | - | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 80, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": 80, "ethertype": "IPv4", "id": "88ec4762-239e-492b-8583-e480e9734622"} | - | | {"remote_group_id": null, "direction": "egress", "remote_ip_prefix": null, "protocol": null, "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": null, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": null, "ethertype": "IPv4", "id": "b2e56b3a-890b-48d3-9380-8a9f6f8b1b36"} | - | | {"remote_group_id": null, "direction": "ingress", "remote_ip_prefix": "0.0.0.0/0", "protocol": "tcp", "tenant_id": "341f49145ec7445192dc3c2abc33500d", "port_range_max": 443, "security_group_id": "c6d78d56-7c56-4c82-abcb-05aa9839d1e7", "port_range_min": 443, "ethertype": "IPv4", "id": "c50315e5-29f3-408e-ae15-50fdc03fb9af"} | - | tenant_id | 341f49145ec7445192dc3c2abc33500d | - +----------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ - -The inverse operation is called :command:`security-group-rule-delete`, -specifying security-group-rule ID. Whole security groups can be removed -with :command:`security-group-delete`. - -To create security group rules for a cluster of instances, use -RemoteGroups. - -RemoteGroups are a dynamic way of defining the CIDR of allowed sources. -The user specifies a RemoteGroup (security group name) and then all the -users' other instances using the specified RemoteGroup are selected -dynamically. This dynamic selection alleviates the need for individual -rules to allow each new member of the cluster. - -The code is similar to the above example of -:command:`security-group-rule-create`. To use RemoteGroup, specify -:option:`--remote-group-id` instead of :option:`--remote-ip-prefix`. -For example: - -.. 
code-block:: console

   $ neutron security-group-rule-create --direction ingress \
     --ethertype IPv4 --protocol tcp --port-range-min 22 --port-range-max 22 \
     --remote-group-id global_http cluster

The "cluster" rule allows SSH access from any other instance that uses
the ``global_http`` group.

Block Storage
~~~~~~~~~~~~~

OpenStack volumes are persistent block-storage devices that can be
attached to and detached from instances, but they can be attached to only
one instance at a time. Similar to an external hard drive, they do not
provide shared storage in the way a network file system or object store
does. It is left to the operating system in the instance to put a file
system on the block device and mount it, or not.

As with other removable disk technology, it is important that the
operating system is not trying to make use of the disk before removing
it. On Linux instances, this typically involves unmounting any file
systems mounted from the volume. The OpenStack volume service cannot
tell whether it is safe to remove volumes from an instance, so it does
what it is told. If a user tells the volume service to detach a volume
from an instance while it is being written to, you can expect some level
of file system corruption as well as faults from whatever process within
the instance was using the device.

There is nothing OpenStack-specific in being aware of the steps needed
to access block devices from within the instance operating system,
potentially formatting them for first use and being cautious when
removing them. What is specific is how to create new volumes and attach
and detach them from instances. These operations can all be done from
the **Volumes** page of the dashboard or by using the ``cinder``
command-line client.

To add new volumes, you need only a name and a volume size in gigabytes.
Either put these into the **create volume** web form or use the command
line:

.. code-block:: console

   $ cinder create --display-name test-volume 10

This creates a 10 GB volume named ``test-volume``. To list existing
volumes and the instances they are connected to, if any:

.. code-block:: console

   $ cinder list
   +------------+---------+--------------------+------+-------------+-------------+
   | ID         | Status  | Display Name       | Size | Volume Type | Attached to |
   +------------+---------+--------------------+------+-------------+-------------+
   | 0821...19f | active  | test-volume        | 10   | None        |             |
   +------------+---------+--------------------+------+-------------+-------------+

OpenStack Block Storage also allows creating snapshots of volumes.
Remember that this is a block-level snapshot that is crash consistent,
so it is best if the volume is not connected to an instance when the
snapshot is taken and second best if the volume is not in use on the
instance it is attached to. If the volume is under heavy use, the
snapshot may have an inconsistent file system. In fact, by default, the
volume service does not take a snapshot of a volume that is attached to
an instance, though it can be forced to. To take a volume snapshot, either
select :guilabel:`Create Snapshot` from the :guilabel:`actions` column
next to the :guilabel:`volume` name on the **dashboard** volume page,
or run this from the command line:

.. code-block:: console

   usage: cinder snapshot-create [--force <True|False>]
                                 [--display-name <display-name>]
                                 [--display-description <display-description>]
                                 <volume>

   Add a new snapshot.

   Positional arguments:
     <volume>              ID of the volume to snapshot

   Optional arguments:
     --force <True|False>  Optional flag to indicate whether to
                           snapshot a volume even if it's
                           attached to an instance.
                           (Default=False)
     --display-name <display-name>
                           Optional snapshot name.
                           (Default=None)
     --display-description <display-description>
                           Optional snapshot description. (Default=None)

.. note::

   For more information about updating Block Storage volumes (for
   example, resizing or transferring), see the `OpenStack End User
   Guide `__.
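For example, a snapshot of the volume created earlier could be taken as
follows. The snapshot name and description are illustrative, and the
volume is referenced by name here (a volume ID also works):

.. code-block:: console

   $ cinder snapshot-create --display-name test-volume-snap \
     --display-description "before maintenance" test-volume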
.. note::

   For more information about updating Block Storage volumes (for
   example, resizing or transferring), see the `OpenStack End User
   Guide `__.

Block Storage Creation Failures
-------------------------------

If a user tries to create a volume and the volume immediately goes into
an error state, the best way to troubleshoot is to grep the cinder log
files for the volume's UUID. First try the log files on the cloud
controller, and then try the storage node where creation of the volume
was attempted:

.. code-block:: console

   # grep 903b85d0-bacc-4855-a261-10843fc2d65b /var/log/cinder/*.log

Shared File Systems Service
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Similar to Block Storage, the Shared File Systems service provides
persistent storage, called shares, that can be used in multi-tenant
environments. Users create and mount a share as a remote file system on
any machine that allows mounting shares and has network access to the
share exporter. The share can then be used for storing, sharing, and
exchanging files. The default configuration of the Shared File Systems
service depends on the back-end driver that the admin chooses when
starting the Shared File Systems service. For more information about
existing back-end drivers, see the section `"Share
Backends" `__
of the Shared File Systems service Developer Guide. For example, if a
back end based on OpenStack Block Storage is used, the Shared File
Systems service takes care of everything, including VMs, networking,
keypairs, and security groups. Other configurations require more
detailed knowledge of share functionality to set up and tune specific
parameters and modes of share operation.

Shares are remote mountable file systems, so users can mount a share on
multiple hosts and have it accessed from multiple hosts by multiple
users at a time. With the Shared File Systems service, you can perform a
large number of operations with shares:

- Create, update, delete, and force-delete shares

- Change access rules for shares, reset share state

- Specify quotas for existing users or tenants

- Create share networks

- Define new share types

- Perform operations with share snapshots: create, change name, create
  a share from a snapshot, delete

- Operate with consistency groups

- Use security services

For more information on share management, see the section `"Share
management" `__
of the "Shared File Systems" chapter in the OpenStack Administrator
Guide. As for security services, remember that different drivers support
different authentication methods, while the generic driver does not
support security services at all (see the section `"Security
services" `__
of the "Shared File Systems" chapter in the OpenStack Administrator
Guide).

You can create a share in a network, list shares, and show information
for, update, and delete a specified share. You can also create snapshots
of shares (see the section `"Share
snapshots" `__
of the "Shared File Systems" chapter in the OpenStack Administrator
Guide).

There are default and specific share types that allow you to filter or
choose back ends before you create a share.
The functions and behavior of share types are similar to Block Storage
volume types (see the section `"Share
types" `__
of the "Shared File Systems" chapter in the OpenStack Administrator
Guide).

To help users keep and restore their data, the Shared File Systems
service provides a mechanism to create and operate snapshots (see the
section `"Share
snapshots" `__
of the "Shared File Systems" chapter in the OpenStack Administrator
Guide).

A security service stores configuration information for clients for
authentication and authorization. Inside Manila, a share network can be
associated with up to three security services (for detailed information,
see the section `"Security
services" `__
of the "Shared File Systems" chapter in the OpenStack Administrator
Guide):

- LDAP

- Kerberos

- Microsoft Active Directory

The Shared File Systems service differs from the principles implemented
in Block Storage. The Shared File Systems service can work in two modes:

- Without interaction with share networks, in the so-called "no share
  servers" mode

- Interacting with share networks

The Shared File Systems service uses the Networking service to operate
share servers directly. To switch interaction with the Networking
service on, create a share specifying a share network. To use the "share
servers" mode with a network outside of OpenStack, a network plug-in
called StandaloneNetworkPlugin is used. In this case, provide the
network information in the configuration: IP range, network type, and
segmentation ID. You can also add security services to a share network
(see the section
`"Networking" `__
of the "Shared File Systems" chapter in the OpenStack Administrator
Guide).

The main idea of consistency groups is to enable you to create snapshots
at the exact same point in time from multiple file system shares. Those
snapshots can then be used for restoring all shares that were associated
with the consistency group (see the section `"Consistency
groups" `__
of the "Shared File Systems" chapter in the OpenStack Administrator
Guide).

Shared File System storage allows administrators to set limits and
quotas for specific tenants and users. Limits are the resource
limitations that are allowed for each tenant or user. Limits consist of:

- Rate limits

- Absolute limits

Rate limits control the frequency at which users can issue specific API
requests. Rate limits are configured by administrators in a config file.
Administrators can also specify quotas, that is, maximum values of
absolute limits, per tenant, whereas users can see only the amount of
resources they have consumed. Administrators can specify rate limits or
quotas for the following resources:

- Maximum amount of space available for all shares

- Maximum number of shares

- Maximum number of share networks

- Maximum number of share snapshots

- Maximum total amount of all snapshots

- Type and number of API calls that can be made in a specific time
  interval

Users can see their rate limits and absolute limits by running the
:command:`manila rate-limits` and :command:`manila absolute-limits`
commands, respectively. For more details on limits and quotas, see the
subsection `"Quotas
and limits" `__
of the "Share management" section of the OpenStack Administrator Guide.

This section lists several of the most important use cases that
demonstrate the main functions and abilities of the Shared File Systems
service:

- Create a share

- Operate with a share

- Manage access to shares

- Create snapshots

- Create a share network

- Manage a share network
.. note::

   The Shared File Systems service cannot warn you beforehand whether it
   is safe to write a specific large amount of data onto a certain share
   or to remove a consistency group that still has a number of shares
   assigned to it. If such a mistake happens, you can expect an error
   message, or the shares or consistency groups may even fail into an
   incorrect status. You can also expect some level of file system
   corruption if a user tries to unmount an unmanaged share while a
   process is using it for data transfer.

.. _create_share:

Create Share
------------

In this section, we examine the process of creating a simple share. It
consists of several steps:

- Check whether an appropriate share type is defined in the Shared
  File Systems service

- If such a share type does not exist, an admin should create it using
  the :command:`manila type-create` command before other users are able
  to use it

- Using a share network is optional. However, if you need one, check
  whether an appropriate network is defined in the Shared File Systems
  service by using the :command:`manila share-network-list` command. For
  information on creating a share network, see
  :ref:`create_a_share_network` below in this chapter.

- Create a public share using :command:`manila create`

- Make sure that the share has been created successfully and is ready
  to use (check the share status and see the share export location)

Below, the same procedure is described step by step and in more detail.

.. note::

   Before you start, make sure that the Shared File Systems service is
   installed on your OpenStack cluster and is ready to use.

By default, there are no share types defined in the Shared File Systems
service, so you should check whether the one you need has already been
created:

.. code-block:: console

   $ manila type-list
   +------+--------+-----------+-----------+----------------------------------+----------------------+
   | ID   | Name   | Visibility| is_default| required_extra_specs             | optional_extra_specs |
   +------+--------+-----------+-----------+----------------------------------+----------------------+
   | c0...| default| public    | YES       | driver_handles_share_servers:True| snapshot_support:True|
   +------+--------+-----------+-----------+----------------------------------+----------------------+

If the share type list is empty or does not contain a type you need,
create the required share type using this command:

.. code-block:: console

   $ manila type-create netapp1 False --is_public True

This command creates a public share type with the name ``netapp1`` and
``driver_handles_share_servers = False``.

You can now create a public share with the ``my_share_net`` network, the
default share type, the NFS shared file system protocol, and a size of
1 GB:

..
code-block:: console - - $ manila create nfs 1 --name "Share1" --description "My first share" --share-type default --share-network my_share_net --metadata aim=testing --public - +-----------------------------+--------------------------------------+ - | Property | Value | - +-----------------------------+--------------------------------------+ - | status | creating | - | share_type_name | default | - | description | My first share | - | availability_zone | None | - | share_network_id | 9c187d23-7e1d-4d91-92d0-77ea4b9b9496 | - | share_server_id | None | - | host | | - | access_rules_status | active | - | snapshot_id | None | - | is_public | True | - | task_state | None | - | snapshot_support | True | - | id | edd82179-587e-4a87-9601-f34b2ca47e5b | - | size | 1 | - | name | Share1 | - | share_type | e031d5e9-f113-491a-843f-607128a5c649 | - | has_replicas | False | - | replication_type | None | - | created_at | 2016-03-20T00:00:00.000000 | - | share_proto | NFS | - | consistency_group_id | None | - | source_cgsnapshot_member_id | None | - | project_id | e81908b1bfe8468abb4791eae0ef6dd9 | - | metadata | {u'aim': u'testing'} | - +-----------------------------+--------------------------------------+ - -To confirm that creation has been successful, see the share in the share -list: - -.. code-block:: console - - $ manila list - +----+-------+-----+------------+-----------+-------------------------------+----------------------+ - | ID | Name | Size| Share Proto| Share Type| Export location | Host | - +----+-------+-----+------------+-----------+-------------------------------+----------------------+ - | a..| Share1| 1 | NFS | c0086... | 10.254.0.3:/shares/share-2d5..| manila@generic1#GEN..| - +----+-------+-----+------------+-----------+-------------------------------+----------------------+ - -Check the share status and see the share export location. After -creation, the share status should become ``available``: - -.. 
code-block:: console

   $ manila show Share1
   +-----------------------------+----------------------------------------------------------------------+
   | Property                    | Value                                                                |
   +-----------------------------+----------------------------------------------------------------------+
   | status                      | available                                                            |
   | share_type_name             | default                                                              |
   | description                 | My first share                                                       |
   | availability_zone           | nova                                                                 |
   | share_network_id            | 9c187d23-7e1d-4d91-92d0-77ea4b9b9496                                 |
   | export_locations            |                                                                      |
   |                             | path = 10.254.0.3:/shares/share-18cb05be-eb69-4cb2-810f-91c75ef30f90 |
   |                             | preferred = False                                                    |
   |                             | is_admin_only = False                                                |
   |                             | id = d6a82c0d-36b0-438b-bf34-63f3932ddf4e                            |
   |                             | share_instance_id = 18cb05be-eb69-4cb2-810f-91c75ef30f90             |
   |                             | path = 10.0.0.3:/shares/share-18cb05be-eb69-4cb2-810f-91c75ef30f90   |
   |                             | preferred = False                                                    |
   |                             | is_admin_only = True                                                 |
   |                             | id = 51672666-06b8-4741-99ea-64f2286f52e2                            |
   |                             | share_instance_id = 18cb05be-eb69-4cb2-810f-91c75ef30f90             |
   | share_server_id             | ea8b3a93-ab41-475e-9df1-0f7d49b8fa54                                 |
   | host                        | manila@generic1#GENERIC1                                             |
   | access_rules_status         | active                                                               |
   | snapshot_id                 | None                                                                 |
   | is_public                   | True                                                                 |
   | task_state                  | None                                                                 |
   | snapshot_support            | True                                                                 |
   | id                          | e7364bcc-3821-49bf-82d6-0c9f0276d4ce                                 |
   | size                        | 1                                                                    |
   | name                        | Share1                                                               |
   | share_type                  | e031d5e9-f113-491a-843f-607128a5c649                                 |
   | has_replicas                | False                                                                |
   | replication_type            | None                                                                 |
   | created_at                  | 2016-03-20T00:00:00.000000                                           |
   | share_proto                 | NFS                                                                  |
   | consistency_group_id        | None                                                                 |
   | source_cgsnapshot_member_id | None                                                                 |
   | project_id                  | e81908b1bfe8468abb4791eae0ef6dd9                                     |
   | metadata                    | {u'aim': u'testing'}                                                 |
   +-----------------------------+----------------------------------------------------------------------+

The value of ``is_public`` defines the level of visibility for the
share: whether other tenants can see the share or not. By default, the
share is private. You can now mount the created share like a remote file
system and use it for your purposes.

.. note::

   See the subsection `"Share
   Management" `__
   of the "Shared File Systems" section of the Administrator Guide
   for details on share management operations.

Manage Access To Shares
-----------------------

You now have a share and would like to control access to it for other
users. For this, you have to perform a number of steps and operations.
Before getting to managing access to the share, pay attention to the
following important parameters. To grant or deny access to a share,
specify one of these supported share access levels:

- ``rw``: read and write (RW) access. This is the default value.

- ``ro``: read-only (RO) access.

Additionally, you should also specify one of these supported
authentication methods:

- ``ip``: authenticates an instance through its IP address. A valid
  format is XX.XX.XX.XX or XX.XX.XX.XX/XX. For example, 0.0.0.0/0.

- ``cert``: authenticates an instance through a TLS certificate.
  Specify the TLS identity as the IDENTKEY. A valid value is any string
  up to 64 characters long in the common name (CN) of the certificate.

- ``user``: authenticates by a specified user or group name. A valid
  value is an alphanumeric string that can contain some special
  characters and is from 4 to 32 characters long.

.. note::

   Do not mount a share without an access rule! This can lead to an
   exception.
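In general form, the command that grants access looks like the following
sketch, where the angle-bracketed values are the share, the
authentication method, and the address or identity to authenticate, as
described above:

.. code-block:: console

   $ manila access-allow <share> <access_type> <access_to> [--access-level <access_level>]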
Allow access to the share with the ``ip`` access type and the 10.254.0.4
IP address:

.. code-block:: console

   $ manila access-allow Share1 ip 10.254.0.4 --access-level rw
   +--------------+--------------------------------------+
   | Property     | Value                                |
   +--------------+--------------------------------------+
   | share_id     | 7bcd888b-681b-4836-ac9c-c3add4e62537 |
   | access_type  | ip                                   |
   | access_to    | 10.254.0.4                           |
   | access_level | rw                                   |
   | state        | new                                  |
   | id           | de715226-da00-4cfc-b1ab-c11f3393745e |
   +--------------+--------------------------------------+

Mount the share:

.. code-block:: console

   $ sudo mount -v -t nfs 10.254.0.5:/shares/share-5789ddcf-35c9-4b64-a28a-7f6a4a574b6a /mnt/

Then check whether the share mounted successfully and in accordance with
the specified access rules:

.. code-block:: console

   $ manila access-list Share1
   +--------------------------------------+-------------+------------+--------------+--------+
   | id                                   | access type | access to  | access level | state  |
   +--------------------------------------+-------------+------------+--------------+--------+
   | 4f391c6b-fb4f-47f5-8b4b-88c5ec9d568a | user        | demo       | rw           | error  |
   | de715226-da00-4cfc-b1ab-c11f3393745e | ip          | 10.254.0.4 | rw           | active |
   +--------------------------------------+-------------+------------+--------------+--------+

.. note::

   Different share features are supported by different share drivers.
   These examples use the generic driver (with Block Storage as a back
   end), which does not support the ``user`` and ``cert`` authentication
   methods.

.. note::

   For details of the features supported by different drivers, see the
   section `"Manila share features support
   mapping" `__
   of the Manila Developer Guide.

Manage Shares
-------------

There are several other useful operations you can perform when working
with shares.

Update Share
------------

To change the name of a share, update its description, or change its
level of visibility for other tenants, use this command:

.. code-block:: console

   $ manila update Share1 --description "My first share. Updated" --is-public False

Check the attributes of the updated Share1:

.. code-block:: console

   $ manila show Share1
   +-----------------------------+----------------------------------------------------------------------+
   | Property                    | Value                                                                |
   +-----------------------------+----------------------------------------------------------------------+
   | status                      | available                                                            |
   | share_type_name             | default                                                              |
   | description                 | My first share. Updated                                              |
   | availability_zone           | nova                                                                 |
   | share_network_id            | 9c187d23-7e1d-4d91-92d0-77ea4b9b9496                                 |
   | export_locations            |                                                                      |
   |                             | path = 10.254.0.3:/shares/share-18cb05be-eb69-4cb2-810f-91c75ef30f90 |
   |                             | preferred = False                                                    |
   |                             | is_admin_only = False                                                |
   |                             | id = d6a82c0d-36b0-438b-bf34-63f3932ddf4e                            |
   |                             | share_instance_id = 18cb05be-eb69-4cb2-810f-91c75ef30f90             |
   |                             | path = 10.0.0.3:/shares/share-18cb05be-eb69-4cb2-810f-91c75ef30f90   |
   |                             | preferred = False                                                    |
   |                             | is_admin_only = True                                                 |
   |                             | id = 51672666-06b8-4741-99ea-64f2286f52e2                            |
   |                             | share_instance_id = 18cb05be-eb69-4cb2-810f-91c75ef30f90             |
   | share_server_id             | ea8b3a93-ab41-475e-9df1-0f7d49b8fa54                                 |
   | host                        | manila@generic1#GENERIC1                                             |
   | access_rules_status         | active                                                               |
   | snapshot_id                 | None                                                                 |
   | is_public                   | False                                                                |
   | task_state                  | None                                                                 |
   | snapshot_support            | True                                                                 |
   | id                          | e7364bcc-3821-49bf-82d6-0c9f0276d4ce                                 |
   | size                        | 1                                                                    |
   | name                        | Share1                                                               |
   | share_type                  | e031d5e9-f113-491a-843f-607128a5c649                                 |
   | has_replicas                | False                                                                |
   | replication_type            | None                                                                 |
   | created_at                  | 2016-03-20T00:00:00.000000                                           |
   | share_proto                 | NFS                                                                  |
   | consistency_group_id        | None                                                                 |
   | source_cgsnapshot_member_id | None                                                                 |
   | project_id                  | e81908b1bfe8468abb4791eae0ef6dd9                                     |
   | metadata                    | {u'aim': u'testing'}                                                 |
   +-----------------------------+----------------------------------------------------------------------+

Reset Share State
-----------------

Sometimes a share may get stuck in an erroneous or transitional state.
Unprivileged users do not have the appropriate access rights to correct
this situation, but with cloud administrator permissions you can reset
the share's state:

.. code-block:: console

   $ manila reset-state [--state <state>] <share>

Here, ``<state>`` indicates which state to assign the share to; the
options include the ``available``, ``error``, ``creating``,
``deleting``, and ``error_deleting`` states.

After running

.. code-block:: console

   $ manila reset-state Share2 --state deleting

check the share's status:

.. code-block:: console

   $ manila show Share2
   +-----------------------------+-------------------------------------------+
   | Property                    | Value                                     |
   +-----------------------------+-------------------------------------------+
   | status                      | deleting                                  |
   | share_type_name             | default                                   |
   | description                 | share from a snapshot.                    |
   | availability_zone           | nova                                      |
   | share_network_id            | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a      |
   | export_locations            | []                                        |
   | share_server_id             | 41b7829d-7f6b-4c96-aea5-d106c2959961      |
   | host                        | manila@generic1#GENERIC1                  |
   | snapshot_id                 | 962e8126-35c3-47bb-8c00-f0ee37f42ddd      |
   | is_public                   | False                                     |
   | task_state                  | None                                      |
   | snapshot_support            | True                                      |
   | id                          | b6b0617c-ea51-4450-848e-e7cff69238c7      |
   | size                        | 1                                         |
   | name                        | Share2                                    |
   | share_type                  | c0086582-30a6-4060-b096-a42ec9d66b86      |
   | created_at                  | 2015-09-25T06:25:50.000000                |
   | export_location             | 10.254.0.3:/shares/share-1dc2a471-3d47-...|
   | share_proto                 | NFS                                       |
   | consistency_group_id        | None                                      |
   | source_cgsnapshot_member_id | None                                      |
   | project_id                  | 20787a7ba11946adad976463b57d8a2f          |
   | metadata                    | {u'source': u'snapshot'}                  |
   +-----------------------------+-------------------------------------------+

Delete Share
------------

If you do not need a share any more, you can delete it using the
:command:`manila delete share_name_or_ID` command:

.. code-block:: console

   $ manila delete Share2
.. note::

   If you specified a consistency group while creating the share, you
   must provide the ``--consistency-group`` parameter to delete the
   share:

.. code-block:: console

   $ manila delete ba52454e-2ea3-47fa-a683-3176a01295e6 --consistency-group ffee08d9-c86c-45e5-861e-175c731daca2

Sometimes a share hangs in one of the transitional states (that is,
``creating``, ``deleting``, ``managing``, ``unmanaging``, ``extending``,
or ``shrinking``). In that case, to delete it, use the
:command:`manila force-delete share_name_or_ID` command, which requires
administrative permissions to run:

.. code-block:: console

   $ manila force-delete b6b0617c-ea51-4450-848e-e7cff69238c7

.. note::

   For more details and additional information about other cases,
   features, API commands, and so on, see the subsection `"Share
   Management" `__
   of the "Shared File Systems" section of the Administrator Guide.

Create Snapshots
----------------

The Shared File Systems service provides a snapshot mechanism to help
users restore their own data. To create a snapshot, use the
:command:`manila snapshot-create` command:

.. code-block:: console

   $ manila snapshot-create Share1 --name Snapshot1 --description "Snapshot of Share1"
   +-------------------+--------------------------------------+
   | Property          | Value                                |
   +-------------------+--------------------------------------+
   | status            | creating                             |
   | share_id          | e7364bcc-3821-49bf-82d6-0c9f0276d4ce |
   | description       | Snapshot of Share1                   |
   | created_at        | 2016-03-20T00:00:00.000000           |
   | share_proto       | NFS                                  |
   | provider_location | None                                 |
   | id                | a96cf025-92d1-4012-abdd-bb0f29e5aa8f |
   | size              | 1                                    |
   | share_size        | 1                                    |
   | name              | Snapshot1                            |
   +-------------------+--------------------------------------+

Then, if needed, update the name and description of the created
snapshot:

.. code-block:: console

   $ manila snapshot-rename Snapshot1 Snapshot_1 --description "Snapshot of Share1. Updated."

To make sure that the snapshot is available, run:

.. code-block:: console

   $ manila snapshot-show Snapshot1
   +-------------------+--------------------------------------+
   | Property          | Value                                |
   +-------------------+--------------------------------------+
   | status            | available                            |
   | share_id          | e7364bcc-3821-49bf-82d6-0c9f0276d4ce |
   | description       | Snapshot of Share1                   |
   | created_at        | 2016-03-30T10:53:19.000000           |
   | share_proto       | NFS                                  |
   | provider_location | 3ca7a3b2-9f9f-46af-906f-6a565bf8ee37 |
   | id                | a96cf025-92d1-4012-abdd-bb0f29e5aa8f |
   | size              | 1                                    |
   | share_size        | 1                                    |
   | name              | Snapshot1                            |
   +-------------------+--------------------------------------+

.. note::

   For more details and additional information on snapshots, see the
   subsection `"Share
   Snapshots" `__
   of the "Shared File Systems" section of the Administrator Guide.

.. _create_a_share_network:

Create a Share Network
----------------------

To control a share network, the Shared File Systems service requires
interaction with the Networking service in order to manage share servers
on its own. If the selected driver runs in a mode that requires this
kind of interaction, you need to specify the share network when a share
is created. For information on share creation, see :ref:`create_share`
earlier in this chapter. First, check the list of existing share
networks:
.. code-block:: console

   $ manila share-network-list
   +--------------------------------------+--------------+
   | id                                   | name         |
   +--------------------------------------+--------------+
   +--------------------------------------+--------------+

If the share network list is empty or does not contain the network you
require, create one; for example, a share network backed by a private
network and subnetwork:

.. code-block:: console

   $ manila share-network-create --neutron-net-id 5ed5a854-21dc-4ed3-870a-117b7064eb21 --neutron-subnet-id 74dcfb5a-b4d7-4855-86f5-a669729428dc --name my_share_net --description "My first share network"
   +-------------------+--------------------------------------+
   | Property          | Value                                |
   +-------------------+--------------------------------------+
   | name              | my_share_net                         |
   | segmentation_id   | None                                 |
   | created_at        | 2015-09-24T12:06:32.602174           |
   | neutron_subnet_id | 74dcfb5a-b4d7-4855-86f5-a669729428dc |
   | updated_at        | None                                 |
   | network_type      | None                                 |
   | neutron_net_id    | 5ed5a854-21dc-4ed3-870a-117b7064eb21 |
   | ip_version        | None                                 |
   | nova_net_id       | None                                 |
   | cidr              | None                                 |
   | project_id        | 20787a7ba11946adad976463b57d8a2f     |
   | id                | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a |
   | description       | My first share network               |
   +-------------------+--------------------------------------+

The ``segmentation_id``, ``cidr``, ``ip_version``, and ``network_type``
share network attributes are automatically set to the values determined
by the network provider.

Then check that the share network was created by requesting the network
list once again:

.. code-block:: console

   $ manila share-network-list
   +--------------------------------------+--------------+
   | id                                   | name         |
   +--------------------------------------+--------------+
   | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | my_share_net |
   +--------------------------------------+--------------+

Finally, to create a share that uses this share network, follow the
Create Share use case described earlier in this chapter.

.. note::

   See the subsection `"Share
   Networks" `__
   of the "Shared File Systems" section of the Administrator Guide
   for more details.

Manage a Share Network
----------------------

There is a pair of useful commands that help manipulate share networks.
To start, check the network list:

.. code-block:: console

   $ manila share-network-list
   +--------------------------------------+--------------+
   | id                                   | name         |
   +--------------------------------------+--------------+
   | 5c3cbabb-f4da-465f-bc7f-fadbe047b85a | my_share_net |
   +--------------------------------------+--------------+

If you configured the back end with
``driver_handles_share_servers = True`` (with share servers) and have
already performed some operations in the Shared File Systems service,
you can see a ``manila_service_network`` in the neutron list of
networks. This network was created by the share driver for internal use.

.. code-block:: console

   $ neutron net-list
   +--------------+------------------------+------------------------------------+
   | id           | name                   | subnets                            |
   +--------------+------------------------+------------------------------------+
   | 3b5a629a-e...| manila_service_network | 4f366100-50... 10.254.0.0/28       |
   | bee7411d-d...| public                 | 884a6564-01... 2001:db8::/64       |
   |              |                        | e6da81fa-55... 172.24.4.0/24       |
   | 5ed5a854-2...| private                | 74dcfb5a-bd... 10.0.0.0/24         |
   |              |                        | cc297be2-51... fd7d:177d:a48b::/64 |
   +--------------+------------------------+------------------------------------+
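If you need the Shared File Systems view of a share network rather than
the Networking one, you can also inspect it directly; a minimal sketch
using the share network created earlier:

.. code-block:: console

   $ manila share-network-show my_share_net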
You can also see detailed information about the share network, including
the ``network_type`` and ``segmentation_id`` fields:

.. code-block:: console

   $ neutron net-show manila_service_network
   +---------------------------+--------------------------------------+
   | Field                     | Value                                |
   +---------------------------+--------------------------------------+
   | admin_state_up            | True                                 |
   | availability_zone_hints   |                                      |
   | availability_zones        | nova                                 |
   | created_at                | 2016-03-20T00:00:00                  |
   | description               |                                      |
   | id                        | ef5282ab-dbf9-4d47-91d4-b0cc9b164567 |
   | ipv4_address_scope        |                                      |
   | ipv6_address_scope        |                                      |
   | mtu                       | 1450                                 |
   | name                      | manila_service_network               |
   | port_security_enabled     | True                                 |
   | provider:network_type     | vxlan                                |
   | provider:physical_network |                                      |
   | provider:segmentation_id  | 1047                                 |
   | router:external           | False                                |
   | shared                    | False                                |
   | status                    | ACTIVE                               |
   | subnets                   | aba49c7d-c7eb-44b9-9c8f-f6112b05a2e0 |
   | tags                      |                                      |
   | tenant_id                 | f121b3ee03804266af2959e56671b24a     |
   | updated_at                | 2016-03-20T00:00:00                  |
   +---------------------------+--------------------------------------+

You can also add security services to, and remove them from, a share
network.

.. note::

   For details, see the subsection `"Security
   Services" `__
   of the "Shared File Systems" section of the Administrator Guide.

Instances
~~~~~~~~~

Instances are the running virtual machines within an OpenStack cloud.
This section deals with how to work with them and their underlying
images, their network properties, and how they are represented in the
database.

Starting Instances
------------------

To launch an instance, you need to select an image, a flavor, and a
name. The name need not be unique, but your life will be simpler if it
is, because many tools will accept the name in place of the UUID as long
as the name is unique. You can start an instance from the dashboard with
the :guilabel:`Launch Instance` button on the **Instances** page or by
selecting the :guilabel:`Launch Instance` action next to an
:guilabel:`image` or :guilabel:`snapshot` on the **Images** page.

On the command line, do this:

.. code-block:: console

   $ nova boot --flavor <flavor> --image <image> <name>

There are a number of optional items that can be specified. You should
read the rest of this section before trying to start an instance, but
this is the base command that later details are layered upon.

To delete instances from the dashboard, select the
:guilabel:`Delete instance` action next to the
:guilabel:`instance` on the **Instances** page.

.. note::

   In releases prior to Mitaka, select the equivalent :guilabel:`Terminate
   instance` action.

From the command line, do this:

.. code-block:: console

   $ nova delete <instance>

It is important to note that powering off an instance does not terminate
it in the OpenStack sense.

Instance Boot Failures
----------------------

If an instance fails to start and immediately moves to an error state,
there are a few different ways to track down what has gone wrong. Some
of these can be done with normal user access, while others require
access to your log server or compute nodes.

The simplest reasons for instances failing to launch are quota
violations or the scheduler being unable to find a suitable compute node
on which to run the instance. In these cases, the error is apparent when
you run a :command:`nova show` on the faulted instance:

..
code-block:: console - - $ nova show test-instance - -.. code-block:: console - - +------------------------+-----------------------------------------------------\ - | Property | Value / - +------------------------+-----------------------------------------------------\ - | OS-DCF:diskConfig | MANUAL / - | OS-EXT-STS:power_state | 0 \ - | OS-EXT-STS:task_state | None / - | OS-EXT-STS:vm_state | error \ - | accessIPv4 | / - | accessIPv6 | \ - | config_drive | / - | created | 2013-03-01T19:28:24Z \ - | fault | {u'message': u'NoValidHost', u'code': 500, u'created/ - | flavor | xxl.super (11) \ - | hostId | / - | id | 940f3b2f-bd74-45ad-bee7-eb0a7318aa84 \ - | image | quantal-test (65b4f432-7375-42b6-a9b8-7f654a1e676e) / - | key_name | None \ - | metadata | {} / - | name | test-instance \ - | security_groups | [{u'name': u'default'}] / - | status | ERROR \ - | tenant_id | 98333a1a28e746fa8c629c83a818ad57 / - | updated | 2013-03-01T19:28:26Z \ - | user_id | a1ef823458d24a68955fec6f3d390019 / - +------------------------+-----------------------------------------------------\ - - -In this case, looking at the ``fault`` message shows ``NoValidHost``, -indicating that the scheduler was unable to match the instance -requirements. - -If :command:`nova show` does not sufficiently explain the failure, searching -for the instance UUID in the ``nova-compute.log`` on the compute node it -was scheduled on or the ``nova-scheduler.log`` on your scheduler hosts -is a good place to start looking for lower-level problems. - -Using :command:`nova show` as an admin user will show the compute node the -instance was scheduled on as ``hostId``. If the instance failed during -scheduling, this field is blank. - -Using Instance-Specific Data ----------------------------- - -There are two main types of instance-specific data: metadata and user -data. - -Instance metadata ------------------ - -For Compute, instance metadata is a collection of key-value pairs -associated with an instance. Compute reads and writes to these key-value -pairs any time during the instance lifetime, from inside and outside the -instance, when the end user uses the Compute API to do so. However, you -cannot query the instance-associated key-value pairs with the metadata -service that is compatible with the Amazon EC2 metadata service. - -For an example of instance metadata, users can generate and register SSH -keys using the :command:`nova` command: - -.. code-block:: console - - $ nova keypair-add mykey > mykey.pem - -This creates a key named ``mykey``, which you can associate with -instances. The file ``mykey.pem`` is the private key, which should be -saved to a secure location because it allows root access to instances -the ``mykey`` key is associated with. - -Use this command to register an existing key with OpenStack: - -.. code-block:: console - - $ nova keypair-add --pub-key mykey.pub mykey - -.. note:: - - You must have the matching private key to access instances - associated with this key. - -To associate a key with an instance on boot, add :option:`--key_name mykey` to -your command line. For example: - -.. code-block:: console - - $ nova boot --image ubuntu-cloudimage --flavor 2 --key_name mykey myimage - -When booting a server, you can also add arbitrary metadata so that you -can more easily identify it among other running instances. Use the -:option:`--meta` option with a key-value pair, where you can make up the -string for both the key and the value. For example, you could add a -description and also the creator of the server: - -.. 
code-block:: console

   $ nova boot --image=test-image --flavor=1 \
     --meta description='Small test image' smallimage

When viewing the server information, you can see the metadata included
on the metadata line:

.. code-block:: console

   $ nova show smallimage
   +------------------------+-----------------------------------------+
   | Property               | Value                                   |
   +------------------------+-----------------------------------------+
   | OS-DCF:diskConfig      | MANUAL                                  |
   | OS-EXT-STS:power_state | 1                                       |
   | OS-EXT-STS:task_state  | None                                    |
   | OS-EXT-STS:vm_state    | active                                  |
   | accessIPv4             |                                         |
   | accessIPv6             |                                         |
   | config_drive           |                                         |
   | created                | 2012-05-16T20:48:23Z                    |
   | flavor                 | m1.small                                |
   | hostId                 | de0...487                               |
   | id                     | 8ec...f915                              |
   | image                  | natty-image                             |
   | key_name               |                                         |
   | metadata               | {u'description': u'Small test image'}   |
   | name                   | smallimage                              |
   | private network        | 172.16.101.11                           |
   | progress               | 0                                       |
   | public network         | 10.4.113.11                             |
   | status                 | ACTIVE                                  |
   | tenant_id              | e83...482                               |
   | updated                | 2012-05-16T20:48:35Z                    |
   | user_id                | de3...0a9                               |
   +------------------------+-----------------------------------------+

Instance user data
------------------

The ``user-data`` key is a special key in the metadata service that
holds a file that cloud-aware applications within the guest instance can
access. For example,
`cloudinit `__ is an open
source package from Ubuntu, but available in most distributions, that
handles early initialization of a cloud instance and makes use of this
user data.

This user data can be put in a file on your local system and then passed
in at instance creation with the :option:`--user-data <user-data-file>`
flag. For example:

.. code-block:: console

   $ nova boot --image ubuntu-cloudimage --flavor 1 --user-data mydata.file mydatainstance

To understand the difference between user data and metadata, realize
that user data is created before an instance is started. User data is
accessible from within the instance when it is running. User data can be
used to store configuration, a script, or anything the tenant wants.

File injection
--------------

Arbitrary local files can also be placed into the instance file system
at creation time by using the :option:`--file <dst-path=src-path>`
option. You may store up to five files.

For example, let's say you have a special ``authorized_keys`` file named
special_authorized_keysfile that for some reason you want to put on
the instance instead of using the regular SSH key injection. In this
case, you can use the following command:

.. code-block:: console

   $ nova boot --image ubuntu-cloudimage --flavor 1 \
     --file /root/.ssh/authorized_keys=special_authorized_keysfile authkeyinstance

Associating Security Groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Security groups, as discussed earlier, are typically required to allow
network traffic to an instance, unless the default security group for a
project has been modified to be more permissive.

Adding security groups is typically done on instance boot. When
launching from the dashboard, you do this on the
:guilabel:`Access & Security` tab of the Launch Instance dialog.
When launching from the command line, append ``--security-groups``
with a comma-separated list of security groups.

It is also possible to add and remove security groups when an instance
is running. Currently this is only available through the command-line
tools. Here is an example:

.. code-block:: console

   $ nova add-secgroup <server> <securitygroup>
.. code-block:: console

   $ nova remove-secgroup <server> <securitygroup>

Floating IPs
~~~~~~~~~~~~

Where floating IPs are configured in a deployment, each project will
have a limited number of floating IPs controlled by a quota. However,
these need to be allocated to the project from the central pool prior to
their use, usually by the administrator of the project. To allocate a
floating IP to a project, use the :guilabel:`Allocate IP To Project`
button on the :guilabel:`Floating IPs` tab of the
:guilabel:`Access & Security` page of the dashboard. The command line
can also be used:

.. code-block:: console

   $ nova floating-ip-create

Once allocated, a floating IP can be assigned to running instances from
the dashboard, either by selecting :guilabel:`Associate Floating IP`
from the actions drop-down next to the IP on the :guilabel:`Floating
IPs` tab of the **Access & Security** page or by making this selection
next to the instance you want to associate it with on the **Instances**
page. The inverse action, Dissociate Floating IP, is available from the
:guilabel:`Floating IPs` tab of the **Access & Security** page and from
the **Instances** page.

To associate or disassociate a floating IP with a server from the
command line, use the following commands:

.. code-block:: console

   $ nova add-floating-ip <server> <address>
.. code-block:: console

   $ nova remove-floating-ip <server> <address>
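Putting the pieces together, here is a sketch of the full flow, assuming
a floating IP pool named ``public``, an instance named ``test-instance``,
and an illustrative address returned by the allocation step:

.. code-block:: console

   $ nova floating-ip-create public
   $ nova add-floating-ip test-instance 172.24.4.13
   $ nova remove-floating-ip test-instance 172.24.4.13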
Attaching Block Storage
~~~~~~~~~~~~~~~~~~~~~~~

You can attach block storage to instances from the dashboard on the
**Volumes** page. Click the :guilabel:`Manage Attachments` action next
to the volume you want to attach.

To perform this action from the command line, run the following command:

.. code-block:: console

   $ nova volume-attach <server> <volume>

You can also specify block device mapping at instance boot time through
the nova command-line client with this option set:

.. code-block:: console

   --block-device-mapping <dev-name=mapping>

The block device mapping format is
``<dev-name>=<id>:<type>:<size(GB)>:<delete-on-terminate>``,
where:

dev-name
    A device name where the volume is attached in the system at
    ``/dev/dev_name``

id
    The ID of the volume to boot from, as shown in the output of
    :command:`nova volume-list`

type
    Either ``snap``, which means that the volume was created from a
    snapshot, or anything other than ``snap`` (a blank string is valid).
    In the following example, the volume was not created from a
    snapshot, so we leave this field blank.

size (GB)
    The size of the volume in gigabytes. It is safe to leave this blank
    and have the Compute Service infer the size.

delete-on-terminate
    A boolean to indicate whether the volume should be deleted when the
    instance is terminated. True can be specified as ``True`` or ``1``.
    False can be specified as ``False`` or ``0``.

The following command will boot a new instance and attach a volume at
the same time. The volume of ID 13 will be attached as ``/dev/vdc``. It
is not a snapshot, does not specify a size, and will not be deleted when
the instance is terminated:

.. code-block:: console

   $ nova boot --image 4042220e-4f5e-4398-9054-39fbd75a5dd7 \
     --flavor 2 --key-name mykey --block-device-mapping vdc=13:::0 \
     boot-with-vol-test

If you have previously prepared block storage with a bootable file
system image, it is even possible to boot from persistent block storage.
The following command boots an instance from the specified volume. It is
similar to the previous command, but the image is omitted and the volume
is now attached as ``/dev/vda``:

.. code-block:: console

   $ nova boot --flavor 2 --key-name mykey \
     --block-device-mapping vda=13:::0 boot-from-vol-test

Read more detailed instructions for launching an instance from a
bootable volume in the `OpenStack End User
Guide `__.

To boot normally from an image and attach block storage, map to a device
other than vda. You can find instructions for launching an instance and
attaching a volume to the instance and for copying the image to the
attached volume in the `OpenStack End User
Guide `__.

Taking Snapshots
~~~~~~~~~~~~~~~~

The OpenStack snapshot mechanism allows you to create new images from
running instances. This is very convenient for upgrading base images or
for taking a published image and customizing it for local use. To
snapshot a running instance to an image using the CLI, do this:

.. code-block:: console

   $ nova image-create <instance name or uuid> <name of new image>

The dashboard interface for snapshots can be confusing because the
snapshots and images are displayed on the **Images** page. However, an
instance snapshot *is* an image. The only difference between an image
that you upload directly to the Image Service and an image that you
create by snapshot is that an image created by snapshot has additional
properties in the glance database. These properties are found in the
``image_properties`` table and include:

.. list-table::
   :widths: 50 50
   :header-rows: 1

   * - Name
     - Value
   * - ``image_type``
     - snapshot
   * - ``instance_uuid``
     - <uuid of instance that was snapshotted>
   * - ``base_image_ref``
     - <uuid of original image of instance that was snapshotted>
   * - ``image_location``
     - snapshot
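You can inspect these properties on a snapshot image from the command
line as well; a minimal sketch, with the image name purely illustrative:

.. code-block:: console

   $ glance image-show my-instance-snapshot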
Live Snapshots
--------------

Live snapshotting is a feature that allows users to snapshot running
virtual machines without pausing them. These snapshots are simply
disk-only snapshots. Snapshotting an instance can now be performed with
no downtime (assuming QEMU 1.3+ and libvirt 1.0+ are used).

.. note::

   If you use libvirt version ``1.2.2``, you may experience
   intermittent problems with live snapshot creation.

   To effectively disable libvirt live snapshotting until the
   problem is resolved, add the below setting to nova.conf:

   .. code-block:: ini

      [workarounds]
      disable_libvirt_livesnapshot = True

**Ensuring Snapshots of Linux Guests Are Consistent**

The following section is from Sébastien Han's `"OpenStack: Perform
Consistent Snapshots" blog
entry `__.

A snapshot captures the state of the file system, but not the state of
the memory. Therefore, to ensure your snapshot contains the data that
you want, before taking the snapshot you need to ensure that:

- Running programs have written their contents to disk

- The file system does not have any "dirty" buffers: where programs
  have issued the command to write to disk, but the operating system
  has not yet done the write

To ensure that important services have written their contents to disk
(such as databases), we recommend that you read the documentation for
those applications to determine what commands to issue to have them sync
their contents to disk. If you are unsure how to do this, the safest
approach is to simply stop these running services normally.

To deal with the "dirty" buffer issue, we recommend using the sync
command before snapshotting:

.. code-block:: console

   # sync

Running ``sync`` writes dirty buffers (buffered blocks that have been
modified but not yet written to disk) out to disk.

Just running ``sync`` is not enough to ensure that the file system is
consistent. We recommend that you use the ``fsfreeze`` tool, which halts
new access to the file system and creates a stable image on disk that is
suitable for snapshotting. The ``fsfreeze`` tool supports several file
systems, including ext3, ext4, and XFS. If your virtual machine instance
is running on Ubuntu, install the util-linux package to get
``fsfreeze``:

.. note::

   In the very common case where the underlying snapshot is done via
   LVM, the file system freeze is automatically handled by LVM.

.. code-block:: console

   # apt-get install util-linux

If your operating system doesn't have a version of ``fsfreeze``
available, you can use ``xfs_freeze`` instead, which is available on
Ubuntu in the xfsprogs package. Despite the "xfs" in the name,
xfs_freeze also works on ext3 and ext4 if you are using a Linux kernel
version 2.6.29 or greater, since it works at the virtual file system
(VFS) level starting at 2.6.29. The xfs_freeze version supports the
same command-line arguments as ``fsfreeze``.

Consider the example where you want to take a snapshot of a persistent
block storage volume, detected by the guest operating system as
``/dev/vdb`` and mounted on ``/mnt``.
The :command:`fsfreeze` command accepts two arguments:

-f
    Freeze the system

-u
    Thaw (unfreeze) the system

To freeze the volume in preparation for snapshotting, you would do the
following, as root, inside the instance:

.. code-block:: console

   # fsfreeze -f /mnt

You *must mount the file system* before you run the :command:`fsfreeze`
command.

When the :command:`fsfreeze -f` command is issued, all ongoing
transactions in the file system are allowed to complete, new write
system calls are halted, and other calls that modify the file system are
halted. Most importantly, all dirty data, metadata, and log information
are written to disk.

Once the volume has been frozen, do not attempt to read from or write to
the volume, as these operations hang. The operating system stops every
I/O operation, and any I/O attempts are delayed until the file system
has been unfrozen.

Once you have issued the :command:`fsfreeze` command, it is safe to
perform the snapshot. For example, if your instance was named
``mon-instance`` and you wanted to snapshot it to an image named
``mon-snapshot``, you could now run the following:

.. code-block:: console

   $ nova image-create mon-instance mon-snapshot

When the snapshot is done, you can thaw the file system with the
following command, as root, inside of the instance:

.. code-block:: console

   # fsfreeze -u /mnt

If you want to back up the root file system, you can't simply run the
preceding command because it will freeze the prompt. Instead, run the
following one-liner, as root, inside the instance:

.. code-block:: console

   # fsfreeze -f / && read x; fsfreeze -u /

After running this command, it is common practice to call
:command:`nova image-create` from your workstation, and once that is
done, to press Enter in your instance shell to unfreeze it. You could
automate this, but at the very least it lets you synchronize the freeze
with the snapshot properly.

**Ensuring Snapshots of Windows Guests Are Consistent**

Obtaining consistent snapshots of Windows VMs is conceptually similar to
obtaining consistent snapshots of Linux VMs, although it requires
additional utilities to coordinate with a Windows-only subsystem
designed to facilitate consistent backups.

Windows XP and later releases include a Volume Shadow Copy Service (VSS)
which provides a framework so that compliant applications can be
consistently backed up on a live filesystem. To use this framework, a
VSS requestor is run that signals to the VSS service that a consistent
backup is needed. The VSS service notifies compliant applications
(called VSS writers) to quiesce their data activity. The VSS service
then tells the copy provider to create a snapshot. Once the snapshot has
been made, the VSS service unfreezes VSS writers and normal I/O activity
resumes.

QEMU provides a guest agent that can be run in guests running on KVM
hypervisors. This guest agent, on Windows VMs, coordinates with the
Windows VSS service to facilitate a workflow which ensures consistent
snapshots. This feature requires at least QEMU 1.7. The relevant guest
agent commands are:

guest-file-flush
    Write out "dirty" buffers to disk, similar to the Linux ``sync``
    operation.

guest-fsfreeze-freeze
    Suspend I/O to the disks, similar to the Linux ``fsfreeze -f``
    operation.

guest-fsfreeze-thaw
    Resume I/O to the disks, similar to the Linux ``fsfreeze -u``
    operation.

To obtain snapshots of a Windows VM, these commands can be scripted in
sequence: flush the filesystems, freeze the filesystems, snapshot the
filesystems, then unfreeze the filesystems. As with scripting similar
workflows against Linux VMs, care must be used when writing such a
script to ensure error handling is thorough and filesystems will not be
left in a frozen state.
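A rough sketch of that sequence, assuming KVM with libvirt, the QEMU
guest agent running in the Windows guest, and an illustrative libvirt
domain name of ``instance-00000042``; here the :command:`virsh` calls
run as root on the compute node, :command:`nova image-create` runs from
your workstation, and ``win-instance`` and ``win-snapshot`` are also
illustrative names:

.. code-block:: console

   # virsh qemu-agent-command instance-00000042 '{"execute": "guest-fsfreeze-freeze"}'
   $ nova image-create win-instance win-snapshot --poll
   # virsh qemu-agent-command instance-00000042 '{"execute": "guest-fsfreeze-thaw"}'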
Instances in the Database
~~~~~~~~~~~~~~~~~~~~~~~~~

While instance information is stored in a number of database tables, the
table you most likely need to look at in relation to user instances is
the instances table.

The instances table carries most of the information related to both
running and deleted instances. It has a bewildering array of fields; for
an exhaustive list, look at the database. These are the most useful
fields for operators looking to form queries:

- The ``deleted`` field is set to ``1`` if the instance has been
  deleted and ``NULL`` if it has not been deleted. This field is
  important for excluding deleted instances from your queries.

- The ``uuid`` field is the UUID of the instance and is used throughout
  other tables in the database as a foreign key. This ID is also
  reported in logs, the dashboard, and command-line tools to uniquely
  identify an instance.

- A collection of foreign keys is available to find relations to the
  instance. The most useful of these, ``user_id`` and ``project_id``,
  are the UUIDs of the user who launched the instance and the project
  it was launched in.

- The ``host`` field tells which compute node is hosting the instance.

- The ``hostname`` field holds the name of the instance when it is
  launched. The display name is initially the same as the hostname but
  can be reset using the :command:`nova rename` command.

A number of time-related fields are useful for tracking when state
changes happened on an instance:

- ``created_at``

- ``updated_at``

- ``deleted_at``

- ``scheduled_at``

- ``launched_at``

- ``terminated_at``

Good Luck!
~~~~~~~~~~

This section was intended as a brief introduction to some of the most
useful of the many OpenStack commands. For an exhaustive list, please
refer to the `Administrator Guide `__.
We hope your users remain happy and recognize your hard work!
(For more hard work, turn the page to the next chapter, where we discuss
the system-facing operations: maintenance, failures, and debugging.)

diff --git a/doc/ops-guide/source/preface_ops.rst b/doc/ops-guide/source/preface_ops.rst
deleted file mode 100644
index 3c5ca8f5..00000000
--- a/doc/ops-guide/source/preface_ops.rst
+++ /dev/null
@@ -1,500 +0,0 @@
=======
Preface
=======

OpenStack is an open source platform that lets you build an
:term:`Infrastructure-as-a-Service (IaaS)` cloud that runs on commodity
hardware.

Introduction to OpenStack
~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack believes in open source, open design, and open development,
all in an open community that encourages participation by anyone. The
long-term vision for OpenStack is to produce a ubiquitous open source
cloud computing platform that meets the needs of public and private
cloud providers regardless of size. OpenStack services control large
pools of compute, storage, and networking resources throughout a data
center.

The technology behind OpenStack consists of a series of interrelated
projects delivering various components for a cloud infrastructure
solution.
Each service provides an open API so that all of these -resources can be managed through a dashboard that gives administrators -control while empowering users to provision resources through a web -interface, a command-line client, or software development kits that -support the API. Many OpenStack APIs are extensible, meaning you can -keep compatibility with a core set of calls while providing access to -more resources and innovating through API extensions. The OpenStack -project is a global collaboration of developers and cloud computing -technologists. The project produces an open standard cloud computing -platform for both public and private clouds. By focusing on ease of -implementation, massive scalability, a variety of rich features, and -tremendous extensibility, the project aims to deliver a practical and -reliable cloud solution for all types of organizations. - -Getting Started with OpenStack -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -As an open source project, one of the unique aspects of OpenStack is -that it has many different levels at which you can begin to engage with -it—you don't have to do everything yourself. - -Using OpenStack ---------------- - -You could ask, "Do I even need to build a cloud?" If you want to start -using a compute or storage service by just swiping your credit card, you -can go to eNovance, HP, Rackspace, or other organizations to start using -their public OpenStack clouds. Using their OpenStack cloud resources is -similar to accessing the publicly available Amazon Web Services Elastic -Compute Cloud (EC2) or Simple Storage Solution (S3). - -Plug and Play OpenStack ------------------------ - -However, the enticing part of OpenStack might be to build your own -private cloud, and there are several ways to accomplish this goal. -Perhaps the simplest of all is an appliance-style solution. You purchase -an appliance, unpack it, plug in the power and the network, and watch it -transform into an OpenStack cloud with minimal additional configuration. - -However, hardware choice is important for many applications, so if that -applies to you, consider that there are several software distributions -available that you can run on servers, storage, and network products of -your choosing. Canonical (where OpenStack replaced Eucalyptus as the -default cloud option in 2011), Red Hat, and SUSE offer enterprise -OpenStack solutions and support. You may also want to take a look at -some of the specialized distributions, such as those from Rackspace, -Piston, SwiftStack, or Cloudscaling. - -Alternatively, if you want someone to help guide you through the -decisions about the underlying hardware or your applications, perhaps -adding in a few features or integrating components along the way, -consider contacting one of the system integrators with OpenStack -experience, such as Mirantis or Metacloud. - -If your preference is to build your own OpenStack expertise internally, -a good way to kick-start that might be to attend or arrange a training -session. The OpenStack Foundation has a `Training -Marketplace `_ where you -can look for nearby events. Also, the OpenStack community is `working to -produce `_ open source -training materials. - -Roll Your Own OpenStack ------------------------ - -However, this guide has a different audience—those seeking flexibility -from the OpenStack framework by deploying do-it-yourself solutions. - -OpenStack is designed for horizontal scalability, so you can easily add -new compute, network, and storage resources to grow your cloud over -time. 
In addition to the pervasiveness of massive OpenStack public
-clouds, many organizations, such as PayPal, Intel, and Comcast, build
-large-scale private clouds. OpenStack offers much more than a typical
-software package because it lets you integrate a number of different
-technologies to construct a cloud. This approach provides great
-flexibility, but the number of options might be daunting at first.
-
-Who This Book Is For
-~~~~~~~~~~~~~~~~~~~~
-
-This book is for those of you starting to run OpenStack clouds as well
-as those of you who were handed an operational one and want to keep it
-running well. Perhaps you're on a DevOps team, perhaps you are a system
-administrator starting to dabble in the cloud, or maybe you want to get
-on the OpenStack cloud team at your company. This book is for all of
-you.
-
-This guide assumes that you are familiar with a Linux distribution that
-supports OpenStack, SQL databases, and virtualization. You must be
-comfortable administering and configuring multiple Linux machines for
-networking. You must install and maintain an SQL database and
-occasionally run queries against it.
-
-One of the most complex aspects of an OpenStack cloud is the networking
-configuration. You should be familiar with concepts such as DHCP, Linux
-bridges, VLANs, and iptables. You must also have access to a network
-hardware expert who can configure the switches and routers required in
-your OpenStack cloud.
-
-.. note::
-
-   Cloud computing is quite an advanced topic, and this book requires a
-   lot of background knowledge. However, if you are fairly new to cloud
-   computing, we recommend that you make use of the :doc:`common/glossary`
-   at the back of the book, as well as the online documentation for OpenStack
-   and additional resources mentioned in this book in :doc:`app_resources`.
-
-Further Reading
----------------
-
-There are other books on the `OpenStack documentation
-website `_ that can help you get the job
-done.
-
-OpenStack Installation Guides
-   Describes a manual installation process, that is, by hand and
-   without automation, for multiple distributions based on a packaging
-   system:
-
-   - `Installation Guide for openSUSE 13.2 and SUSE Linux Enterprise
-     Server
-     12 `_
-
-   - `Installation Guide for Red Hat Enterprise Linux 7 and CentOS
-     7 `_
-
-   - `Installation Guide for Ubuntu 14.04 (LTS)
-     Server `_
-
-`OpenStack Configuration Reference `_
-   Contains a reference listing of all configuration options for core
-   and integrated OpenStack services by release version
-
-`OpenStack Administrator Guide `_
-   Contains how-to information for managing an OpenStack cloud as
-   needed for your use cases, such as storage, computing, or
-   software-defined networking
-
-`OpenStack High Availability Guide `_
-   Describes potential strategies for making your OpenStack services
-   and related controllers and data stores highly available
-
-`OpenStack Security Guide `_
-   Provides best practices and conceptual information about securing an
-   OpenStack cloud
-
-`Virtual Machine Image Guide `_
-   Shows you how to obtain, create, and modify virtual machine images
-   that are compatible with OpenStack
-
-`OpenStack End User Guide `_
-   Shows OpenStack end users how to create and manage resources in an
-   OpenStack cloud with the OpenStack dashboard and OpenStack client
-   commands
-
-`Networking Guide `_
-   This guide targets OpenStack administrators seeking to deploy and
-   manage OpenStack Networking (neutron).
- -`OpenStack API Guide `_ - A brief overview of how to send REST API requests to endpoints for - OpenStack services - -How This Book Is Organized -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This book is organized into two parts: the architecture decisions for -designing OpenStack clouds and the repeated operations for running -OpenStack clouds. - -**Part I:** - -:doc:`arch_examples` - Because of all the decisions the other chapters discuss, this - chapter describes the decisions made for this particular book and - much of the justification for the example architecture. - -:doc:`arch_provision` - While this book doesn't describe installation, we do recommend - automation for deployment and configuration, discussed in this - chapter. - -:doc:`arch_cloud_controller` - The cloud controller is an invention for the sake of consolidating - and describing which services run on which nodes. This chapter - discusses hardware and network considerations as well as how to - design the cloud controller for performance and separation of - services. - -:doc:`arch_compute_nodes` - This chapter describes the compute nodes, which are dedicated to - running virtual machines. Some hardware choices come into play here, - as well as logging and networking descriptions. - -:doc:`arch_scaling` - This chapter discusses the growth of your cloud resources through - scaling and segregation considerations. - -:doc:`arch_storage` - As with other architecture decisions, storage concepts within - OpenStack offer many options. This chapter lays out the choices for - you. - -:doc:`arch_network_design` - Your OpenStack cloud networking needs to fit into your existing - networks while also enabling the best design for your users and - administrators, and this chapter gives you in-depth information - about networking decisions. - -**Part II:** - -:doc:`ops_lay_of_the_land` - This chapter is written to let you get your hands wrapped around - your OpenStack cloud through command-line tools and understanding - what is already set up in your cloud. - -:doc:`ops_projects_users` - This chapter walks through user-enabling processes that all admins - must face to manage users, give them quotas to parcel out resources, - and so on. - -:doc:`ops_user_facing_operations` - This chapter shows you how to use OpenStack cloud resources and how - to train your users. - -:doc:`ops_maintenance` - This chapter goes into the common failures that the authors have - seen while running clouds in production, including troubleshooting. - -:doc:`ops_network_troubleshooting` - Because network troubleshooting is especially difficult with virtual - resources, this chapter is chock-full of helpful tips and tricks for - tracing network traffic, finding the root cause of networking - failures, and debugging related services, such as DHCP and DNS. - -:doc:`ops_logging_monitoring` - This chapter shows you where OpenStack places logs and how to best - read and manage logs for monitoring purposes. - -:doc:`ops_backup_recovery` - This chapter describes what you need to back up within OpenStack as - well as best practices for recovering backups. - -:doc:`ops_customize` - For readers who need to get a specialized feature into OpenStack, - this chapter describes how to use DevStack to write custom - middleware or a custom scheduler to rebalance your resources. - -:doc:`ops_upstream` - Because OpenStack is so, well, open, this chapter is dedicated to - helping you navigate the community and find out where you can help - and where you can get help. 
-
-:doc:`ops_advanced_configuration`
-   Much of OpenStack is driver-oriented, so you can plug in different
-   solutions to the base set of services. This chapter describes some
-   advanced configuration topics.
-
-:doc:`ops_upgrades`
-   This chapter provides upgrade information based on the architectures
-   used in this book.
-
-**Back matter:**
-
-:doc:`app_usecases`
-   You can read a small selection of use cases from the OpenStack
-   community with some technical details and further resources.
-
-:doc:`app_crypt`
-   These are shared legendary tales of image disappearances, VM
-   massacres, and crazy troubleshooting techniques that result in
-   hard-learned lessons and wisdom.
-
-:doc:`app_roadmaps`
-   Read about how to track the OpenStack roadmap through the open and
-   transparent development processes.
-
-:doc:`app_resources`
-   So many OpenStack resources are available online because of the
-   fast-moving nature of the project, but there are also resources
-   listed here that the authors found helpful while learning
-   themselves.
-
-:doc:`common/glossary`
-   A list of terms used in this book is included, which is a subset of
-   the larger OpenStack glossary available online.
-
-Why and How We Wrote This Book
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-We wrote this book because we have deployed and maintained OpenStack
-clouds for at least a year and we wanted to share this knowledge with
-others. After months of being the point people for an OpenStack cloud,
-we also wanted to have a document to hand to our system administrators
-so that they'd know how to operate the cloud on a daily basis—both
-reactively and proactively. We wanted to provide more detailed
-technical information about the decisions that deployers make along the
-way.
-
-We wrote this book to help you:
-
-- Design and create an architecture for your first nontrivial OpenStack
-  cloud. After you read this guide, you'll know which questions to ask
-  and how to organize your compute, networking, and storage resources
-  and the associated software packages.
-
-- Perform the day-to-day tasks required to administer a cloud.
-
-We wrote this book in a book sprint, which is a facilitated, rapid
-production method for books. For more information, see the
-`BookSprints site `_. Your authors cobbled
-this book together in five days during February 2013, fueled by caffeine
-and the best takeout food that Austin, Texas, could offer.
-
-On the first day, we filled whiteboards with colorful sticky notes to
-start to shape this nebulous book about how to architect and operate
-clouds.
-
-We wrote furiously from our own experiences and bounced ideas off each
-other. At regular intervals we reviewed the shape and organization of
-the book and further molded it, leading to what you see today.
-
-The team includes:
-
-Tom Fifield
-   After learning about scalability in computing from particle physics
-   experiments, such as ATLAS at the Large Hadron Collider (LHC) at
-   CERN, Tom worked on OpenStack clouds in production to support the
-   Australian public research sector. Tom currently serves as an
-   OpenStack community manager and works on OpenStack documentation in
-   his spare time.
-
-Diane Fleming
-   Diane works on the OpenStack API documentation tirelessly. She
-   helped out wherever she could on this project.
-
-Anne Gentle
-   Anne is the documentation coordinator for OpenStack and also served
-   as an individual contributor to the Google Documentation Summit in
-   2011, working with the Open Street Maps team.
She has worked on book
-   sprints in the past, with FLOSS Manuals’ Adam Hyde facilitating.
-   Anne lives in Austin, Texas.
-
-Lorin Hochstein
-   An academic turned software-developer-slash-operator, Lorin worked
-   as the lead architect for Cloud Services at Nimbis Services, where
-   he deployed OpenStack for technical computing applications. He has
-   been working with OpenStack since the Cactus release. Previously, he
-   worked on high-performance computing extensions for OpenStack at the
-   University of Southern California's Information Sciences Institute
-   (USC-ISI).
-
-Adam Hyde
-   Adam facilitated this book sprint. He also founded the book sprint
-   methodology and is the most experienced book-sprint facilitator
-   around. See http://www.booksprints.net for more information. Adam
-   founded FLOSS Manuals—a community of some 3,000 individuals
-   developing Free Manuals about Free Software. He is also the founder
-   and project manager for Booktype, an open source project for
-   writing, editing, and publishing books online and in print.
-
-Jonathan Proulx
-   Jon has been piloting an OpenStack cloud as a senior technical
-   architect at the MIT Computer Science and Artificial Intelligence
-   Lab so that his researchers can have as much computing power as they
-   need. He started contributing to OpenStack documentation and
-   reviewing the documentation so that he could accelerate his
-   learning.
-
-Everett Toews
-   Everett is a developer advocate at Rackspace making OpenStack and
-   the Rackspace Cloud easy to use. Sometimes developer, sometimes
-   advocate, and sometimes operator, he's built web applications,
-   taught workshops, given presentations around the world, and deployed
-   OpenStack for production use by academia and business.
-
-Joe Topjian
-   Joe has designed and deployed several clouds at Cybera, a nonprofit
-   that is building e-infrastructure to support entrepreneurs and local
-   researchers in Alberta, Canada. He also actively maintains and
-   operates these clouds as a systems architect, and his experiences
-   have generated a wealth of troubleshooting skills for cloud
-   environments.
-
-OpenStack community members
-   Many individual efforts keep a community book alive. Our community
-   members updated content for this book year-round. Also, a year after
-   the first sprint, Jon Proulx hosted a second two-day mini-sprint at
-   MIT with the goal of updating the book for the latest release. Since
-   the book's inception, more than 30 contributors have supported this
-   book. We have a tool chain for reviews, continuous builds, and
-   translations. Writers and developers continuously review patches,
-   enter doc bugs, edit content, and fix doc bugs. We want to recognize
-   their efforts!
-
-   The following people have contributed to this book: Akihiro Motoki,
-   Alejandro Avella, Alexandra Settle, Andreas Jaeger, Andy McCallum,
-   Benjamin Stassart, Chandan Kumar, Chris Ricker, David Cramer, David
-   Wittman, Denny Zhang, Emilien Macchi, Gauvain Pocentek, Ignacio
-   Barrio, James E. Blair, Jay Clark, Jeff White, Jeremy Stanley, K
-   Jonathan Harker, KATO Tomoyuki, Lana Brindley, Laura Alves, Lee Li,
-   Lukasz Jernas, Mario B. Codeniera, Matthew Kassawara, Michael Still,
-   Monty Taylor, Nermina Miller, Nigel Williams, Phil Hopkins, Russell
-   Bryant, Sahid Orentino Ferdjaoui, Sandy Walsh, Sascha Peilicke, Sean
-   M. Collins, Sergey Lukjanov, Shilla Saebi, Stephen Gordon, Summer
-   Long, Uwe Stuehler, Vaibhav Bhatkar, Veronica Musso, Ying Chun
-   "Daisy" Guo, Zhengguang Ou, and ZhiQiang Fan.
- -How to Contribute to This Book -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The genesis of this book was an in-person event, but now that the book -is in your hands, we want you to contribute to it. OpenStack -documentation follows the coding principles of iterative work, with bug -logging, investigating, and fixing. We also store the source content on -GitHub and invite collaborators through the OpenStack Gerrit -installation, which offers reviews. For the O'Reilly edition of this -book, we are using the company's Atlas system, which also stores source -content on GitHub and enables collaboration among contributors. - -Learn more about how to contribute to the OpenStack docs at `OpenStack -Documentation Contributor -Guide `_. - -If you find a bug and can't fix it or aren't sure it's really a doc bug, -log a bug at `OpenStack -Manuals `_. Tag the bug -under Extra options with the ``ops-guide`` tag to indicate that the bug -is in this guide. You can assign the bug to yourself if you know how to -fix it. Also, a member of the OpenStack doc-core team can triage the doc -bug. - -Conventions Used in This Book -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The following typographical conventions are used in this book: - -*Italic* - Indicates new terms, URLs, email addresses, filenames, and file - extensions. - -``Constant width`` - Used for program listings, as well as within paragraphs to refer to - program elements such as variable or function names, databases, data - types, environment variables, statements, and keywords. - -``Constant width bold`` - Shows commands or other text that should be typed literally by the - user. - -Constant width italic - Shows text that should be replaced with user-supplied values or by - values determined by context. - -Command prompts - Commands prefixed with the ``#`` prompt should be executed by the - ``root`` user. These examples can also be executed using the - :command:`sudo` command, if available. - - Commands prefixed with the ``$`` prompt can be executed by any user, - including ``root``. - -.. tip:: - - This element signifies a tip or suggestion. - -.. note:: - - This element signifies a general note. - -.. warning:: - - This element indicates a warning or caution. - -See also: - -.. toctree:: - - common/conventions.rst diff --git a/other-requirements.txt b/other-requirements.txt deleted file mode 100644 index d28aa7a4..00000000 --- a/other-requirements.txt +++ /dev/null @@ -1,17 +0,0 @@ -# This is a cross-platform list tracking distribution packages needed by tests; -# see http://docs.openstack.org/infra/bindep/ for additional information. - -fonts-nanum [platform:dpkg] -fonts-takao [platform:dpkg] -gettext -gnome-doc-utils -libxml2-dev [platform:dpkg] -libxml2-devel [platform:rpm] -libxml2-utils [platform:dpkg] -libxslt-devel [platform:rpm] -libxslt1-dev [platform:dpkg] -python-dev [platform:dpkg] -python-lxml -xsltproc [platform:dpkg] -zlib-devel [platform:rpm] -zlib1g-dev [platform:dpkg] diff --git a/test-requirements.txt b/test-requirements.txt deleted file mode 100644 index b87689eb..00000000 --- a/test-requirements.txt +++ /dev/null @@ -1,7 +0,0 @@ -# The order of packages is significant, because pip processes them in the order -# of appearance. Changing the order has an impact on the overall integration -# process, which may cause wedges in the gate later. 
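-#
-# These tools are normally installed for you by tox (see the deps setting
-# in tox.ini), but they can also be installed by hand into a virtualenv
-# with, for example: pip install -r test-requirements.txt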
-
-openstack-doc-tools>=0.30
-sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
-openstackdocstheme>=1.2.3
-doc8 # Apache-2.0
diff --git a/tools/build-all-rst.sh b/tools/build-all-rst.sh
deleted file mode 100755
index 48fede60..00000000
--- a/tools/build-all-rst.sh
+++ /dev/null
@@ -1,6 +0,0 @@
-#!/bin/bash -e

-mkdir -p publish-docs
-
-doc-tools-build-rst doc/ops-guide --build build \
-    --target draft/ops-guide
diff --git a/tools/generatepot b/tools/generatepot
deleted file mode 100755
index 2e22bb84..00000000
--- a/tools/generatepot
+++ /dev/null
@@ -1,89 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-import os
-import sys
-
-from xml2po import Main
-from xml2po.modes.docbook import docbookXmlMode
-
-
-class myDocbookXmlMode(docbookXmlMode):
-    # Extend the default DocBook mode with the list and object element
-    # types used in the OpenStack manuals.
-    def __init__(self):
-        self.lists = ['itemizedlist', 'orderedlist', 'variablelist',
-                      'segmentedlist', 'simplelist', 'calloutlist',
-                      'varlistentry', 'userinput', 'computeroutput',
-                      'prompt', 'command', 'screen']
-        self.objects = ['figure', 'textobject', 'imageobject',
-                        'mediaobject', 'screenshot', 'literallayout',
-                        'programlisting', 'option']
-
-
-default_mode = 'docbook'
-operation = 'pot'
-options = {
-    'mark_untranslated': False,
-    'expand_entities': True,
-    'expand_all_entities': False,
-}
-
-ignore_folder = {"docbkx-example"}
-ignore_file = {"api-examples.xml"}
-
-root = "./doc"
-
-
-def generatePoT(folder):
-    # With no folder given, generate a POT file for every guide under
-    # the doc root; otherwise handle only the named guide.
-    if folder is not None:
-        generateSinglePoT(folder)
-        return
-
-    if not os.path.isdir(root):
-        return
-
-    for aFile in os.listdir(root):
-        if aFile not in ignore_folder:
-            generateSinglePoT(aFile)
-
-
-def generateSinglePoT(folder):
-    # Collect the XML files of one guide and merge their translatable
-    # strings into a single locale/<folder>.pot file.
-    abspath = os.path.join(root, folder)
-    if not os.path.isdir(abspath):
-        return
-
-    xmlfiles = []
-    os.path.walk(abspath, get_all_xml, xmlfiles)
-    if len(xmlfiles) == 0:
-        return
-
-    output = os.path.join(abspath, "locale")
-    if not os.path.exists(output):
-        os.mkdir(output)
-    output = os.path.join(output, folder + ".pot")
-    try:
-        xml2po_main = Main(default_mode, operation, output, options)
-        xml2po_main.current_mode = myDocbookXmlMode()
-    except IOError:
-        print "Error: cannot open file %s for writing." % output
-        sys.exit(5)
-    xml2po_main.to_pot(xmlfiles)
-
-
-def get_all_xml(sms, dr, flst):
-    # os.path.walk visitor: append the XML files found in directory dr
-    # to the list sms, skipping generated "target" and "wadls" trees.
-    if dr.find("target") > -1 or dr.find("wadls") > -1:
-        return
-
-    for f in flst:
-        if f.endswith(".xml") and f != "pom.xml" and f not in ignore_file:
-            sms.append(os.path.join(dr, f))
-
-
-def main():
-    try:
-        folder = sys.argv[1]
-    except IndexError:
-        folder = None
-    generatePoT(folder)
-
-
-if __name__ == '__main__':
-    main()
-
diff --git a/tools/generatepot-rst.sh b/tools/generatepot-rst.sh
deleted file mode 100755
index 319805f9..00000000
--- a/tools/generatepot-rst.sh
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/bin/bash -xe
-
-#
-# Licensed under the Apache License, Version 2.0 (the "License"); you may
-# not use this file except in compliance with the License. You may obtain
-# a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations
-# under the License.
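-
-# Generate the POT file for a single RST guide, given as the only
-# argument (its directory name under doc/). This script is normally
-# invoked through tox, for example:
-#
-#   tox -e generatepot-rst -- ops-guide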
-
-DOCNAME=$1
-
-if [ -z "$DOCNAME" ] ; then
-    echo "usage: $0 DOCNAME"
-    exit 1
-fi
-
-# We're not doing anything for this directory.
-if [[ "$DOCNAME" = "common" ]] ; then
-    exit 0
-fi
-
-rm -f doc/$DOCNAME/source/locale/$DOCNAME.pot
-sphinx-build -b gettext doc/$DOCNAME/source/ doc/$DOCNAME/source/locale/
-
-# common is translated as part of openstack-manuals, do not
-# include the file in the combined tree.
-rm doc/$DOCNAME/source/locale/common.pot
-
-# Take care of deleting all temporary files so that git add
-# doc/$DOCNAME/source/locale will only add the single pot file.
-# Remove UUIDs; they are not necessary and change too often.
-msgcat --sort-by-file doc/$DOCNAME/source/locale/*.pot | \
-    awk '$0 !~ /^\# [a-z0-9]+$/' > doc/$DOCNAME/source/$DOCNAME.pot
-rm doc/$DOCNAME/source/locale/*.pot
-rm -rf doc/$DOCNAME/source/locale/.doctrees/
-mv doc/$DOCNAME/source/$DOCNAME.pot doc/$DOCNAME/source/locale/$DOCNAME.pot
diff --git a/tox.ini b/tox.ini
deleted file mode 100644
index 6aedb887..00000000
--- a/tox.ini
+++ /dev/null
@@ -1,102 +0,0 @@
-[tox]
-minversion = 1.6
-envlist = checkniceness,checksyntax,checkdeletions,checkbuild,checklinks
-skipsdist = True
-
-[testenv]
-basepython = python2
-setenv =
-    VIRTUAL_ENV={envdir}
-deps = -r{toxinidir}/test-requirements.txt
-whitelist_externals =
-    bash
-    cp
-    mkdir
-    rm
-    rsync
-    sed
-
-[testenv:venv]
-commands = {posargs}
-
-[testenv:checklinks]
-commands = openstack-doc-test --check-links {posargs}
-
-[testenv:checkniceness]
-commands =
-    openstack-doc-test --check-niceness {posargs}
-    doc8 doc
-
-[testenv:checksyntax]
-commands =
-    openstack-doc-test --check-syntax {posargs}
-    # Check that .po and .pot files are valid:
-    bash -c "find doc -type f -regex '.*\.pot?' -print0|xargs -0 -n 1 msgfmt --check-format -o /dev/null"
-
-[testenv:checkdeletions]
-commands = openstack-doc-test --check-deletions {posargs}
-
-[testenv:checkbuild]
-commands =
-    # Build and copy RST Guides
-    #{toxinidir}/tools/build-all-rst.sh
-    # This generates the current DocBook guide and index page
-    openstack-doc-test --check-build {posargs}
-
-[testenv:publishdocs]
-# Prepare all documents so that they can get
-# published on docs.openstack.org by just copying publish-docs/*
-# over.
-commands =
-    openstack-doc-test --check-build --nocreate-index --force
-    # Build and copy RST Guides
-    #{toxinidir}/tools/build-all-rst.sh
-
-[testenv:checklang]
-# openstack-generate-docbook needs xml2po which cannot be installed in
-# the venv. Since it's installed in the system, let's use
-# sitepackages.
-sitepackages=True
-whitelist_externals =
-    doc-tools-check-languages
-    bash
-commands =
-    doc-tools-check-languages doc-tools-check-languages.conf test all
-    # Check that .po and .pot files are valid:
-    bash -c "find doc -type f -regex '.*\.pot?' -print0|xargs -0 -n 1 msgfmt --check-format -o /dev/null"
-
-[testenv:buildlang]
-# Run as "tox -e buildlang -- $LANG"
-# openstack-generate-docbook needs xml2po which cannot be installed in
-# the venv. Since it's installed in the system, let's use
-# sitepackages.
-sitepackages=True
-whitelist_externals = doc-tools-check-languages
-commands = doc-tools-check-languages doc-tools-check-languages.conf test {posargs}
-
-[testenv:publishlang]
-# openstack-generate-docbook needs xml2po which cannot be installed in
-# the venv. Since it's installed in the system, let's use
-# sitepackages.
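-# (On most distributions, xml2po comes from the gnome-doc-utils package
-# listed in other-requirements.txt.)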
-sitepackages=True
-whitelist_externals = doc-tools-check-languages
-commands = doc-tools-check-languages doc-tools-check-languages.conf publish all
-
-[testenv:generatepot-rst]
-# Generate POT files for translation, needs {posargs} like:
-# tox -e generatepot-rst -- user-guide
-commands = {toxinidir}/tools/generatepot-rst.sh {posargs}
-
-[testenv:docs]
-commands =
    {toxinidir}/tools/build-all-rst.sh
-
-[doc8]
-# Settings for doc8:
-# Ignore target directories
-ignore-path = doc/*/target,doc/common
-# File extensions to use
-extensions = .rst,.txt
-# Disable some doc8 checks:
-# D000: Check RST validity (cannot handle linenos directive)
-ignore = D000
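-#
-# With these settings, the doc8 run in the checkniceness environment is
-# roughly equivalent to the following manual invocation (flag spellings
-# assumed from doc8's command line, not taken from this repository):
-#
-#   doc8 --ignore D000 --ignore-path "doc/*/target" --ignore-path doc/common -e .rst -e .txt doc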