[admin-guide] Use "project" to replace "tenant" term in admin-guide

This patch uses "project" to replace the "tenant" term in the
admin-guide as a cleanup.

Change-Id: I879a6c1ecfbbed2d8db0a02457d06375a268b176
Partial-Bug: #1475005
qiaomin 2016-08-29 14:23:52 +00:00
parent 4093606a2d
commit 84b1a2ffed
41 changed files with 209 additions and 208 deletions


@ -4,15 +4,15 @@
Use multitenancy with Bare Metal service
========================================
Multitenancy allows creating a dedicated tenant network that extends the
Multitenancy allows creating a dedicated project network that extends the
current Bare Metal (ironic) service capabilities of providing ``flat``
networks. Multitenancy works in conjunction with Networking (neutron)
service to allow provisioning of a bare metal server onto the tenant network.
Therefore, multiple tenants can get isolated instances after deployment.
service to allow provisioning of a bare metal server onto the project network.
Therefore, multiple projects can get isolated instances after deployment.
Bare Metal service provides the ``local_link_connection`` information to the
Networking service ML2 driver. The ML2 driver uses that information to plug the
specified port to the tenant network.
specified port to the project network.
.. list-table:: ``local_link_connection`` fields
:header-rows: 1


@ -22,7 +22,7 @@ Configure the Internal Tenant
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Image-Volume cache requires that the Internal Tenant be configured for
the Block Storage services. This tenant will own the cached image-volumes so
the Block Storage services. This project will own the cached image-volumes so
they can be managed like normal users including tools like volume quotas. This
protects normal users from having to see the cached image-volumes, but does
not make them globally hidden.
@ -46,7 +46,7 @@ An example ``cinder.conf`` configuration file:
The actual user and project that are configured for the Internal Tenant do
not require any special privileges. They can be the Block Storage service
tenant or can be any normal project and user.
project or can be any normal project and user.
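In practice the internal tenant is wired up through two Block Storage options; a minimal sketch, assuming the standard option names (the IDs are placeholders you would look up with ``openstack project show`` and ``openstack user show``):

```ini
[DEFAULT]
# Placeholder IDs; substitute the project and user you chose for the
# internal tenant. Any normal project and user will work.
cinder_internal_tenant_project_id = PROJECT_ID
cinder_internal_tenant_user_id = USER_ID
```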
Configure the Image-Volume cache
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


@ -11,7 +11,7 @@ such as file and swift, creating a volume from a Volume-backed image performs
better when the block storage driver supports efficient volume cloning.
If the image is set to public in the Image service, the volume data can be
shared among tenants.
shared among projects.
Configure the Volume-backed image
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ -50,15 +50,15 @@ each back-end section of the ``cinder.conf`` file:
image_upload_use_cinder_backend = True
By default, the :command:`openstack image create --volume <volume>` command
creates the Image-Volume in the current tenant. To store the Image-Volume into
the internal tenant, set the following options in each back-end section of the
creates the Image-Volume in the current project. To store the Image-Volume into
the internal project, set the following options in each back-end section of the
``cinder.conf`` file:
.. code-block:: ini
image_upload_use_internal_tenant = True
To make the Image-Volume in the internal tenant accessible from the Image
To make the Image-Volume in the internal project accessible from the Image
service, set the following options in the ``glance_store`` section of
the ``glance-api.conf`` file:


@ -87,7 +87,7 @@ command. Optional arguments to clarify the status of your backups
include: running :option:`--name`, :option:`--status`, and
:option:`--volume-id` to filter through backups by the specified name,
status, or volume-id. Search with :option:`--all-tenants` for details of the
tenants associated with the listed backups.
projects associated with the listed backups.
Because volume backups are dependent on the Block Storage database, you must
also back up your Block Storage database regularly to ensure data recovery.


@ -29,7 +29,7 @@ Administrative users can view Block Storage service quotas.
$ project_id=$(openstack project show -f value -c id PROJECT_NAME)
#. List the default quotas for a project (tenant):
#. List the default quotas for a project:
.. code-block:: console
@ -48,7 +48,7 @@ Administrative users can view Block Storage service quotas.
| volumes | 10 |
+-----------+-------+
#. View Block Storage service quotas for a project (tenant):
#. View Block Storage service quotas for a project:
.. code-block:: console
@ -99,7 +99,7 @@ service quotas.
<http://docs.openstack.org/mitaka/config-reference/block-storage.html>`_
in OpenStack Configuration Reference.
#. To update Block Storage service quotas for an existing project (tenant):
#. To update Block Storage service quotas for an existing project:
.. code-block:: console


@ -119,7 +119,7 @@ Create a flavor
$ openstack flavor create --is-public true m1.extra_tiny auto 256 0 1 --rxtx-factor .1
#. If an individual user or group of users needs a custom
flavor that you do not want other tenants to have access to,
flavor that you do not want other projects to have access to,
you can change the flavor's access to make it a private flavor.
See
`Private Flavors in the OpenStack Operations Guide <http://docs.openstack.org/ops-guide/ops-user-facing-operations.html#private-flavors>`_.
@ -132,7 +132,7 @@ Create a flavor
#. After you create a flavor, assign it to a
project by specifying the flavor name or ID and
the tenant ID:
the project ID:
.. code-block:: console


@ -4,7 +4,7 @@ Manage projects, users, and roles
As an administrator, you manage projects, users, and
roles. Projects are organizational units in the cloud to which
you can assign users. Projects are also known as *tenants* or
*accounts*. Users can be members of one or more projects. Roles
define which actions users can perform. You assign roles to
user-project pairs.
@ -146,8 +146,8 @@ Create a user
^^^^^^^^^^^^^
To create a user, you must specify a name. Optionally, you can
specify a tenant ID, password, and email address. It is recommended
that you include the tenant ID and password because the user cannot
specify a project ID, password, and email address. It is recommended
that you include the project ID and password because the user cannot
log in to the dashboard without this information.
Create the ``new-user`` user:


@ -3,7 +3,7 @@ Manage Networking service quotas
================================
A quota limits the number of available resources. A default
quota might be enforced for all tenants. When you try to create
quota might be enforced for all projects. When you try to create
more resources than the quota allows, an error occurs:
.. code-block:: ini
@ -11,15 +11,15 @@ more resources than the quota allows, an error occurs:
$ neutron net-create test_net
Quota exceeded for resources: ['network']
Per-tenant quota configuration is also supported by the quota
Per-project quota configuration is also supported by the quota
extension API. See :ref:`cfg_quotas_per_tenant` for details.
Basic quota configuration
~~~~~~~~~~~~~~~~~~~~~~~~~
In the Networking default quota mechanism, all tenants have
In the Networking default quota mechanism, all projects have
the same quota values, such as the number of resources that a
tenant can create.
project can create.
The quota value is defined in the OpenStack Networking
``/etc/neutron/neutron.conf`` configuration file. This example shows the
@ -69,33 +69,33 @@ each security group. Add these lines to the
.. _cfg_quotas_per_tenant:
Configure per-tenant quotas
~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack Networking also supports per-tenant quota limit by
Configure per-project quotas
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack Networking also supports per-project quota limit by
quota extension API.
Use these commands to manage per-tenant quotas:
Use these commands to manage per-project quotas:
neutron quota-delete
Delete defined quotas for a specified tenant
Delete defined quotas for a specified project
neutron quota-list
Lists defined quotas for all tenants
Lists defined quotas for all projects
neutron quota-show
Shows quotas for a specified tenant
Shows quotas for a specified project
neutron quota-default-show
Show default quotas for a specified tenant
neutron quota-update
Updates quotas for a specified tenant
Updates quotas for a specified project
Only users with the ``admin`` role can change a quota value. By default,
the default set of quotas are enforced for all tenants, so no
the default set of quotas is enforced for all projects, so no
:command:`quota-create` command exists.
#. Configure Networking to show per-tenant quotas
#. Configure Networking to show per-project quotas
Set the ``quota_driver`` option in the ``/etc/neutron/neutron.conf`` file.
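A sketch of that setting, assuming the standard database-backed driver (the exact driver path can vary between releases):

```ini
[quotas]
quota_driver = neutron.db.quota_db.DbQuotaDriver
```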
@ -114,7 +114,7 @@ the default set of quotas are enforced for all tenants, so no
$ neutron ext-list -c alias -c name
The command shows the ``quotas`` extension, which provides
per-tenant quota management support.
per-project quota management support.
.. code-block:: console
@ -152,17 +152,18 @@ the default set of quotas are enforced for all tenants, so no
.. note::
Only some plug-ins support per-tenant quotas.
Only some plug-ins support per-project quotas.
Specifically, Open vSwitch, Linux Bridge, and VMware NSX
support them, but new versions of other plug-ins might
bring additional functionality. See the documentation for
each plug-in.
#. List tenants who have per-tenant quota support.
#. List projects that have per-project quota support.
The :command:`neutron quota-list` command lists tenants for which the
per-tenant quota is enabled. The command does not list tenants with default
quota support. You must be an administrative user to run this command:
The :command:`neutron quota-list` command lists projects for which the
per-project quota is enabled. The command does not list projects with
default quota support. You must be an administrative user to run this
command:
.. code-block:: console
@ -174,13 +175,13 @@ the default set of quotas are enforced for all tenants, so no
| 25 | 10 | 30 | 10 | 10 | bff5c9455ee24231b5bc713c1b96d422 |
+------------+---------+------+--------+--------+----------------------------------+
#. Show per-tenant quota values.
#. Show per-project quota values.
The :command:`neutron quota-show` command reports the current
set of quota limits for the specified tenant.
set of quota limits for the specified project.
Non-administrative users can run this command without the
:option:`--tenant_id` parameter. If per-tenant quota limits are
not enabled for the tenant, the command shows the default
:option:`--tenant_id` parameter. If per-project quota limits are
not enabled for the project, the command shows the default
set of quotas.
.. code-block:: console
@ -212,10 +213,10 @@ the default set of quotas are enforced for all tenants, so no
| subnet | 5 |
+------------+-------+
#. Update quota values for a specified tenant.
#. Update quota values for a specified project.
Use the :command:`neutron quota-update` command to
update a quota for a specified tenant.
update a quota for a specified project.
.. code-block:: console
@ -251,7 +252,7 @@ the default set of quotas are enforced for all tenants, so no
after the ``--`` directive.
This example updates the limit of the number of floating
IPs for the specified tenant.
IPs for the specified project.
.. code-block:: console
@ -284,9 +285,9 @@ the default set of quotas are enforced for all tenants, so no
| subnet | 3 |
+------------+-------+
#. Delete per-tenant quota values.
#. Delete per-project quota values.
To clear per-tenant quota limits, use the
To clear per-project quota limits, use the
:command:`neutron quota-delete` command.
.. code-block:: console
@ -295,7 +296,7 @@ the default set of quotas are enforced for all tenants, so no
Deleted quota: 6f88036c45344d9999a1f971e4882723
After you run this command, you can see that quota
values for the tenant are reset to the default values.
values for the project are reset to the default values.
.. code-block:: console


@ -38,7 +38,7 @@ List and view current security groups
From the command-line you can get a list of security groups for the
project, using the :command:`nova` command:
#. Ensure your system variables are set for the user and tenant for
#. Ensure your system variables are set for the user and project for
which you are checking security group rules. For example:
.. code-block:: console
@ -92,7 +92,7 @@ that use it where the longer description field often does not. For
example, seeing that an instance is using security group "http" is much
easier to understand than "bobs\_group" or "secgrp1".
#. Ensure your system variables are set for the user and tenant for
#. Ensure your system variables are set for the user and project for
which you are creating security group rules.
#. Add the new security group, as follows:
@ -162,7 +162,7 @@ easier to understand than "bobs\_group" or "secgrp1".
Delete a security group
~~~~~~~~~~~~~~~~~~~~~~~
#. Ensure your system variables are set for the user and tenant for
#. Ensure your system variables are set for the user and project for
which you are deleting a security group.
#. Delete the new security group, as follows:
@ -186,7 +186,7 @@ all the user's other Instances using the specified Source Group are
selected dynamically. This alleviates the need for individual rules to
allow each new member of the cluster.
#. Make sure to set the system variables for the user and tenant for
#. Make sure to set the system variables for the user and project for
which you are creating a security group rule.
#. Add a source group, as follows:


@ -4,8 +4,8 @@ Manage Compute service quotas
As an administrative user, you can use the :command:`nova quota-*`
commands, which are provided by the ``python-novaclient``
package, to update the Compute service quotas for a specific tenant or
tenant user, as well as update the quota defaults for a new tenant.
package, to update the Compute service quotas for a specific project or
project user, as well as update the quota defaults for a new project.
**Compute quota descriptions**
@ -18,7 +18,7 @@ tenant user, as well as update the quota defaults for a new tenant.
* - cores
- Number of instance cores (VCPUs) allowed per project.
* - fixed-ips
- Number of fixed IP addresses allowed per tenant. This number
- Number of fixed IP addresses allowed per project. This number
must be equal to or greater than the number of allowed
instances.
* - floating-ips
@ -46,12 +46,12 @@ tenant user, as well as update the quota defaults for a new tenant.
* - server-group-members
- Number of servers per server group.
View and update Compute quotas for a tenant (project)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
View and update Compute quotas for a project
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To view and update default quota values
---------------------------------------
#. List all default quotas for all tenants:
#. List all default quotas for all projects:
.. code-block:: console
@ -81,7 +81,7 @@ To view and update default quota values
| server_group_members | 10 |
+-----------------------------+-------+
#. Update a default value for a new tenant.
#. Update a default value for a new project.
.. code-block:: console
@ -93,16 +93,16 @@ To view and update default quota values
$ nova quota-class-update --instances 15 default
To view quota values for an existing tenant (project)
-----------------------------------------------------
To view quota values for an existing project
--------------------------------------------
#. Place the tenant ID in a usable variable.
#. Place the project ID in a usable variable.
.. code-block:: console
$ tenant=$(openstack project show -f value -c id TENANT_NAME)
#. List the currently set quota values for a tenant.
#. List the currently set quota values for a project.
.. code-block:: console
@ -132,10 +132,10 @@ To view quota values for an existing tenant (project)
| server_group_members | 10 |
+-----------------------------+-------+
To update quota values for an existing tenant (project)
-------------------------------------------------------
To update quota values for an existing project
----------------------------------------------
#. Obtain the tenant ID.
#. Obtain the project ID.
.. code-block:: console
@ -181,11 +181,11 @@ To update quota values for an existing tenant (project)
$ nova help quota-update
View and update Compute quotas for a tenant user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
View and update Compute quotas for a project user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To view quota values for a tenant user
--------------------------------------
To view quota values for a project user
---------------------------------------
#. Place the user ID in a usable variable.
@ -193,13 +193,13 @@ To view quota values for a tenant user
$ tenantUser=$(openstack user show -f value -c id USER_NAME)
#. Place the user's tenant ID in a usable variable, as follows:
#. Place the user's project ID in a usable variable, as follows:
.. code-block:: console
$ tenant=$(openstack project show -f value -c id TENANT_NAME)
#. List the currently set quota values for a tenant user.
#. List the currently set quota values for a project user.
.. code-block:: console
@ -229,8 +229,8 @@ To view quota values for a tenant user
| server_group_members | 10 |
+-----------------------------+-------+
To update quota values for a tenant user
----------------------------------------
To update quota values for a project user
-----------------------------------------
#. Place the user ID in a usable variable.
@ -238,7 +238,7 @@ To update quota values for a tenant user
$ tenantUser=$(openstack user show -f value -c id USER_NAME)
#. Place the user's tenant ID in a usable variable, as follows:
#. Place the user's project ID in a usable variable, as follows:
.. code-block:: console
@ -284,8 +284,8 @@ To update quota values for a tenant user
$ nova help quota-update
To display the current quota usage for a tenant user
----------------------------------------------------
To display the current quota usage for a project user
-----------------------------------------------------
Use :command:`nova absolute-limits` to get a list of the
current quota values and the current quota usage:
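For example, a sketch of the invocation (the quota table output is omitted here, and per-tenant flag support varies by python-novaclient version):

```console
$ nova absolute-limits
```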


@ -7,21 +7,21 @@ Manage quotas
To prevent system capacities from being exhausted without
notification, you can set up quotas. Quotas are operational
limits. For example, the number of gigabytes allowed for each
tenant can be controlled so that cloud resources are optimized.
Quotas can be enforced at both the tenant (or project)
and the tenant-user level.
project can be controlled so that cloud resources are optimized.
Quotas can be enforced at both the project
and the project-user level.
Using the command-line interface, you can manage quotas for
the OpenStack Compute service, the OpenStack Block Storage service,
and the OpenStack Networking service.
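Each service exposes its own quota commands; as an orientation (a sketch, with output omitted and ``<project-id>`` a placeholder):

```console
$ nova quota-show --tenant <project-id>
$ cinder quota-show <project-id>
$ neutron quota-show --tenant-id <project-id>
```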
The cloud operator typically changes default values because a
tenant requires more than ten volumes or 1 TB on a compute
project requires more than ten volumes or 1 TB on a compute
node.
.. note::
To view all tenants (projects), run:
To view all projects, run:
.. code-block:: console
@ -35,7 +35,7 @@ node.
| f599c5cd1cba4125ae3d7caed08e288c | tenant02 |
+----------------------------------+----------+
To display all current users for a tenant, run:
To display all current users for a project, run:
.. code-block:: console


@ -61,7 +61,7 @@ Flavors define these elements:
| | or NSX based systems. |
+-------------+---------------------------------------------------------------+
| Is Public | Boolean value, whether flavor is available to all users or p\ |
| | rivate to the tenant it was created in. Defaults to ``True``. |
| | rivate to the project it was created in. Defaults to ``True``.|
+-------------+---------------------------------------------------------------+
| Extra Specs | Key and value pairs that define on which compute nodes a fla\ |
| | vor can run. These pairs must match corresponding pairs on t\ |


@ -53,7 +53,7 @@ specific commands might be restricted by the Identity service.
#. Set the required parameters as environment variables to make running
commands easier. For example, you can add :option:`--os-username` as an
``openstack`` option, or set it as an environment variable. To set the user
name, password, and tenant as environment variables, use:
name, password, and project as environment variables, use:
.. code-block:: console


@ -94,28 +94,28 @@ Flat DHCP Network Manager
VLAN Network Manager
This is the default mode for OpenStack Compute. In this mode,
Compute creates a VLAN and bridge for each tenant. For
Compute creates a VLAN and bridge for each project. For
multiple-machine installations, the VLAN Network Mode requires a
switch that supports VLAN tagging (IEEE 802.1Q). The tenant gets a
switch that supports VLAN tagging (IEEE 802.1Q). The project gets a
range of private IPs that are only accessible from inside the VLAN.
In order for a user to access the instances in their tenant, a
In order for a user to access the instances in their project, a
special VPN instance (code named ``cloudpipe``) needs to be created.
Compute generates a certificate and key for the user to access the
VPN and starts the VPN automatically. It provides a private network
segment for each tenant's instances that can be accessed through a
segment for each project's instances that can be accessed through a
dedicated VPN connection from the internet. In this mode, each
tenant gets its own VLAN, Linux networking bridge, and subnet.
project gets its own VLAN, Linux networking bridge, and subnet.
The subnets are specified by the network administrator, and are
assigned dynamically to a tenant when required. A DHCP server is
assigned dynamically to a project when required. A DHCP server is
started for each VLAN to pass out IP addresses to VM instances from
the subnet assigned to the tenant. All instances belonging to one
tenant are bridged into the same VLAN for that tenant. OpenStack
the subnet assigned to the project. All instances belonging to one
project are bridged into the same VLAN for that project. OpenStack
Compute creates the Linux networking bridges and VLANs when
required.
These network managers can co-exist in a cloud system. However, because
you cannot select the type of network for a given tenant, you cannot
you cannot select the type of network for a given project, you cannot
configure multiple network types in a single Compute installation.
All network managers configure the network using network drivers. For
@ -155,7 +155,7 @@ All machines must have a public and internal network interface
interface, and ``flat_interface`` and ``vlan_interface`` for the
internal interface with flat or VLAN managers). This guide refers to the
public network as the external network and the private network as the
internal or tenant network.
internal or project network.
For flat and flat DHCP modes, use the :command:`nova network-create` command
to create a network:
@ -789,7 +789,7 @@ Using multinic
--------------
In order to use multinic, create two networks, and attach them to the
tenant (named ``project`` on the command line):
project (named ``project`` on the command line):
.. code-block:: console


@ -9,15 +9,15 @@ View and manage quotas
To prevent system capacities from being exhausted without notification,
you can set up quotas. Quotas are operational limits. For example, the
number of gigabytes allowed for each tenant can be controlled so that
cloud resources are optimized. Quotas can be enforced at both the tenant
(or project) and the tenant-user level.
number of gigabytes allowed for each project can be controlled so that
cloud resources are optimized. Quotas can be enforced at both the project
and the project-user level.
Typically, you change quotas when a project needs more than ten
volumes or 1 |nbsp| TB on a compute node.
Using the Dashboard, you can view default Compute and Block Storage
quotas for new tenants, as well as update quotas for existing tenants.
quotas for new projects, as well as update quotas for existing projects.
.. note::
@ -26,7 +26,7 @@ quotas for new tenants, as well as update quotas for existing tenants.
the OpenStack Networking service (see `OpenStack Administrator Guide
<http://docs.openstack.org/admin-guide/cli-set-quotas.html>`_).
Additionally, you can update Compute service quotas for
tenant users.
project users.
The following table describes the Compute and Block Storage service quotas:


@ -31,7 +31,7 @@ View resource statistics
#. Click the:
* :guilabel:`Usage Report` tab to view a usage report per tenant (project)
* :guilabel:`Usage Report` tab to view a usage report per project
by specifying the time period (or even use a calendar to define
a date range).


@ -63,8 +63,8 @@ This sample paste config filter makes use of the ``admin_user`` and
.. note::
Using this option requires an admin tenant/role relationship. The
admin user is granted access to the admin role on the admin tenant.
Using this option requires an admin project/role relationship. The
admin user is granted access to the admin role on the admin project.
.. note::


@ -127,17 +127,17 @@ Identity user management examples:
Compute service's ``policy.json`` file to require this role for
Compute operations.
The Identity service assigns a tenant and a role to a user. You might
The Identity service assigns a project and a role to a user. You might
assign the ``compute-user`` role to the ``alice`` user in the ``acme``
tenant:
project:
.. code-block:: console
$ openstack role add --project acme --user alice compute-user
A user can have different roles in different tenants. For example, Alice
might also have the ``admin`` role in the ``Cyberdyne`` tenant. A user
can also have multiple roles in the same tenant.
A user can have different roles in different projects. For example, Alice
might also have the ``admin`` role in the ``Cyberdyne`` project. A user
can also have multiple roles in the same project.
The ``/etc/[SERVICE_CODENAME]/policy.json`` file controls the
tasks that users can perform for a given service. For example, the
@ -149,7 +149,7 @@ the Identity service.
The default ``policy.json`` files in the Compute, Identity, and
Image services recognize only the ``admin`` role. Any user with
any role in a tenant can access all operations that do not require the
any role in a project can access all operations that do not require the
``admin`` role.
To restrict users from performing operations in, for example, the
@ -164,11 +164,11 @@ file does not restrict which users can create volumes:
"volume:create": "",
If the user has any role in a tenant, he can create volumes in that
tenant.
If the user has any role in a project, they can create volumes in that
project.
To restrict the creation of volumes to users who have the
``compute-user`` role in a particular tenant, you add ``"role:compute-user"``:
``compute-user`` role in a particular project, you add ``"role:compute-user"``:
.. code-block:: json
@ -300,7 +300,7 @@ services. It consists of:
The Identity service also maintains a user that corresponds to each
service, such as, a user named ``nova`` for the Compute service, and a
special service tenant called ``service``.
special service project called ``service``.
For information about how to create services and endpoints, see the
`OpenStack Administrator Guide <http://docs.openstack.org/admin-guide/
@ -330,7 +330,7 @@ Identity API V3 provides the following group-related operations:
* List groups for a user
* Assign a role on a tenant to a group
* Assign a role on a project to a group
* Assign a role on a domain to a group
@ -345,8 +345,8 @@ Identity API V3 provides the following group-related operations:
Here are a couple of examples:
* Group A is granted Role A on Tenant A. If User A is a member of Group
A, when User A gets a token scoped to Tenant A, the token also
* Group A is granted Role A on Project A. If User A is a member of Group
A, when User A gets a token scoped to Project A, the token also
includes Role A.
* Group B is granted Role B on Domain B. If User B is a member of


@ -7,7 +7,7 @@ Integrate assignment back end with LDAP
When you configure the OpenStack Identity service to use LDAP servers,
you can split authentication and authorization using the *assignment*
feature. Integrating the *assignment* back end with LDAP allows
administrators to use projects (tenant), roles, domains, and role
administrators to use projects, roles, domains, and role
assignments in LDAP.
.. note::


@ -211,7 +211,7 @@ Identity attribute mapping
user_enabled_invert = false
user_enabled_default = 51
user_default_project_id_attribute =
user_attribute_ignore = default_project_id,tenants
user_additional_attribute_mapping =
group_id_attribute = cn


@ -24,7 +24,7 @@ The delegation parameters are:
The user IDs for the trustor and trustee.
**Privileges**
The delegated privileges are a combination of a tenant ID and a
The delegated privileges are a combination of a project ID and a
number of roles that must be a subset of the roles assigned to the
trustor.
@ -50,7 +50,7 @@ The delegation parameters are:
This parameter further restricts the delegation to the specified
endpoints only. If you omit the endpoints, the delegation is
useless. A special value of ``all_endpoints`` allows the trust to be
used by all endpoints associated with the delegated tenant.
used by all endpoints associated with the delegated project.
**Duration**
(Optional) Comprised of the start time and end time for the trust.


@ -11,8 +11,8 @@ to be of interest to the OpenStack community.
Provider networks
~~~~~~~~~~~~~~~~~
Networks can be categorized as either tenant networks or provider
networks. Tenant networks are created by normal users and details about
Networks can be categorized as either project networks or provider
networks. Project networks are created by normal users and details about
how they are physically realized are hidden from those users. Provider
networks are created with administrative credentials, specifying the
details of how the network is physically realized, usually to match some
@ -20,7 +20,7 @@ existing network in the data center.
Provider networks enable administrators to create networks that map
directly to the physical networks in the data center.
This is commonly used to give tenants direct access to a public network
This is commonly used to give projects direct access to a public network
that can be used to reach the Internet. It might also be used to
integrate with VLANs in the network that already have a defined meaning
(for example, enable a VM from the marketing department to be placed
@ -62,14 +62,14 @@ configuration of plug-ins supporting the provider extension:
| |extension and the plug-in configurations identify |
| |physical networks using simple string names. |
+----------------------+-----------------------------------------------------+
| **tenant network** |A virtual network that a tenant or an administrator |
| **project network** |A virtual network that a project or an administrator |
| |creates. The physical details of the network are not |
| |exposed to the tenant. |
| |exposed to the project. |
+----------------------+-----------------------------------------------------+
| **provider network** | A virtual network administratively created to map to|
| | a specific network in the data center, typically to |
| | enable direct access to non-OpenStack resources on |
| | that network. Tenants can be given access to |
| | that network. Projects can be given access to |
| | provider networks. |
+----------------------+-----------------------------------------------------+
| **VLAN network** | A virtual network implemented as packets on a |
@ -138,7 +138,7 @@ these attributes:
``vxlan``, corresponding to flat networks, VLAN networks, local
networks, GRE networks, and VXLAN networks as defined above.
All types of provider networks can be created by administrators,
while tenant networks can be implemented as ``vlan``, ``gre``,
while project networks can be implemented as ``vlan``, ``gre``,
``vxlan``, or ``local`` network types depending on plug-in
configuration.
* - provider: physical_network
@ -187,7 +187,7 @@ The L3 router provides basic NAT capabilities on gateway ports that
uplink the router to external networks. This router SNATs all traffic by
default and supports floating IPs, which creates a static one-to-one
mapping from a public IP on the external network to a private IP on one
of the other subnets attached to the router. This allows a tenant to
of the other subnets attached to the router. This allows a project to
selectively expose VMs on private networks to other hosts on the
external network (and often to all hosts on the Internet). You can
allocate and map floating IPs from one port to another, as needed.
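For instance, a project user could allocate a floating IP from an external
network and map it to an instance port (the ``ext-net`` name and the IDs
here are placeholders):

.. code-block:: console

   $ neutron floatingip-create ext-net
   $ neutron floatingip-associate FLOATINGIP_ID PORT_ID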
@ -327,7 +327,7 @@ Security groups
~~~~~~~~~~~~~~~
Security groups and security group rules allow administrators and
tenants to specify the type of traffic and direction
projects to specify the type of traffic and direction
(ingress/egress) that is allowed to pass through a port. A security
group is a container for security group rules.
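For example, a project user could create a security group and add a rule
that allows ingress SSH traffic (the group name is illustrative):

.. code-block:: console

   $ neutron security-group-create webservers --description "web servers"
   $ neutron security-group-rule-create --direction ingress \
     --protocol tcp --port-range-min 22 --port-range-max 22 webservers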
@ -525,7 +525,7 @@ VMware NSX QoS extension
The VMware NSX QoS extension rate-limits network ports to guarantee a
specific amount of bandwidth for each port. This extension, by default,
is only accessible by a tenant with an admin role but is configurable
is only accessible to users with an admin role but is configurable
through the ``policy.json`` file. To use this extension, create a queue
and specify the min/max bandwidth rates (kbps) and optionally set the
QoS Marking and DSCP value (if your network fabric uses these values to
@ -736,7 +736,7 @@ This section explains the Big Switch neutron plug-in-specific extension.
Big Switch router rules
^^^^^^^^^^^^^^^^^^^^^^^
Big Switch allows router rules to be added to each tenant router. These
Big Switch allows router rules to be added to each project router. These
rules can be used to enforce routing policies such as denying traffic
between subnets or traffic to external networks. By enforcing these at
the router level, network segmentation policies can be enforced across
@ -745,7 +745,7 @@ many VMs that have differing security groups.
Router rule attributes
''''''''''''''''''''''
Each tenant router has a set of router rules associated with it. Each
Each project router has a set of router rules associated with it. Each
router rule has the attributes in this table. Router rules and their
attributes can be set using the :command:`neutron router-update` command,
through the horizon interface or the Networking API.
@ -832,7 +832,7 @@ traffic that goes through a virtual router.
The L3 metering extension is decoupled from the technology that
implements the measurement. Two abstractions have been added: One is the
metering label that can contain metering rules. Because a metering label
is associated with a tenant, all virtual routers in this tenant are
is associated with a project, all virtual routers in this project are
associated with this label.
Basic L3 metering operations
View File
@ -35,16 +35,16 @@ include the following agents:
| | Certain plug-ins do not require an agent. |
+----------------------------+---------------------------------------------+
|**dhcp agent** | |
|(``neutron-dhcp-agent``) | Provides DHCP services to tenant networks. |
|(``neutron-dhcp-agent``) | Provides DHCP services to project networks. |
| | Required by certain plug-ins. |
+----------------------------+---------------------------------------------+
|**l3 agent** | |
|(``neutron-l3-agent``) | Provides L3/NAT forwarding to provide |
| | external network access for VMs on tenant |
| | external network access for VMs on project |
| | networks. Required by certain plug-ins. |
+----------------------------+---------------------------------------------+
|**metering agent** | |
|(``neutron-metering-agent``)| Provides L3 traffic metering for tenant |
|(``neutron-metering-agent``)| Provides L3 traffic metering for project |
| | networks. |
+----------------------------+---------------------------------------------+
@ -62,7 +62,7 @@ ways:
VM into a particular network.
- The dashboard (horizon) integrates with the Networking API, enabling
administrators and tenant users to create and manage network services
administrators and project users to create and manage network services
through a web-based GUI.
VMware NSX integration
View File
@ -13,11 +13,11 @@ about authentication with the Identity service, see `OpenStack Identity
service API v2.0
Reference <http://developer.openstack.org/api-ref/identity/v2/>`__.
When the Identity service is enabled, it is not mandatory to specify the
tenant ID for resources in create requests because the tenant ID is
project ID for resources in create requests because the project ID is
derived from the authentication token.
The default authorization settings only allow administrative users
to create resources on behalf of a different tenant. Networking uses
to create resources on behalf of a different project. Networking uses
information received from Identity to authorize user requests.
Networking handles two kind of authorization policies:
@ -80,15 +80,15 @@ terminal rules:
- **Generic rules** compare an attribute in the resource with an
attribute extracted from the user's security credentials and
evaluates successfully if the comparison is successful. For instance
``"tenant_id:%(tenant_id)s"`` is successful if the tenant identifier
in the resource is equal to the tenant identifier of the user
``"tenant_id:%(tenant_id)s"`` is successful if the project identifier
in the resource is equal to the project identifier of the user
submitting the request.
This extract is from the default ``policy.json`` file:
- A rule that evaluates successfully if the current user is an
administrator or the owner of the resource specified in the request
(tenant identifier is equal).
(project identifier is equal).
.. code-block:: json
@ -226,7 +226,7 @@ This extract is from the default ``policy.json`` file:
}
In some cases, some operations are restricted to administrators only.
This example shows you how to modify a policy file to permit tenants to
This example shows you how to modify a policy file to permit projects to
define networks, see their resources, and permit administrative users to
perform all other operations:
View File
@ -154,7 +154,7 @@ Configure L3 agent
~~~~~~~~~~~~~~~~~~
The OpenStack Networking service has a widely used API extension to
allow administrators and tenants to create routers to interconnect L2
allow administrators and projects to create routers to interconnect L2
networks, and floating IPs to make ports on private networks publicly
accessible.
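A typical sequence creates a router, attaches a project subnet to it, and
sets an external network as the gateway (the names and IDs below are
placeholders):

.. code-block:: console

   $ neutron router-create router1
   $ neutron router-interface-add router1 SUBNET_ID
   $ neutron router-gateway-set router1 EXT_NET_ID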
View File
@ -83,7 +83,7 @@ Configure Identity service for Networking
You must provide admin user credentials that Compute and some internal
Networking components can use to access the Networking API. Create a
special ``service`` tenant and a ``neutron`` user within this tenant,
special ``service`` project and a ``neutron`` user within this project,
and assign an ``admin`` role to this user.
a. Create the ``admin`` role:
@ -100,14 +100,14 @@ Configure Identity service for Networking
--password "$NEUTRON_PASSWORD" --email demo@example.com \
--project service)
c. Create the ``service`` tenant:
c. Create the ``service`` project:
.. code-block:: console
$ SERVICE_TENANT=$(get_id openstack project create service \
--description "Services project")
d. Establish the relationship among the tenant, user, and role:
d. Establish the relationship among the project, user, and role:
.. code-block:: console
@ -133,7 +133,7 @@ most network-related decisions to Networking.
Networking can cause problems, as can stale iptables rules pushed
down by previously running ``nova-network``.
Compute proxies tenant-facing API calls to manage security groups and
Compute proxies project-facing API calls to manage security groups and
floating IPs to Networking APIs. However, operator-facing tools such
as ``nova-manage``, are not proxied and should not be used.
@ -174,8 +174,8 @@ happen, you must configure the following items in the ``nova.conf`` file
for this deployment.
* - ``[neutron] auth_strategy``
- Keep the default ``keystone`` value for all production deployments.
* - ``[neutron] admin_tenant_name``
- Update to the name of the service tenant created in the above section on
* - ``[neutron] admin_tenant_name``
- Update to the name of the service project created in the above section on
Identity configuration.
* - ``[neutron] admin_username``
- Update to the name of the user created in the above section on Identity
@ -248,7 +248,7 @@ To enable proxying the requests, you must update the following fields in
As a precaution, even when using ``metadata_proxy_shared_secret``,
we recommend that you do not expose metadata using the same
nova-api instances that are used for tenants. Instead, you should
nova-api instances that are used for projects. Instead, you should
run a dedicated set of nova-api instances for metadata that are
available only on your management network. Whether a given nova-api
instance exposes metadata APIs is determined by the value of
View File
@ -148,7 +148,7 @@ formerly known as Nicira NVP.
load balancing requests on the various API endpoints.
- The UUID of the NSX-mh transport zone that should be used by default
when a tenant creates a network. You can get this value from the
when a project creates a network. You can get this value from the
Transport Zones page for the NSX-mh manager:
Alternatively the transport zone identifier can be retrieved by query
View File
@ -53,9 +53,9 @@ To configure rich network topologies, you can create and configure
networks and subnets and instruct other OpenStack services like Compute
to attach virtual devices to ports on these networks.
In particular, Networking supports each tenant having multiple private
networks and enables tenants to choose their own IP addressing scheme,
even if those IP addresses overlap with those that other tenants use.
In particular, Networking supports each project having multiple private
networks and enables projects to choose their own IP addressing scheme,
even if those IP addresses overlap with those that other projects use.
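For example, two projects can each run the following commands and receive
the same, overlapping address range without conflict (the names are
illustrative):

.. code-block:: console

   $ neutron net-create net1
   $ neutron subnet-create net1 10.0.0.0/24 --name subnet1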
The Networking service:
@ -317,7 +317,7 @@ an IP address between two instances to enable fast data plane failover.
Virtual-Private-Network-as-a-Service (VPNaaS)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The VPNaaS extension enables OpenStack tenants to extend private networks
The VPNaaS extension enables OpenStack projects to extend private networks
across the internet.
VPNaas is a :term:`service`. It is a parent object that associates a VPN
@ -338,7 +338,7 @@ The current implementation of the VPNaaS extension provides:
- Site-to-site VPN that connects two private networks.
- Multiple VPN connections per tenant.
- Multiple VPN connections per project.
- IKEv1 policy support with 3des, aes-128, aes-256, or aes-192 encryption.
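A site-to-site connection is typically built from IKE and IPsec policies, a
VPN service bound to a router and subnet, and the connection itself (all
names, IDs, and addresses below are placeholders):

.. code-block:: console

   $ neutron vpn-ikepolicy-create ikepolicy1
   $ neutron vpn-ipsecpolicy-create ipsecpolicy1
   $ neutron vpn-service-create --name vpn1 ROUTER_ID SUBNET_ID
   $ neutron ipsec-site-connection-create --name conn1 \
     --vpnservice-id vpn1 --ikepolicy-id ikepolicy1 \
     --ipsecpolicy-id ipsecpolicy1 --peer-address 172.24.4.226 \
     --peer-id 172.24.4.226 --peer-cidr 10.2.0.0/24 --psk SECRET_KEY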
View File
@ -16,14 +16,14 @@ Log files are in the ``/var/log/neutron`` directory.
Configuration files are in the ``/etc/neutron`` directory.
Administrators and tenants can use OpenStack Networking to build
Administrators and projects can use OpenStack Networking to build
rich network topologies. Administrators can create network
connectivity on behalf of tenants.
connectivity on behalf of projects.
Core Networking API features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
After you install and configure Networking, tenants and administrators
After you install and configure Networking, projects and administrators
can perform create-read-update-delete (CRUD) API networking operations
by using the Networking API directly or neutron command-line interface
(CLI). The neutron CLI is a wrapper around the Networking API. Every
@ -57,12 +57,12 @@ basic network operations:
| | ``net1 10.0.0.0/24`` |
+-------------------------+-------------------------------------------------+
|Lists ports for a | |
|specified tenant. | |
|specified project. | |
| | |
| | ``$ neutron port-list`` |
+-------------------------+-------------------------------------------------+
|Lists ports for a | |
|specified tenant | |
|specified project | |
|and displays the ``id``, | |
|``fixed_ips``, | |
|and ``device_owner`` | |
@ -91,7 +91,7 @@ Administrative operations
-------------------------
The administrator can run any :command:`neutron` command on behalf of
tenants by specifying an Identity ``tenant_id`` in the command, as
projects by specifying an Identity ``tenant_id`` in the command, as
follows:
.. code-block:: console
@ -106,7 +106,7 @@ For example:
.. note::
To view all tenant IDs in Identity, run the following command as an
To view all project IDs in Identity, run the following command as an
Identity service admin user:
.. code-block:: console
@ -123,7 +123,7 @@ advanced network operations:
| Operation | Command |
+===============================+============================================+
|Creates a network that | |
|all tenants can use. | |
|all projects can use. | |
| | |
| | ``$ neutron net-create`` |
| | ``--shared public-net`` |
@ -258,7 +258,7 @@ complete advanced VM creation operations:
+-------------------------------------+--------------------------------------+
|Boots a VM that connects to all | |
|networks that are accessible to the | |
|tenant who submits the request | |
|project who submits the request |
|(without the ``--nic`` option). | |
| | |
| |``$ nova boot --image IMAGE --flavor``|
View File
@ -1,18 +1,18 @@
=============================================================
Configure tenant-specific image locations with Object Storage
=============================================================
==============================================================
Configure project-specific image locations with Object Storage
==============================================================
For some deployers, it is not ideal to store all images in one place to
enable all tenants and users to access them. You can configure the Image
service to store image data in tenant-specific image locations. Then,
only the following tenants can use the Image service to access the
enable all projects and users to access them. You can configure the Image
service to store image data in project-specific image locations. Then,
only the following projects can use the Image service to access the
created image:
- The tenant who owns the image
- Tenants that are defined in ``swift_store_admin_tenants`` and that
- The project that owns the image
- Projects that are defined in ``swift_store_admin_tenants`` and that
have admin-level accounts
**To configure tenant-specific image locations**
**To configure project-specific image locations**
#. Configure swift as your ``default_store`` in the
``glance-api.conf`` file.
View File
@ -373,12 +373,12 @@ status share should have status ``available``:
+----------------------+----------------------------------------------------------------------+
``is_public`` defines the level of visibility for the share: whether other
tenants can or cannot see the share. By default, the share is private.
projects can or cannot see the share. By default, the share is private.
Update share
------------
Update the name, or description, or level of visibility for all tenants for
Update the name, or description, or level of visibility for all projects for
the share if you need:
.. code-block:: console
@ -602,7 +602,7 @@ state using soft-deletion you'll get an error:
A share cannot be deleted in a transitional status, that it why an error from
``python-manilaclient`` appeared.
Print the list of all shares for all tenants:
Print the list of all shares for all projects:
.. code-block:: console
View File
@ -89,9 +89,9 @@ Share Networks
~~~~~~~~~~~~~~
A ``share network`` is an object that defines a relationship between a
tenant network and subnet, as defined in an OpenStack Networking service or
project network and subnet, as defined in an OpenStack Networking service or
Compute service. The ``share network`` is also defined in ``shares``
created by the same tenant. A tenant may find it desirable to
created by the same project. A project may find it desirable to
provision ``shares`` such that only instances connected to a particular
OpenStack-defined network have access to the ``share``. Also,
``security services`` can be attached to ``share networks``,
View File
@ -6,7 +6,7 @@ Network plug-ins
The Shared File Systems service architecture defines an abstraction layer for
network resource provisioning, allowing administrators to choose among
different options for how network resources are assigned to their tenants
different options for how network resources are assigned to their projects'
networked storage. There are a set of network plug-ins that provide a variety
of integration approaches with the network services that are available with
OpenStack.
@ -36,7 +36,7 @@ Shared File Systems service:
the ``neutron_subnet_id`` to be provided when defining the share network
that will be used for the creation of share servers. The user may define
any number of share networks corresponding to the various physical
network segments in a tenant environment.
network segments in a project environment.
b) ``manila.network.neutron.neutron_network_plugin.
NeutronSingleNetworkPlugin``. This is a simplification of the previous
View File
@ -7,7 +7,7 @@ Quotas and limits
Limits
~~~~~~
Limits are the resource limitations that are allowed for each tenant (project).
Limits are the resource limitations that are allowed for each project.
An administrator can configure limits in the ``manila.conf`` file.
Users can query their rate and absolute limits.
@ -85,22 +85,22 @@ Quotas
Quota sets provide quota management support.
To list the quotas for a tenant or user, use the :command:`manila quota-show`
To list the quotas for a project or user, use the :command:`manila quota-show`
command. If you specify the optional :option:`--user` parameter, you get the
quotas for this user in the specified tenant. If you omit this parameter,
quotas for this user in the specified project. If you omit this parameter,
you get the quotas for the specified project.
.. note::
The Shared File Systems service does not perform mapping of usernames and
tenant/project names to IDs. Provide only ID values to get correct setup
of quotas. Setting it by names you set quota for nonexistent tenant/user.
In case quota is not set explicitly by tenant/user ID,
project names to IDs. Provide only ID values to get a correct quota setup.
Setting quotas by name sets them for a nonexistent project or user.
In case a quota is not set explicitly by project/user ID,
the Shared File Systems service applies the default quotas.
.. code-block:: console
$ manila quota-show --tenant %tenant_id% --user %user_id%
$ manila quota-show --tenant %project_id% --user %user_id%
+--------------------+-------+
| Property | Value |
+--------------------+-------+
@ -117,7 +117,7 @@ the :command:`manila quota-defaults` command:
.. code-block:: console
$ manila quota-defaults --tenant %tenant_id%
$ manila quota-defaults --tenant %project_id%
+--------------------+-------+
| Property | Value |
+--------------------+-------+
@ -128,14 +128,14 @@ the :command:`manila quota-defaults` command:
| share_networks | 10 |
+--------------------+-------+
The administrator can update the quotas for a specific tenant, or for a
The administrator can update the quotas for a specific project, or for a
specific user by providing both the ``--tenant`` and ``--user`` optional
arguments. It is possible to update the ``shares``, ``snapshots``,
``gigabytes``, ``snapshot-gigabytes``, and ``share-networks`` quotas.
.. code-block:: console
$ manila quota-update %tenant_id% --user %user_id% --shares 49 --snapshots 49
$ manila quota-update %project_id% --user %user_id% --shares 49 --snapshots 49
As administrator, you can also permit or deny the force-update of a quota that
is already used, or if the requested value exceeds the configured quota limit.
@ -143,10 +143,10 @@ To force-update a quota, use ``force`` optional key.
.. code-block:: console
$ manila quota-update %tenant_id% --shares 51 --snapshots 51 --force
$ manila quota-update %project_id% --shares 51 --snapshots 51 --force
To revert quotas to default for a project or for a user, delete quotas:
.. code-block:: console
$ manila quota-delete --tenant %tenant_id% --user %user_id%
$ manila quota-delete --tenant %project_id% --user %user_id%
View File
@ -33,9 +33,9 @@ You can add the security service to the
:ref:`share network <shared_file_systems_share_networks>`.
To create a security service, specify the security service type, a
description of a security service, DNS IP address used inside tenant's
description of a security service, DNS IP address used inside the project's
network, security service IP address or host name, domain, security
service user or group used by tenant, and a password for the user. The
service user or group used by the project, and a password for the user. The
share name is optional.
Create a ``ldap`` security service:
View File
@ -12,7 +12,7 @@ creating a new share network.
How to create share network
~~~~~~~~~~~~~~~~~~~~~~~~~~~
To list networks in a tenant, run:
To list networks in a project, run:
.. code-block:: console
View File
@ -104,4 +104,4 @@ Solution
Some drivers in the Shared File Systems service can create service entities,
like servers and networks. If it is necessary, you can log in to
tenant ``service`` and take manual control over it.
the ``service`` project and take manual control over it.
View File
@ -5,7 +5,7 @@ Shared File Systems
===================
Shared File Systems service provides a set of services for management of
shared file systems in a multi-tenant cloud environment. The service resembles
shared file systems in a multi-tenant cloud environment. The service resembles
OpenStack block-based storage management from the OpenStack Block Storage
service project. With the Shared File Systems service, you can
create a remote file system, mount the file system on your instances, and then
View File
@ -245,7 +245,7 @@ could indicate that:
instance)
* *or*, that the identified instance is not visible to the
user/tenant owning the alarm
user/project owning the alarm
* *or*, simply that an alarm evaluation cycle hasn't kicked off since
the alarm was created (by default, alarms are evaluated once per
@ -259,7 +259,7 @@ could indicate that:
* admin users see *all* alarms, regardless of the owner
* non-admin users see only the alarms associated with their project
(as per the normal tenant segregation in OpenStack)
(as per the normal project segregation in OpenStack)
Alarm update
------------
View File
@ -24,7 +24,7 @@ of meters, alarm definitions and so forth.
The Telemetry API URL can be retrieved from the service catalog provided
by OpenStack Identity, which is populated during the installation
process. The API access needs a valid token and proper permission to
retrieve data, as described in :ref:`telemetry-users-roles-tenants`.
retrieve data, as described in :ref:`telemetry-users-roles-projects`.
Further information about the available API endpoints can be found in
the `Telemetry API Reference
@ -230,7 +230,7 @@ be used:
The :command:`ceilometer` command was run with ``admin`` rights, which means
that all the data is accessible in the database. For more information
about access right see :ref:`telemetry-users-roles-tenants`. As it can be seen
about access right see :ref:`telemetry-users-roles-projects`. As it can be seen
in the above example, there are two VM instances existing in the system, as
there are VM instance related meters on the top of the result list. The
existence of these meters does not indicate that these instances are running at
View File
@ -148,10 +148,10 @@ external networking services:
- `OpenContrail <http://www.opencontrail.org/>`__
.. _telemetry-users-roles-tenants:
.. _telemetry-users-roles-projects:
Users, roles, and tenants
~~~~~~~~~~~~~~~~~~~~~~~~~
Users, roles, and projects
~~~~~~~~~~~~~~~~~~~~~~~~~~
This service of OpenStack uses OpenStack Identity for authenticating and
authorizing users. The required configuration options are listed in the