[arch-guide] Use "project" to replace "tenant" term in "Arch-guide"

This patch uses "project" to replace the "tenant" term in the
"Architecture Design Guide" for cleanup.

Partial-Bug: #1475005
Change-Id: Ic2af0838b033d039ebc84d44a296ea9f7594d3a6
Authored by qiaomin on 2016-08-29 11:39:37 +00:00; committed by Allen
parent 81939cdd0d
commit 785eca6c3a
14 changed files with 52 additions and 52 deletions

@@ -86,7 +86,7 @@ segment providing access to particular resources. The network services
 themselves also require network communication paths which should be
 separated from the other networks. When designing network services for a
 general purpose cloud, plan for either a physical or logical separation
-of network segments used by operators and tenants. You can also create
+of network segments used by operators and projects. You can also create
 an additional network segment for access to internal services such as
 the message bus and database used by various services. Segregating these
 services onto separate networks helps to protect sensitive data and
@@ -105,18 +105,18 @@ Legacy networking (nova-network)
 When the network devices in the cloud support segmentation using
 VLANs, legacy networking can operate in the second mode. In this
-design model, each tenant within the cloud is assigned a network
+design model, each project within the cloud is assigned a network
 subnet which is mapped to a VLAN on the physical network. It is
 especially important to remember that a maximum of 4096 VLANs can
 be used within a spanning tree domain. This places a hard
 limit on the amount of growth possible within the data center. When
 designing a general purpose cloud intended to support multiple
-tenants, we recommend the use of legacy networking with VLANs, and
+projects, we recommend the use of legacy networking with VLANs, and
 not in flat network mode.
 Another consideration regarding network is the fact that legacy
-networking is entirely managed by the cloud operator; tenants do not
-have control over network resources. If tenants require the ability to
+networking is entirely managed by the cloud operator; projects do not
+have control over network resources. If projects require the ability to
 manage and create network resources such as network segments and
 subnets, it will be necessary to install the OpenStack Networking
 service to provide network access to instances.
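
To make the per-VLAN model above concrete, the sketch below carves out one
such project network. It is a minimal sketch only: legacy nova-network has
no SDK of its own, so the equivalent neutron provider-network call stands
in for it, and the openstacksdk client, the "mycloud" clouds.yaml entry,
and all names and IDs are hypothetical.

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # One VLAN per project network: every network like this one consumes
    # a segmentation ID from the same pool of at most 4096 VLANs in the
    # spanning tree domain, which is the hard growth limit noted above.
    network = conn.network.create_network(
        name='project-a-net',
        provider_network_type='vlan',
        provider_physical_network='physnet1',  # assumed bridge mapping
        provider_segmentation_id=101,
    )
    conn.network.create_subnet(
        network_id=network.id,
        name='project-a-subnet',
        ip_version=4,
        cidr='10.1.0.0/24',
    )
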
@@ -124,9 +124,9 @@ service to provide network access to instances.
 Networking (neutron)
 OpenStack Networking (neutron) is a first class networking service
 that gives full control over creation of virtual network resources
-to tenants. This is often accomplished in the form of tunneling
+to projects. This is often accomplished in the form of tunneling
 protocols which will establish encapsulated communication paths over
-existing network infrastructure in order to segment tenant traffic.
+existing network infrastructure in order to segment project traffic.
 These methods vary depending on the specific implementation, but
 some of the more common methods include tunneling over GRE,
 encapsulating with VXLAN, and VLAN tags.
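
By contrast with the VLAN sketch above, a project-owned overlay network
needs no VLAN bookkeeping by the operator. A minimal sketch, again assuming
openstacksdk and a hypothetical "mycloud" entry holding ordinary project
credentials; note the caller never picks GRE or VXLAN, as the encapsulation
comes from neutron's configured tenant network type driver:

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # The project asks only for a network and subnet; neutron decides
    # whether to realize it over GRE, VXLAN, or VLAN tags.
    net = conn.network.create_network(name='app-net')
    conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr='192.168.10.0/24',
    )
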
@@ -134,7 +134,7 @@ Networking (neutron)
 We recommend you design at least three network segments:
 * The first segment is a public network, used for access to REST APIs
-by tenants and operators. The controller nodes and swift proxies are
+by projects and operators. The controller nodes and swift proxies are
 the only devices connecting to this network segment. In some cases,
 this network might also be serviced by hardware load balancers and
 other network devices.
@@ -204,10 +204,10 @@ Designing Block Storage
 When designing OpenStack Block Storage resource nodes, it is helpful to
 understand the workloads and requirements that will drive the use of
 block storage in the cloud. We recommend designing block storage pools
-so that tenants can choose appropriate storage solutions for their
+so that projects can choose appropriate storage solutions for their
 applications. By creating multiple storage pools of different types, in
 conjunction with configuring an advanced storage scheduler for the block
-storage service, it is possible to provide tenants with a large catalog
+storage service, it is possible to provide projects with a large catalog
 of storage services with a variety of performance levels and redundancy
 options.
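
Such a catalog is usually exposed through Block Storage volume types, one
per pool or service level. A minimal sketch, assuming openstacksdk with
admin credentials and purely illustrative type names:

    import openstack

    conn = openstack.connect(cloud='mycloud')

    # Admin side: publish two service levels for the storage catalog.
    conn.block_storage.create_type(name='general-iops')
    conn.block_storage.create_type(name='high-iops')

    # Project side: request storage at a chosen level; the Block Storage
    # scheduler places the volume on a back end of the matching type.
    volume = conn.create_volume(size=100, volume_type='high-iops')
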
@@ -218,7 +218,7 @@ ship out-of-the-box with OpenStack Block Storage (and many more
 available via third party channels). General purpose clouds are more
 likely to use directly attached storage in the majority of block storage
 nodes, deeming it necessary to provide additional levels of service to
-tenants which can only be provided by enterprise class storage
+projects which can only be provided by enterprise class storage
 solutions.
 Redundancy and availability requirements impact the decision to use a
@@ -580,7 +580,7 @@ instance is public, private, or hybrid.
 domain to be untrusted. Private cloud providers may want to consider
 this network as internal and therefore trusted only if they have
 controls in place to assert that they trust instances and all their
-tenants.
+projects.
 * The management security domain is where services interact. Sometimes
 referred to as the control plane, the networks in this domain

@@ -88,7 +88,7 @@ Connectivity
 Durability and resilience
 Despite the existence of SLAs, things break: servers go down,
-network connections are disrupted, or too many tenants on a server
+network connections are disrupted, or too many projects on a server
 make a server unusable. An application must be sturdy enough to
 contend with these issues.
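
In practice, "sturdy enough" usually means wrapping remote calls in retry
logic. A minimal, library-free sketch with exponential backoff; the health
endpoint URL is hypothetical:

    import time
    import urllib.request

    def fetch_status(url='http://cloud.example.com/health', attempts=5):
        delay = 1.0
        for attempt in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.read()
            except OSError:  # also covers urllib's URLError
                if attempt == attempts - 1:
                    raise  # surface the failure once retries are exhausted
                time.sleep(delay)
                delay *= 2  # back off so a struggling service can recover
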

@@ -101,7 +101,7 @@ internet access to instances should consider this domain to be
 untrusted. Private cloud providers may want to consider this
 network as internal and therefore trusted only if they have
 controls in place to assert that they trust instances and all
-their tenants.
+their projects.
 Management security domains
 ---------------------------
@@ -151,7 +151,7 @@ bare metal instance instead of a cloud. In other cases, it is
 possible to replicate a second private cloud by integrating
 with a private Cloud-as-a-Service deployment. The
 organization does not buy the hardware, but also does not share
-with other tenants. It is also possible to use a provider that
+with other projects. It is also possible to use a provider that
 hosts a bare-metal public cloud instance for which the
 hardware is dedicated only to one customer, or a provider that
 offers private Cloud-as-a-Service.
@@ -176,7 +176,7 @@ destinations without crossing through locations that are undesirable.
 Consider the following example factors:
 * Firewalls
-* Overlay interconnects for joining separated tenant networks
+* Overlay interconnects for joining separated project networks
 * Routing through or avoiding specific networks
 How networks attach to hypervisors can expose security
@@ -189,7 +189,7 @@ Multi-site security
 ~~~~~~~~~~~~~~~~~~~
 Securing a multi-site OpenStack installation brings
-extra challenges. Tenants may expect a tenant-created network
+extra challenges. Projects may expect a project-created network
 to be secure. In a multi-site installation the use of a
 non-private connection between sites may be required. This may
 mean that traffic would be visible to third parties and, in
@@ -206,16 +206,16 @@ create, read, update, and delete operations. Centralized
 authentication is also useful for auditing purposes because
 all authentication tokens originate from the same source.
-Just as tenants in a single-site deployment need isolation
-from each other, so do tenants in multi-site installations.
+Just as projects in a single-site deployment need isolation
+from each other, so do projects in multi-site installations.
 The extra challenges in multi-site designs revolve around
-ensuring that tenant networks function across regions.
+ensuring that project networks function across regions.
 OpenStack Networking (neutron) does not presently support
 a mechanism to provide this functionality, therefore an
 external system may be necessary to manage these mappings.
-Tenant networks may contain sensitive information requiring
+Project networks may contain sensitive information requiring
 that this mapping be accurate and consistent to ensure that a
-tenant in one site does not connect to a different tenant in
+project in one site does not connect to a different project in
 another site.
 OpenStack components

@@ -67,7 +67,7 @@ Growth and capacity planning
 An important consideration in running at massive scale is projecting growth
 and utilization trends in order to plan capital expenditures for the short and
 long term. Gather utilization meters for compute, network, and storage, along
-with historical records of these meters. While securing major anchor tenants
+with historical records of these meters. While securing major anchor projects
 can lead to rapid jumps in the utilization rates of all resources, the steady
 adoption of the cloud inside an organization or by consumers in a public
 offering also creates a steady trend of increased utilization.

@@ -91,7 +91,7 @@ packets are transmitted between regions and how the logical network and
 addresses present to the application. If there are security or
 regulatory requirements, encryption should be implemented to secure the
 traffic between regions. For networking inside a region, the overlay
-network technology for tenant networks is equally important. The overlay
+network technology for project networks is equally important. The overlay
 technology and the network traffic that an application generates or
 receives can be either complementary or serve cross purposes. For
 example, using an overlay technology for an application that transmits a
@@ -114,5 +114,5 @@ Ensure that enough storage is allocated to support the data protection
 strategy.
 Networking decisions include the encapsulation mechanism that can be
-used for the tenant networks, how large the broadcast domains should be,
+used for the project networks, how large the broadcast domains should be,
 and the contracted SLAs for the interconnects.

@@ -103,10 +103,10 @@ Quota management
 Quotas are used to set operational limits to prevent system capacities
 from being exhausted without notification. They are currently enforced
-at the tenant (or project) level rather than at the user level.
+at the project level rather than at the user level.
 Quotas are defined on a per-region basis. Operators can define identical
-quotas for tenants in each region of the cloud to provide a consistent
+quotas for projects in each region of the cloud to provide a consistent
 experience, or even create a process for synchronizing allocated quotas
 across regions. It is important to note that only the operational limits
 imposed by the quotas will be aligned; consumption of quotas by users
 will not be reflected between regions.
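
A synchronization process of that sort can be as simple as applying one
quota definition to every region in turn. A minimal sketch, assuming
openstacksdk admin credentials; the cloud entry, region names, project
name, and limits are all hypothetical:

    import openstack

    REGIONS = ['region-one', 'region-two']

    for region in REGIONS:
        conn = openstack.connect(cloud='mycloud', region_name=region)
        # Identical operational limits in every region; consumption
        # against them is still tracked independently per region.
        conn.set_compute_quotas('acme-project', instances=20, cores=80,
                                ram=163840)
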

@@ -24,7 +24,7 @@ Swift endpoint and a shared Object Storage capability between them. An
 example of this technique, as well as a configuration walk-through, is
 available at
 http://docs.openstack.org/developer/swift/replication_network.html#dedicated-replication-network.
-Another option in this scenario is to build a dedicated set of tenant
+Another option in this scenario is to build a dedicated set of project
 private networks across the secondary link, using overlay networks with
 a third party mapping the site overlays to each other.
@@ -37,8 +37,8 @@ To mitigate this, Identity service call timeouts can be tuned to prevent
 issues authenticating against a central Identity service.
 Another network capacity consideration for a multi-site deployment is
-the amount and performance of overlay networks available for tenant
-networks. If using shared tenant networks across zones, it is imperative
+the amount and performance of overlay networks available for project
+networks. If using shared project networks across zones, it is imperative
 that an external overlay manager or controller be used to map these
 overlays together. It is necessary to ensure that the number of possible
 IDs between the zones is identical.
@@ -47,7 +47,7 @@ between the zones are identical.
 As of the Kilo release, OpenStack Networking was not capable of
 managing tunnel IDs across installations. So if one site runs out of
-IDs, but another does not, that tenant's network is unable to reach
+IDs, but another does not, that project's network is unable to reach
 the other site.
 Capacity can take other forms as well. The ability for a region to grow

@@ -146,9 +146,9 @@ Auditing
 A well thought-out auditing strategy is important in order to be able to
 quickly track down issues. Keeping track of changes made to security
-groups and tenant changes can be useful in rolling back the changes if
+groups and project changes can be useful in rolling back the changes if
 they affect production. For example, if all security group rules for a
-tenant disappeared, the ability to quickly track down the issue would be
+project disappeared, the ability to quickly track down the issue would be
 important for operational and legal reasons.
 Separation of duties

@@ -46,7 +46,7 @@ to outside systems.
 Interaction with orchestration services is inevitable in larger-scale
 deployments. The Orchestration service is capable of allocating network
-resource defined in templates to map to tenant networks and for port
+resource defined in templates to map to project networks and for port
 creation, as well as allocating floating IPs. If there is a requirement
 to define and manage network resources when using orchestration, we
 recommend that the design include the Orchestration service to meet the
@@ -77,9 +77,9 @@ balancing solution. In the internal scenario, Networking's
 Load-Balancer-as-a-Service (LBaaS) can manage load balancing software,
 for example HAproxy. This is specifically to manage the Virtual IP (VIP)
 while a dual-homed connection from the HAproxy instance connects the
-public network with the tenant private network that hosts all of the
+public network with the project private network that hosts all of the
 content servers. In the external scenario, a load balancer needs to
-serve the VIP and also connect to the tenant overlay network through
+serve the VIP and also connect to the project overlay network through
 external means or through private addresses.
 Another kind of NAT that may be useful is protocol NAT. In some cases it

@@ -25,7 +25,7 @@ network resources increase, operators add additional IP address blocks
 and add additional bandwidth capacity. In addition, consider managing
 hardware and software lifecycle events, for example upgrades,
 decommissioning, and outages, while avoiding service interruptions for
-tenants.
+projects.
 Factor maintainability into the overall network design. This includes
 the ability to manage and maintain IP addresses as well as the use of
@@ -43,7 +43,7 @@ follow best practices for storing IP addresses. We recommend you avoid
 relying on IPv4 features that did not carry over to the IPv6 protocol or
 have differences in implementation.
-To segregate traffic, allow applications to create a private tenant
+To segregate traffic, allow applications to create a private project
 network for database and storage network traffic. Use a public network
 for services that require direct client access from the internet. Upon
 segregating the traffic, consider :term:`quality of service (QoS)` and

@@ -9,10 +9,10 @@ individual servers.
 The figure below depicts an example design for this workload. In this
 example, a hardware load balancer provides SSL offload functionality and
-connects to tenant networks in order to reduce address consumption. This
+connects to project networks in order to reduce address consumption. This
 load balancer links to the routing architecture as it services the VIP
 for the application. The router and load balancer use the GRE tunnel ID
-of the application's tenant network and an IP address within the tenant
+of the application's project network and an IP address within the project
 subnet but outside of the address pool. This is to ensure that the load
 balancer can communicate with the application's HTTP servers without
 requiring the consumption of a public IP address.
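
Reserving an address inside the project subnet but outside of the address
pool can be done when the subnet is created. A minimal sketch, assuming
openstacksdk; all names and ranges are hypothetical:

    import openstack

    conn = openstack.connect(cloud='mycloud')
    net = conn.network.create_network(name='web-net')
    subnet = conn.network.create_subnet(
        network_id=net.id,
        ip_version=4,
        cidr='10.0.0.0/24',
        # Instances draw addresses from .10-.199, leaving .200 and up
        # free for load balancer VIPs.
        allocation_pools=[{'start': '10.0.0.10', 'end': '10.0.0.199'}],
    )
    # Pin the VIP explicitly, outside the pool, so DHCP can never hand
    # the same address to an instance.
    vip_port = conn.network.create_port(
        network_id=net.id,
        fixed_ips=[{'subnet_id': subnet.id, 'ip_address': '10.0.0.200'}],
    )
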
@@ -24,7 +24,7 @@ ensure that layer-2 connectivity does not fail. Routers use VRRP and
 fully mesh with switches to ensure layer-3 connectivity. Since GRE
 provides an overlay network, Networking is present and uses the Open
 vSwitch agent in GRE tunnel mode. This ensures all devices can reach all
-other devices and that you can create tenant networks for private
+other devices and that you can create project networks for private
 addressing links to the load balancer.
 .. figure:: figures/Network_Web_Services1.png
@@ -52,7 +52,7 @@ requirement for auto-scaling, the design includes the Telemetry service.
 Web services tend to be bursty in load, have very defined peak and
 valley usage patterns and, as a result, benefit from automatic scaling
 of instances based upon traffic. At a network level, a split network
-configuration works well with databases residing on private tenant
+configuration works well with databases residing on private project
 networks since these do not emit a large quantity of broadcast traffic
 and may need to interconnect to some databases for content.
@@ -110,7 +110,7 @@ from having services local to the consumers of the service. Use a
 multi-site approach as well as deploying many copies of the application
 to handle load as close as possible to consumers. Since these
 applications function independently, they do not warrant running
-overlays to interconnect tenant networks. Overlays also have the
+overlays to interconnect project networks. Overlays also have the
 drawback of performing poorly with rapid flow setup and may incur too
 much overhead with large quantities of small packets and therefore we do
 not recommend them.

@@ -339,7 +339,7 @@ Where appropriate, use a multi-site installation for these situations.
 You can implement networking in two separate ways. Legacy networking
 (nova-network) provides a flat DHCP network with a single broadcast
-domain. This implementation does not support tenant isolation networks
+domain. This implementation does not support project isolation networks
 or advanced plug-ins, but it is currently the only way to implement a
 distributed :term:`layer-3 (L3) agent` using the multi_host configuration.
 OpenStack Networking (neutron) is the official networking implementation and

@@ -28,7 +28,7 @@ directly manipulates switches, we do not recommend running an
 overlay network or a layer-3 agent.
 If the controller resides within an OpenStack installation,
 it may be necessary to build an ML2 plug-in and schedule the
-controller instances to connect to tenant VLANs that they can
+controller instances to connect to project VLANs that they can
 talk directly to the switch hardware.
 Alternatively, depending on the external device support,
 use a tunnel that terminates at the switch hardware itself.

@@ -100,7 +100,7 @@ characteristics. When deploying multiple pools of storage it is also
 important to consider the impact on the Block Storage scheduler which is
 responsible for provisioning storage across resource nodes. Ensuring
 that applications can schedule volumes in multiple regions, each with
-their own network, power, and cooling infrastructure, can give tenants
+their own network, power, and cooling infrastructure, can give projects
 the ability to build fault tolerant applications that are distributed
 across multiple availability zones.
@@ -116,7 +116,7 @@ and storing the state of Block Storage volumes. We also recommend
 designing a highly available database solution to store the Block
 Storage databases. Leverage highly available database solutions such as
 Galera and MariaDB to help keep database services online for
-uninterrupted access, so that tenants can manage Block Storage volumes.
+uninterrupted access, so that projects can manage Block Storage volumes.
 In a cloud with extreme demands on Block Storage, the network
 architecture should take into account the amount of East-West bandwidth
@@ -198,7 +198,7 @@ installing and configuring the appropriate hardware and software and
 then allowing that node to report in to the proper storage pool via the
 message bus. This is because Block Storage nodes report into the
 scheduler service advertising their availability. After the node is
-online and available, tenants can make use of those storage resources
+online and available, projects can make use of those storage resources
 instantly.
 In some cases, the demand on Block Storage from instances may exhaust
@@ -232,15 +232,15 @@ As you add back-end storage capacity to the system, the partition maps
 redistribute data amongst the storage nodes. In some cases, this
 replication consists of extremely large data sets. In these cases, we
 recommend using back-end replication links that do not contend with
-tenants' access to data.
+projects' access to data.
-As more tenants begin to access data within the cluster and their data
+As more projects begin to access data within the cluster and their data
 sets grow, it is necessary to add front-end bandwidth to service data
 access requests. Adding front-end bandwidth to an Object Storage cluster
 requires careful planning and design of the Object Storage proxies that
-tenants use to gain access to the data, along with the high availability
+projects use to gain access to the data, along with the high availability
 solutions that enable easy scaling of the proxy layer. We recommend
-designing a front-end load balancing layer that tenants and consumers
+designing a front-end load balancing layer that projects and consumers
 use to gain access to data stored within the cluster. This load
 balancing layer may be distributed across zones, regions or even across
 geographic boundaries, which may also require that the design encompass