Converting API endpoints section to RST

+ Concatenated ch_api, and two section_api files into one file
+ And broke them out again
+ Stripped DocBook markup and added RST markup
+ Trying to fix niceness around code tags
+ Cleaning up formatting and removing lingering docbook tags

Change-Id: I79a22c2474e253e329dd68b8b8ae9eea9e53b911
Partial-Bug: #1463111
Nathaniel Dillon 2015-07-20 18:09:27 -07:00
parent bacb1b3aa9
commit 9d98658270
3 changed files with 204 additions and 1 deletion


@@ -1,3 +1,17 @@
=============
API endpoints
=============

The process of engaging an OpenStack cloud is started through the querying of
an API endpoint. While there are different challenges for public and private
endpoints, these are high value assets that can pose a significant risk if
compromised.

This chapter recommends security enhancements for both public and
private-facing API endpoints.

.. toctree::
   :maxdepth: 2

   api-endpoints/api-endpoint-configuration-recommendations.rst
   api-endpoints/case-studies.rst


@@ -0,0 +1,138 @@
==========================================
API endpoint configuration recommendations
==========================================

Internal API communications
~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack provides both public-facing and private API endpoints. By default,
OpenStack components use the publicly defined endpoints. The recommendation is
to configure these components to use the API endpoint within the proper
security domain.

Services select their respective API endpoints based on the OpenStack service
catalog. These services might not obey the listed public or internal API
endpoint values. This can lead to internal management traffic being routed to
external API endpoints.

Configure internal URLs in the Identity service catalog
-------------------------------------------------------

The Identity service catalog should be aware of your internal URLs. While this
feature is not utilized by default, it may be leveraged through configuration.
Additionally, this configuration should be forward-compatible with expected
changes once this behavior becomes the default.

To register an internal URL for an endpoint:

.. code:: console

   $ keystone endpoint-create \
     --region RegionOne \
     --service-id=1ff4ece13c3e48d8a6461faebd9cd38f \
     --publicurl='https://public-ip:8776/v1/%(tenant_id)s' \
     --internalurl='https://management-ip:8776/v1/%(tenant_id)s' \
     --adminurl='https://management-ip:8776/v1/%(tenant_id)s'
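
The registration can then be checked against the live catalog. A quick
verification with the same ``keystone`` CLI might look like this (a sketch;
the exact output columns vary with your client version):

.. code:: console

   $ keystone endpoint-list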

Configure applications for internal URLs
----------------------------------------

You can force some services to use specific API endpoints. Therefore, each
OpenStack service that communicates with the API of another service should be
explicitly configured to access the proper internal API endpoint.

Each project may present an inconsistent way of defining target API
endpoints. Future releases of OpenStack seek to resolve these
inconsistencies through consistent use of the Identity service catalog.

Configuration example #1: nova
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: ini

   cinder_catalog_info='volume:cinder:internalURL'
   glance_protocol='https'
   neutron_url='https://neutron-host:9696'
   neutron_admin_auth_url='https://neutron-host:9696'
   s3_host='s3-host'
   s3_use_ssl=True

Configuration example #2: cinder
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

.. code:: ini

   glance_host = 'https://glance-server'

Paste and middleware
~~~~~~~~~~~~~~~~~~~~

Most API endpoints and other HTTP services in OpenStack use the Python Paste
Deploy library. From a security perspective, this library enables manipulation
of the request filter pipeline through the application's configuration. Each
element in this chain is referred to as *middleware*. Changing the order of
filters in the pipeline or adding additional middleware might have an
unpredictable security impact.

Commonly, implementers add middleware to extend OpenStack's base functionality.
We recommend implementers carefully consider the potential exposure introduced
by the addition of non-standard software components to their HTTP request
pipeline.

For more information about Paste Deploy, see
`http://pythonpaste.org/deploy <http://pythonpaste.org/deploy/>`__.
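
To make the pipeline concept concrete, the following is a sketch of what a
Paste Deploy configuration with an added filter might look like. All section,
filter, and module names here are illustrative, not taken from any particular
OpenStack service:

.. code:: ini

   [pipeline:public_api]
   # Requests traverse the filters left to right before reaching the app;
   # "custom_audit" is the non-standard addition being reviewed.
   pipeline = request_id auth_token custom_audit public_service

   [filter:custom_audit]
   # Third-party middleware sees every request and response, so it must be
   # vetted as carefully as the API service itself.
   paste.filter_factory = custom_audit.middleware:AuditFilter.factory

   [app:public_service]
   paste.app_factory = myservice.api:APIRouter.factory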

API endpoint process isolation and policy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You should isolate API endpoint processes, especially those that reside within
the public security domain, as much as possible. Where deployments allow, API
endpoints should be deployed on separate hosts for increased isolation.

Namespaces
----------

Many operating systems now provide compartmentalization support. Linux supports
namespaces to assign processes into independent domains. Other parts of this
guide cover system compartmentalization in more detail.
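
As a sketch of Linux namespace compartmentalization, the ``ip netns`` tooling
from iproute2 can place a process in its own network namespace (requires root;
the namespace name and service binary below are examples only):

.. code:: console

   # ip netns add api-ns
   # ip netns exec api-ns ip link set lo up
   # ip netns exec api-ns /usr/bin/nova-api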

Network policy
--------------

Because API endpoints typically bridge multiple security domains, you must pay
particular attention to the compartmentalization of the API processes. See
`Boundaries and threats bridging security domains
<#boundaries-and-threats-bridging-security-domains>`__ for additional
information in this area.

With careful modeling, you can use network ACLs and IDS technologies to enforce
explicit point-to-point communication between network services. As a critical
cross-domain service, this type of explicit enforcement works well for
OpenStack's message queue service.

To enforce policies, you can configure services, host-based firewalls (such as
iptables), local policy (SELinux or AppArmor), and optionally global network
policy.
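
As an illustration of host-based enforcement, iptables rules such as the
following restrict an API port to a management network (run as root; the port
and CIDR are placeholders for your deployment's values):

.. code:: console

   # iptables -A INPUT -p tcp --dport 8776 -s 192.168.100.0/24 -j ACCEPT
   # iptables -A INPUT -p tcp --dport 8776 -j DROP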

Mandatory access controls
-------------------------

You should isolate API endpoint processes from each other and from other
processes on a machine. The configuration for those processes should be
restricted not only by Discretionary Access Controls, but also by Mandatory
Access Controls. The goal of these enhanced access controls is to aid in the
containment of API endpoint security breaches. With mandatory access controls,
such breaches are severely limited in the resources they can access, and
earlier alerting on such events is possible.
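
On an SELinux system, one quick way to confirm that an API process is running
in a confined domain is to inspect its security context (``nova-api`` is an
example; the domain name depends on your policy):

.. code:: console

   $ ps -eZ | grep nova-api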


@@ -0,0 +1,51 @@
============
Case studies
============

Earlier in :doc:`../introduction/introduction-to-case-studies` we introduced
the Alice and Bob case studies, where Alice is deploying a private government
cloud and Bob is deploying a public cloud, each with different security
requirements. Here we discuss how Alice and Bob would address endpoint
configuration to secure their private and public clouds. Alice's cloud is not
publicly accessible, but she is still concerned about securing the endpoints
against improper use. Bob's cloud, being public, must take measures to reduce
the risk of attacks by external adversaries.

Alice's private cloud
~~~~~~~~~~~~~~~~~~~~~

Alice's organization requires that the security architecture protect the access
to the private endpoints, so she elects to use Apache with TLS enabled and
HAProxy for load balancing in front of the web service. As Alice's organization
has implemented its own certificate authority, she configures the services
within both the guest and management security domains to use these
certificates. Since Alice's OpenStack deployment exists entirely on a network
disconnected from the Internet, she makes sure to remove all default CA bundles
that contain external public CA providers to ensure the OpenStack services only
accept client certificates issued by her agency's CA. As she is using HAProxy,
Alice configures SSL offloading on her load balancer, and a virtual server IP
(VIP) on the load balancer with an HTTP to HTTPS redirection policy to her API
endpoint systems.

Alice has registered all of the services in the Identity service's catalog,
using the internal URLs for access by internal services. She has installed
host-based intrusion detection (HIDS) to monitor the security events on the
endpoints. On the hosts, Alice also ensures that the API services are confined
to a network namespace while confirming that there is a robust SELinux profile
applied to the services.

Bob's public cloud
~~~~~~~~~~~~~~~~~~

Bob must also protect access to the public and private endpoints, so he
elects to use an Apache TLS proxy on both public and internal services. On
the public services, he has configured the certificate key files with
certificates signed by a well-known Certificate Authority. He has used his
organization's self-signed CA to sign certificates for the internal services
on the Management network. Bob has registered his services in the Identity
service's catalog, using the internal URLs for access by internal services.

Bob's public cloud runs services on SELinux, which he has configured with a
mandatory access control policy to reduce the impact of any publicly accessible
services that may be compromised. He has also configured the endpoints with a
host-based IDS.