Moving RST format to main security-guide folder

This updates all scripts as well.

For further cleanup, changes for project-config (removal of jobs) and
openstack-manuals (stop sync of DocBook XML files) are needed first.

Implements: bp sec-guide-rst
Co-Authored-By: Andreas Jaeger <aj@suse.de>
Change-Id: I003f56c6d804f70cc74395bd947b053eb4cea769
Andreas Jaeger 2015-08-12 06:33:31 +02:00
parent fcec8a60a7
commit fac521a9ec
329 changed files with 15 additions and 36903 deletions


@@ -2,12 +2,11 @@
# Directories to set up
declare -A DIRECTORIES=(
["ja"]="security-guide glossary"
)
# Books to build
declare -A BOOKS=(
["ja"]="security-guide security-guide-rst"
["ja"]="security-guide"
)
# Where does the top-level pom live?
@@ -19,7 +18,7 @@ DOC_DIR="./"
# draft books
declare -A DRAFTS=(
["ja"]="security-guide-rst"
["ja"]="security-guide"
)
# Books with special handling
@@ -27,7 +26,7 @@ declare -A DRAFTS=(
declare -A SPECIAL_BOOKS
SPECIAL_BOOKS=(
# Directory is using RST
["security-guide-rst"]="RST"
["security-guide"]="RST"
# These are translated in openstack-manuals
["common-rst"]="skip"
)
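The associative arrays above drive the translated-docs build; the following is a minimal sketch (assumed logic for illustration, not the actual openstack-manuals tooling) of how a build script might classify each book after this change:

```shell
#!/bin/bash
# Sketch of how the configuration above could be consumed.
# The classification logic is an assumption; the real build
# scripts live in the openstack-manuals repository.
declare -A BOOKS=( ["ja"]="security-guide" )
declare -A SPECIAL_BOOKS=( ["security-guide"]="RST" ["common-rst"]="skip" )

for lang in "${!BOOKS[@]}"; do
    for book in ${BOOKS[$lang]}; do
        # Books without a SPECIAL_BOOKS entry default to DocBook.
        case "${SPECIAL_BOOKS[$book]:-docbook}" in
            RST)     echo "$lang/$book: build with Sphinx (RST)" ;;
            skip)    echo "$lang/$book: skip (translated in openstack-manuals)" ;;
            docbook) echo "$lang/$book: build with Maven (DocBook)" ;;
        esac
    done
done
```

With the arrays as configured above, the security-guide directory is classified as an RST (Sphinx) build, which is what this commit switches it to.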


@@ -1,124 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<book xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="os-security-guide">
<title>OpenStack Security Guide</title>
<info>
<author>
<personname>
<firstname/>
<surname/>
</personname>
<affiliation>
<orgname>OpenStack Foundation</orgname>
</affiliation>
</author>
<copyright>
<year>2013</year>
<year>2014</year>
<holder>OpenStack Foundation</holder>
</copyright>
<releaseinfo>current</releaseinfo>
<productname>OpenStack</productname>
<pubdate/>
<legalnotice role="cc-by">
<annotation>
<remark>Copyright details are filled in by the
template.</remark>
</annotation>
</legalnotice>
<abstract>
<para>This book provides best practices and conceptual
information about securing an OpenStack cloud.</para>
</abstract>
<revhistory>
<!-- ... continue adding more revisions here as you change this document using the markup shown below... -->
<revision>
<date>2015-04-29</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Final prep for Kilo release.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2015-02-11</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Chapter on Data processing added.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2014-10-16</date>
<revdescription>
<itemizedlist>
<listitem>
<para>This book has been extensively
reviewed and updated. Chapters have been
rearranged and a glossary has been
added.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2013-12-02</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Chapter on Object Storage added.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2013-10-17</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Havana release.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
<revision>
<date>2013-07-02</date>
<revdescription>
<itemizedlist>
<listitem>
<para>Initial creation.</para>
</listitem>
</itemizedlist>
</revdescription>
</revision>
</revhistory>
</info>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/ch_preface.xml" />
<xi:include href="ch_introduction.xml"/>
<xi:include href="ch_documentation.xml"/>
<xi:include href="ch_management.xml"/>
<xi:include href="ch_secure_communication.xml"/>
<xi:include href="ch_api-endpoints.xml"/>
<xi:include href="ch_identity.xml"/>
<xi:include href="ch_dashboard.xml"/>
<xi:include href="ch_compute.xml"/>
<xi:include href="ch_block_storage.xml"/>
<xi:include href="ch_networking.xml"/>
<xi:include href="ch_object-storage.xml"/>
<xi:include href="ch_messaging.xml"/>
<xi:include href="ch_data_processing.xml"/>
<xi:include href="ch_databases.xml"/>
<xi:include href="ch_tenant-data.xml"/>
<xi:include href="ch_instance-management.xml"/>
<xi:include href="ch_monitoring-logging.xml"/>
<xi:include href="ch_compliance.xml"/>
<xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/app_support.xml"/>
<glossary role="auto"/>
</book>


@@ -1,23 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="api-endpoints">
<title>API endpoints</title>
<para>
        The process of engaging an OpenStack cloud begins with
        querying an API endpoint. While there are different challenges
for public and private endpoints, these are high value assets that
can pose a significant risk if compromised.
</para>
<para>
        This chapter recommends security enhancements for both public-
        and private-facing API endpoints.
</para>
<xi:include href="section_api-endpoint-configuration-recommendations.xml"/>
<xi:include href="section_case-studies-api-endpoints.xml"/>
</chapter>


@@ -1,36 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="block-storage">
<?dbhtml stop-chunking?>
<title>Block Storage</title>
<para>OpenStack Block Storage (cinder) is a service that provides
        software (services and libraries) for self-service management of
persistent block-level storage devices. This creates
on-demand access to Block Storage resources for use with
OpenStack Compute (nova) instances. This creates
software-defined storage via abstraction by virtualizing
pools of block storage to a variety of back-end storage
devices which can be either software implementations or
traditional hardware storage products. The primary functions
of this is to manage the creation, attaching and detaching of
the block devices. The consumer requires no knowledge of the
type of back-end storage equipment or where it is located.
</para>
<para>Compute instances store and retrieve block storage via
industry-standard storage protocols such as iSCSI, ATA over
Ethernet, or Fibre-Channel. These resources are managed and
        configured via the OpenStack native standard HTTP RESTful API.
For more details on the API see the
<link
xlink:href="http://developer.openstack.org/api-ref-blockstorage-v2.html"
>OpenStack Block Storage documentation</link>.</para>
<note>
<para>Whilst this chapter is currently sparse on specific
guidance, it is expected that standard hardening practices
will be followed. This section will be expanded with relevant
information.</para>
</note>
</chapter>


@@ -1,56 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="compliance">
<title>Compliance</title>
<para>
An OpenStack deployment may require compliance activities for many
purposes, such as regulatory and legal requirements, customer need,
privacy considerations, and security best practices. The Compliance
function is important for the business and its customers. Compliance
means adhering to regulations, specifications, standards and laws.
    It is also used when describing an organization's status regarding
assessments, audits, and certifications. Compliance, when done
correctly, unifies and strengthens the other security topics
discussed in this guide.
</para>
<para>
This chapter has several objectives:
</para>
<itemizedlist>
<listitem>
<para>
Review common security principles.
</para>
</listitem>
<listitem>
<para>
Discuss common control frameworks and certification resources to
achieve industry certifications or regulator attestations.
</para>
</listitem>
<listitem>
<para>
Act as a reference for auditors when evaluating OpenStack
deployments.
</para>
</listitem>
<listitem>
<para>
Introduce privacy considerations specific to OpenStack and cloud
environments.
</para>
</listitem>
</itemizedlist>
<xi:include href="section_compliance-overview.xml"/>
<xi:include href="section_understanding-the-audit-process.xml"/>
<xi:include href="section_compliance-activities.xml"/>
<xi:include href="section_certification-and-compliance-statements.xml"/>
<xi:include href="section_privacy.xml"/>
<xi:include href="section_case-studies-compliance.xml"/>
</chapter>


@@ -1,33 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="compute">
<title>Compute</title>
<para>
The OpenStack Compute service (nova) is one of the more complex
OpenStack services. It runs in many locations throughout the cloud
and interacts with a variety of internal services. There are a
variety of configuration options when using the OpenStack Compute
service, and these can be deployment-specific. In this chapter we
    will call out general best practices around Compute security as
well as specific known configurations that can lead to security
issues. In general, the <filename>nova.conf</filename> file and
the <filename>/var/lib/nova</filename> locations should be secured.
Controls like centralized logging, the
<filename>policy.json</filename> file, and a mandatory access
control framework should be implemented. Additionally, there are
environmental considerations to keep in mind, depending on what
functionality is desired for your cloud.
</para>
<xi:include href="section_hypervisor-selection.xml"/>
<xi:include href="section_hardening-the-virtualization-layers.xml"/>
<xi:include href="section_compute-hardening-deployments.xml"/>
<xi:include href="section_compute-how-to-select-virtual-consoles.xml"/>
<xi:include href="section_compute-vulnerability-awareness.xml"/>
<xi:include href="section_case-studies-instance-isolation.xml"/>
</chapter>


@@ -1,35 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard">
<?dbhtml stop-chunking?>
<title>Dashboard</title>
  <para>Horizon is the OpenStack dashboard that provides users with a self-service
    portal to provision their own resources within the limits set by
    administrators. Typical actions include provisioning users, defining instance flavors,
uploading VM images, managing networks, setting up security groups, starting
instances, and accessing the instances through a console.</para>
<para>The dashboard is based on the Django web framework, therefore
secure deployment practices for Django apply directly to horizon.
    This guide provides a set of widely adopted Django security
    recommendations. Further information can be found by reading the
<link
xlink:href="https://docs.djangoproject.com/"
>Django documentation</link>.</para>
<para>The dashboard ships with reasonable default security settings,
and has good <link
xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html"
>deployment and configuration documentation</link>.</para>
<xi:include href="section_dashboard-domains-dashboard-upgrades-basic-web-server-configuration.xml"/>
<xi:include href="section_dashboard-https-hsts-xss-ssrf.xml"/>
<xi:include href="section_dashboard-front-end-caching-session-back-end.xml"/>
<xi:include href="section_dashboard-static-media.xml"/>
<xi:include href="section_dashboard-secret-key.xml"/>
<xi:include href="section_dashboard-cookies.xml"/>
<xi:include href="section_dashboard-cross-origin-resource-sharing-cors.xml"/>
<xi:include href="section_dashboard-debug.xml"/>
<xi:include href="section_case-studies-dashboard.xml"/>
</chapter>


@@ -1,28 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="data_processing">
<title>Data processing</title>
<para>
The Data processing service for OpenStack (sahara) provides a platform
for the provisioning and management of instance clusters using processing
frameworks such as Hadoop and Spark. Through the OpenStack dashboard
    or REST API, users can upload and execute framework
applications which may access data in object storage or external
providers. The data processing controller uses the Orchestration
service to create clusters of instances which may exist as
long-running groups that can grow and shrink as requested, or as
transient groups created for a single workload.
</para>
<xi:include href="section_data-processing-introduction-to-data-processing.xml"/>
<xi:include href="section_data-processing-deployment.xml"/>
<xi:include href="section_data-processing-configuration-and-hardening.xml"/>
<xi:include href="section_data-processing-case-studies.xml"/>
</chapter>


@@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="databases">
<title>Databases</title>
<para>
The choice of database server is an important consideration in the
security of an OpenStack deployment. Multiple factors should be
    considered when deciding on a database server; however, for the scope
of this book only security considerations will be discussed.
OpenStack supports a variety of database types (see
<link xlink:href="http://docs.openstack.org/admin-guide-cloud">
<citetitle>OpenStack Cloud Administrator Guide</citetitle></link> for more
information). The Security Guide currently focuses on PostgreSQL and MySQL.
</para>
<xi:include href="section_database-backend-considerations.xml"/>
<xi:include href="section_database-access-control.xml"/>
<xi:include href="section_database-transport-security.xml"/>
<xi:include href="section_case-studies-database.xml"/>
</chapter>


@@ -1,21 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="documentation">
<title>System documentation</title>
<para>The system documentation for an OpenStack cloud deployment
should follow the templates and best practices for the Enterprise
Information Technology System in your organization. Organizations
often have compliance requirements which may require an overall
System Security Plan to inventory and document the architecture of a
given system. There are common challenges across the industry
related to documenting the dynamic cloud infrastructure and keeping
the information up-to-date.</para>
<xi:include href="section_system-documentation-requirements.xml"/>
<xi:include href="section_case-studies-system-documentation.xml"/>
</chapter>


@@ -1,30 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="identity">
<title>Identity</title>
  <para>The Identity service (keystone) provides identity, token, catalog,
    and policy services for use specifically by services in the OpenStack
    family. The Identity service is organized as a group of internal services
    exposed on one or many endpoints. Many of these services are used in a
    combined fashion by the front end. For example, an authenticate call
will validate user/project credentials with the identity service and,
upon success, create and return a token with the token service.
Further information can be found by reading the <link
xlink:href="http://docs.openstack.org/developer/keystone/index.html">
Keystone Developer Documentation</link>.</para>
<xi:include href="section_identity-authentication.xml"/>
<xi:include href="section_identity-authentication-methods.xml"/>
<xi:include href="section_identity-authorization.xml"/>
<xi:include href="section_identity-policies.xml"/>
<xi:include href="section_identity-tokens.xml"/>
<xi:include href="section_identity-domains.xml"/>
<xi:include href="section_identity-federated-keystone.xml"/>
<xi:include href="section_identity-checklist.xml"/>
<xi:include href="section_case-studies-identity-management.xml"/>
</chapter>


@@ -1,72 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="instance-management">
<title>Instance security management</title>
<para>
One of the virtues of running instances in a virtualized environment
is that it opens up new opportunities for security controls that are
not typically available when deploying onto bare metal. There are
several technologies that can be applied to the virtualization stack
that bring improved information assurance for cloud tenants.
</para>
<para>
Deployers or users of OpenStack with strong security requirements
    may want to consider deploying these technologies. Not all are
    applicable in every situation; indeed, in some cases technologies may
    be ruled out for use in a cloud because of prescriptive business
    requirements. Similarly, some technologies inspect instance data, such
    as run state, which may be undesirable to the users of the system.
</para>
<para>
In this chapter we explore these technologies and describe the
situations where they can be used to enhance security for instances
    or the underlying nodes. We also seek to highlight where privacy
concerns may exist. These include data pass through, introspection,
or providing a source of entropy. In this section we highlight the
following additional security services:
</para>
<itemizedlist><listitem>
<para>
Entropy to instances
</para>
</listitem>
<listitem>
<para>
Scheduling instances to nodes
</para>
</listitem>
<listitem>
<para>
Trusted images
</para>
</listitem>
<listitem>
<para>
Instance migrations
</para>
</listitem>
<listitem>
<para>
Monitoring, alerting, and reporting
</para>
</listitem>
<listitem>
<para>
Updates and patches
</para>
</listitem>
<listitem>
<para>
Firewalls and other host-based security controls
</para>
</listitem>
</itemizedlist>
<xi:include href="section_security-services-for-instances.xml"/>
<xi:include href="section_case-studies-instance-management.xml"/>
</chapter>


@@ -1,27 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="introduction">
<title>Introduction</title>
<para>
    The OpenStack Security Guide is the result of a five-day sprint of
    collaborative work by many individuals. The purpose of this document
is to provide the best practice guidelines for deploying a secure
OpenStack cloud. It is a living document that is updated as new
changes are merged into the repository, and is meant to reflect the
current state of security within the OpenStack community and provide
frameworks for decision making where listing specific security
    controls is not feasible due to complexity or other
    environment-specific details.
</para>
<xi:include href="section_acknowledgements.xml"/>
<xi:include href="section_why-and-how-we-wrote-this-book.xml"/>
<xi:include href="section_introduction-to-openstack.xml"/>
<xi:include href="section_security-boundaries-and-threats.xml"/>
<xi:include href="section_introduction-to-case-studies.xml"/>
</chapter>


@@ -1,36 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="management">
<title>Management</title>
<para>
A cloud deployment is a living system. Machines age and fail,
software becomes outdated, vulnerabilities are discovered. When
errors or omissions are made in configuration, or when software
fixes must be applied, these changes must be made in a secure, but
    convenient, fashion. Such changes are typically handled through
    configuration management.
</para>
<para>
Likewise, it is important to protect the cloud deployment from
being configured or manipulated by malicious entities. With many
systems in a cloud employing compute and networking
virtualization, there are distinct challenges applicable to
OpenStack which must be addressed through integrity lifecycle
management.
</para>
<para>
Finally, administrators must perform command and control over the
    cloud for various operational functions. It is important that these
    command and control facilities be understood and secured.
</para>
<xi:include href="section_continuous-systems-management.xml"/>
<xi:include href="section_integrity-life-cycle.xml"/>
<xi:include href="section_management-interfaces.xml"/>
<xi:include href="section_case-studies-management-interfaces.xml"/>
</chapter>


@@ -1,49 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="message_queuing">
<title>Message queuing</title>
<para>Message queuing services facilitate inter-process
communication in OpenStack. OpenStack supports these message
queuing service back ends:</para>
<itemizedlist>
<listitem>
<para>RabbitMQ</para>
</listitem>
<listitem>
<para>Qpid</para>
</listitem>
<listitem>
<para>ZeroMQ or 0MQ</para>
</listitem>
</itemizedlist>
<para>Both RabbitMQ and Qpid are Advanced Message Queuing Protocol
(AMQP) frameworks, which provide message queues for peer-to-peer
communication. Queue implementations are typically deployed as a
centralized or decentralized pool of queue servers. ZeroMQ
provides direct peer-to-peer communication through TCP
sockets.</para>
<para>Message queues effectively facilitate command and control
functions across OpenStack deployments. Once access to the queue
    is permitted, no further authorization checks are performed.
Services accessible through the queue do validate the contexts and
tokens within the actual message payload. However, you must note
the expiration date of the token because tokens are potentially
re-playable and can authorize other services in the
infrastructure.</para>
<para>OpenStack does not support message-level confidence, such as
message signing. Consequently, you must secure and authenticate
the message transport itself. For high-availability (HA)
configurations, you must perform queue-to-queue authentication and
encryption.</para>
<para>With ZeroMQ messaging, IPC sockets are used on individual
machines. Because these sockets are vulnerable to attack, ensure
that the cloud operator has secured them.</para>
<xi:include href="section_messaging-security.xml"/>
<xi:include href="section_case-studies-messaging.xml"/>
</chapter>


@@ -1,27 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="monitoring-logging">
<title>Monitoring and logging</title>
<para>
A lot of activity goes on within a cloud environment. It is a mix of
hardware, operating systems, virtual machine managers, the OpenStack
services, cloud-user activity such as creating instances and
attaching storage, the network underlying the whole, and finally
end-users using the applications running on the various instances.
</para>
<para>
    The basics of logging: configuration, setting log levels, location of
    the log files, and how to use and customize logs, as well as how to
    do centralized collection of logs, are well covered in the <link
xlink:href="http://docs.openstack.org/ops/"><citetitle>OpenStack
Operations Guide</citetitle></link>.
</para>
<xi:include href="section_forensics-and-incident-response.xml"/>
<xi:include href="section_case-studies-monitoring-and-logging.xml"/>
</chapter>


@@ -1,43 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="networking">
<title>Networking</title>
<para>
OpenStack Networking enables the end-user or tenant to define,
utilize, and consume networking resources. OpenStack Networking
provides a tenant-facing API for defining network connectivity and
IP addressing for instances in the cloud in addition to
orchestrating the network configuration. With the transition to an
API-centric networking service, cloud architects and
administrators should take into consideration best practices to
secure physical and virtual network infrastructure and services.
</para>
<para>
OpenStack Networking was designed with a plug-in architecture that
provides extensibility of the API through open source community or
third-party services. As you evaluate your architectural design
requirements, it is important to determine what features are
available in OpenStack Networking core services, any additional
services that are provided by third-party products, and what
supplemental services are required to be implemented in the
physical infrastructure.
</para>
<para>
This section is a high-level overview of what processes and best
practices should be considered when implementing OpenStack
Networking. We will talk about the current state of services that
are available, what future services will be implemented, and the
current limitations in this project.
</para>
<xi:include href="section_networking-architecture.xml"/>
<xi:include href="section_networking-services.xml"/>
<xi:include href="section_securing-openstack-networking-services.xml"/>
<xi:include href="section_networking-services-security-best-practices.xml"/>
<xi:include href="section_case-studies-networking.xml"/>
</chapter>


@@ -1,363 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="object-storage">
<?dbhtml stop-chunking?>
<title>Object Storage</title>
<para>OpenStack Object Storage (swift) is a service that provides
software that stores and retrieves data over HTTP. Objects
(blobs of data) are stored in an organizational hierarchy
        that offers anonymous read-only access, ACL-defined access,
or even temporary access. Object Store supports multiple
token-based authentication mechanisms implemented via
middleware.</para>
<para>Applications store and retrieve data in Object Store via an
industry-standard HTTP RESTful API. Back-end components of
        Object Storage follow the same RESTful model; however, some of the APIs
for managing durability, for example, are kept private to the
cluster. For more details on the API see the
<link
xlink:href="http://docs.openstack.org/api/openstack-object-storage/1.0/content/"
>OpenStack Storage documentation</link>.</para>
<para>For this document the components will be grouped into the
following primary groups:</para>
<orderedlist>
<listitem>
<para>Proxy services</para>
</listitem>
<listitem>
<para>Auth services</para>
</listitem>
<listitem>
<para>Storage services</para>
<itemizedlist>
<listitem>
<para>Account service</para>
</listitem>
<listitem>
<para>Container service</para>
</listitem>
<listitem>
<para>Object service</para>
</listitem>
</itemizedlist>
</listitem>
</orderedlist>
<figure>
<title>An example diagram from the OpenStack Object Storage
Administration Guide (2013)</title>
<mediaobject>
<imageobject>
<imagedata contentdepth="329" contentwidth="494"
fileref="static/swift_network_diagram-1.png"
format="PNG" scalefit="1"/>
</imageobject>
</mediaobject>
</figure>
<note>
        <para>An Object Storage installation does not necessarily
            have to be on the Internet and could also be a private
cloud with the "Public Switch" being part of the
organization's internal network infrastructure.</para>
</note>
<section xml:id="object-storage-first-thing-to-secure-the-network">
<title>First thing to secure: the network</title>
<para>Securing the Object Storage service begins with securing the
networking component. If you skipped the networking chapter,
go back to <xref linkend="networking"/>.</para>
<para>The rsync protocol is used between storage service nodes to
replicate data for high availability. In addition, the proxy
service communicates with the storage service when relaying data
back and forth between the client end-point and the cloud environment.</para>
<caution>
<para>Object Storage does not employ encryption or authentication
with inter-node communications. This is why you see a
"Private Switch" or private network ([V]LAN) in the architecture
diagrams. This data domain should be separate from other
OpenStack data networks as well. For further discussion on
security domains please see <xref
linkend="security-boundaries-and-threats"
/>.</para>
</caution>
<tip>
<para><emphasis>Rule:</emphasis> Use a private (V)LAN
network segment for your storage nodes in the data
domain.</para>
</tip>
<para>This necessitates that the proxy nodes have dual
interfaces (physical or virtual):</para>
<orderedlist>
<listitem>
<para>One as a "public" interface for consumers to
reach</para>
</listitem>
<listitem>
<para>Another as a "private" interface with access to
the storage nodes</para>
</listitem>
</orderedlist>
<para>The following figure demonstrates one possible network
architecture.</para>
<figure>
<title>Object Storage network architecture with a
management node (OSAM)</title>
<mediaobject>
<imageobject role="html">
<imagedata contentdepth="913" contentwidth="1264"
fileref="static/swift_network_diagram-2.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/swift_network_diagram-2.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</mediaobject>
</figure>
</section>
<!-- First thing to secure - The Network -->
<section xml:id="object-storage-securing-services-general">
<title>Securing services: general</title>
<section xml:id="object-storage-securing-services-general-run-service-as-nonroot-user">
<title>Run services as non-root user</title>
<para>It is recommended that you configure each Object Storage service
to run under a non-root (UID 0) service account. One
recommendation is the user name "swift" with the primary
group "swift." Object Storage services include, for
example, 'proxy-server', 'container-server',
'account-server'. Detailed steps for setup and
configuration can be found in the <link
xlink:href="http://docs.openstack.org/kilo/install-guide/install/apt/content/ch_swift.html">Add
Object Storage chapter</link> of the
<citetitle>Installation Guide</citetitle> in the <link
xlink:href="http://docs.openstack.org" >OpenStack
Documentation index</link>. (The link defaults to the
Ubuntu version.)</para>
</section>
<section xml:id="object-storage-securing-services-general-file-permissions">
<title>File permissions</title>
<para>The <filename>/etc/swift</filename> directory
contains information about the ring topology and
environment configuration. The following permissions are
recommended:</para>
<screen><prompt>#</prompt> <userinput>chown -R root:swift /etc/swift/*</userinput>
<prompt>#</prompt> <userinput>find /etc/swift/ -type f -exec chmod 640 {} \;</userinput>
<prompt>#</prompt> <userinput>find /etc/swift/ -type d -exec chmod 750 {} \;</userinput></screen>
            <para>This ensures that only root can modify the
                configuration files, while the services can still
                read them through their group membership in
                the 'swift' group.</para>
</section>
</section>
<!-- Securing Services - General -->
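The chown/find commands above can be sanity-checked; here is a self-contained sketch that verifies files are mode 640 and directories mode 750. A temporary directory stands in for `/etc/swift` so the example runs anywhere; point `TARGET` at the real path on a node (the file names used are placeholders):

```shell
# Sketch: verify the recommended /etc/swift permissions. A temporary
# directory stands in for /etc/swift so the example is self-contained.
TARGET="$(mktemp -d)"
touch "$TARGET/swift.conf" "$TARGET/proxy-server.conf"
chmod 750 "$TARGET"
chmod 640 "$TARGET"/*.conf
# Count anything that does not match: files not 640, directories not 750.
violations=$(find "$TARGET" \( -type f ! -perm 640 \) -o \( -type d ! -perm 750 \) | wc -l)
echo "non-conforming entries: $violations"
rm -r "$TARGET"
```

On a correctly configured tree the count is zero; any path printed by the `find` expression is a file or directory whose mode drifted from the recommendation.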
<section xml:id="object-storage-securing-storage-services">
<title>Securing storage services</title>
<para>The following are the default listening ports for the
various storage services:</para>
<informaltable>
<thead>
<tr>
<td>Service name</td>
<td>Port</td>
<td>Type</td>
</tr>
</thead>
<tbody>
<tr>
<td>Account service</td>
<td>6002</td>
<td>TCP</td>
</tr>
<tr>
<td>Container service</td>
<td>6001</td>
<td>TCP</td>
</tr>
<tr>
<td>Object service</td>
<td>6000</td>
<td>TCP</td>
</tr>
<tr>
<td>Rsync<footnote><para>If ssync is used
instead of rsync, the Object service port is used
for maintaining durability.</para></footnote>
</td>
<td>873</td>
<td>TCP</td>
</tr>
</tbody>
</informaltable>
<para>Authentication does not take place at the storage
        nodes. If someone were able to connect to a storage
        node on one of these ports, they could access or
        modify data without authentication. In order to secure
        against this issue, you should follow the recommendations
given previously about using a private storage
network.</para>
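    <para>As one way to apply that recommendation (the network
        address below is an assumption for illustration), a host-based
        firewall on each storage node could restrict the storage ports
        to the private storage network:</para>
    <screen><prompt>#</prompt> <userinput>iptables -A INPUT -s 10.0.1.0/24 -p tcp \
  -m multiport --dports 6000,6001,6002,873 -j ACCEPT</userinput>
<prompt>#</prompt> <userinput>iptables -A INPUT -p tcp \
  -m multiport --dports 6000,6001,6002,873 -j DROP</userinput></screen>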
<section xml:id="object-storage-securing-storage-services-object-storage-account-terminology">
<title>Object Storage "account" terminology</title>
        <para>An Object Storage "account" is not a user account or
            credential. The following table explains the
            relationships:</para>
<informaltable>
<tbody>
<tr>
<td>OpenStack Object Storage account</td>
<td>Collection of containers; not user
accounts or authentication. Which users
are associated with the account and how
they may access it depends on the
authentication system used. See
<xref linkend="object-storage-object-storage-authentication"
/>.</td>
</tr>
<tr>
<td>OpenStack Object Storage containers</td>
<td>Collection of objects. Metadata on the
container is available for ACLs. The
meaning of ACLs is dependent on the
authentication system used.</td>
</tr>
<tr>
<td>OpenStack Object Storage objects</td>
<td>The actual data objects. ACLs at the
object level are also possible with
metadata and are dependent on the
authentication system used.</td>
</tr>
</tbody>
</informaltable>
<tip>
<para>
<?dbhtml bgcolor="#DDFADE" ?><?dbfo bgcolor="#DDFADE" ?>
Another way of thinking about the above would be:
A single shelf (account) holds zero or more
buckets (containers) which each hold zero or more
objects. A garage (Object Storage cluster) may have
multiple shelves (accounts) with each shelf
belonging to zero or more users.</para>
</tip>
<para>At each level you may have ACLs that dictate who has
what type of access. ACLs are interpreted based on
what authentication system is in use. The two most
common types of authentication providers used are
Identity service (keystone) and TempAuth. Custom authentication providers
are also possible. Please see
<xref linkend="object-storage-object-storage-authentication"/>
for more information.</para>
</section>
</section>
<!-- Securing Storage Services -->
<section xml:id="object-storage-securing-proxy-services">
<title>Securing proxy services</title>
<para>A proxy node should have at least two interfaces
(physical or virtual): one public and one
private. Firewalls or service binding might protect the
public interface. The public facing service is an HTTP web
server that processes end-point client requests,
authenticates them, and performs the appropriate
action. The private interface does not require any
listening services but is instead used to establish
outgoing connections to storage nodes on the
private storage network.</para>
<section xml:id="object-storage-securing-proxy-services-HTTP-listening-port">
<title>HTTP listening port</title>
        <para>You should run your web service as a non-root
            user (not UID 0) such as the "swift" user mentioned
            before. Using a port greater than 1024 makes this easy
            and avoids running any part of the web container as
            root. This is not a burden, as end-point clients do not
            typically type the URL manually into a web browser to
            browse around in the object storage. Additionally,
            clients that use the HTTP REST API and perform
            authentication normally obtain the full REST API URL
            they are to use from the authentication response.
            OpenStack's REST API allows a client to authenticate to
            one URL and then be directed to a completely different
            URL for the actual service. For example, a client
            authenticates to
            <uri>https://identity.cloud.example.org:55443/v1/auth</uri>
            and gets a response with its authentication key and
            Storage URL (the URL of the proxy nodes or load
            balancer) of
            <uri>https://swift.cloud.example.org:44443/v1/AUTH_8980</uri>.</para>
<para>The method for configuring your web server to start
and run as a non-root user varies by web server and
OS.</para>
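        <para>Within Object Storage itself, the listening port and
            service user are set in the proxy configuration. A minimal
            sketch of the relevant
            <filename>proxy-server.conf</filename> options (the values
            shown are examples, not requirements):</para>
        <programlisting language="ini">[DEFAULT]
# Listen on an unprivileged port (greater than 1024)
bind_port = 8080
# Drop privileges to the dedicated service account
user = swift</programlisting>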
</section>
<section xml:id="object-storage-securing-proxy-services-load-balancer">
<title>Load balancer</title>
        <para>If using Apache is not feasible, or if you wish to
            offload your TLS work for performance, you may employ a
            dedicated network device load balancer. This is also the
            common way to provide redundancy and load balancing when
            using multiple proxy nodes.</para>
        <para>If you choose to offload your TLS, ensure that the
            network link between the load balancer and your proxy
            nodes is on a private (V)LAN segment such that other
            nodes on the network (possibly compromised) cannot
            wiretap (sniff) the unencrypted traffic. If such a
            breach were to occur, the attacker could gain access to
            end-point client or cloud administrator credentials
            and access the cloud data.</para>
<para>The authentication service you use, such as
Identity service (keystone) or TempAuth, will determine how you configure a
different URL in the responses to end-point clients so they
use your load balancer instead of an individual proxy
node.</para>
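        <para>For illustration, a minimal HAProxy TLS-offloading
            front end for two proxy nodes might look like the
            following sketch (host names, addresses, and the
            certificate path are assumptions):</para>
        <programlisting>frontend swift-https
    bind 0.0.0.0:443 ssl crt /etc/ssl/private/swift.pem
    default_backend swift-proxies

backend swift-proxies
    # Proxy nodes reached over the private (V)LAN segment
    server proxy1 10.0.2.11:8080 check
    server proxy2 10.0.2.12:8080 check</programlisting>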
</section>
</section>
<!-- Securing Proxy Services -->
<section xml:id="object-storage-object-storage-authentication">
<title>Object Storage authentication</title>
<para>Object Storage uses a WSGI model to provide for a middleware
capability that not only provides general extensibility but
is also used for authentication of end-point clients. The
authentication provider defines what roles and user types
exist. Some use traditional user name and password
        credentials, while others may leverage API key tokens or
        even client-side X.509 certificates. Custom providers
        can be integrated using custom middleware.</para>
<para>Object Storage comes with two authentication middleware modules
by default, either of which can be used as sample code for
developing a custom authentication middleware.</para>
<section xml:id="object-storage-object-storage-authentication-tempauth">
<title>TempAuth</title>
<para>TempAuth is the default authentication for
Object Storage. In contrast to Identity it stores the user
accounts, credentials, and metadata in object storage
itself. More information can be found in the section
<link xlink:href="http://docs.openstack.org/developer/swift/overview_auth.html"
>The Auth System</link> of the Object Storage (swift) documentation.</para>
</section>
<section xml:id="object-storage-object-storage-authentication-keystone">
<title>Keystone</title>
<para>Keystone is the commonly used Identity provider in
OpenStack. It may also be used for authentication in
Object Storage. Coverage of securing keystone is
already provided in <xref
linkend="identity"/>.</para>
</section>
</section>
<!-- Object Storage Authentication -->
<section xml:id="object-storage-other-notable-items">
<title>Other notable items</title>
<para>In <filename>/etc/swift/swift.conf</filename> on every
node there is a
<option>swift_hash_path_prefix</option> setting and a
<option>swift_hash_path_suffix</option> setting. These are
provided to reduce the chance of hash collisions for objects
being stored and avert one user overwriting the data of
another user.</para>
    <para>These values should be initially set with a
        cryptographically secure random number generator and be
        kept consistent across all nodes. Ensure that they are
        protected with proper ACLs and that you have a backup copy
        to avoid data loss.</para>
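    <para>For example, suitable random values can be generated with
        a command such as the following and then placed in
        <filename>swift.conf</filename> on every node (the values
        shown are placeholders, not real settings):</para>
    <screen><prompt>#</prompt> <userinput>openssl rand -hex 32</userinput></screen>
    <programlisting language="ini">[swift-hash]
swift_hash_path_prefix = CHANGE_ME_RANDOM_VALUE_1
swift_hash_path_suffix = CHANGE_ME_RANDOM_VALUE_2</programlisting>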
</section>
</chapter>

View File

@ -1,32 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="secure_communication">
<title>Secure communication</title>
<para>
Inter-device communication is an issue still plaguing security
researchers. Between large project errors such as Heartbleed or more
advanced attacks such as BEAST and CRIME, secure methods of
communication over a network are becoming more important. It should
be remembered, however, that encryption should be applied as one part
of a larger security strategy. The compromise of an endpoint means
that an attacker no longer needs to break the encryption used, but
is able to view and manipulate messages as they are processed by
the system.
</para>
<para>
This chapter will review several features around configuring TLS to
secure both internal and external resources, and will call out
specific categories of systems that should be given specific
attention.
</para>
<xi:include href="section_introduction-to-ssl-tls.xml"/>
<xi:include href="section_tls-proxies-and-http-services.xml"/>
<xi:include href="section_secure-reference-architectures.xml"/>
<xi:include href="section_case-studies-pki-and-certificate-management.xml"/>
</chapter>

View File

@ -1,23 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="tenant-data">
<title>Tenant data privacy</title>
<para>
OpenStack is designed to support multitenancy and those tenants will
most probably have different data requirements. As a cloud builder
and operator you need to ensure your OpenStack environment can
address various data privacy concerns and regulations. In this
chapter we will address data residency and disposal as it pertains
to OpenStack implementations.
</para>
<xi:include href="section_data-privacy-concerns.xml"/>
<xi:include href="section_data-encryption.xml"/>
<xi:include href="section_key-management.xml"/>
<xi:include href="section_case-studies-tenant-data.xml"/>
</chapter>

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,25 +0,0 @@
<!-- The master of this file is in openstack-manuals repository, file
doc/common/entities/openstack.ent.
Any changes to the master file will override changes in other
repositories. -->
<!-- Some useful entities borrowed from HTML -->
<!ENTITY ndash "&#x2013;">
<!ENTITY mdash "&#x2014;">
<!ENTITY nbsp "&#160;">
<!ENTITY times "&#215;">
<!ENTITY hellip "&#133;">
<!-- Useful for describing APIs in the User Guide -->
<!ENTITY COPY '<command xmlns="http://docbook.org/ns/docbook">COPY</command>'>
<!ENTITY GET '<command xmlns="http://docbook.org/ns/docbook">GET</command>'>
<!ENTITY HEAD '<command xmlns="http://docbook.org/ns/docbook">HEAD</command>'>
<!ENTITY PUT '<command xmlns="http://docbook.org/ns/docbook">PUT</command>'>
<!ENTITY POST '<command xmlns="http://docbook.org/ns/docbook">POST</command>'>
<!ENTITY DELETE '<command xmlns="http://docbook.org/ns/docbook">DELETE</command>'>
<!ENTITY CHECK '<inlinemediaobject xmlns="http://docbook.org/ns/docbook">
<imageobject>
<imagedata fileref="static/Check_mark_23x20_02.svg"
format="SVG" scale="60"/>
</imageobject>
</inlinemediaobject>'>

View File

@ -1,101 +0,0 @@
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<artifactId>openstack-security-guide</artifactId>
<groupId>org.openstack.docs</groupId>
<version>1.0.0</version>
<packaging>jar</packaging>
<name>OpenStack Security Guide</name>
<properties>
<!-- This is set by Jenkins according to the branch. -->
<release.path.name>local</release.path.name>
<comments.enabled>1</comments.enabled>
</properties>
<!-- ################################################ -->
<!-- USE "mvn clean generate-sources" to run this POM -->
<!-- ################################################ -->
<build>
<plugins>
<plugin>
<groupId>com.rackspace.cloud.api</groupId>
<artifactId>clouddocs-maven-plugin</artifactId>
<version>2.1.4</version>
<executions>
<execution>
<id>security-guide</id>
<goals>
<goal>generate-webhelp</goal>
</goals>
<phase>generate-sources</phase>
<configuration>
<includes>bk-openstack-security-guide.xml</includes>
<sourceDirectory>.</sourceDirectory>
<glossaryCollection>${basedir}/../glossary/glossary-terms.xml</glossaryCollection>
<webhelpDirname>security-guide</webhelpDirname>
<pdfFilenameBase>security-guide</pdfFilenameBase>
<enableDisqus>${comments.enabled}</enableDisqus>
<enableGoogleAnalytics>1</enableGoogleAnalytics>
<googleAnalyticsId>UA-17511903-1</googleAnalyticsId>
<chapterAutolabel>1</chapterAutolabel>
<sectionAutolabel>0</sectionAutolabel>
<tocSectionDepth>1</tocSectionDepth>
<formalProcedures>0</formalProcedures>
<generateToc>
appendix toc,title
article/appendix nop
article toc,title
book toc,title,figure,table,equation
chapter toc
part toc,title
acknowledgements toc,title
preface toc
qandadiv toc
qandaset toc
reference toc,title
section toc
set toc,title
</generateToc>
<pageWidth>7.44in</pageWidth>
<pageHeight>9.68in</pageHeight>
<doubleSided>1</doubleSided>
<omitCover>1</omitCover>
</configuration>
</execution>
</executions>
<configuration>
<canonicalUrlBase>http://docs.openstack.org/security-guide/content</canonicalUrlBase>
<branding>openstack</branding>
<showXslMessages>true</showXslMessages>
</configuration>
</plugin>
</plugins>
</build>
<profiles>
<profile>
<id>Rackspace Research Repositories</id>
<activation>
<activeByDefault>true</activeByDefault>
</activation>
<repositories>
<repository>
<id>rackspace-research</id>
<name>Rackspace Research Repository</name>
<url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>rackspace-research</id>
<name>Rackspace Research Repository</name>
<url>http://maven.research.rackspacecloud.com/content/groups/public/</url>
</pluginRepository>
</pluginRepositories>
</profile>
</profiles>
</project>

View File

@ -1,37 +0,0 @@
Roadmap for Security Guide
==========================
This file is stored with the source to offer ideas for what to work on.
Put your name next to a task if you want to work on it and put a WIP
review up on http://review.openstack.org.
Latest update: May 20, 2014
.. contents:: :local:
To do tasks
-----------
- Add OpenStack Security Notes (OSSN) as appendix
- Update for Icehouse - this guide is continually released, but hasn't
had updates for latest releases of OpenStack
- Document SSL everywhere (it's there but does it need updating for latest
releases?)
- Document Hyper-V considerations and Windows security
- Document key management - Barbican is not yet integrated so must link
to their documentation externally and document Kite and other solutions
- Standardize diagrams
- Add audience information; who is this book intended for
Ongoing tasks
-------------
- Ensure it meets conventions and standards
- Triage incoming bugs in openstack-manuals project
- Continually update with latest release information relevant to security
Wishlist tasks
--------------
- Replace all individual client commands (like keystone, nova) with openstack
client commands

View File

@ -1,26 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="acknowledgements">
<?dbhtml stop-chunking?>
<title>Acknowledgments</title>
<para>The OpenStack Security Group would like to acknowledge contributions
from the following organizations that were instrumental in making this
book possible. The organizations are:</para>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="900" contentwidth="600"
fileref="static/book-sprint-all-logos.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/book-sprint-all-logos.png"
format="PNG" scalefit="1" width="90%"/>
</imageobject>
</inlinemediaobject>
</para>
</section>

View File

@ -1,139 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="api-endpoint-configuration-recommendations">
<?dbhtml stop-chunking?>
<title>API endpoint configuration recommendations</title>
<section xml:id="api-endpoint-configuration-recommendations-internal-api-communications">
<title>Internal API communications</title>
<para>
OpenStack provides both public facing and private API
endpoints. By default, OpenStack components use the publicly
defined endpoints. The recommendation is to configure these
components to use the API endpoint within the proper security
domain.</para>
<para>
Services select their respective API endpoints based on the
OpenStack service catalog. These services might not
obey the listed public or internal API endpoint
values. This can lead to internal management traffic being
routed to external API endpoints.</para>
<section xml:id="api-endpoint-configuration-recommendations-internal-api-communications-configure-internal-urls-in-identity-service-catalog">
<title>Configure internal URLs in the Identity service catalog</title>
<para>
The Identity service catalog should be aware of your
internal URLs. While this feature is not utilized by
default, it may be leveraged through
configuration. Additionally, it should be forward-compatible
with expected changes once this behavior becomes the
default.</para>
<para>To register an internal URL for an endpoint:</para>
<screen><prompt>$</prompt> <userinput>keystone endpoint-create \
--region RegionOne \
--service-id=1ff4ece13c3e48d8a6461faebd9cd38f \
--publicurl='https://public-ip:8776/v1/%(tenant_id)s' \
--internalurl='https://management-ip:8776/v1/%(tenant_id)s' \
--adminurl='https://management-ip:8776/v1/%(tenant_id)s'</userinput></screen>
</section>
<section xml:id="api-endpoint-configuration-recommendations-internal-api-communications-configure-applications-for-internal-urls">
<title>Configure applications for internal URLs</title>
<para>
You can force some services to use specific API
endpoints. Therefore, it is recommended that each OpenStack
service communicating to the API of another service must be
explicitly configured to access the proper internal API
endpoint.</para>
<para>
Each project may present an inconsistent way of defining
target API endpoints. Future releases of OpenStack seek to
resolve these inconsistencies through consistent use of the
Identity service catalog.</para>
<section xml:id="api-endpoint-configuration-recommendations-internal-api-communications-configure-applications-for-internal-urls-configuration-example-1">
<title>Configuration example #1: nova</title>
<programlisting language="ini">[DEFAULT]
cinder_catalog_info='volume:cinder:internalURL'
glance_protocol='https'
neutron_url='https://neutron-host:9696'
neutron_admin_auth_url='https://neutron-host:9696'
s3_host='s3-host'
s3_use_ssl=True</programlisting>
</section>
<section xml:id="api-endpoint-configuration-recommendations-internal-api-communications-configure-applications-for-internal-urls-configuration-example-2">
<title>Configuration example #2: cinder</title>
<programlisting language="ini">glance_host='https://glance-server'</programlisting>
</section>
</section>
</section>
<section xml:id="api-endpoint-configuration-recommendations-paste-and-middleware">
<title>Paste and middleware</title>
<para>
Most API endpoints and other HTTP services in OpenStack use
the Python Paste Deploy library. From a security perspective,
this library enables manipulation of the request filter
pipeline through the application's configuration. Each element
in this chain is referred to as
<emphasis>middleware</emphasis>. Changing the order of filters
in the pipeline or adding additional middleware might have
unpredictable security impact.</para>
<para>
Commonly, implementers add middleware to extend OpenStack's
base functionality. We recommend implementers make careful
consideration of the potential exposure introduced by the
addition of non-standard software components to their HTTP
request pipeline.</para>
<para>For more information about Paste Deploy, see
<link xlink:href="http://pythonpaste.org/deploy/">http://pythonpaste.org/deploy/</link>.
</para>
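<para>To make the pipeline concept concrete, the following sketch
shows the shape of a Paste Deploy pipeline as it might appear in
an Object Storage proxy configuration (the exact filters depend on
your deployment); each name is a middleware filter applied to
requests in order:</para>
<programlisting language="ini">[pipeline:main]
# Requests pass through each filter, left to right, before
# reaching the proxy-server application at the end.
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server</programlisting>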
</section>
<section xml:id="api-endpoint-configuration-recommendations-api-endpoint-process-isolation-and-policy">
<title>API endpoint process isolation and policy</title>
<para>
You should isolate API endpoint processes, especially
those that reside within the public security domain, as
much as possible. Where deployments allow, API endpoints
should be deployed on separate hosts for increased
isolation.</para>
<section xml:id="api-endpoint-configuration-recommendations-api-endpoint-process-isolation-and-policy-namespaces">
<title>Namespaces</title>
<para>
Many operating systems now provide compartmentalization
support. Linux supports namespaces to assign processes into
independent domains. Other parts of this guide cover system
compartmentalization in more detail.</para>
</section>
<section xml:id="api-endpoint-configuration-recommendations-api-endpoint-process-isolation-and-policy-network-policy">
<title>Network policy</title>
<para>
Because API endpoints typically bridge multiple security
domains, you must pay particular attention to the
compartmentalization of the API processes. See <xref
linkend="security-boundaries-and-threats-bridging-security-domains"/>
for additional information in this area.</para>
<para>
With careful modeling, you can use network ACLs and IDS
technologies to enforce explicit point to point
communication between network services. As a critical cross
domain service, this type of explicit enforcement works well
for OpenStack's message queue service.</para>
<para>
To enforce policies, you can configure services, host-based
firewalls (such as iptables), local policy (SELinux or
AppArmor), and optionally global network policy.</para>
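<para>As a sketch of such point-to-point enforcement (the message
queue port and the client address below are assumptions for
illustration), a host-based firewall on the message queue host
could accept AMQP traffic only from known service hosts:</para>
<screen><prompt>#</prompt> <userinput>iptables -A INPUT -p tcp --dport 5672 -s 192.0.2.10 -j ACCEPT</userinput>
<prompt>#</prompt> <userinput>iptables -A INPUT -p tcp --dport 5672 -j DROP</userinput></screen>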
</section>
<section xml:id="api-endpoint-configuration-recommendations-api-endpoint-process-isolation-and-policy-mandatory-access-controls">
<title>Mandatory access controls</title>
<para>
You should isolate API endpoint processes from each other
and from other processes on a machine. Access to the
configuration of those processes should be restricted not
only by Discretionary Access Controls, but also through
Mandatory Access Controls. The goal of these enhanced
access controls is to aid in the containment of API
endpoint security breaches. With mandatory access controls,
such breaches are severely limited in the resources they
can access, and earlier alerting on such events is
provided.</para>
</section>
</section>
</section>

View File

@ -1,56 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-api-endpoints">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a private government cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would address endpoint configuration to secure their private and public clouds. Alice's cloud is not publicly accessible, but she is still concerned about securing the endpoints against improper use. Bob's cloud, being public, must take measures to reduce the risk of attacks by external adversaries.</para>
<section xml:id="case-studies-api-endpoints-alice-private-cloud">
<title>Alice's private cloud</title>
<para>
Alice's organization requires that the security architecture
protect the access to the private endpoints, so she elects to
use Apache with TLS enabled and HAProxy for load balancing in
front of the web service. As Alice's organization has
implemented its own certificate authority, she configures the
services within both the guest and management security domains
to use these certificates. Since Alice's OpenStack deployment
exists entirely on a network disconnected from the Internet, she
makes sure to remove all default CA bundles that contain
external public CA providers to ensure the OpenStack services
only accept client certificates issued by her agency's CA. As
she is using HAProxy, Alice configures SSL offloading on her
load balancer, and a virtual server IP (VIP) on the load
balancer with the http to https redirection policy to her API
endpoint systems.
</para>
<para>Alice has registered all of the services in the
Identity service's catalog, using the internal URLs for access
by internal services. She has installed host-based intrusion
detection (HIDS) to monitor the security events on the
endpoints. On the hosts, Alice also ensures that the API
services are confined to a network namespace while confirming
that there is a robust SELinux profile applied to the services.
</para>
</section>
<section xml:id="case-studies-api-endpoints-bob-public-cloud">
<title>Bob's public cloud</title>
<para>
Bob must also protect the access to the public and private
endpoints, so he elects to use the Apache TLS proxy on both
public and internal services. On the public services, he has
configured the certificate key files with certificates signed
by a well-known Certificate Authority. He has used his
organization's self-signed CA to sign certificates in the
internal services on the Management network. Bob has
registered his services in the Identity service's catalog,
using the internal URLs for access by internal services. Bob's
public cloud runs services on SELinux, which he has configured
with a mandatory access control policy to reduce the impact of
any publicly accessible services that may be compromised. He
has also configured the endpoints with a host-based IDS.
</para>
</section>
</section>

View File

@ -1,90 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-compliance">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a private government cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would address common compliance requirements. The preceding chapter refers to a wide variety of compliance certifications and standards. Alice will address compliance in a private cloud, while Bob will be focused on compliance for a public cloud.</para>
<section xml:id="case-studies-compliance-alice-private-cloud">
<title>Alice's private cloud</title>
<para>Alice is building an OpenStack private cloud for the
United States government, specifically to provide elastic
compute environments for signal processing. Alice has
researched government compliance requirements, and has
identified that her private cloud will be required to certify
against FISMA and follow the FedRAMP accreditation process,
which is required for all federal agencies, departments and
contractors to become a Certified Cloud Provider (CCP). In this
particular scenario for signal processing, the FISMA controls
required will most likely be FISMA High, which indicates
possible "severe or catastrophic adverse effects" should the
information system become compromised. In addition to these
FISMA controls, Alice must ensure her private cloud is FedRAMP
certified, as this is a requirement for all agencies that
currently utilize or host federal information within a cloud
environment.
</para>
<para>
To meet these strict government regulations Alice undertakes a
number of activities. Scoping of requirements is particularly
important due to the volume of controls that must be
implemented, which will be defined in NIST Publication 800-53.
</para>
<para>
As the U.S. Department of Defense is involved, Security
Technical Implementation Guides (STIGs) will come into play,
which are the configuration standards for DOD IA and IA-enabled
devices and systems. Alice notices a number of complications
here as there is no STIG for OpenStack, so she must address
several underlying requirements for each OpenStack service; for
example, the networking SRG and Application SRG will both be
applicable (list of SRGs). Other critical controls include
ensuring that all identities in the cloud use PKI, that SELinux
is enabled, that encryption exists for all wire-level
communications, and that continuous monitoring is in place and
clearly documented. Alice is not concerned with object
encryption, as this will be the tenant's responsibility rather
than the provider.
</para>
<para>
If Alice has adequately scoped and executed these compliance
activities, she may begin the process to become FedRAMP
compliant by hiring an approved third-party auditor. Typically
this process takes up to six months, after which she will
receive an Authority to Operate and can offer OpenStack cloud
services to the government.
</para>
</section>
<section xml:id="case-studies-compliance-bob-public-cloud">
<title>Bob's public cloud</title>
<para>Bob is tasked with compliance for a new OpenStack public
cloud deployment that is focused on providing cloud services to
both small developers and startups, as well as large
enterprises. Bob recognizes that individual developers are not
necessarily concerned with compliance certifications, but to
larger enterprises certifications are critical. Specifically Bob
desires to achieve SOC 1, SOC 2 Security, as well as ISO 27001/2
as quickly as possible. Bob references the Cloud Security
Alliance Cloud Control Matrix (CCM) to assist in identifying
common controls across these three certifications (such as
periodic access reviews, auditable logging and monitoring
services, risk assessment activities, security reviews, etc).
Bob then engages an experienced audit team to conduct a gap
analysis on the public cloud deployment, reviews the results and
fills any gaps identified. Bob works with other team members to
ensure that these security controls and activities are regularly
conducted for a typical audit period (~6-12 months).</para>
<para>At the end of the audit period Bob has arranged for an
external audit team to review in-scope security controls at
randomly sampled points of time over a 6 month period. The audit
team provides Bob with an official report for SOC 1 and SOC 2,
and separately for ISO 27001/2. As Bob has been diligent in
ensuring security controls are in place for his OpenStack public
cloud, there are no additional gaps exposed on the report. Bob
can now provide these official reports to his customers under
NDA, and advertise that he is SOC 1, SOC 2 and ISO 27001/2
compliant on his website.</para>
</section>
</section>

View File

@ -1,47 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-dashboard">
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we
introduced the Alice and Bob case studies where Alice is deploying a
private government cloud and Bob is deploying a public cloud each
with different security requirements. Here we discuss how Alice and
Bob would address dashboard configuration to secure their private and
public clouds. Alice's dashboard is not publicly accessible, but she
is still concerned about securing the administrative dashboard
against improper use. Bob's dashboard, being public, must take
measures to reduce the risk of attacks by external adversaries.
</para>
<section xml:id="case-studies-dashboard-alice">
<title>Alice's cloud running a public application</title>
<para>On the new installation, Alice deploys Apache as the Web
Server Gateway Interface (WSGI) host so that she can take advantage
of the health monitoring and clustering features of HAProxy, and
keep a homogeneous deployment in her environment. She modifies the
<literal>SECURE_PROXY_SSL_HEADER</literal> variable, disables
front-end caching, sets session cookies to httponly, and applies
HSTS protections, which decreases the risk of communication being
downgraded from TLS to HTTP. Such a downgrade would be more
vulnerable to a man-in-the-middle (MITM) attack. As her
application is public facing, Alice creates an internal domain for
dashboard access and issues internal PKI certificates.</para>
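The hardening steps above map onto standard Django settings that horizon reads from its <literal>local_settings.py</literal>. A minimal sketch, assuming Django's conventional setting names; the values are illustrative assumptions, not Alice's actual configuration:

```python
# Hypothetical excerpt from horizon's local_settings.py; setting names
# follow Django conventions, values are illustrative assumptions.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")  # trust TLS termination at the proxy
CSRF_COOKIE_SECURE = True        # only send the CSRF cookie over HTTPS
SESSION_COOKIE_SECURE = True     # only send the session cookie over HTTPS
SESSION_COOKIE_HTTPONLY = True   # keep session cookies out of reach of JavaScript
SECURE_HSTS_SECONDS = 31536000   # HSTS: browsers must use HTTPS for one year
```

Setting the HSTS lifetime to a full year is a common choice once a deployment is confident it will serve HTTPS indefinitely; shorter values are safer while testing.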
<para>Alice disables image uploading in the OpenStack dashboard
(horizon) as application users will not need this feature, and
management users will be uploading purpose-built images. As these
images will be sufficiently large, using the OpenStack Image
service (glance) upload features directly will be more efficient
than doing so through the dashboard, and her team has the required
access to the Image service. She uploads her division's logo onto
the dashboard page, but leaves the rest of the dashboard at its
defaults, taking care not to add programs or features that may
introduce additional vulnerabilities.</para>
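Under these assumptions, disabling dashboard image uploads is a one-line settings change. The setting name below is the one used by horizon releases of this era; it should be verified against the deployed version:

```python
# Hypothetical local_settings.py excerpt (verify against your horizon
# release): hide the dashboard's image upload controls so that images
# are uploaded through the Image service (glance) directly.
HORIZON_IMAGES_ALLOW_UPLOAD = False
```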
</section>
<section xml:id="case-studies-dashboard-bob">
<title>Bob's public cloud</title>
<para>In this case Bob takes the same precautions Alice does, except
that Bob deploys his dashboard as public facing.</para>
</section>
</section>


@ -1,39 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-database">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a private government cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would address database
selection and configuration for their respective private and public
clouds.</para>
<section xml:id="case-studies-database-alice-private-cloud">
<title>Alice's private cloud</title>
<para>Alice's organization has high availability concerns and so she has
selected MySQL as the underlying database for the cloud services. She places
the database on the Management network, utilizing SSL/TLS with mutual
authentication among the services to ensure secure access. Based on the
assumption that external access of the database will not be facilitated, she
installs a certificate signed with the organization's root certificate on the
database and its access endpoints. Alice creates separate user accounts for
each database user then configures the database to use both passwords and
X.509 certificates for authentication. She elects not to use the
<systemitem class="service">nova-conductor</systemitem> sub-service due to the
desire for fine-grained access control policies and audit support.</para>
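For a MySQL-backed service, the mutual-TLS database access Alice configures could be expressed as TLS parameters on the service's SQLAlchemy connection URL. A sketch in which the driver name, file paths, hostname, and credentials are all assumptions:

```python
# Sketch: building a database connection URL that requires TLS with
# mutual (client-certificate) authentication. All paths, hosts, and
# credentials here are hypothetical.
ssl_args = {
    "ssl_ca": "/etc/ssl/certs/org-root-ca.pem",    # CA that signed the server cert
    "ssl_cert": "/etc/ssl/certs/nova-client.pem",  # per-service client certificate
    "ssl_key": "/etc/ssl/private/nova-client.key", # client private key
}
query = "&".join("{}={}".format(k, v) for k, v in sorted(ssl_args.items()))
connection = "mysql+pymysql://nova:secret@db.mgmt.example.org/nova?" + query
```

Each service would carry its own client certificate, which is what lets the database tie connections to individual service accounts.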
</section>
<section xml:id="case-studies-database-bob-public-cloud">
<title>Bob's public cloud</title>
<para>Bob is concerned about strong separation of his tenants' data, so
he has elected to use the PostgreSQL database, known for its stronger security
features. The database resides on the Management network and uses SSL/TLS with
mutual authentication with the services. Since the database is on the
Management network, the database uses certificates signed with the company's
self-signed root certificate. Bob creates separate user accounts for each
database user, and configures the database to use both passwords and X.509
certificates for authentication. He elects not to use the <systemitem
class="service">nova-conductor</systemitem> sub-service due to a desire for
fine-grained access control.</para>
</section>
</section>


@ -1,67 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-identity-management">
<title>Case studies: Identity management</title>
<para>
Earlier in <xref linkend="introduction-to-case-studies"/> we introduced
the Alice and Bob case studies where Alice is deploying a private
government cloud and Bob is deploying a public cloud each with
different security requirements. Here we discuss how Alice and Bob would address
configuration of the OpenStack Identity service. Alice will be
concerned with integration into the existing government
directory services, while Bob will need to provide access to the
public.
</para>
<section xml:id="case-studies-identity-management-alice-private-cloud">
<title>Alice's private cloud</title>
<para>
Alice's enterprise has a well-established directory service
with two-factor authentication for all users. She configures
the Identity service to support an external authentication
service supporting authentication with government-issued
access cards. She also uses an external LDAP server to provide
role information for the roles that are integrated with the
access control policy. Due to FedRAMP compliance requirements,
Alice implements two-factor authentication on the management
network for all administrator access.
</para>
<para>
Alice also deploys the dashboard to manage many aspects of the
cloud. She deploys the dashboard with HSTS to ensure that only
HTTPS is used. The dashboard resides within an internal
subdomain of the private network domain name system.
</para>
<para>
Alice decides to use SPICE instead of VNC for the virtual
console. She wants to take advantage of the emerging
capabilities in SPICE.
</para>
</section>
<section xml:id="case-studies-identity-management-bob-public-cloud">
<title>Bob's public cloud</title>
<para>
Because Bob must support authentication for the general
public, he decides to use authentication based on a user name and password.
He has concerns about brute force attacks
attempting to crack user passwords, so he also uses an
external authentication extension that throttles the number of
failed login attempts. Bob's management network is separate
from the other networks within his cloud, but can be reached
from his corporate network through SSH. As recommended
earlier, Bob requires administrators to use two-factor
authentication on the Management network to reduce the risk
of compromised administrator passwords.</para>
<para>Bob also deploys the dashboard to manage many aspects of
the cloud. He deploys the dashboard with HSTS to ensure that
only HTTPS is used. He has ensured that the dashboard is
deployed on a second-level domain due to the limitations of the
same-origin policy. He also disables
<option>HORIZON_IMAGES_ALLOW_UPLOAD</option> to prevent resource
exhaustion.</para>
<para>Bob decides to use VNC for his virtual console for its
maturity and security features.</para>
</section>
</section>


@ -1,21 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-instance-isolation">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a private government cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would ensure that their instances are properly isolated. First we consider hypervisor selection, and then techniques for hardening QEMU and applying mandatory access controls.</para>
<section xml:id="case-studies-instance-isolation-alice-private-cloud">
<title>Alice's private cloud</title>
<para>Alice chooses Xen for the hypervisor in her cloud due to a strong internal knowledge base and a desire to use the Xen security modules (XSM) for fine-grained policy enforcement.</para>
<para>Alice is willing to apply a relatively large amount of resources to software packaging and maintenance. She will use these resources to build a highly customized version of QEMU that has many components removed, thereby reducing the attack surface. She will also ensure that all compiler hardening options are enabled for QEMU. Alice accepts that these decisions will increase long-term maintenance costs.</para>
<para>Alice writes XSM policies (for Xen) and SELinux policies (for Linux domain 0, and device domains) to provide stronger isolation between the instances. Alice also uses the Intel TXT support in Xen to measure the hypervisor launch in the TPM.</para>
</section>
<section xml:id="case-studies-instance-isolation-bob-public-cloud">
<title>Bob's public cloud</title>
<para>Bob is very concerned about instance isolation since the users in a public cloud represent anyone with a credit card, meaning they are inherently untrusted. Bob has just started hiring the team that will deploy the cloud, so he can tailor his candidate search for specific areas of expertise. With this in mind, Bob chooses a hypervisor based on its technical features, certifications, and community support. KVM has an EAL 4+ common criteria rating, with a labeled security protection profile (LSPP) to provide added assurance for instance isolation. This, combined with the strong support for KVM within the OpenStack community drives Bob's decision to use KVM.</para>
    <para>Bob weighs the added cost of repackaging QEMU and decides that he cannot commit those resources to the project. Fortunately, his Linux distribution has already enabled the compiler hardening options, so he decides to use this QEMU package. Finally, Bob leverages sVirt to manage the SELinux policies associated with the virtualization stack.</para>
</section>
</section>


@ -1,50 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-instance-management">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a private government cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would architect their clouds with respect to instance entropy, scheduling instances, trusted images, and instance migrations.</para>
<section xml:id="case-studies-instance-management-alice-private-cloud">
<title>Alice's private cloud</title>
<para>
Earlier in <xref linkend="case-studies-management-interfaces-alice-private-cloud"/>,
Alice issued a Request for Proposal (RFP) to major hardware
vendors that outlined her performance and form factor needs.
This RFP includes the requirement for a processor architecture
with rdrand support (currently Ivy Bridge or Haswell). When the
hardware has been delivered and is being configured, Alice will
use the entropy-gathering daemon (egd) in libvirt to ensure
sufficient entropy and the ability to feed that entropy to
instances. She also enables 'trusted compute pools' for boot
time attestation of the image that will be compared to a hash
from the 'golden images.' She configures the
<filename>.bash_profile</filename> to log all commands, and
sends those to the event monitoring collector. As users are
expected to only have access to the application, and not the
instance behind it, Alice installs a host intrusion detection
system (HIDS) agent on the instance as well to monitor and
export system events, and also ensures her internal public
certificate is installed into the certificate store on the
system. Alice is also aware that a side effect of this
architecture is that her team will be expected to manage
all of the instances in the environment.
</para>
</section>
<section xml:id="case-studies-instance-management-bob-public-cloud">
<title>Bob's public cloud</title>
    <para>Bob is aware that entropy will be a concern for some of his customers, such as those in the financial industry. However, due to the added cost and complexity, Bob has decided to forgo integrating hardware entropy into the first iteration of his cloud. He notes hardware entropy as a fast-follow improvement for the second generation of his cloud architecture.</para>
<para>Bob is interested in ensuring that customers receive a high quality of service. He is concerned that providing excess explicit user control over instance scheduling could negatively impact the quality of service. As a result, he disables this feature. Bob provides images in the cloud from a known trusted source for users to use. Additionally, he allows users to upload their own images. However, users generally cannot share their images. This helps prevent a user from sharing a malicious image, which could negatively impact the security of other users in the cloud.</para>
<para>
For migrations, Bob wants to enable secure instance migrations
in order to support rolling upgrades with minimal user
downtime. Bob ensures that all migrations occur on an isolated
VLAN. He plans to defer implementing encrypted migrations
until this is better supported in <command>nova</command>
client tools. As a result, he makes a note to track this carefully
and switch to encrypted migrations as soon as possible.
</para>
</section>
</section>


@ -1,75 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-management-interfaces">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Previously we discussed typical OpenStack management interfaces and associated backplane issues. We will now approach these issues by returning to the Alice and Bob case studies
(see <xref linkend="introduction-to-case-studies"/>) where Alice is deploying a government cloud and Bob is deploying a public cloud, each with different security requirements. In this section, we will look into how both Alice and Bob will address:</para>
<itemizedlist><listitem>
<para>Cloud administration</para>
</listitem>
<listitem>
<para>Self service</para>
</listitem>
<listitem>
<para>Data replication and recovery</para>
</listitem>
<listitem>
<para>SLA and security monitoring</para>
</listitem>
</itemizedlist>
<section xml:id="case-studies-management-interfaces-alice-private-cloud">
<title>Alice's private cloud</title>
<para>
Alice's cloud has very strict requirements around management and
interfaces. She builds an SVN repository and uploads the
baseline configuration files there for service teams to modify
and deploy through tool automation. User and group accounts for
each service are created with the principle of least privilege
in mind. Service teams also have their roles defined, and are
given or denied access as well. For example, the development
team is not given production access, however an escalation path
is outlined through Alice's Security Operations Center where
privileges can be added and audited during issues. Additionally,
an update and patching policy is created, with tiered
criticalities for normal updates compared to security or other
critical fixes that may be more time sensitive.
</para>
<para>
For out-of-band management Alice has included a BMC/IPMI version
specification in the Request for Proposal (RFP), which she
submits to approved hardware vendors for quotes and system
specifications. This includes ensuring communication with the
out-of-band management interface can be encrypted with TLS for
both textual and GUI access. She ensures that a network
intrusion detection system (NIDS) will be monitoring the
management security domain that the IPMI traffic will be using.
Depending on usage, which may vary throughout the year, Alice may
set the NIDS to do passive anomaly detection so that packets are
not missed by the NIDS while it is processing.
</para>
<para>
Alice also creates 'golden images' of various systems that
will be used to quickly spin up a new instance. These golden
images already have the service accounts, configuration details,
logging, and other policies set. One is built for each service
type that may be needed, such as a golden image for API
endpoints, hypervisors, network devices, message queue
instances, and any other devices that are commonly used or may
need to be recreated quickly. She then ensures a process exists
for updating golden images on a regular schedule, for reporting
the package versions in each image and which of them the Ansible
configuration management tool will use, and for exporting that
data into the CMDB automatically.
</para>
</section>
<section xml:id="case-studies-management-interfaces-bob-public-cloud">
<title>Bob's public cloud</title>
<para>As a public cloud provider, Bob is concerned with both the continuous availability of management interfaces and the security of transactions to the management interfaces. To that end Bob implements multiple redundant OpenStack API endpoints for the services his cloud will run. Additionally on the public network Bob uses TLS to encrypt all transactions between his customers and his cloud interfaces. To isolate his cloud operations Bob has physically isolated his management, instance migration, and storage networks.</para>
    <para>To ease scaling and reduce management overhead Bob implements a configuration management system. For customer data assurances, Bob offers a backup-as-a-service product, as requirements will vary between customers. Finally, Bob does not provide a "bare metal" service or the ability to schedule an entire node, so to reduce management overhead and increase operational efficiency Bob does not implement any node boot-time security.</para>
</section>
</section>


@ -1,22 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-messaging">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a private government cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would address
security concerns around the messaging service.</para>
  <para>The message queue is a critical piece of infrastructure that supports a number of OpenStack services but is most strongly associated with the Compute service. Due to the nature of the message queue service, Alice and Bob have similar security concerns. One of the larger concerns that remains is that many systems have access to this queue and there is no way for a consumer of the queue messages to verify which host or service placed the messages on the queue. An attacker who is able to successfully place messages on the queue is able to create and delete VM instances, attach the block storage of any tenant, and perform a myriad of other malicious actions. There are a number of solutions anticipated in the near future, with several proposals for message signing and encryption making their way through the OpenStack development process.
</para>
<section xml:id="case-studies-messaging-alice-private-cloud">
<title>Alice's private cloud</title>
<para>In this case, Alice's controls are the same as Bob's controls, which are described below.</para>
</section>
<section xml:id="case-studies-messaging-bob-public-cloud">
<title>Bob's public cloud</title>
    <para>Bob assumes the infrastructure or networks underpinning the Compute service could become compromised, therefore he recognizes the importance of hardening the system by restricting access to the message queue. In order to accomplish this task Bob deploys his RabbitMQ servers with TLS and X.509 client authentication for access control. Hardening activities assist in limiting the capabilities of a malicious user who has compromised the system by disallowing queue access, provided that this user does not have valid credentials to override the controls.</para>
<para>Additionally, Bob adds strong network ACL rulesets to enforce which endpoints can communicate with the message servers. This second control provides some additional assurance should the other protections fail.</para>
</section>
</section>


@ -1,47 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-monitoring-and-logging">
<?dbhtml stop-chunking?>
<title>Case studies</title>
  <para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a private government cloud and Bob is deploying a public cloud, each with different security requirements. Here we discuss how Alice and Bob would address monitoring and logging in a public versus a private cloud. In both instances, time synchronization and a centralized store of logs become extremely important for performing proper assessments and troubleshooting of anomalies. Just collecting logs is not very useful; a robust monitoring system must be built to generate actionable events.</para>
<section xml:id="case-studies-monitoring-and-logging-alice-private-cloud">
<title>Alice's private cloud</title>
<para>
As Alice is building out a new cloud within an existing
organization, she is able to leverage several existing tools
inside the existing business unit to help monitor and log her
environment. She assigns a resource to the Security Operations
Center (SOC) to monitor and respond to alerts coming from the
new infrastructure. She uses a currently existing Security
Event and Incident Management (SEIM) solution, and configures
secure logging to the SEIM event collector. Alice and the SOC
analyst build the SEIM views so that logs are correlated by
type, and trigger alerts on unexpected or “interesting”
events, such as a successful login by a user immediately after
a string of failed login attempts within a given timeframe.
The SOC analyst is also given escalation protocols and contact
information so that when a specific event occurs, the analyst
will escalate to the proper resource and coordinate
communications about the event.
</para>
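The "successful login after a string of failures" alert described above can be sketched as a simple correlation rule over a time-ordered login event stream. The thresholds, window, and data shapes below are assumptions for illustration, not the syntax of any particular SEIM product:

```python
from datetime import timedelta

def brute_force_alerts(events, threshold=5, window=timedelta(minutes=10)):
    """Flag successful logins preceded by `threshold` or more failures
    from the same user within `window`. `events` is a time-ordered list
    of (timestamp, user, success) tuples -- an illustrative correlation
    rule, not a SEIM product API."""
    alerts = []
    failures = {}  # user -> timestamps of recent failed attempts
    for ts, user, success in events:
        recent = [t for t in failures.get(user, []) if ts - t <= window]
        if success:
            if len(recent) >= threshold:
                alerts.append((ts, user))
            failures[user] = []  # reset the counter after a successful login
        else:
            failures[user] = recent + [ts]
    return alerts
```

In a real SEIM this logic would be written in the tool's rule language; the point is only that the alert correlates two event types within a time window.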
<para>
Alice ensures that only her resource in the SOC has access to
SEIM feeds from her systems as well as the database back-end.
She also gives the service teams read-only access to separate
views of the SEIM feeds so that they can perform
compliance-related duties such as regular log review and creating
tickets for tracking and other engagement. Finally, based on the default
reporting that is currently built out, Alice may choose to
hire a consultant to ensure specific events are captured for
compliance purposes, or that specific events trigger the
proper SEIM workflows.
</para>
</section>
<section xml:id="case-studies-monitoring-and-logging-bob-public-cloud">
<title>Bob's public cloud</title>
<para>When it comes to logging, as a public cloud provider, Bob is interested in the activities for situational awareness as well as compliance. In the aspect of compliance, as a provider, Bob is subject to adherence to various rules and regulations to include activities such as providing timely, relevant logs or reports to customers to meet the requirements of their compliance programs. With that in mind, Bob configures all of his instances, nodes, and infrastructure devices to perform time synchronization with an external, validated time device. Additionally, Bob's team has built a Django based web application for his customers to perform self-service log retrieval from the SIEM tool. Bob also uses this SIEM tool along with a robust set of alerts and integration with his CMDB to provide operational awareness to both customers and cloud administrators.</para>
</section>
</section>


@ -1,61 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-networking">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced
the Alice and Bob case studies where Alice is deploying a private government
cloud and Bob is deploying a public cloud each with different
security requirements. Here we discuss how Alice and Bob would address
providing networking services to the user.</para>
<section xml:id="case-studies-networking-alice-private-cloud">
<title>Alice's private cloud</title>
<para>Alice picks nova-network due to its stability. She
understands a migration from nova-network to OpenStack
Networking (neutron) will not be easy, and keeps a blueprint for
the migration that is refreshed every OpenStack release. This
will ensure that Alice and her team are up-to-date with
OpenStack Networking features even though they are focusing on
nova-network, and will allow the migration plan to mature with
Alice's experience with OpenStack to help enable a smooth
transition.
</para>
<para>Alice configures nova-network with the VLAN network manager
so that each tenant will have its own virtual network, keeping
in mind that VLAN trunking will stress the environment and that
the scaling limit is 4096 tenants. She is aware that all traffic
will pass through the virtual networking appliance, however she
is comfortable with this given the hardware positioned to
support the VLAN trunking across the entire environment.
Additionally, as the nova-network hypervisor enforces the
security group rules with iptables, invalid traffic will not be
an issue. To ensure this, she designates a quarterly review of
the security group rules for each VLAN. She understands there is
no overlapping IP space and has ensured the deployment systems
have separate IP ranges with room for growth until the migration
to OpenStack Networking occurs.
</para>
</section>
<section xml:id="case-studies-networking-bob-public-cloud">
<title>Bob's public cloud</title>
<para>A major business driver for Bob is to provide advanced
networking services to his customers. Bob's customers would like
to deploy multi-tiered application stacks, either existing
enterprise applications or newly deployed ones. Since Bob's
public cloud is a multi-tenant enterprise service, he chooses
overlay networking for L2 isolation in this environment. Another
aspect of Bob's cloud is self service, where customers can
provision available networking services as needed. These
networking services encompass L2 networks, L3 routing, network
<glossterm>ACL</glossterm> and NAT. It is important that
per-tenant quotas be implemented in this environment.</para>
<para>An added benefit of utilizing OpenStack Networking is that
when new advanced networking services become available, they can
easily be provided to the end customers.</para>
</section>
</section>


@ -1,33 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-pki-and-certificate-management">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case study where Alice is deploying a government cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would address deployment of PKI certification authorities (CA) and certificate management.</para>
<section xml:id="case-studies-pki-and-certificate-management-alice-private-cloud">
<title>Alice's private cloud</title>
<para>After looking through the secure communication controls of
both FedRAMP/FISMA and HIPAA, Alice decides to have all the
systems use TLS 1.1 or greater, with export ciphers, the RC4
encryption algorithm, and all versions of SSL disabled. She
registers the website through a certificate signing company and
validates her identity and workplace to get a public
certificate. This is then added to the web application so that
incoming clients will be able to use TLS. Internally, she
configures a PKI infrastructure to failover across availability
zones, and bundles the trust certificate into each golden
image's certificate store so that new images will be able to use
the certificate upon creation. She also notes in the golden
image update policy that the certificate will need to be rotated
in the image and on running instances before the expiration
date. She also validates that HSTS is enabled on all web servers
and other protections outlined in the Dashboard chapter.</para>
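Alice's policy (TLS 1.1 or greater, with export ciphers, RC4, and all SSL versions disabled) can be expressed with Python's standard <literal>ssl</literal> module. A server-side context sketch; the exact cipher string is an assumption, not Alice's configuration:

```python
import ssl

# Server-side TLS context reflecting the policy above: no SSL, no
# TLS 1.0, and no RC4/export/anonymous cipher suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_1        # refuses all SSL and TLS 1.0
ctx.set_ciphers("HIGH:!RC4:!EXPORT:!aNULL:!eNULL")  # drop weak suites
```

Equivalent directives exist for Apache (<literal>SSLProtocol</literal>, <literal>SSLCipherSuite</literal>) and other TLS terminators; the policy, not the mechanism, is the point.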
</section>
<section xml:id="case-studies-pki-and-certificate-management-bob-public-cloud">
<title>Bob's public cloud</title>
<para>Bob is architecting a public cloud and needs to ensure that the publicly facing OpenStack services are using certificates issued by a major public CA. Bob acquires certificates for his public OpenStack services and configures the services to use PKI and TLS and includes the public CAs in his trust bundle for the services. Additionally, Bob also wants to further isolate the internal communications amongst the services within the management security domain. Bob contacts the team within his organization that is responsible for managing his organization's PKI and issuance of certificates using their own internal CA. Bob obtains certificates issued by this internal CA and configures the services that communicate within the management security domain to use these certificates and configures the services to only accept client certificates issued by his internal CA.</para>
</section>
</section>


@ -1,41 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-system-documentation">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a government cloud and Bob is deploying a public cloud each with different security requirements. Here we discuss how Alice and Bob would address their system documentation requirements. The documentation suggested above includes hardware and software records, network diagrams, and system configuration details.</para>
<section xml:id="case-studies-system-documentation-alice-private-cloud">
<title>Alice's private cloud</title>
<para>As Alice needs detailed documentation to satisfy FISMA and
FedRAMP requirements, she implements Microsoft System Center
due to its established auditing capabilities to support FedRAMP
artifact creation, including capturing hardware, firmware, and
software details. Architecture docs are created that clearly
define the components, services, and data flows, with supporting
materials listing the details of those services including
processes, protocols, and ports used. These documents are then
stored on a secured file share, allowing authenticated access
for service and architecture teams to reference.</para>
<para>Additionally, the security domains are clearly highlighted
on each document, and asset groups are categorized per the NIST
Risk Management Framework. Specifically, Alice will call out the
fact that several services cross security domains, such as the
API endpoints crossing the Public and Management domains, the
Identity data being served from a Federated entity crossing
from a system she does not manage to her Management domain, the
Database service crossing both Data and Guest domains, and
hypervisor crossing Management, Guest, and Public domains. She
will then be able to dictate additional controls that ensure
and reinforce the trust level of each domain. For example, the
application will be exposed to the Internet, and therefore data
coming through that will initially be untrusted before it is
moved through to the data domain and into the database.</para>
</section>
<section xml:id="case-studies-system-documentation-bob-public-cloud">
<title>Bob's public cloud</title>
<para>In this case, Bob will approach these steps the same as Alice.</para>
</section>
</section>


@@ -1,82 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="case-studies-tenant-data">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>
Earlier in <xref linkend="introduction-to-case-studies"/> we introduced the Alice and Bob case studies where Alice is deploying a private government cloud and Bob is deploying a public cloud each with different security requirements. Here we dive
into their particular tenant data privacy
requirements. Specifically, we will look into how Alice and Bob
both handle tenant data, data destruction, and data
encryption.
</para>
<section xml:id="case-studies-tenant-data-alice-private-cloud">
<title>Alice's private cloud</title>
<para>
As stated during the introduction to Alice's case study, data
protection is of an extremely high priority. She needs to
ensure that a compromise of one tenant's data does not cause
      loss of other tenant data. She also has strict regulatory
      requirements mandating documentation of data destruction
      activities. Alice does this using the following:
</para>
<itemizedlist>
<listitem>
<para>Establishing procedures to sanitize tenant data when
a program or project ends.</para>
</listitem>
<listitem>
        <para>Tracking the destruction of both the tenant data and
metadata through ticketing in a CMDB.</para>
</listitem>
<listitem><para>For Volume storage:</para>
<itemizedlist>
<listitem>
<para>Physical server issues</para>
</listitem>
<listitem>
<para>To provide secure ephemeral instance storage,
Alice implements qcow2 files on an encrypted
filesystem.</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
</section>
<section xml:id="case-studies-tenant-data-bob-public-cloud">
<title>Bob's public cloud</title>
<para>
As stated during the introduction to Bob's case study, tenant
privacy is of an extremely high priority. In addition to the
requirements and actions Bob will take to isolate tenants from
one another at the infrastructure layer, Bob also needs to
provide assurances for tenant data privacy. Bob does this
using the following:
</para>
<itemizedlist>
<listitem>
<para>Establishing procedures to sanitize customer data
when a customer churns.</para>
</listitem>
<listitem>
        <para>Tracking the destruction of both the customer data and
metadata through ticketing in a CMDB.</para>
</listitem>
<listitem>
<para>For Volume storage:</para>
<itemizedlist>
<listitem>
<para>Physical server issues</para>
</listitem>
<listitem>
<para>To provide secure ephemeral instance storage, Bob
          implements qcow2 files on an encrypted filesystem.</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
</section>
</section>


@@ -1,285 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="certification-and-compliance-statements">
<?dbhtml stop-chunking?>
<title>Certification and compliance statements</title>
  <para>Compliance and security are not mutually exclusive, and must be
    addressed together. OpenStack deployments are unlikely to satisfy
    compliance requirements without security hardening. The listing
    below provides an OpenStack architect with foundational knowledge and
guidance to achieve compliance against commercial and government
certifications and standards.</para>
<section
xml:id="certification-and-compliance-statements-commerical-standards">
<title>Commercial standards</title>
<para>For commercial deployments of OpenStack, it is recommended
      that SOC 1/2 combined with ISO 27001/2 be considered as a
starting point for OpenStack certification activities. The
required security activities mandated by these certifications
facilitate a foundation of security best practices and common
control criteria that can assist in achieving more stringent
compliance activities, including government attestations and
certifications.</para>
<para>After completing these initial certifications, the remaining
certifications are more deployment specific. For example, clouds
processing credit card transactions will need PCI-DSS, clouds
storing health care information require HIPAA, and clouds within
      the federal government may require FedRAMP/FISMA and ITAR
      certifications.</para>
<section
xml:id="certification-and-compliance-statements-commerical-standards-soc-1-ssae-16-isae-3402">
<title>SOC 1 (SSAE 16) / ISAE 3402</title>
<para>Service Organization Controls (SOC) criteria are defined
by the <link xlink:href="http://www.aicpa.org/">American
Institute of Certified Public Accountants</link> (AICPA).
SOC controls assess relevant financial statements and
assertions of a service provider, such as compliance with the
Sarbanes-Oxley Act. SOC 1 is a replacement for Statement on
Auditing Standards No. 70 (SAS 70) Type II report. These
controls commonly include physical data centers in
scope.</para>
<para>There are two types of SOC 1 reports:</para>
<itemizedlist>
<listitem>
<para>Type 1 - report on the fairness of the presentation of
management's description of the service organization's
system and the suitability of the design of the controls
to achieve the related control objectives included in the
description as of a specified date.</para>
</listitem>
<listitem>
<para>Type 2 - report on the fairness of the presentation of
management's description of the service organization's
system and the suitability of the design and operating
effectiveness of the controls to achieve the related
control objectives included in the description throughout
            a specified period.</para>
</listitem>
</itemizedlist>
<para>For more details see the <link
xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC1Report.aspx"
>AICPA Report on Controls at a Service Organization Relevant
to User Entities' Internal Control over Financial
Reporting</link>.</para>
</section>
<section
xml:id="certification-and-compliance-statements-commerical-standards-soc-2">
<title>SOC 2</title>
      <para>Service Organization Controls (SOC) 2 is a
        self-attestation of controls that affect the security,
        availability, and processing integrity of the systems a
        service organization uses to process users' data and the
        confidentiality and privacy of information processed by these
        systems. Examples of users are those responsible for governance
of the service organization; customers of the service
organization; regulators; business partners; suppliers and
others who have an understanding of the service organization
and its controls.</para>
<para>There are two types of SOC 2 reports:</para>
<itemizedlist>
<listitem>
<para>Type 1 - report on the fairness of the presentation of
management's description of the service organization's
system and the suitability of the design of the controls
to achieve the related control objectives included in the
description as of a specified date.</para>
</listitem>
<listitem>
<para>Type 2 - report on the fairness of the presentation of
management's description of the service organization's
system and the suitability of the design and operating
effectiveness of the controls to achieve the related
control objectives included in the description throughout
a specified period.</para>
</listitem>
</itemizedlist>
<para>For more details see the <link
xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC2Report.aspx"
>AICPA Report on Controls at a Service Organization Relevant
to Security, Availability, Processing Integrity,
Confidentiality or Privacy</link>.</para>
</section>
</section>
<section
xml:id="certification-and-compliance-statements-soc-3">
<title>SOC 3</title>
<para>Service Organization Controls (SOC) 3 is a trust services
report for service organizations. These reports are designed to
meet the needs of users who want assurance on the controls at a
service organization related to security, availability,
processing integrity, confidentiality, or privacy but do not
have the need for or the knowledge necessary to make effective
use of a SOC 2 Report. These reports are prepared using the
AICPA/Canadian Institute of Chartered Accountants (CICA) Trust
Services Principles, Criteria, and Illustrations for Security,
Availability, Processing Integrity, Confidentiality, and
Privacy. Because they are general use reports, SOC 3 Reports can
be freely distributed or posted on a website as a seal.</para>
<para>For more details see the <link
xlink:href="http://www.aicpa.org/InterestAreas/FRC/AssuranceAdvisoryServices/Pages/AICPASOC3Report.aspx"
>AICPA Trust Services Report for Service
Organizations</link>.</para>
</section>
<section
xml:id="certification-and-compliance-statements-iso-27001-2">
<title>ISO 27001/2</title>
<para>The ISO/IEC 27001/2 standards replace BS7799-2, and are
specifications for an Information Security Management System
(ISMS). An ISMS is a comprehensive set of policies and processes
that an organization creates and maintains to manage risk to
information assets. These risks are based upon the
confidentiality, integrity, and availability (CIA) of user
information. The CIA security triad has been used as a
foundation for much of the chapters in this book.</para>
<para>For more details see <link
xlink:href="http://www.27000.org/iso-27001.htm">ISO
27001</link>.</para>
</section>
<section
xml:id="certification-and-compliance-statements-hipaa-hitech">
<title>HIPAA / HITECH</title>
<para>The Health Insurance Portability and Accountability Act
(HIPAA) is a United States congressional act that governs the
collection, storage, use and destruction of patient health
records. The act states that Protected Health Information (PHI)
must be rendered "unusable, unreadable, or indecipherable" to
      must be rendered "unusable, unreadable, or indecipherable" to
      unauthorized persons and that encryption for data 'at-rest' and
      'in-flight' should be addressed.</para>
    <para>HIPAA is not a certification, but rather a guide for protecting
      healthcare data. As with PCI-DSS, the most important
      concern for both PCI and HIPAA is preventing a breach of credit
      card information or health data. In the event of a
      breach, the cloud provider will be scrutinized for compliance
      with PCI and HIPAA controls. If proven compliant, the provider
      can be expected to immediately implement remedial controls,
      fulfill breach notification responsibilities, and make significant
      expenditures on additional compliance activities. If not
      compliant, the cloud provider can expect on-site audit teams,
      fines, potential loss of merchant ID (PCI), and massive
      reputation impact.</para>
<para>Users or organizations that possess PHI must support HIPAA
requirements and are HIPAA covered entities. If an entity
intends to use a service, or in this case, an OpenStack cloud
that might use, store or have access to that PHI, then a
      Business Associate Agreement (BAA) must be signed. The BAA is a
contract between the HIPAA covered entity and the OpenStack
service provider that requires the provider to handle that PHI
in accordance with HIPAA requirements. If the service provider
does not handle the PHI, such as with security controls and
hardening, then they are subject to HIPAA fines and
penalties.</para>
<para>OpenStack architects interpret and respond to HIPAA
statements, with data encryption remaining a core practice.
Currently this would require any protected health information
contained within an OpenStack deployment to be encrypted with
industry standard encryption algorithms. Potential future
OpenStack projects such as object encryption will facilitate
HIPAA guidelines for compliance with the act.</para>
<para>For more details see the <link
xlink:href="https://www.cms.gov/Regulations-and-Guidance/HIPAA-Administrative-Simplification/HIPAAGenInfo/downloads/HIPAALaw.pdf"
>Health Insurance Portability And Accountability
Act</link>.</para>
<section
xml:id="certification-and-compliance-statements-hipaa-hitech-pci-dss">
<title>PCI-DSS</title>
<para>The Payment Card Industry Data Security Standard (PCI DSS)
is defined by the Payment Card Industry Standards Council, and
created to increase controls around card holder data to reduce
credit card fraud. Annual compliance validation is assessed by
an external Qualified Security Assessor (QSA) who creates a
Report on Compliance (ROC), or by a Self-Assessment
Questionnaire (SAQ) dependent on volume of card-holder
transactions.</para>
      <para>OpenStack deployments that store, process, or
        transmit payment card details are in scope for the PCI-DSS.
        All OpenStack components that are not properly segmented from
        systems or networks that handle payment data fall under the
        guidelines of the PCI-DSS. Segmentation in the context of
        PCI-DSS does not mean multi-tenant isolation, but rather physical
        separation (host/network).</para>
<para>For more details see <link
xlink:href="https://www.pcisecuritystandards.org/security_standards/"
>PCI security standards</link>.</para>
</section>
</section>
<section xml:id="certification-and-compliance-statements-government-standards">
<title>Government standards</title>
<section
xml:id="certification-and-compliance-statements-government-standards-fedramp">
<title>FedRAMP</title>
<para>"The <link xlink:href="http://www.fedramp.gov">Federal
Risk and Authorization Management Program</link> (FedRAMP)
is a government-wide program that provides a standardized
approach to security assessment, authorization, and continuous
monitoring for cloud products and services". NIST 800-53 is
the basis for both FISMA and FedRAMP which mandates security
controls specifically selected to provide protection in cloud
        environments. FedRAMP can be extremely intensive, both in the
        specificity of its security controls and in the volume of
        documentation required to meet government standards.</para>
<para>For more details see <link
xlink:href="http://www.gsa.gov/portal/category/102371"
>http://www.gsa.gov/portal/category/102371</link>.</para>
</section>
<section
xml:id="certification-and-compliance-statements-government-standards-itar">
<title>ITAR</title>
<para>The International Traffic in Arms Regulations (ITAR) is a
set of United States government regulations that control the
export and import of defense-related articles and services on
the United States Munitions List (USML) and related technical
data. ITAR is often approached by cloud providers as an
"operational alignment" rather than a formal certification.
This typically involves implementing a segregated cloud
environment following practices based on the NIST 800-53
framework, as per FISMA requirements, complemented with
additional controls restricting access to "U.S. Persons" only
and background screening.</para>
<para>For more details see <link
xlink:href="https://www.pmddtc.state.gov/regulations_laws/itar.html"
>https://www.pmddtc.state.gov/regulations_laws/itar.html</link>.</para>
</section>
<section
xml:id="certification-and-compliance-statements-government-standards-fisma">
<title>FISMA</title>
<para>The Federal Information Security Management Act requires
that government agencies create a comprehensive plan to
implement numerous government security standards, and was
        enacted within the E-Government Act of 2002. FISMA outlines a
        process that, utilizing multiple NIST publications, prepares
        an information system to store and process government
        data.</para>
<para>This process is broken apart into three primary
categories:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">System
categorization:</emphasis> The information system will
receive a security category as defined in Federal
Information Processing Standards Publication 199 (FIPS
199). These categories reflect the potential impact of
system compromise.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Control
            selection:</emphasis> Based upon system security category as
defined in FIPS 199, an organization utilizes FIPS 200 to
identify specific security control requirements for the
information system. For example, if a system is
categorized as "moderate" a requirement may be introduced
to mandate "secure passwords".</para>
</listitem>
<listitem>
<para><emphasis role="bold">Control tailoring:</emphasis>
Once system security controls are identified, an OpenStack
architect will utilize NIST 800-53 to extract tailored
control selection. For example, specification of what
constitutes a "secure password".</para>
</listitem>
</itemizedlist>
</section>
</section>
</section>


@@ -1,66 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="compliance-activities">
<?dbhtml stop-chunking?>
<title>Compliance activities</title>
<para>There are a number of standard activities that will greatly assist with the compliance process. In this chapter we outline some of the most common compliance activities. These are not specific to OpenStack, however we provide references to relevant sections in this book as useful context.</para>
<section xml:id="compliance-activities-information-security-management-system-isms">
<title>Information Security Management system (ISMS)</title>
<para>An Information Security Management System (ISMS) is a comprehensive set of policies and
processes that an organization creates and maintains to manage risk to information assets. The
most common ISMS for cloud deployments is <link
xlink:href="http://www.27000.org/iso-27001.htm">ISO/IEC 27001/2</link>, which creates a
solid foundation of security controls and practices for achieving more stringent compliance
certifications. This standard was updated in 2013 to reflect the growing use of cloud services
and places more emphasis on measuring and evaluating how well an organization's ISMS is
performing.</para>
</section>
<section xml:id="compliance-activities-risk-assessment">
<title>Risk assessment</title>
<para>A risk assessment framework identifies risks within an organization or service, and specifies ownership of these risks, along with implementation and mitigation strategies. Risks apply to all areas of the service, from technical controls to environmental disaster scenarios and human elements, for example a malicious insider (or rogue employee). Risks can be rated using a variety of mechanisms, for example likelihood vs impact. An OpenStack deployment risk assessment can include control gaps that are described in this book.</para>
</section>
<section xml:id="compliance-activities-access-and-log-reviews">
<title>Access and log reviews</title>
    <para>Periodic access and log reviews are required to ensure authentication, authorization, and accountability in a service deployment. Specific guidance for OpenStack on these topics is discussed in depth in the logging section.</para>
</section>
<section xml:id="compliance-activities-backup-and-disaster-recovery">
<title>Backup and disaster recovery</title>
<para>Disaster Recovery (DR) and Business Continuity Planning (BCP) plans are common requirements for ISMS and compliance activities. These plans must be periodically tested as well as documented. In OpenStack key areas are found in the management security domain, and anywhere that single points of failure (SPOFs) can be identified. See the section on secure backup and recovery for additional details.</para>
</section>
<section xml:id="compliance-activities-security-training">
<title>Security training</title>
    <para>Annual role-specific security training is a mandatory requirement for almost all compliance certifications and attestations. To optimize the effectiveness of security training, a common method is to provide role-specific training, for example to developers, operational personnel, and non-technical employees. Additional cloud security or OpenStack security training based on this hardening guide would be ideal.</para>
</section>
<section xml:id="compliance-activities-security-reviews">
<title>Security reviews</title>
<para>As OpenStack is a popular open source project, much of the
codebase and architecture has been scrutinized by individual
contributors, organizations and enterprises. This can be
advantageous from a security perspective, however the need for
security reviews is still a critical consideration for service
providers, as deployments vary, and security is not always the
primary concern for contributors. A comprehensive security
review process may include architectural review, threat
modelling, source code analysis and penetration testing. There
      are many publicly available techniques and recommendations for
      conducting security reviews. A well-tested example
is the <link xlink:href="http://www.microsoft.com/security/sdl/process/release.aspx">Microsoft SDL</link>,
created as part of the Microsoft
Trustworthy Computing Initiative.</para>
</section>
<section xml:id="compliance-activities-vulnerability-management">
<title>Vulnerability management</title>
<para>Security updates are critical to any IaaS deployment, whether private or public. Vulnerable systems expand attack surfaces, and are obvious targets for attackers. Common scanning technologies and vulnerability notification services can help mitigate this threat. It is important that scans are authenticated and that mitigation strategies extend beyond simple perimeter hardening. Multi-tenant architectures such as OpenStack are particularly prone to hypervisor vulnerabilities, making this a critical part of the system for vulnerability management. See the section on instance isolation for additional details.</para>
</section>
<section xml:id="compliance-activities-data-classification">
<title>Data classification</title>
    <para>Data Classification defines a method for classifying and handling information, often to protect customer information from accidental or deliberate theft, loss, or inappropriate disclosure. Most commonly this involves classifying information as sensitive or non-sensitive, or as personally identifiable information (PII). Depending on the context of the deployment, various other classification criteria may be used (government, health care, and so on). The underlying principle is that data classifications are clearly defined and in use. The most common protective mechanisms include industry standard encryption technologies. See the data security section for additional details.</para>
</section>
<section xml:id="compliance-activities-exception-process">
<title>Exception process</title>
<para>An exception process is an important component of an ISMS. When certain actions are not compliant with security policies that an organization has defined, they must be logged. Appropriate justification, description and mitigation details need to be included, and signed off by appropriate authorities. OpenStack default configurations may vary in meeting various compliance criteria, areas that fail to meet compliance requirements should be logged, with potential fixes considered for contribution to the community.</para>
</section>
</section>


@@ -1,142 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="compliance-overview">
<?dbhtml stop-chunking?>
<title>Compliance overview</title>
<section xml:id="compliance-overview-security-principles">
<title>Security principles</title>
<para>Industry standard security principles provide a baseline for compliance certifications and
attestations. If these principles are considered and referenced throughout an OpenStack
deployment, certification activities may be simplified.</para>
<variablelist>
<varlistentry>
<term>Layered defenses</term>
<listitem>
<para>Identify where risks exist in a cloud architecture and apply controls to mitigate
          the risks. In areas of significant concern, layered defenses provide multiple
complementary controls to manage risk down to an acceptable level. For example, to ensure adequate
isolation between cloud tenants, we recommend hardening QEMU, using a hypervisor with
SELinux support, enforcing mandatory access control policies, and reducing the overall
attack surface. The foundational principle is to harden an area of concern with multiple
layers of defense such that if any one layer is compromised, other layers will exist to
offer protection and minimize exposure.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Fail securely</term>
<listitem>
<para>In the case of failure, systems should be configured to fail into a closed secure
          state. For example, TLS certificate verification should fail closed by severing the
          network connection if the certificate's common name (CN) does not match the server's
          DNS name. Software often fails open in this situation, allowing the connection to
          proceed without a CN match, which is less secure and not recommended.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Least privilege</term>
<listitem>
<para>Only the minimum level of access for users and system services is granted. This
          access is based upon role, responsibility, and job function. This security principle of
least privilege is written into several international government security policies, such
as NIST 800-53 Section AC-6 within the United States.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Compartmentalize</term>
<listitem>
          <para>Systems should be segregated in such a way that if one machine, or system-level
service, is compromised the security of the other systems will remain intact.
Practically, the enablement and proper usage of SELinux helps accomplish this
goal.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Promote privacy</term>
<listitem>
<para>The amount of information that can be gathered about a system and its users should
be minimized.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Logging capability</term>
<listitem>
<para>Appropriate logging is implemented to monitor for unauthorized use, incident
response and forensics. It is highly recommended that selected audit subsystems be
Common Criteria certified, which provides non-attestable event records in most
countries.</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="compliance-overview-CommonControlFramework">
<title>Common control frameworks</title>
    <para>The following is a list of control frameworks that an organization can use to build its
      security controls:</para>
<variablelist>
<varlistentry>
<term><link
xlink:href="https://cloudsecurityalliance.org/media/news/csa-releases-new-ccm-caiq-v3-0-1/"
>Cloud Security Alliance (CSA) Common Control Matrix (CCM)</link></term>
<listitem>
<para>The CSA CCM is specifically designed to provide fundamental security principles to
guide cloud vendors and to assist prospective cloud customers in assessing the overall
          security risk of a cloud provider. The CSA CCM provides a controls framework that is
aligned across 16 security domains. The foundation of the Cloud Controls Matrix rests on
its customized relationship to other industry standards, regulations, and controls
frameworks such as: ISO 27001:2013, COBIT 5.0, PCI:DSS v3, AICPA 2014 Trust Service
Principles and Criteria and augments internal control direction for service organization
control reports attestations.</para>
<para>The CSA CCM strengthens existing information security control environments by
enabling the reduction of security threats and vulnerabilities in the cloud, provides
standardized security and operational risk management, and seeks to normalize security
expectations, cloud taxonomy and terminology, and security measures implemented in the
cloud.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><link xlink:href="http://www.27000.org/iso-27001.htm">ISO 27001/2:2013</link></term>
<listitem>
          <para>The ISO 27001 Information Security standard and certification has been used for
            many years to evaluate and distinguish an organization's alignment with information
            security best practices. The standard comprises two parts: mandatory clauses that
            define the Information Security Management System (ISMS), and Annex A, which contains a list
of controls organized by domain.</para>
<para>The information security management system preserves the confidentiality, integrity,
and availability of information by applying a risk management process and gives
confidence to interested parties that risks are adequately managed.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><link
xlink:href="http://www.aicpa.org/InterestAreas/InformationTechnology/Resources/TrustServices/Pages/Trust%20Services%20Principles%E2%80%94An%20Overview.aspx"
>Trusted Security Principles </link></term>
<listitem>
<para>Trust Services are a set of professional attestation and advisory services based on
a core set of principles and criteria that address the risks and opportunities of
            IT-enabled systems and privacy programs. Commonly known as the SOC audits, the
            principles define the requirements, and it is the organization's responsibility to
            define the controls that meet them.</para>
</listitem>
</varlistentry>
</variablelist>
</section>
<section xml:id="compliance-overview-AuditReference">
<title>Audit reference</title>
    <para>OpenStack is innovative in many ways; however, the process used to audit an OpenStack
      deployment is fairly common. Auditors evaluate a control by two criteria: whether the
      control is designed effectively and whether it is operating effectively. How an auditor
      evaluates whether a control is designed and operating effectively is discussed
      in <xref linkend="understanding-the-audit-process"/>.</para>
<para>The most common frameworks for auditing and evaluating a cloud deployment include the
previously mentioned ISO 27001/2 Information Security standard, ISACA's Control Objectives
for Information and Related Technology (COBIT) framework, Committee of Sponsoring
Organizations of the Treadway Commission (COSO), and Information Technology Infrastructure
Library (ITIL). It is very common for audits to include areas of focus from one or more of
these frameworks. Fortunately there is a lot of overlap between the frameworks, so an
organization that adopts one will be in a good position come audit time.</para>
</section>
</section>


@@ -1,54 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="compute-hardening-deployments">
<?dbhtml stop-chunking?>
<title>Hardening Compute deployments</title>
<para>
One of the main security concerns with any OpenStack deployment is
the security and controls around sensitive files, such as the
<filename>nova.conf</filename> file. Normally contained in the
<filename>/etc</filename> directory, this configuration file
contains many sensitive options including configuration details
and service passwords. All such sensitive files should be given
strict file level permissions, and monitored for changes through
file integrity monitoring (FIM) tools such as iNotify or Samhain.
These utilities will take a hash of the target file in a known
good state, and then periodically take a new hash of the file
    and compare it to the known-good hash. An alert can be created
    if the file is found to have been modified unexpectedly.
</para>
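The hash-and-compare cycle performed by FIM tools can be sketched in a few lines of Python (an illustration only, not a substitute for a real FIM tool such as Samhain; baseline storage and alerting are left out):

```python
import hashlib


def file_hash(path):
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        # Read in chunks so large files do not need to fit in memory.
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def integrity_ok(path, known_good_hash):
    """Compare the current hash of a file against its known-good baseline."""
    return file_hash(path) == known_good_hash
```

A real deployment would record the baseline hashes out-of-band, re-check them periodically, and raise an alert whenever the comparison fails.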
<para>
    The permissions of a file can be examined by changing into the
    directory that contains the file and running the
    <command>ls -lh</command> command. This will show the permissions,
    owner, and group that have access to the file, as well as other
    information such as the last time the file was modified and when
    it was created.
</para>
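Strict file-level permissions can also be verified programmatically; the following Python sketch (the helper name is ours) flags any file that grants read, write, or execute access to group or others:

```python
import os
import stat


def owner_only(path):
    """Return True if the file grants no permissions to group or others."""
    mode = os.stat(path).st_mode
    # S_IRWXG / S_IRWXO cover all read/write/execute bits for group/other.
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0
```

Such a check could be run periodically against sensitive files like <filename>nova.conf</filename> alongside integrity monitoring.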
<para>
The <filename>/var/lib/nova</filename> directory is used to hold
details about the instances on a given Compute host. This
directory should be considered sensitive as well, with strictly
enforced file permissions. Additionally, it should be backed up
regularly as it contains information and metadata for the
instances associated with that host.
</para>
<para>
    If your deployment does not require full virtual machine backups,
    we recommend excluding the
    <filename>/var/lib/nova/instances</filename> directory, as it will
    be as large as the combined space of the VMs running on that node.
    If your deployment does require full VM backups, you will need to
    ensure this directory is backed up successfully.
</para>
<para>
Monitoring is a critical component of IT infrastructure, and we
recommend the
<link xlink:href="http://docs.openstack.org/kilo/config-reference/content/section_nova-logs.html">
Compute logfiles</link> be monitored and analyzed so that
meaningful alerts can be created.
</para>
</section>


@ -1,111 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="compute-how-to-select-virtual-consoles">
<?dbhtml stop-chunking?>
<title>How to select virtual consoles</title>
<para>One decision a cloud architect will need to make regarding
Compute service configuration is whether to use <glossterm
baseform="Virtual Network Computing (VNC)">VNC</glossterm> or
<glossterm>SPICE</glossterm>. Below we provide some details on
the differences between these options.</para>
<section xml:id="compute-how-to-select-virtual-consoles-virtual-network-computer-vnc">
    <title>Virtual Network Computing (VNC)</title>
    <para>OpenStack can be configured to provide remote desktop console access to instances for tenants and/or administrators using the Virtual Network Computing (VNC) protocol.</para>
<section xml:id="compute-how-to-select-virtual-consoles-virtual-network-computer-vnc-capabilites">
<title>Capabilities</title>
<itemizedlist><listitem>
<para>
The OpenStack dashboard (horizon) can provide a VNC
console for instances directly on the web page using the
HTML5 noVNC client. This requires the <systemitem
class="service">nova-novncproxy</systemitem> service to
bridge from the public network to the management
network.
</para>
</listitem>
<listitem>
<para>The <command>nova</command> command-line utility can
return a URL for the VNC console for access by the
<systemitem class="service">nova</systemitem> Java VNC
client. This requires the <systemitem
class="service">nova-xvpvncproxy</systemitem> service to
bridge from the public network to the management
network.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="compute-how-to-select-virtual-consoles-virtual-network-computer-vnc-security-considerations">
<title>Security considerations</title>
<itemizedlist><listitem>
<para>The <systemitem
class="service">nova-novncproxy</systemitem> and
<systemitem class="service">nova-xvpvncproxy</systemitem>
services by default open public-facing ports that are
token authenticated.</para>
</listitem>
<listitem>
<para>By default, the remote desktop traffic is not encrypted.
TLS can be enabled to encrypt the VNC traffic. Please refer to <link linkend="introduction-to-ssl-tls">Introduction to TLS and SSL</link> for appropriate recommendations.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="compute-how-to-select-virtual-consoles-virtual-network-computer-vnc-references">
<title>Bibliography</title>
<para>blog.malchuk.ru, OpenStack VNC Security. 2013. <link xlink:href="http://blog.malchuk.ru/2013/05/21/47">Secure Connections to VNC ports</link></para>
<para>OpenStack Mailing List, [OpenStack] nova-novnc SSL configuration - Havana. 2014. <link xlink:href="http://lists.openstack.org/pipermail/openstack/2014-February/005357.html">OpenStack nova-novnc SSL Configuration</link></para>
        <para>Redhat.com/solutions, Using SSL Encryption with OpenStack nova-novncproxy. 2014. <link xlink:href="https://access.redhat.com/solutions/514143">OpenStack nova-novncproxy SSL encryption</link></para>
</section>
</section>
<section xml:id="compute-how-to-select-virtual-consoles-simple-protocol-for-independent-computing-environments-spice">
<title>Simple Protocol for Independent Computing Environments (SPICE)</title>
<para>As an alternative to VNC, OpenStack provides remote desktop access to guest virtual machines using the Simple Protocol for Independent Computing Environments (SPICE) protocol.</para>
<section xml:id="compute-how-to-select-virtual-consoles-simple-protocol-for-independent-computing-environments-spice-capabilities">
<title>Capabilities</title>
<itemizedlist><listitem>
<para>
SPICE is supported by the OpenStack dashboard
(horizon) directly on the instance web page. This
requires the <systemitem
class="service">nova-spicehtml5proxy</systemitem>
service.
</para>
</listitem>
<listitem>
          <para>The <command>nova</command> command-line utility can return a URL for the SPICE console for access by a SPICE HTML5 client.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="compute-how-to-select-virtual-consoles-simple-protocol-for-independent-computing-environments-spice-limitations">
<title>Limitations</title>
<itemizedlist><listitem>
          <para>Although SPICE has many advantages over VNC, the spice-html5 browser integration currently does not expose them. To take advantage of SPICE features such as multi-monitor support and USB pass-through, we recommend that administrators use a standalone SPICE client within the management network.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="compute-how-to-select-virtual-consoles-simple-protocol-for-independent-computing-environments-spice-security-considerations">
<title>Security considerations</title>
<itemizedlist>
<listitem>
<para>The <systemitem class="service">nova-spicehtml5proxy</systemitem>
service by default opens public-facing ports that are token
authenticated.</para>
</listitem>
<listitem>
          <para>The functionality and integration are still evolving. We will assess the features in the next release and make recommendations.</para>
</listitem>
<listitem>
          <para>As is the case for VNC, at this time we recommend using SPICE from the management network, and limiting use to a few individuals.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="compute-how-to-select-virtual-consoles-simple-protocol-for-independent-computing-environments-spice-references">
<title>Bibliography</title>
<para>OpenStack Configuration Reference - Havana. SPICE Console. <link xlink:href="http://docs.openstack.org/havana/config-reference/content/spice-console.html">SPICE Console</link></para>
<para>bugzilla.redhat.com, Bug 913607 - RFE: Support Tunnelling SPICE over websockets. 2013. <link xlink:href="https://bugzilla.redhat.com/show_bug.cgi?id=913607">Red Hat bug 913607</link></para>
</section>
</section>
</section>


@ -1,93 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="compute-vulnerability-awareness">
<title>Vulnerability awareness</title>
<section xml:id="compute-vulernability-awareness-vmt">
<title>OpenStack vulnerability management team</title>
<para>
We recommend keeping up to date on security issues and advisories
as they are published. The OpenStack Security Portal
(<link xlink:href="https://security.openstack.org/">
https://security.openstack.org/</link>) is the central portal
where advisories, notices, meetings, and processes can be
coordinated. Additionally, the OpenStack Vulnerability Management
Team (VMT) portal (<link
xlink:href="https://security.openstack.org/#openstack-vulnerability-management-team">
https://security.openstack.org/#openstack-vulnerability-management-team
</link>) coordinates remediation within the OpenStack project, as
well as the process of investigating reported bugs which are
responsibly disclosed (privately) to the VMT, by marking the bug
as 'This bug is a security vulnerability'. Further detail is
outlined in the VMT process page
(<link xlink:href="https://security.openstack.org/vmt-process.html#process">
https://security.openstack.org/vmt-process.html#process</link>)
and results in an OpenStack Security Advisory or OSSA. This OSSA
      outlines the issue and the fix, as well as linking to both the
      original bug and the location where the patch is
      hosted.
</para>
</section>
<section xml:id="compute-vulernability-awareness-ossn">
<title>OpenStack security notes</title>
<para>
      Reported security bugs that are found to be the result of a
      misconfiguration, or that are not strictly part of OpenStack, are
      drafted into OpenStack Security Notes or OSSNs. These include
      configuration issues, such as ensuring Identity provider mappings,
      as well as non-OpenStack but critical issues, such as the
      Bash bug (Shellshock), GHOST, or VENOM vulnerabilities that affect
      the platform OpenStack utilizes. The current set of OSSNs is in the Security
Note wiki (<link
xlink:href="https://wiki.openstack.org/wiki/Security_Notes">
https://wiki.openstack.org/wiki/Security_Notes</link>).
</para>
</section>
<section xml:id="compute-vulernability-awareness-openstack-dev-mailinglist">
    <title>OpenStack-dev mailing list</title>
<para>
      All bugs, OSSAs, and OSSNs are publicly disseminated through the
      openstack-dev mailing list with the [security] topic in the subject
      line. We recommend subscribing to this list, as well as setting up
      mail filtering rules that ensure OSSNs, OSSAs, and other important
      advisories are not missed. The openstack-dev mailing list is
managed through <link
xlink:href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</link>.
The openstack-dev list has a high traffic rate, and filtering is
discussed in the thread <link
xlink:href="http://lists.openstack.org/pipermail/openstack-dev/2013-November/019233.html">
http://lists.openstack.org/pipermail/openstack-dev/2013-November/019233.html</link>.
</para>
</section>
<section xml:id="compute-vulernability-awareness-hypervisor-mailinglists">
    <title>Hypervisor mailing lists</title>
<para>
When implementing OpenStack, one of the core decisions is which
hypervisor to utilize. We recommend being informed of advisories
pertaining to the hypervisor(s) you have chosen. Several common
hypervisor security lists are below:
</para>
<itemizedlist>
<listitem>
<para>Xen: <link xlink:href="http://xenbits.xen.org/xsa/">
http://xenbits.xen.org/xsa/</link>
</para>
</listitem>
<listitem>
        <para>VMware: <link
xlink:href="http://blogs.vmware.com/security/">
http://blogs.vmware.com/security/</link>
</para>
</listitem>
<listitem>
<para>Others (KVM, and more): <link
          xlink:href="http://seclists.org/oss-sec">
http://seclists.org/oss-sec</link>
</para>
</listitem>
</itemizedlist>
</section>
</section>


@ -1,328 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="continuous-systems-management">
<?dbhtml stop-chunking?>
<title>Continuous systems management</title>
<para>A cloud will always have bugs. Some of these will be security
problems. For this reason, it is critically important to be
prepared to apply security updates and general software updates.
This involves smart use of configuration management tools, which
are discussed below. This also involves knowing when an upgrade is
necessary.</para>
<section xml:id="continuous-systems-management-vulnerability-management">
<title>Vulnerability management</title>
<para>For announcements regarding security relevant changes,
subscribe to the <link
xlink:href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-announce"
>OpenStack Announce mailing list</link>. The security
notifications are also posted through the downstream packages,
for example, through Linux distributions that you may be
subscribed to as part of the package updates.</para>
<para>The OpenStack components are only a small fraction of the
software in a cloud. It is important to keep up to date with all
of these other components, too. While certain data sources
will be deployment specific, it is important that a cloud administrator
subscribe to the necessary mailing lists in order to receive notification
of any security updates applicable to the organization's environment.
Often this is as simple as tracking an upstream Linux
distribution.</para>
<note>
<para>OpenStack releases security information through two
channels. <itemizedlist>
<listitem>
<para>OpenStack Security Advisories (OSSA) are created by
the OpenStack Vulnerability Management Team (VMT). They
pertain to security holes in core OpenStack services.
More information on the VMT can be found here: <link
xlink:href="https://wiki.openstack.org/wiki/Vulnerability_Management"
>https://wiki.openstack.org/wiki/Vulnerability_Management</link></para>
</listitem>
<listitem>
<para>OpenStack Security Notes (OSSN) are created by the
OpenStack Security Group (OSSG) to support the work of
the VMT. OSSN address issues in supporting software and
common deployment configurations. They are referenced
throughout this guide. Security Notes are archived at
<link xlink:href="https://launchpad.net/ossn/"
>https://launchpad.net/ossn/</link></para>
</listitem>
</itemizedlist>
</para>
</note>
<section xml:id="continuous-systems-management-vulnerability-management-triage">
<title>Triage</title>
<para>After you are notified of a security update, the next step
is to determine how critical this update is to a given cloud
deployment. In this case, it is useful to have a pre-defined
policy. Existing vulnerability rating systems such as the
        Common Vulnerability Scoring System (CVSS) v2 do not properly
account for cloud deployments.</para>
<para>In this example we introduce a scoring matrix that places
vulnerabilities in three categories: Privilege Escalation,
Denial of Service and Information Disclosure. Understanding
the type of vulnerability and where it occurs in your
infrastructure will enable you to make reasoned response
decisions.</para>
<para>Privilege Escalation describes the ability of a user to
act with the privileges of some other user in a system,
bypassing appropriate authorization checks. A guest user
performing an operation that allows them to conduct
unauthorized operations with the privileges of an
administrator is an example of this type of
vulnerability.</para>
<para>Denial of Service refers to an exploited vulnerability
that may cause service or system disruption. This includes
both distributed attacks to overwhelm network resources, and
single-user attacks that are typically caused through resource
allocation bugs or input induced system failure flaws.</para>
<para>Information Disclosure vulnerabilities reveal information
about your system or operations. These vulnerabilities range
from debugging information disclosure, to exposure of critical
security data, such as authentication credentials and
passwords.</para>
<informaltable rules="all" width="80%">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<tbody>
<tr>
<td><para> </para></td>
<td colspan="4"><para><emphasis>Attacker position /
Privilege level</emphasis></para></td>
</tr>
<tr>
<td><para><emphasis role="bold"> </emphasis></para></td>
<td><para><emphasis role="bold"
>External</emphasis></para></td>
<td><para><emphasis role="bold">Cloud
user</emphasis></para></td>
<td><para><emphasis role="bold">Cloud
admin</emphasis></para></td>
<td><para><emphasis role="bold">Control
plane</emphasis></para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Privilege elevation (3
levels)</emphasis></para></td>
<td><para>Critical</para></td>
<td><para>n/a</para></td>
<td><para>n/a</para></td>
<td><para>n/a</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Privilege elevation (2
levels)</emphasis></para></td>
<td><para>Critical</para></td>
<td><para>Critical</para></td>
<td><para>n/a</para></td>
<td><para>n/a</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Privilege elevation (1
level)</emphasis></para></td>
<td><para>Critical</para></td>
<td><para>Critical</para></td>
<td><para>Critical</para></td>
<td><para>n/a</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Denial of
service</emphasis></para></td>
<td><para>High</para></td>
<td><para>Medium</para></td>
<td><para>Low</para></td>
<td><para>Low</para></td>
</tr>
<tr>
<td><para><emphasis role="bold">Information
disclosure</emphasis></para></td>
<td><para>Critical / high</para></td>
<td><para>Critical / high</para></td>
<td><para>Medium / low</para></td>
<td><para>Low</para></td>
</tr>
</tbody>
</informaltable>
<para>This table illustrates a generic approach to
measuring the impact of a vulnerability based on where it
occurs in your deployment and the effect. For example, a
single level privilege escalation on a Compute API node
potentially allows a standard user of the API to escalate to
have the same privileges as the root user on the node.</para>
<para>We suggest that cloud administrators use this table as a
model to help define which actions to take for the various
security levels. For example, a critical-level security update
might require the cloud to be upgraded quickly whereas a
low-level update might take longer to be completed.</para>
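As an illustration only, such a policy matrix can be encoded as a lookup table for automated triage. The severity labels below mirror the example table above (the key names are ours) and should be adapted to your own policy:

```python
# Severity lookup keyed by (vulnerability class, attacker position).
# Pairs absent from the example matrix are not applicable ("n/a").
SEVERITY = {
    ("privilege_elevation_3_levels", "external"): "Critical",
    ("privilege_elevation_2_levels", "external"): "Critical",
    ("privilege_elevation_2_levels", "cloud_user"): "Critical",
    ("privilege_elevation_1_level", "external"): "Critical",
    ("privilege_elevation_1_level", "cloud_user"): "Critical",
    ("privilege_elevation_1_level", "cloud_admin"): "Critical",
    ("denial_of_service", "external"): "High",
    ("denial_of_service", "cloud_user"): "Medium",
    ("denial_of_service", "cloud_admin"): "Low",
    ("denial_of_service", "control_plane"): "Low",
    ("information_disclosure", "external"): "Critical / high",
    ("information_disclosure", "cloud_user"): "Critical / high",
    ("information_disclosure", "cloud_admin"): "Medium / low",
    ("information_disclosure", "control_plane"): "Low",
}


def triage(vuln_class, attacker_position):
    """Return the policy severity for a reported issue."""
    return SEVERITY.get((vuln_class, attacker_position), "n/a")
```

A triage script built on such a table can, for example, map "Critical" to an expedited upgrade procedure and "Low" to the regular maintenance window.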
</section>
<section xml:id="continuous-systems-management-vulnerability-management-testing-the-updates">
<title>Testing the updates</title>
<para>You should test any update before you deploy it in a
production environment. Typically this requires having a
separate test cloud setup that first receives the update. This
cloud should be as close to the production cloud as possible,
in terms of software and hardware. Updates should be tested
thoroughly in terms of performance impact, stability,
application impact, and more. Especially important is to
verify that the problem theoretically addressed by the update,
such as a specific vulnerability, is actually fixed.</para>
</section>
<section xml:id="continuous-systems-management-vulnerability-management-deploying-the-updates">
<title>Deploying the updates</title>
<para>Once the updates are fully tested, they can be deployed to
the production environment. This deployment should be fully
automated using the configuration management tools described
below.</para>
</section>
</section>
<section xml:id="continuous-systems-management-configuration-management">
<title>Configuration management</title>
<para>A production quality cloud should always use tools to
automate configuration and deployment. This eliminates human
error, and allows the cloud to scale much more rapidly.
Automation also helps with continuous integration and
testing.</para>
<para>When building an OpenStack cloud it is strongly recommended
to approach your design and implementation with a configuration
management tool or framework in mind. Configuration management
allows you to avoid the many pitfalls inherent in building,
managing, and maintaining an infrastructure as complex as
OpenStack. By producing the manifests, cookbooks, or templates
required for a configuration management utility, you are able to
satisfy a number of documentation and regulatory reporting
requirements. Further, configuration management can also
      function as part of your business continuity plan (BCP) and
      disaster recovery (DR) plans, wherein you can rebuild a node or
      service back to a known state in a DR event or after a
      compromise.</para>
<para>Additionally, when combined with a version control system
such as Git or SVN, you can track changes to your environment
      over time and remediate unauthorized changes that may occur.
      For example, if a <filename>nova.conf</filename> file or other
      configuration file falls out of compliance with your standard,
      your configuration management tool can revert or replace the
      file and bring your configuration back into a known state.
      Finally, a configuration management tool can also be used to
      deploy updates, simplifying the security patch process. These
tools have a broad range of capabilities that are useful in this
space. The key point for securing your cloud is to choose a tool
for configuration management and use it.</para>
<para>There are many configuration management solutions; at the
time of this writing there are two in the marketplace that are
robust in their support of OpenStack environments:
<glossterm>Chef</glossterm> and <glossterm>Puppet</glossterm>.
A non-exhaustive listing of tools in this space is provided
below:</para>
<itemizedlist>
<listitem>
<para>Chef</para>
</listitem>
<listitem>
<para>Puppet</para>
</listitem>
<listitem>
<para>Salt Stack</para>
</listitem>
<listitem>
<para>Ansible</para>
</listitem>
</itemizedlist>
<section xml:id="continuous-systems-management-configuration-management-policy-changes">
<title>Policy changes</title>
      <para>Whenever a policy or configuration is changed,
        it is good practice to log the activity and back up a copy of
        the new set. Often, such policies and configurations are
        stored in a version-controlled repository such as Git.</para>
</section>
</section>
<section xml:id="continuous-systems-management-secure-backup-and-recovery">
<title>Secure backup and recovery</title>
    <para>It is important to include backup procedures and policies in
      the overall system security plan. For a good overview of
      OpenStack's backup and recovery capabilities and procedures,
      refer to the <citetitle>OpenStack Operations Guide</citetitle>.</para>
<section xml:id="continuous-systems-management-secure-backup-and-recovery-security-considerations">
<title>Security considerations</title>
<itemizedlist>
<listitem>
<para>Ensure only authenticated users and backup clients
have access to the backup server.</para>
</listitem>
<listitem>
<para>Use data encryption options for storage and
transmission of backups.</para>
</listitem>
<listitem>
          <para>Use dedicated and hardened backup servers. The logs
            for the backup server must be monitored daily and
            accessible by only a few individuals.</para>
</listitem>
<listitem>
<para>Test data recovery options regularly. One of the
things that can be restored from secured backups is the
images. In case of a compromise, the best practice would
be to terminate running instances immediately and then
relaunch the instances from the images in the secured
backup repository.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="continuous-systems-management-secure-backup-and-recovery-references">
<title>References</title>
<itemizedlist>
<listitem>
<para><citetitle>OpenStack Operations Guide</citetitle> on
<link
xlink:href="http://docs.openstack.org/openstack-ops/content/backup_and_recovery.html"
>backup and recovery</link></para>
</listitem>
<listitem>
<para><link
xlink:href="http://www.sans.org/reading_room/whitepapers/backup/security-considerations-enterprise-level-backups_515"
>http://www.sans.org/reading_room/whitepapers/backup/security-considerations-enterprise-level-backups_515</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.music-piracy.com/?p=494"
>OpenStack Security Primer</link>, an entry in the
music piracy blog by a former member of the original
NASA project team that created nova</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="continuous-systems-management-security-auditing-tools">
<title>Security auditing tools</title>
<para>Security auditing tools can complement the configuration
management tools. Security auditing tools automate the process
of verifying that a large number of security controls are
satisfied for a given system configuration. These tools help to
bridge the gap from security configuration guidance
documentation (for example, the STIG and NSA Guides) to a
specific system installation. For example, <link
xlink:href="https://fedorahosted.org/scap-security-guide/"
>SCAP</link> can compare a running system to a pre-defined
profile. SCAP outputs a report detailing which controls in the
profile were satisfied, which ones failed, and which ones were
not checked.</para>
<para>Combining configuration management and security auditing
tools creates a powerful combination. The auditing tools will
highlight deployment concerns. And the configuration management
tools simplify the process of changing each system to address
the audit concerns. Used together in this fashion, these tools
help to maintain a cloud that satisfies security requirements
ranging from basic hardening to compliance validation.</para>
<para>Configuration management and security auditing tools will
introduce another layer of complexity into the cloud. This
complexity brings additional security concerns with it. We view
this as an acceptable risk trade-off, given their security
benefits. Securing the operational use of these tools is beyond
the scope of this guide.</para>
</section>
</section>


@ -1,16 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-cookies">
<?dbhtml stop-chunking?>
<title>Cookies</title>
  <para>Session cookies should be set to HTTPONLY:</para>
<programlisting>SESSION_COOKIE_HTTPONLY = True</programlisting>
  <para>Never configure CSRF or session cookies to have a wild card
    domain with a leading dot. Horizon's session and CSRF cookies
    should be secured when deployed with HTTPS:</para>
<programlisting>CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True</programlisting>
</section>


@ -1,14 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-cross-origin-resource-sharing-cors">
<?dbhtml stop-chunking?>
<title>Cross Origin Resource Sharing (CORS)</title>
<para>Configure your web server to send a restrictive CORS header
with each response, allowing only the dashboard domain and
protocol:</para>
<programlisting>Access-Control-Allow-Origin: https://example.com/</programlisting>
<para>Never allow the wild card origin.</para>
</section>


@ -1,14 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-debug">
<?dbhtml stop-chunking?>
<title>Debug</title>
  <para>We recommend that the <option>DEBUG</option> setting
    be set to <literal>False</literal> in production environments.
    If <option>DEBUG</option> is set to <literal>True</literal>,
    Django will display stack traces and sensitive web server
    state information when exceptions are thrown.</para>
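In the dashboard's local_settings.py, this corresponds to a fragment such as the following (TEMPLATE_DEBUG mirrors DEBUG in horizon's default settings of this era):

```python
# local_settings.py fragment: never run with DEBUG enabled in production.
DEBUG = False
TEMPLATE_DEBUG = DEBUG
```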
</section>


@ -1,73 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-domain-names-upgrades-configuration">
<?dbhtml stop-chunking?>
<title>Domain names, dashboard upgrades, and basic web server configuration</title>
<section xml:id="dashboard-domain-names">
<title>Domain names</title>
<para>Many organizations typically deploy web applications at
subdomains of an overarching organization domain. It is natural
for users to expect a domain of the form
<uri>openstack.example.org</uri>. In this context, there are
often applications which are deployed in the same second-level
namespace. This name structure is convenient and simplifies name
server maintenance.</para>
<para>We strongly recommend deploying dashboard to a
<emphasis>second-level domain</emphasis>, such as
<uri>https://example.com</uri>, rather than deploying
dashboard on a <emphasis>shared subdomain</emphasis> of any level,
for example <uri>https://openstack.example.org</uri> or
<uri>https://horizon.openstack.example.org</uri>. We also
advise against deploying to bare internal domains like
<uri>https://horizon/</uri>. These recommendations are based on the
limitations of browser same-origin-policy.</para>
<para>Recommendations given in this guide cannot effectively guard against
known attacks if you deploy the dashboard in a domain that also hosts
user-generated content, even when this content resides on a separate
sub-domain. User-generated content can consist of scripts, images, or uploads
of any type. Most major web presences, including googleusercontent.com,
    fbcdn.com, github.io, and twimg.com, use this approach to segregate
user-generated content from cookies and security tokens.</para>
<para>If you do not follow this recommendation regarding
second-level domains, avoid a cookie-backed session store and
employ HTTP Strict Transport Security (HSTS). When deployed on
a subdomain, the dashboard's security is equivalent to the least secure
application deployed on the same second-level domain.</para>
</section>
<section xml:id="dashboard-basic-web-server-configuration">
<title>Basic web server configuration</title>
    <para>The dashboard should be deployed as a Web Server Gateway
      Interface (WSGI) application behind an HTTPS proxy such as
Apache or nginx. If Apache is not already in use, we recommend
nginx since it is lightweight and easier to configure
correctly.</para>
<para>When using nginx, we recommend <link
xlink:href="http://docs.gunicorn.org/en/latest/deploy.html"
>gunicorn</link> as the WSGI host with an appropriate number
of synchronous workers. When using Apache, we recommend
<literal>mod_wsgi</literal> to host the dashboard.</para>
</section>
<section xml:id="dashboard-allowed-hosts">
<title>Allowed hosts</title>
<para>Configure the <option>ALLOWED_HOSTS</option> setting with
the domain or domains where the dashboard is available. Failure
to configure this setting (especially if not following the
recommendation above regarding second level domains) opens the
dashboard to a number of serious attacks. Wild card domains
should be avoided.</para>
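A minimal local_settings.py fragment, with a placeholder domain, might look like this:

```python
# local_settings.py fragment: restrict the Host header to the real
# dashboard domain(s); never use a wildcard such as '*'.
ALLOWED_HOSTS = ['dashboard.example.com']
```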
<para>For further details, see the <link
xlink:href="https://docs.djangoproject.com/"
>Django documentation</link>.</para>
</section>
<section xml:id="dashboard-horizon-image-upload">
<title>Horizon image upload</title>
<para>We recommend that implementers <link
xlink:href="http://docs.openstack.org/developer/horizon/topics/deployment.html#file-uploads"
>disable HORIZON_IMAGES_ALLOW_UPLOAD</link> unless they have
implemented a plan to prevent resource exhaustion and denial of
service.</para>
</section>
</section>


@ -1,42 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-front-end-caching-session-back-end">
<?dbhtml stop-chunking?>
<title>Front-end caching and session back end</title>
<section xml:id="dashboard-front-end-caching">
<title>Front-end caching</title>
<para>We do not recommend using front-end caching tools with the
dashboard. The dashboard is rendering dynamic content resulting
directly from OpenStack API requests and front-end caching layers
such as varnish can prevent the correct content from being
displayed. In Django, static media is directly served from Apache
or nginx and already benefits from web host caching.</para>
</section>
<section xml:id="dashboard-session-back-end">
<title>Session back end</title>
<para>The default session back end for horizon
(<literal>django.contrib.sessions.backends.signed_cookies</literal>)
saves user data in signed, but unencrypted cookies stored in the
    browser. Because each dashboard instance is stateless, this
    methodology provides the simplest way to scale the session
    back end.</para>
<para>It should be noted that with this type of implementation
sensitive access tokens will be stored in the browser and will be
transmitted with each request made. The back end ensures the
integrity of session data, even though the transmitted data
is only encrypted by HTTPS.</para>
<para>If your architecture allows it, we recommend using
<literal>django.contrib.sessions.backends.cache</literal> as
your session back end with memcache as the cache. Memcache must
not be exposed publicly, and should communicate over a secured
private channel. If you choose to use the signed cookies
back end, refer to the Django documentation to understand the
security trade-offs.</para>
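A sketch of the recommended configuration in <literal>local_settings.py</literal>, assuming memcached on a private loopback address; note that the cache backend class name varies by Django version (<literal>PyMemcacheCache</literal> is the Django 3.2+ name, older releases used <literal>MemcachedCache</literal>):

```python
# local_settings.py (sketch): cache-based sessions backed by
# memcached. The LOCATION must be a private, non-public address;
# memcached itself performs no authentication.
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
```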
<para>For further details, see the <link
xlink:href="https://docs.djangoproject.com/"
>Django documentation</link>.</para>
</section>
</section>


@@ -1,68 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-https-hsts-xss-ssrf">
<?dbhtml stop-chunking?>
<title>HTTPS, HSTS, XSS, and SSRF</title>
<section xml:id="dashboard-cross-site-scripting-xss">
<title>Cross Site Scripting (XSS)</title>
<para>Unlike many similar systems, the OpenStack dashboard allows the
entire Unicode character set in most fields. This means
developers have less latitude to make escaping mistakes that
open attack vectors for cross-site scripting (XSS).</para>
<para>Dashboard provides tools for developers to avoid creating
XSS vulnerabilities, but they only work if developers use them
correctly. Audit any custom dashboards, paying particular
attention to use of the <literal>mark_safe</literal> function,
use of <literal>is_safe</literal> with
custom template tags, the <literal>safe</literal> template tag, anywhere
auto escape
is turned off, and any JavaScript which might evaluate
improperly escaped data.</para>
</section>
<section xml:id="dashboard-cross-site-request-forgery-csrf">
<title>Cross Site Request Forgery (CSRF)</title>
<para>Django has dedicated middleware for cross-site request forgery (CSRF).
For further details, see the <link xlink:href="https://docs.djangoproject.com/">
Django documentation</link>.</para>
    <para>The OpenStack dashboard is designed to discourage
    developers from introducing cross-site scripting vulnerabilities
    with custom dashboards, as threats can be introduced. Dashboards
that utilize multiple instances of JavaScript should be audited
for vulnerabilities such as inappropriate use of the
<literal>@csrf_exempt</literal> decorator. Any dashboard that
does not follow these recommended security settings should be
carefully evaluated before restrictions are relaxed.</para>
</section>
<section xml:id="dashboard-https">
<title>HTTPS</title>
<para>
Deploy the dashboard behind a secure
<glossterm>HTTPS</glossterm> server by using a valid, trusted
certificate from a recognized certificate authority
(CA). Private organization-issued certificates are only
appropriate when the root of trust is pre-installed in all user
browsers.</para>
<para>Configure HTTP requests to the dashboard domain to redirect
to the fully qualified HTTPS URL.</para>
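A one-line sketch of the corresponding Django setting, assuming TLS is terminated by the web server or proxy in front of the dashboard:

```python
# local_settings.py (sketch): redirect all plain-HTTP requests to
# the HTTPS URL.
SECURE_SSL_REDIRECT = True
```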
</section>
<section xml:id="dashboard-http-strict-transport-security-hsts">
<title>HTTP Strict Transport Security (HSTS)</title>
<para>It is highly recommended to use HTTP Strict Transport
Security (HSTS).</para>
<note>
<para>If you are using an HTTPS proxy in front of your web
server, rather than using an HTTP server with HTTPS
functionality, modify the <literal>SECURE_PROXY_SSL_HEADER</literal>
variable. Refer to the <link
xlink:href="https://docs.djangoproject.com/"
>Django documentation</link> for information about modifying the
<literal>SECURE_PROXY_SSL_HEADER</literal> variable.</para>
</note>
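A hedged sketch of the relevant settings, assuming a TLS-terminating proxy that sets the <literal>X-Forwarded-Proto</literal> header; the one-year HSTS lifetime is an example value:

```python
# local_settings.py (sketch): enable HSTS and mark cookies secure.
# Only set SECURE_PROXY_SSL_HEADER when a trusted proxy strips and
# re-sets the X-Forwarded-Proto header on every request.
SECURE_HSTS_SECONDS = 31536000  # one year, in seconds
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
```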
<para>See the chapter on PKI/SSL Everywhere for more specific
recommendations and server configurations for HTTPS
configurations, including the configuration of HSTS.</para>
</section>
</section>


@@ -1,16 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-secret-key">
<?dbhtml stop-chunking?>
<title>Secret key</title>
<para>The dashboard depends on a shared <option>SECRET_KEY</option>
setting for some security functions. The secret key should be a
randomly generated string at least 64 characters long, which must
be shared across all active dashboard instances. Compromise of this
key may allow a remote attacker to execute arbitrary code. Rotating
this key invalidates existing user sessions and caching. Do not
commit this key to public repositories.</para>
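A minimal sketch of generating such a key with the Python standard library; generate it once at deployment time and distribute the same value to every dashboard instance:

```python
# Sketch: generate a random SECRET_KEY well over 64 characters long.
# Do this once, out of band, and share the result across instances;
# do not generate a fresh key on every process start.
import secrets

SECRET_KEY = secrets.token_urlsafe(64)
```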
</section>


@@ -1,29 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="dashboard-static-media">
<?dbhtml stop-chunking?>
<title>Static media</title>
<para>The dashboard's static media should be deployed to a subdomain
of the dashboard domain and served by the web server. The use of
an external content delivery network (CDN) is also acceptable.
This subdomain should not set cookies or serve user-provided
content. The media should also be served with HTTPS.</para>
<para>Django media settings are documented in the <link
xlink:href="https://docs.djangoproject.com/"
>Django documentation</link>.</para>
<para>Dashboard's default configuration uses <link
xlink:href="http://django-compressor.readthedocs.org/"
>django_compressor</link> to compress and minify CSS and
    JavaScript content before serving it. This process should be
    performed statically before deploying the dashboard, rather than
    relying on the default in-request dynamic compression; the
    resulting files should be copied along with the deployed code or
    to the CDN server.
Compression should be done in a non-production build
environment. If this is not practical, we recommend disabling
resource compression entirely. Online compression dependencies
(less, Node.js) should not be installed on production
machines.</para>
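A one-line sketch of the django_compressor setting that moves compression to build time:

```python
# local_settings.py (sketch): compress static assets at build time
# ("offline") instead of during request handling, so less/Node.js
# are not needed on production machines.
COMPRESS_OFFLINE = True
```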
</section>


@@ -1,99 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="data-encryption">
<?dbhtml stop-chunking?>
<title>Data encryption</title>
<para>The option exists for implementers to encrypt tenant data wherever it is stored on disk or transported over a network, such as the OpenStack volume encryption feature described below. This is above and beyond the general recommendation that users encrypt their own data before sending it to their provider.</para>
  <para>The importance of encrypting data on behalf of tenants is largely related to the risk assumed by a provider that an attacker could access tenant data. There may be regulatory requirements in government, as well as requirements set by policy, private contract, or even case law with regard to private contracts for public cloud providers. We recommend conducting a risk assessment and consulting legal counsel before choosing tenant encryption policies.</para>
  <para>Per-instance or per-object encryption is preferable over, in descending order, per-project, per-tenant, per-host, and per-cloud aggregations. This recommendation is inverse to the complexity and difficulty of implementation. Presently, in some projects it is difficult or impossible to implement encryption at even per-tenant granularity. We recommend that implementors make a best effort to encrypt tenant data.</para>
<para>Often, data encryption relates positively to the ability to reliably destroy tenant and per-instance data, simply by throwing away the keys. It should be noted that in doing so, it becomes of great importance to destroy those keys in a reliable and secure manner.</para>
<para>Opportunities to encrypt data for users are present:</para>
<itemizedlist><listitem>
<para>Object Storage objects</para>
</listitem>
<listitem>
<para>Network data</para>
</listitem>
</itemizedlist>
<section xml:id="data-encryption-block-storage-volume-encryption">
<title>Volume encryption</title>
<para>A volume encryption feature in OpenStack supports privacy on a per-tenant basis. As of the Kilo release, the following features are supported:</para>
<itemizedlist>
<listitem>
<para>Creation and usage of encrypted volume types, initiated through the dashboard or a command line interface</para>
<itemizedlist>
<listitem>
<para>Enable encryption and select parameters such as encryption algorithm and key size</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>Volume data contained within iSCSI packets is encrypted</para>
</listitem>
<listitem>
<para>Supports encrypted backups if the original volume is encrypted</para>
</listitem>
<listitem>
<para>Dashboard indication of volume encryption status. Includes indication that a volume is encrypted, and includes the encryption parameters such as algorithm and key size</para>
</listitem>
<listitem>
<para>Interface with the Key management service through a secure wrapper</para>
<itemizedlist>
<listitem>
<para>Volume encryption is supported by back-end key storage for enhanced security (for example, a Hardware Security Module (HSM) or a KMIP server can be used as a Barbican back-end secret store)</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
</section>
<section xml:id="data-encryption-block-storage-ephemeral-disk-encryption">
<title>Ephemeral disk encryption</title>
<para>An ephemeral disk encryption feature addresses data privacy. The ephemeral disk is a temporary work space used by the virtual host operating system. Without encryption, sensitive user information could be accessed on this disk, and vestigial information could remain after the disk is unmounted. As of the Kilo release, the following ephemeral disk encryption features are supported:</para>
<itemizedlist><listitem>
<para>Creation and usage of encrypted LVM ephemeral disks</para>
<itemizedlist>
<listitem>
<para>Compute configuration enables encryption and specifies encryption parameters such as algorithm and key size</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>Interface with the Key management service through a secure wrapper</para>
<itemizedlist>
<listitem>
<para>Key management service will support data isolation by providing ephemeral disk encryption keys on a per-tenant basis</para>
</listitem>
<listitem>
<para>Ephemeral disk encryption is supported by back-end key storage for enhanced security (for example, an HSM or a KMIP server can be used as a Barbican back-end secret store)</para>
</listitem>
<listitem>
<para>With the Key management service, when an ephemeral disk is no longer needed, simply deleting the key may take the place of overwriting the ephemeral disk storage area</para>
</listitem>
</itemizedlist>
</listitem>
</itemizedlist>
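Assuming a Kilo-era libvirt-based Compute deployment, enabling LVM ephemeral disk encryption might look like the following <filename>nova.conf</filename> sketch; the option names are taken from the <literal>[ephemeral_storage_encryption]</literal> group and the cipher and key size are example values to verify against your release:

```ini
[ephemeral_storage_encryption]
enabled = True
cipher = aes-xts-plain64
key_size = 512
```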
</section>
<section xml:id="data-encryption-object-storage-objects">
<title>Object Storage objects</title>
    <para>The ability to encrypt objects in Object Storage is presently limited to disk-level encryption per node. However, third-party extensions and modules exist for per-object encryption. These modules have been proposed upstream but, as of this writing, have not been formally accepted. Below are some pointers:</para>
<para><link xlink:href="https://github.com/Mirantis/swift-encrypt">https://github.com/Mirantis/swift-encrypt</link></para>
<para><link xlink:href="http://www.mirantis.com/blog/on-disk-encryption-prototype-for-openstack-swift/">http://www.mirantis.com/blog/on-disk-encryption-prototype-for-openstack-swift/</link></para>
</section>
<section xml:id="data-encryption-block-storage-volumes-and-instance-ephemeral-filesystems">
<title>Block Storage volumes and instance ephemeral filesystems</title>
<para>Block Storage supports a variety of mechanisms for supplying mountable volumes. The ability to encrypt volumes on the storage host depends on the service back ends chosen. Some back ends may not support this at all. It is outside the scope of this guide to specify recommendations for each Block Storage back-end driver.</para>
    <para>For performance reasons, many storage protocols are unencrypted. Some protocols, such as iSCSI, can provide authentication and encrypted sessions; we recommend enabling these features.</para>
<para>As both block storage and compute support LVM backed storage, we can easily provide an example applicable to both systems. In deployments using LVM, encryption may be performed against the backing physical volumes. An encrypted block device would be created using the standard Linux tools, with the LVM physical volume (PV) created on top of the decrypted block device using pvcreate. Then, the vgcreate or vgmodify tool may be used to add the encrypted physical volume to an LVM volume group (VG).</para>
</section>
<section xml:id="data-encryption-network-data">
<title>Network data</title>
<para>Tenant data for compute could be encrypted over IPsec or
other tunnels. This is not functionality common or standard in
OpenStack, but is an option available to motivated and
interested implementors.</para>
<para>Likewise, encrypted data will remain encrypted as it is transferred over the network.</para>
</section>
</section>


@@ -1,192 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="data-privacy-concerns">
<?dbhtml stop-chunking?>
<title>Data privacy concerns</title>
<section xml:id="data-privacy-concerns-data-residency">
<title>Data residency</title>
<para>The privacy and isolation of data has consistently been cited as the primary barrier to cloud adoption over the past few years. Concerns over who owns data in the cloud and whether the cloud operator can be ultimately trusted as a custodian of this data have been significant issues in the past.</para>
<para>Numerous OpenStack services maintain data and metadata belonging to tenants or reference tenant information.</para>
<para>Tenant data stored in an OpenStack cloud may include the following items:</para>
<itemizedlist><listitem>
<para>Object Storage objects</para>
</listitem>
<listitem>
<para>Compute instance ephemeral filesystem storage</para>
</listitem>
<listitem>
<para>Compute instance memory</para>
</listitem>
<listitem>
<para>Block Storage volume data</para>
</listitem>
<listitem>
<para>Public keys for Compute access</para>
</listitem>
<listitem>
<para>Virtual machine images in the Image service</para>
</listitem>
<listitem>
<para>Machine snapshots</para>
</listitem>
<listitem>
<para>Data passed to OpenStack Compute's configuration-drive extension</para>
</listitem>
</itemizedlist>
<para>Metadata stored by an OpenStack cloud includes the following non-exhaustive items:</para>
<itemizedlist><listitem>
<para>Organization name</para>
</listitem>
<listitem>
<para>User's "Real Name"</para>
</listitem>
<listitem>
<para>Number or size of running instances, buckets, objects, volumes, and other quota-related items</para>
</listitem>
<listitem>
<para>Number of hours running instances or storing data</para>
</listitem>
<listitem>
<para>IP addresses of users</para>
</listitem>
<listitem>
<para>Internally generated private keys for compute image bundling</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="data-privacy-concerns-data-disposal">
<title>Data disposal</title>
<para>OpenStack operators should strive to provide a certain level of tenant data disposal assurance. Best practices suggest that the operator sanitize cloud system media (digital and non-digital) prior to disposal, release out of organization control or release for reuse. Sanitization methods should implement an appropriate level of strength and integrity given the specific security domain and sensitivity of the information.</para>
<blockquote>
<para>"The sanitization process removes information from the media
such that the information cannot be retrieved or reconstructed.
Sanitization techniques, including clearing, purging,
cryptographic erase, and destruction, prevent the disclosure
of information to unauthorized individuals when such media is
reused or released for disposal."
<link xlink:href="http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf">NIST Special Publication 800-53 Revision 4</link></para>
</blockquote>
    <para>The following general data disposal and sanitization guidelines are adapted from the NIST recommended security controls. Cloud operators should:</para>
<orderedlist><listitem>
<para>Track, document and verify media sanitization and disposal actions.</para>
</listitem>
<listitem>
        <para>Test sanitization equipment and procedures to verify
        proper performance.</para>
</listitem>
<listitem>
<para>Sanitize portable, removable storage devices prior to connecting such devices to the cloud infrastructure.</para>
</listitem>
<listitem>
<para>Destroy cloud system media that cannot be sanitized.</para>
</listitem>
</orderedlist>
<para>In an OpenStack deployment you will need to address the following:</para>
<itemizedlist><listitem>
<para>Secure data erasure</para>
</listitem>
<listitem>
<para>Instance memory scrubbing</para>
</listitem>
<listitem>
<para>Block Storage volume data</para>
</listitem>
<listitem>
<para>Compute instance ephemeral storage</para>
</listitem>
<listitem>
<para>Bare metal server sanitization</para>
</listitem>
</itemizedlist>
<section xml:id="data-privacy-concerns-data-disposal-data-not-securely-erased">
<title>Data not securely erased</title>
<para>Within OpenStack some data may be deleted, but not securely erased in the context of the NIST standards outlined above. This is generally applicable to most or all of the above-defined metadata and information stored in the database. This may be remediated with database and/or system configuration for auto vacuuming and periodic free-space wiping.</para>
</section>
<section xml:id="data-privacy-concerns-data-disposal-instance-memory-scrubbing">
<title>Instance memory scrubbing</title>
<para>Specific to various hypervisors is the treatment of instance memory. This behavior is not defined in OpenStack Compute, although it is generally expected of hypervisors that they will make a best effort to scrub memory either upon deletion of an instance, upon creation of an instance, or both.</para>
      <para>Xen explicitly assigns dedicated memory regions to instances and scrubs data upon the destruction of instances (or domains in Xen parlance). KVM depends more greatly on Linux page management; a complex set of rules related to KVM paging is defined in the <link xlink:href="http://www.linux-kvm.org/page/Memory">KVM documentation</link>.</para>
      <para>It is important to note that use of the Xen memory balloon feature is likely to result in information disclosure. We strongly recommend avoiding use of this feature.</para>
<para>For these and other hypervisors, we recommend referring to hypervisor-specific documentation.</para>
</section>
<section xml:id="data-privacy-concerns-data-disposal-cinder-volume-data">
<title>Cinder volume data</title>
<para>Use of the OpenStack volume encryption feature is highly encouraged. This is
discussed in the Data Encryption section below. When this feature is used,
destruction of data is accomplished by securely deleting the encryption key.</para>
<para>If a back end plug-in is being used, there may be independent ways of doing encryption or non-standard overwrite solutions. Plug-ins to OpenStack Block Storage will store data in a variety of ways. Many plug-ins are specific to a vendor or technology, whereas others are more DIY solutions around filesystems such as LVM or ZFS. Methods to securely destroy data will vary from one plug-in to another, from one vendor's solution to another, and from one filesystem to another.</para>
<para>Some back ends such as ZFS will support copy-on-write to
prevent data exposure. In these cases, reads from unwritten
blocks will always return zero. Other back ends such as LVM
may not natively support this, thus the Block Storage plug-in
takes the responsibility to override previously written blocks
before handing them to users. It is important to review what
assurances your chosen volume back end provides and to see what
mediations may be available for those assurances not
provided.</para>
<para>Finally, while not a feature of OpenStack, vendors and
implementors may choose to add or support encryption of volumes.
In this case, destruction of data is as simple as throwing
away the key.</para>
</section>
<section xml:id="data-privacy-concerns-data-disposal-glance-delay-delete">
<title>Image service delay delete feature</title>
      <para>OpenStack Image service has a delayed delete feature, which
      postpones the deletion of an image for a defined time period. If
      this is a security concern, we recommend disabling the feature by
      editing the <filename>etc/glance/glance-api.conf</filename>
      file and setting the <literal>delayed_delete</literal> option
      to <replaceable>False</replaceable>.</para>
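A minimal sketch of the corresponding <filename>glance-api.conf</filename> fragment:

```ini
[DEFAULT]
delayed_delete = False
```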
</section>
<section xml:id="data-privacy-concerns-data-disposal-nova-soft-delete">
<title>Compute soft delete feature</title>
<para>OpenStack Compute has a soft-delete feature, which enables
an instance that is deleted to be in a soft-delete state for
a defined time period. The instance can be restored during
this time period. To disable the soft-delete feature, edit
the <filename>etc/nova/nova.conf</filename> file and leave
the <literal>reclaim_instance_interval</literal> option empty.</para>
</section>
<section xml:id="data-privacy-concerns-data-disposal-compute-instnace-ephemeral-storage">
<title>Compute instance ephemeral storage</title>
<para>The creation and destruction of ephemeral storage will be
somewhat dependent on the chosen hypervisor and the OpenStack
Compute plug-in.</para>
<para>The libvirt plug-in for compute may maintain ephemeral
storage directly on a filesystem, or in LVM. Filesystem storage
generally will not overwrite data when it is removed, although
there is a guarantee that dirty extents are not provisioned
to users.</para>
<para>When using LVM backed ephemeral storage, which is block-based,
it is necessary that the OpenStack Compute software securely
erases blocks to prevent information disclosure. There have
in the past been information disclosure vulnerabilities related
to improperly erased ephemeral block storage devices.</para>
<para>Filesystem storage is a more secure solution for ephemeral
block storage devices than LVM as dirty extents cannot be provisioned
to users. However, it is important to be mindful that user
data is not destroyed, so it is suggested to encrypt the
backing filesystem.</para>
</section>
<section xml:id="data-privacy-concerns-data-disposal-bare-metal-server-sanitization">
<title>Bare metal server sanitization</title>
<para>
A bare metal server driver for Compute was under development
and has since moved into a separate project called <link
xlink:href="https://wiki.openstack.org/wiki/Ironic">ironic</link>. At
        the time of this writing, ironic does not appear to address
        sanitization of tenant data resident on the physical hardware.
</para>
<para>
Additionally, it is possible for tenants of a bare metal
system to modify system firmware. TPM technology, described
in <xref linkend="integrity-life-cycle-secure-bootstrapping"/>,
provides a solution for detecting unauthorized firmware
changes.
</para>
</section>
</section>
</section>


@@ -1,47 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="data-processing-case-studies">
<?dbhtml stop-chunking?>
<title>Case studies</title>
<para>
Continuing with the studies described in
<xref linkend="introduction-to-case-studies"/>, we present Alice and
Bob's approaches to deploying the Data processing service for their
users.
</para>
<section xml:id="data-processing-case-studies-alice">
<title>Alice's private cloud</title>
<para>
Alice is deploying the Data processing service for a group of users
that are trusted members of a collaboration. They are all placed in a
single project and share the clusters, jobs, and data within. She
deploys the controller with TLS enabled, using a certificate signed
by the organization's root certificate. She configures the controller
to provide floating IP addresses to the cluster instances allowing for
users to gain access to the instances in the event of errors. She
enables the use of proxy domains to prevent the users from needing to
enter their credentials into the Data processing service.
</para>
</section>
<section xml:id="data-processing-case-studies-bob">
<title>Bob's public cloud</title>
<para>
Bob's public cloud contains users that will not necessarily
know or trust each other. He puts all users into separate projects.
Each user has their own clusters, jobs, and data which cannot be
accessed by other users. He deploys the controller with TLS enabled,
using a certificate signed by a well known public certificate
authority. He configures a custom topology to ensure that access to
the provisioned cluster instances will flow through a controlled
gateway. He creates a security group that opens only the ports needed
for the controller to access the frameworks deployed. He enables the
use of proxy domains to prevent the users from needing to enter their
credentials into the Data processing service. He configures the
rootwrap command to allow the data processing controller user to
run the proxy commands.
</para>
</section>
</section>


@@ -1,297 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="data-processing-configuration-and-hardening">
<?dbhtml stop-chunking?>
<title>Configuration and hardening</title>
<para>
There are several configuration options and deployment strategies that
can improve security in the Data processing service. The service
controller is configured through a main configuration file and one or
more policy files. Installations that are using the data-locality
features will also have two additional files to specify the physical
location of Compute and Object Storage nodes.
</para>
<section xml:id="data-processing-configuration-and-hardening-tls">
<title>TLS</title>
<para>
The Data processing service controller, like many other OpenStack
controllers, can be configured to require TLS connections.
</para>
<para>
Pre-Kilo releases will require a TLS proxy as the controller does not
allow direct TLS connections. Configuring TLS proxies is
covered in <xref linkend="tls-proxies-and-http-services"/>, and we
recommend following the advice there to create this type of
installation.
</para>
<para>
From the Kilo release onward the data processing controller allows
direct TLS connections. Enabling this behavior requires some small
    adjustments to the controller configuration file. For any Kilo or
    later installation, we recommend enabling direct TLS connections in
    the controller configuration.
</para>
<section xml:id="data-processing-configuration-and-hardening-tls-example-1">
<title>Example. Configuring TLS access to the controller</title>
<programlisting>[ssl]
ca_file = cafile.pem
cert_file = certfile.crt
key_file = keyfile.key</programlisting>
</section>
</section>
<section xml:id="data-processing-configuration-and-hardening-role-based-access-control-policies">
<title>Role-based access control policies</title>
<para>
The Data processing service uses a policy file, as described in
<xref linkend="identity-policies"/>, to configure role-based access
    control. Using the policy file, an operator can restrict a group's
    access to specific data processing functionality.
</para>
<para>
The reasons for doing this will change depending on the organizational
    requirements of the installation. In general, these fine-grained
    controls are used in situations where an operator needs to
restrict the creation, deletion, and retrieval of the Data processing
service resources. Operators who need to restrict access within a project
should be fully aware that there will need to be alternative means for
users to gain access to the core functionality of the service (for
example, provisioning clusters).
</para>
<section xml:id="data-processing-configuration-and-hardening-role-based-access-control-policies-example-1">
<title>Example. Allow all methods to all users (default policy)</title>
<programlisting>{
"default": ""
}</programlisting>
</section>
<section xml:id="data-processing-configuration-and-hardening-role-based-access-control-policies-example-2">
<title>Example. Disallow image registry manipulations to non-admin users</title>
<programlisting>{
"default": "",
"images:register": "role:admin",
"images:unregister": "role:admin",
"images:add_tags": "role:admin",
"images:remove_tags": "role:admin"
}</programlisting>
</section>
</section>
<section xml:id="data-processing-configuration-and-hardening-security-groups">
<title>Security groups</title>
<para>
The Data processing service allows for the association of security
groups with instances provisioned for its clusters. With no additional
configuration the service will use the default security group for any
project that provisions clusters. A different security group may be
used if requested, or an automated option exists which instructs the
service to create a security group based on ports specified by the
framework being accessed.
</para>
<para>
For production environments we recommend controlling the security
groups manually and creating a set of group rules that are appropriate
for the installation. In this manner the operator can ensure that the
default security group will contain all the appropriate rules. For an
expanded discussion of security groups please see
<xref linkend="networking-services-security-best-practices-security-groups"/>.
</para>
</section>
<section xml:id="data-processing-configuration-and-hardening-proxy-domains">
<title>Proxy domains</title>
<para>
When using the Object Storage service in conjunction with data
processing it is necessary to add credentials for the store access.
With proxy domains the Data processing service can instead use a
delegated trust from the Identity service to allow store access via a
temporary user created in the domain. For this delegation mechanism to
work the Data processing service must be configured to use proxy
domains and the operator must configure an identity domain for the
proxy users.
</para>
<para>
The data processing controller retains temporary storage of the
username and password provided for object store access. When using proxy
domains the controller will generate this pair for the proxy user, and
the access of this user will be limited to that of the identity trust.
We recommend using proxy domains in any installation where the
controller or its database have routes to or from public networks.
</para>
<section xml:id="data-processing-configuration-and-hardening-proxy-domains-example-1">
<title>Example. Configuring for a proxy domain named “dp_proxy”</title>
<programlisting>[DEFAULT]
use_domain_for_proxy_users = true
proxy_user_domain_name = dp_proxy
proxy_user_role_names = Member</programlisting>
</section>
</section>
<section xml:id="data-processing-configuration-and-hardening-custom-network-topologies">
<title>Custom network topologies</title>
<para>
The data processing controller can be configured to use proxy commands
for accessing its cluster instances. In this manner custom network
topologies can be created for installations which will not use the
networks provided directly by the Networking service. We recommend
using this option for installations which require limiting access
between the controller and the instances.
</para>
<section xml:id="data-processing-configuration-and-hardening-custom-network-topologies-example-1">
<title>Example. Access instances through a specified relay machine</title>
<programlisting>[DEFAULT]
proxy_command='ssh relay-machine-{tenant_id} nc {host} {port}'</programlisting>
</section>
<section xml:id="data-processing-configuration-and-hardening-custom-network-topologies-example-2">
<title>Example. Access instances through a custom network namespace</title>
<programlisting>[DEFAULT]
proxy_command='ip netns exec ns_for_{network_id} nc {host} {port}'</programlisting>
</section>
</section>
<section xml:id="data-processing-configuration-and-hardening-indirect-access">
<title>Indirect access</title>
<para>
For installations in which the controller will have limited access to
all the instances of a cluster, due to limits on floating IP addresses
or security rules, indirect access may be configured. This allows some
instances to be designated as proxy gateways to the other instances of
the cluster.
</para>
<para>
This configuration can only be enabled while defining the node group
templates that will make up the data processing clusters. It is
provided as a run time option to be enabled during the cluster
provisioning process.
</para>
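<para>
As an illustration, an instance can be designated as a proxy gateway
by setting the <literal>is_proxy_gateway</literal> field in its node
group template. The other values in this sketch are hypothetical:
</para>
<programlisting>{
    "name": "master-gateway",
    "flavor_id": "2",
    "plugin_name": "vanilla",
    "hadoop_version": "2.7.1",
    "node_processes": ["namenode", "resourcemanager"],
    "is_proxy_gateway": true
}</programlisting>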
</section>
<section xml:id="data-processing-configuration-and-hardening-rootwrap">
<title>Rootwrap</title>
<para>
When creating custom topologies for network access it can be
necessary to allow non-root users to run the proxy commands. For
these situations the oslo rootwrap package provides a facility for
non-root users to run privileged commands. This configuration
requires the user associated with the data processing controller
application to be in the sudoers list and for the option to be
enabled in the configuration file. Optionally, an alternative
rootwrap command can be provided.
</para>
<section xml:id="data-processing-configuration-and-hardening-rootwrap-example-1">
<title>Example. Enabling rootwrap usage and showing the default command</title>
<programlisting>[DEFAULT]
use_rootwrap=True
rootwrap_command=sudo sahara-rootwrap /etc/sahara/rootwrap.conf</programlisting>
<para>
For more information on the rootwrap project, please see the official
documentation:
</para>
<para>
<link xlink:href="https://wiki.openstack.org/wiki/Rootwrap">
https://wiki.openstack.org/wiki/Rootwrap
</link>
</para>
</section>
</section>
<section xml:id="data-processing-configuration-and-hardening-logging">
<title>Logging</title>
<para>
Monitoring the output of the service controller is a powerful forensic
tool, as described more thoroughly in
<xref linkend="monitoring-logging"/>. The Data processing
service controller offers a few options for setting the location and
level of logging.
</para>
<section xml:id="data-processing-configuration-and-hardening-logging-example-1">
<title>Example. Increasing the logging verbosity and specifying an output file</title>
<programlisting>[DEFAULT]
verbose = true
log_file = /var/log/data-processing.log</programlisting>
</section>
</section>
<section xml:id="data-processing-configuration-and-hardening-references">
<title>References</title>
<para>
Sahara project documentation:
<link xlink:href="http://docs.openstack.org/developer/sahara">
http://docs.openstack.org/developer/sahara
</link>
</para>
<para>
Hadoop project:
<link xlink:href="https://hadoop.apache.org/">
https://hadoop.apache.org/
</link>
</para>
<para>
Hadoop secure mode docs:
<link xlink:href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html">
https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html
</link>
</para>
<para>
Hadoop HDFS documentation:
<link xlink:href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html
</link>
</para>
<para>
Spark project:
<link xlink:href="https://spark.apache.org/">
https://spark.apache.org/
</link>
</para>
<para>
Spark security documentation:
<link xlink:href="https://spark.apache.org/docs/latest/security.html">
https://spark.apache.org/docs/latest/security.html
</link>
</para>
<para>
Storm project:
<link xlink:href="https://storm.apache.org/">
https://storm.apache.org/
</link>
</para>
<para>
Zookeeper project:
<link xlink:href="https://zookeeper.apache.org/">
https://zookeeper.apache.org/
</link>
</para>
<para>
Oozie project:
<link xlink:href="https://oozie.apache.org/">
https://oozie.apache.org/
</link>
</para>
<para>
Hive project:
<link xlink:href="https://hive.apache.org/">
https://hive.apache.org/
</link>
</para>
<para>
Pig project:
<link xlink:href="https://pig.apache.org/">
https://pig.apache.org/
</link>
</para>
<para>
Cloudera CDH documentation:
<link xlink:href="https://www.cloudera.com/content/cloudera/en/documentation.html#CDH">
https://www.cloudera.com/content/cloudera/en/documentation.html#CDH
</link>
</para>
<para>
Hortonworks Data Platform documentation:
<link xlink:href="http://docs.hortonworks.com/">
http://docs.hortonworks.com/
</link>
</para>
<para>
MapR project:
<link xlink:href="https://www.mapr.com/products/mapr-distribution-including-apache-hadoop">
https://www.mapr.com/products/mapr-distribution-including-apache-hadoop
</link>
</para>
</section>
</section>

View File

@ -1,118 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="data-processing-deployment">
<?dbhtml stop-chunking?>
<title>Deployment</title>
<para>
The Data processing service is deployed, like many other OpenStack
services, as an application running on a host connected to the stack.
As of the Kilo release, it has the ability to be deployed in a
distributed manner with several redundant controllers. Like other
services, it also requires a database to store information about its
resources. See <xref linkend="databases"/>. It is important to
note that the Data processing service will need to manage several
Identity service trusts, communicate directly with the Orchestration and
Networking services, and potentially create users in a proxy domain.
For these reasons the controller will need access to the control plane
and as such we recommend installing it alongside other service
controllers.
</para>
<para>
The Data processing service interacts directly with several OpenStack services:
</para>
<itemizedlist>
<listitem>
<para>
Compute
</para>
</listitem>
<listitem>
<para>
Identity
</para>
</listitem>
<listitem>
<para>
Networking
</para>
</listitem>
<listitem>
<para>
Object Storage
</para>
</listitem>
<listitem>
<para>
Orchestration
</para>
</listitem>
<listitem>
<para>
Block Storage (optional)
</para>
</listitem>
</itemizedlist>
<para>
We recommend documenting all the data flows and bridging points
between these services and the data processing controller. See
<xref linkend="documentation"/>.
</para>
<para>
The Object Storage service is used by the Data processing service to store
job binaries and data sources. Users wishing to have access to the full
Data processing service functionality will need an object store in the
projects they are using.
</para>
<para>
The Networking service plays an important role in the provisioning of
clusters. Prior to provisioning, the user is expected to provide one
or more networks for the cluster instances. The action of associating
networks is similar to the process of assigning networks when
launching instances through the dashboard. These networks are used by
the controller for administrative access to the instances and
frameworks of its clusters.
</para>
<para>
Also of note is the Identity service. Users of the Data processing service
will need appropriate roles in their projects to allow the provisioning of
instances for their clusters. Installations that use the proxy domain
configuration require special consideration. See
<xref linkend="data-processing-configuration-and-hardening-proxy-domains"/>.
Specifically, the Data processing service will need the ability to create
users within the proxy domain.
</para>
<section xml:id="data-processing-deployment-controller-network-access-to-clusters">
<title>Controller network access to clusters</title>
<para>
One of the primary tasks of the data processing controller is to
communicate with the instances it spawns. These instances are
provisioned and then configured depending on the framework being
used. The communication between the controller and the instances uses
secure shell (SSH) and HTTP protocols.
</para>
<para>
When provisioning clusters each instance will be given an IP address in
the networks provided by the user. The first network is often referred
to as the data processing management network and instances can use the
fixed IP address assigned by the Networking service for this network.
The controller can also be configured to use floating IP addresses for
the instances in addition to their fixed address. When communicating
with the instances the controller will prefer the floating address
if enabled.
</para>
<para>
For situations where the fixed and floating IP addresses do not
provide the functionality required the controller can provide access
through two alternate methods: custom network topologies and indirect
access. The custom network topologies feature allows the controller to
access the instances through a supplied shell command in the
configuration file. Indirect access is used to specify instances that
can be used as proxy gateways by the user during cluster provisioning.
These options are discussed with examples of usage in
<xref linkend="data-processing-configuration-and-hardening"/>.
</para>
</section>
</section>

View File

@ -1,218 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="data-processing-introduction-to-data-processing">
<?dbhtml stop-chunking?>
<title>Introduction to Data processing</title>
<para>
The Data processing service controller will be responsible for
creating, maintaining, and destroying any instances created for its
clusters. The controller will use the Networking service to
establish network paths between itself and the cluster instances. It
will also manage the deployment and life-cycle of user applications
that are to be run on the clusters. The instances within a cluster
contain the core of a framework's processing engine and the Data
processing service provides several options for creating and
managing the connections to these instances.
</para>
<para>
Data processing resources (clusters, jobs, and data sources) are
segregated by projects defined within the Identity service. These
resources are shared within a project and it is important to
understand the access needs of those who are using the
service. Activities within projects (for example, launching clusters
or uploading jobs) can be restricted further through the use of
role-based access controls.
</para>
<para>
In this chapter we discuss how to assess the needs of data processing
users with respect to their applications, the data that they use, and
their expected capabilities within a project. We will also demonstrate
a number of hardening techniques for the service controller and its
clusters, and provide examples of various controller configurations
and user management approaches to ensure an adequate level of security
and privacy.
</para>
<section xml:id="data-processing-introduction-to-data-processing-architecture">
<title>Architecture</title>
<para>
The following diagram presents a conceptual view of how the Data
processing service fits into the greater OpenStack ecosystem.
</para>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="621" contentwidth="955" fileref="static/data_processing_architecture.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/data_processing_architecture.png" format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<para>
The Data processing service makes heavy use of the Compute,
Orchestration, Image, and Block Storage services during the
provisioning of clusters. It will also use one or more networks,
created by the Networking service, provided during cluster creation
for administrative access to the instances. While users are running
framework applications the controller and the clusters will be
accessing the Object Storage service. Given these service usages, we
recommend following the instructions outlined in
<xref linkend="documentation"/> for cataloging all the components of
an installation.
</para>
</section>
<section xml:id="data-processing-introduction-to-data-processing-technologies-involved">
<title>Technologies involved</title>
<para>
The Data processing service is responsible for the deployment and
management of several applications. For a complete understanding of
the security options provided we recommend that operators have a
general familiarity with these applications. The list of highlighted
technologies is broken into two sections: first, high priority
applications that have a greater impact on security, and second,
supporting applications with a lower impact.
</para>
<para>Higher impact</para>
<itemizedlist>
<listitem>
<para>
<link xlink:href="https://hadoop.apache.org/">
Hadoop
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SecureMode.html">
Hadoop secure mode docs
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">
HDFS
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://spark.apache.org/">
Spark
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://spark.apache.org/docs/latest/security.html">
Spark Security
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://storm.apache.org/">
Storm
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://zookeeper.apache.org/">
Zookeeper
</link>
</para>
</listitem>
</itemizedlist>
<para>Lower impact</para>
<itemizedlist>
<listitem>
<para>
<link xlink:href="https://oozie.apache.org/">
Oozie
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://hive.apache.org/">
Hive
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://pig.apache.org/">
Pig
</link>
</para>
</listitem>
</itemizedlist>
<para>
These technologies comprise the core of the frameworks that are
deployed with the Data processing service. In addition to these
technologies, the service also includes bundled frameworks provided by
third party vendors. These bundled frameworks are built using the same
core pieces described above plus configurations and applications that
the vendors include. For more information on the third party framework
bundles please see the following links:
</para>
<itemizedlist>
<listitem>
<para>
<link xlink:href="https://www.cloudera.com/content/cloudera/en/documentation.html#CDH">
Cloudera CDH
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="http://docs.hortonworks.com/">
Hortonworks Data Platform
</link>
</para>
</listitem>
<listitem>
<para>
<link xlink:href="https://www.mapr.com/products/mapr-distribution-including-apache-hadoop">
MapR
</link>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="data-processing-introduction-to-data-processing-user-access-resources">
<title>User access to resources</title>
<para>
The resources (clusters, jobs, and data sources) of the Data
processing service are shared within the scope of a project. Although
a single controller installation may manage several sets of resources,
these resources will each be scoped to a single project. Given this
constraint we recommend that user membership in projects is monitored
closely to maintain proper segregation of resources.
</para>
<para>
As the security requirements of organizations deploying this service
will vary based on their specific needs, we recommend that operators
focus on data privacy, cluster management, and end-user applications as
a starting point for evaluating the needs of their users. These
decisions will help guide the process of configuring user access to
the service. For an expanded discussion on data privacy see
<xref linkend="tenant-data"/>.
</para>
<para>
The default assumption for a data processing installation is that
users will have access to all functionality within their projects. In
the event that more granular control is required the Data processing
service provides a policy file (as described in
<xref linkend="identity-policies"/>). These configurations will be
highly dependent on the needs of the installing organization, and as
such there is no general advice on their usage: see
<xref linkend="data-processing-configuration-and-hardening-role-based-access-control-policies"/>
for details.
</para>
</section>
</section>

View File

@ -1,135 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="database-access-control">
<?dbhtml stop-chunking?>
<title>Database access control</title>
<para>Each of the core OpenStack services (Compute, Identity, Networking, Block Storage) store state and configuration information in databases. In this chapter, we discuss how databases are used currently in OpenStack. We also explore security concerns, and the security ramifications of database back end choices.</para>
<section xml:id="database-access-control-openstack-database-access-model">
<title>OpenStack database access model</title>
<para>All of the services within an OpenStack project access a single database. There are presently no reference policies for creating table or row based access restrictions to the database.</para>
<para>There are no general provisions for granular control of database operations in OpenStack. Access and privileges are granted simply based on whether a node has access to the database or not. In this scenario, nodes with access to the database may have full privileges to execute DROP, INSERT, or UPDATE statements.</para>
<section xml:id="database-access-control-openstack-database-access-model-granular-access-control">
<title>Granular access control</title>
<para>By default, each of the OpenStack services and their processes access the database using a shared set of credentials. This makes auditing database operations and revoking access privileges from a service and its processes to the database particularly difficult.</para>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="283" contentwidth="355" fileref="static/databaseusername.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/databaseusername.png" format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
</section>
<section xml:id="database-access-control-openstack-database-access-model-nova-conductor">
<title>Nova-conductor</title>
<para>The compute nodes are the least trusted of the services in OpenStack because they host tenant instances. The <systemitem class="service">nova-conductor</systemitem> service has been introduced to serve as a database proxy, acting as an intermediary between the compute nodes and the database. We discuss its ramifications later in this chapter.</para>
<para>We strongly recommend:</para>
<itemizedlist><listitem>
<para>All database communications be isolated to a management network</para>
</listitem>
<listitem>
<para>Securing communications using TLS</para>
</listitem>
<listitem>
<para>Creating unique database user accounts per OpenStack service endpoint (illustrated below)</para>
</listitem>
</itemizedlist>
<informalfigure>
<mediaobject>
<imageobject role="html">
<imagedata contentdepth="283" contentwidth="355" fileref="static/databaseusernamessl.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/databaseusernamessl.png" format="PNG" scalefit="1" width="100%"/>
</imageobject>
</mediaobject>
</informalfigure>
</section>
</section>
<section xml:id="database-access-control-database-authentication-and-access-control">
<title>Database authentication and access control</title>
<para>Given the risks around access to the database, we strongly recommend that unique database user accounts be created for each node needing access to the database. Doing this facilitates better analysis and auditing to ensure compliance and, if a node is compromised, allows you to isolate the compromised host by removing its access to the database upon detection. When creating these per-service-endpoint database user accounts, care should be taken to ensure that they are configured to require TLS. Alternatively, for increased security it is recommended that the database accounts be configured to use X.509 certificate authentication in addition to user names and passwords.</para>
<section xml:id="database-access-control-database-authentication-and-access-control-privileges">
<title>Privileges</title>
<para>A separate database administrator (DBA) account should be created and protected that has full privileges to create/drop databases, create user accounts, and update user privileges. This simple means of separation of responsibility helps prevent accidental misconfiguration, lowers risk and lowers scope of compromise.</para>
<para>The database user accounts created for the OpenStack services and for each node should have privileges limited to just the database relevant to the service where the node is a member.</para>
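<para>
For example, rather than granting all privileges, a service account
can be limited to the data manipulation statements it needs on its
own database. The account, host, and database names in this sketch
are illustrative:
</para>
<programlisting>GRANT SELECT, INSERT, UPDATE, DELETE ON nova.* TO 'compute01'@'hostname' IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>';</programlisting>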
</section>
</section>
<section xml:id="database-access-control-require-user-accounts-to-require-ssl-transport">
<title>Require user accounts to require SSL transport</title>
<section xml:id="database-access-control-require-user-accounts-to-require-ssl-transport-configuration-example-1-mysql">
<title>Configuration example #1: (MySQL)</title>
<programlisting>GRANT ALL ON dbname.* TO 'compute01'@'hostname' IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>' REQUIRE SSL;</programlisting>
</section>
<section xml:id="database-access-control-require-user-accounts-to-require-ssl-transport-configuration-example-2-postgresql">
<title>Configuration example #2: (PostgreSQL)</title>
<para>In file <filename>pg_hba.conf</filename>:</para>
<programlisting>hostssl dbname compute01 hostname md5</programlisting>
<para>Note that this line only adds the ability to communicate over SSL and is non-exclusive. Other access methods that may allow unencrypted transport should be disabled so that SSL is the sole access method.</para>
<para>
The <literal>md5</literal> parameter defines the
authentication method as a hashed password. We provide a
secure authentication example in the section below.</para>
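<para>
One way to make SSL the sole access method is to explicitly reject
unencrypted connections in <filename>pg_hba.conf</filename> before
any permissive rules. The database, user, and host values in this
sketch are illustrative:
</para>
<programlisting>hostnossl dbname compute01 hostname reject</programlisting>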
</section>
<section xml:id="database-access-control-openstack-service-database-configuration-tls">
<title>OpenStack service database configuration</title>
<para>If your database server is configured for TLS transport, you will need to specify the certificate authority information for use in the initial SQLAlchemy connection string.
</para>
<section xml:id="database-access-control-openstack-service-database-configuration-tls-example-1-mysql">
<title>Example of a <literal>sql_connection</literal> string to MySQL:</title>
<programlisting language="ini">sql_connection = mysql://compute01:<replaceable>NOVA_DBPASS</replaceable>@localhost/nova?charset=utf8&amp;ssl_ca=/etc/mysql/cacert.pem</programlisting>
</section>
</section>
</section>
<section xml:id="database-access-control-authentication-with-x509-certificates">
<title>Authentication with X.509 certificates</title>
<para>Security may be enhanced by requiring X.509 client certificates for authentication. Authenticating to the database in this manner provides greater identity assurance of the client making the connection to the database and ensures that the communications are encrypted.</para>
<section xml:id="database-access-control-authentication-with-x509-certificates-configuration-example-1-mysql">
<title>Configuration example #1: (MySQL)</title>
<programlisting>GRANT ALL ON dbname.* TO 'compute01'@'hostname' IDENTIFIED BY '<replaceable>NOVA_DBPASS</replaceable>' REQUIRE SUBJECT
'/C=XX/ST=YYY/L=ZZZZ/O=cloudycloud/CN=compute01' AND ISSUER
'/C=XX/ST=YYY/L=ZZZZ/O=cloudycloud/CN=cloud-ca';</programlisting>
</section>
<section xml:id="database-access-control-authentication-with-x509-certificates-configuration-example-2-postgresql">
<title>Configuration example #2: (PostgreSQL)</title>
<programlisting>hostssl dbname compute01 hostname cert</programlisting>
</section>
</section>
<section xml:id="database-access-control-openstack-service-database-configuration">
<title>OpenStack service database configuration</title>
<para>If your database server is configured to require X.509
certificates for authentication you will need to specify the
appropriate SQLAlchemy query parameters for the database back
end. These parameters specify the certificate, private key, and
certificate authority information for use with the initial
connection string.</para>
<para>Example of a <literal>sql_connection</literal> string for X.509 certificate authentication to MySQL:</para>
<programlisting language="ini">sql_connection = mysql://compute01:<replaceable>NOVA_DBPASS</replaceable>@localhost/nova?
charset=utf8&amp;ssl_ca=/etc/mysql/cacert.pem&amp;ssl_cert=/etc/mysql/server-cert.pem&amp;ssl_key=/etc/mysql/server-key.pem</programlisting>
</section>
<section xml:id="database-access-control-nova-conductor">
<title>Nova-conductor</title>
<para>OpenStack Compute offers a sub-service called <systemitem class="service">nova-conductor</systemitem> which proxies database connections, with the primary purpose of having the nova compute nodes interfacing with <systemitem class="service">nova-conductor</systemitem> to meet data persistence needs as opposed to directly communicating with the database.</para>
<para>Nova-conductor receives requests over RPC and performs actions on behalf of the calling service without granting granular access to the database, its tables, or data within. Nova-conductor essentially abstracts direct database access away from compute nodes.</para>
<para>This abstraction offers the advantage of restricting services to executing methods with parameters, similar to stored procedures, preventing a large number of systems from directly accessing or modifying database data. This is accomplished without having these procedures stored or executed within the context or scope of the database itself, a frequent criticism of typical stored procedures.</para>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="304" contentwidth="593" fileref="static/novaconductor.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/novaconductor.png" format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>Unfortunately, this solution complicates the task of more fine-grained access control and the ability to audit data access. Because the <systemitem class="service">nova-conductor</systemitem> service receives requests over RPC, it highlights the importance of improving the security of messaging. Any node with access to the message queue may execute the methods provided by <systemitem class="service">nova-conductor</systemitem>, effectively modifying the database.</para>
<para>Note, as <systemitem
class="service">nova-conductor</systemitem> only applies to
OpenStack Compute, direct database access from compute hosts may
still be necessary for the operation of other OpenStack
components such as Telemetry (ceilometer), Networking, and Block
Storage.</para>
<para>To disable the <systemitem class="service">nova-conductor</systemitem>, place the following into your <filename>nova.conf</filename> file (on your compute hosts):</para>
<programlisting language="ini">[conductor]
use_local = true</programlisting>
</section>
</section>

View File

@ -1,34 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="database-backend-considerations">
<?dbhtml stop-chunking?>
<title>Database back end considerations</title>
<para>PostgreSQL has a number of desirable security features such as Kerberos authentication, object-level security, and encryption support. The PostgreSQL community has done well to provide solid guidance, documentation, and tooling to promote positive security practices.</para>
<para>MySQL has a large community, widespread adoption, and provides high availability options. MySQL also has the ability to provide enhanced client authentication by way of plug-in authentication mechanisms. Forked distributions in the MySQL community provide many options for consideration. It is important to choose a specific implementation of MySQL based on a thorough evaluation of the security posture and the level of support provided for the given distribution.</para>
<section xml:id="database-backend-considerations-security-references-for-database-back-ends">
<title>Security references for database back ends</title>
<para>Those deploying MySQL or PostgreSQL are advised to refer to existing security guidance. Some references are listed below:</para>
<para>MySQL:</para>
<itemizedlist><listitem>
<para><link xlink:href="https://www.owasp.org/index.php/OWASP_Backend_Security_Project_MySQL_Hardening">OWASP MySQL Hardening</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://dev.mysql.com/doc/refman/5.5/en/pluggable-authentication.html">MySQL Pluggable Authentication</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://downloads.mysql.com/docs/mysql-security-excerpt-5.1-en.pdf">Security in MySQL</link></para>
</listitem>
</itemizedlist>
<para>PostgreSQL:</para>
<itemizedlist><listitem>
<para><link xlink:href="https://www.owasp.org/index.php/OWASP_Backend_Security_Project_PostgreSQL_Hardening">OWASP PostgreSQL Hardening</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.ibm.com/developerworks/opensource/library/os-postgresecurity">Total security in a PostgreSQL database</link></para>
</listitem>
</itemizedlist>
</section>
</section>

View File

@ -1,103 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="database-transport-security">
<?dbhtml stop-chunking?>
<title>Database transport security</title>
<para>
This chapter covers issues related to network communications to
and from the database server. This includes IP address bindings
and encrypting network traffic with TLS.</para>
<section xml:id="database-transport-security-database-server-ip-address-binding">
<title>Database server IP address binding</title>
<para>
To isolate sensitive database communications between the
services and the database, we strongly recommend that the
database server(s) be configured to only allow communications
to and from the database over an isolated management
network. This is achieved by restricting the interface or IP
address on which the database server binds a network socket
for incoming client connections.</para>
<section xml:id="database-transport-security-database-server-ip-address-binding-restricting-bind-address-for-mysql">
<title>Restricting bind address for MySQL</title>
<para>In <filename>my.cnf</filename>:</para>
<programlisting>[mysqld]
...
bind-address = &lt;ip address or hostname of management network interface&gt;</programlisting>
</section>
<section xml:id="database-transport-security-database-server-ip-address-binding-restricting-listen-address-for-mysql">
<title>Restricting listen address for PostgreSQL</title>
<para>In <filename>postgresql.conf</filename>:</para>
<programlisting>listen_addresses = &lt;ip address or hostname of management network interface&gt;</programlisting>
</section>
</section>
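With the bind addresses restricted as described above, a quick check on the database server confirms that the daemons are no longer listening on all interfaces. The port numbers are the MySQL and PostgreSQL defaults, and the management address shown is a hypothetical example:

```shell
# List listening TCP sockets for the default MySQL (3306) and
# PostgreSQL (5432) ports. The local address column should show the
# management network address (for example 10.47.0.10), never
# 0.0.0.0 or [::].
ss -tln | grep -E ':(3306|5432)\b'
```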
<section xml:id="database-transport-security-database-transport">
<title>Database transport</title>
<para>
In addition to restricting database communications to the
management network, we also strongly recommend that the cloud
administrator configure their database back end to require
TLS. Using TLS for the database client connections protects
the communications from tampering and eavesdropping. As will
be discussed in the next section, using TLS also provides the
framework for doing database user authentication through X.509
certificates (commonly referred to as PKI). Below is guidance
on how TLS is typically configured for the two popular
database back ends MySQL and PostgreSQL.</para>
<note>
<para>
When installing the certificate and key files, ensure that
the file permissions are restricted, for example
<command>chmod 0600</command>, and the ownership is
restricted to the database daemon user to prevent
unauthorized access by other processes and users on the
database server.</para>
</note>
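The permission and ownership restrictions in the note can be demonstrated as follows; the scratch directory and placeholder file names are illustrative, and in production you would apply the same commands to your real certificate directory and database daemon user (for example, mysql or postgres):

```shell
# Demonstration of the recommended lock-down, using a scratch
# directory and empty placeholder files rather than real keys.
SSL_DIR=$(mktemp -d)
touch "$SSL_DIR/server-cert.pem" "$SSL_DIR/server-key.pem"

# Restrict the files so only the owner can read or write them.
chmod 0600 "$SSL_DIR/server-cert.pem" "$SSL_DIR/server-key.pem"

# In production, also restrict ownership to the daemon user, e.g.:
#   chown mysql:mysql /path/to/ssl/server-key.pem
stat -c '%a' "$SSL_DIR/server-key.pem"   # prints 600
```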
</section>
<section xml:id="database-transport-security-mysql-ssl-configuration">
<title>MySQL SSL configuration</title>
<para>The following lines should be added in the system-wide
MySQL configuration file:</para>
<para>In <filename>my.cnf</filename>:</para>
<programlisting>[mysqld]
...
ssl-ca=/path/to/ssl/cacert.pem
ssl-cert=/path/to/ssl/server-cert.pem
ssl-key=/path/to/ssl/server-key.pem</programlisting>
<para>Optionally, if you wish to restrict the set of SSL ciphers
used for the encrypted connection, see <link xlink:href="http://www.openssl.org/docs/apps/ciphers.html">http://www.openssl.org/docs/apps/ciphers.html</link> for a list of ciphers and the syntax for specifying the cipher string:</para>
<programlisting>ssl-cipher='cipher:list'</programlisting>
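The server-side settings above only enable TLS; clients should also be configured to validate the server certificate. A sketch, with hypothetical paths:

```shell
# Client-side my.cnf fragment (hypothetical path), so that connecting
# services verify the MySQL server certificate:
#
#   [client]
#   ssl-ca=/path/to/ssl/cacert.pem
#
# Confirm that a client connection is actually encrypted; a non-empty
# Value column indicates TLS is in use:
mysql --ssl-ca=/path/to/ssl/cacert.pem -e "SHOW STATUS LIKE 'Ssl_cipher';"
```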
</section>
<section xml:id="database-transport-security-postgresql-ssl-configuration">
<title>PostgreSQL SSL configuration</title>
<para>
The following lines should be added in the system-wide
PostgreSQL configuration file,
<filename>postgresql.conf</filename>.</para>
<programlisting>ssl = true</programlisting>
<para>Optionally, if you wish to restrict the set of SSL ciphers used for the encrypted connection, see <link xlink:href="http://www.openssl.org/docs/apps/ciphers.html">http://www.openssl.org/docs/apps/ciphers.html</link> for a list of ciphers and the syntax for specifying the cipher string:</para>
<programlisting>ssl_ciphers = 'cipher:list'</programlisting>
<para>The server certificate, key, and certificate authority
(CA) files should be placed in the $PGDATA directory in the
following files:</para>
<itemizedlist><listitem>
<para><filename>$PGDATA/server.crt</filename> - Server
certificate</para>
</listitem>
<listitem>
<para><filename>$PGDATA/server.key</filename> - Private key
corresponding to <filename>server.crt</filename></para>
</listitem>
<listitem>
<para><filename>$PGDATA/root.crt</filename> - Trusted
certificate authorities</para>
</listitem>
<listitem>
<para><filename>$PGDATA/root.crl</filename> - Certificate
revocation list</para>
</listitem>
</itemizedlist>
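With ssl enabled on the server, TLS can additionally be required per network through pg_hba.conf, since hostssl entries only match TLS-protected connections. A sketch, using a hypothetical management network and password authentication:

```shell
# Append a hostssl rule to pg_hba.conf (10.47.0.0/24 is a
# hypothetical management network; run as the postgres user).
# Connections from this range that do not use TLS will not match
# this rule.
cat >> "$PGDATA/pg_hba.conf" <<'EOF'
# TYPE    DATABASE  USER  ADDRESS        METHOD
hostssl   all       all   10.47.0.0/24   md5
EOF
```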
</section>
</section>


@@ -1,131 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="forensics-and-incident-response">
<?dbhtml stop-chunking?>
<title>Forensics and incident response</title>
<para>
The generation and collection of logs is an important component
of securely monitoring an OpenStack infrastructure. Logs provide
visibility into the day-to-day actions of administrators,
tenants, and guests, in addition to the activity in the compute,
networking, and storage and other components that comprise your
OpenStack deployment.
</para>
<para>
Logs are not only valuable for proactive security and continuous
compliance activities, but they are also a valuable information
source for investigating and responding to incidents.
</para>
<para>
For instance, analyzing the access logs of Identity service or
its replacement authentication system would alert us to failed
logins, frequency, origin IP, whether the events are restricted
to select accounts and other pertinent information. Log analysis
supports detection.
</para>
<para>
Actions may be taken to mitigate potentially malicious activity,
such as blacklisting an IP address, recommending the
strengthening of user passwords, or deactivating a user account
if it is deemed dormant.
</para>
<section xml:id="forensics-and-incident-response-monitoring-use-cases">
<title>Monitoring use cases</title>
<para>
Event monitoring is a more pro-active approach to securing an
environment, providing real-time detection and response. Several
tools exist which can aid in monitoring.
</para>
<para>
In the case of an OpenStack cloud instance, we need to monitor
the hardware, the OpenStack services, and the cloud resource
usage. The latter stems from wanting to be elastic, to scale to
the dynamic needs of the users.
</para>
<para>
Here are a few important use cases to consider when implementing
log aggregation, analysis and monitoring. These use cases can be
implemented and monitored through various applications, tools or
scripts. There are open source and commercial solutions and some
operators develop their own in-house solutions. These tools and
scripts can generate events that can be sent to administrators
through email or viewed in the integrated dashboard. It is
important to consider additional use cases that may apply to
your specific network and what you may consider anomalous
behavior.
</para>
<itemizedlist>
<listitem>
<para>
Detecting the absence of log generation is an event of high
value. Such an event would indicate a service failure or
even an intruder who has temporarily switched off logging or
modified the log level to hide their tracks.
</para>
</listitem>
<listitem>
<para>
Application events such as start or stop events that were
unscheduled would also be events to monitor and examine for
possible security implications.
</para>
</listitem>
<listitem>
<para>
Operating system events on the OpenStack service machines
such as user logins or restarts also provide valuable
insight into proper and improper usage of systems.
</para>
</listitem>
<listitem>
<para>
Being able to detect the load on the OpenStack servers also
enables responding by way of introducing additional servers
for load balancing to ensure high availability.
</para>
</listitem>
<listitem>
<para>
Other actionable events include network bridges going down,
iptables being flushed on compute nodes, and the consequent
loss of access to instances, resulting in unhappy customers.
</para>
</listitem>
<listitem>
<para>
To reduce security risks from orphan instances when a user,
tenant, or domain is deleted in the Identity service, there is
discussion of generating notifications in the system and having
OpenStack components respond to these events as appropriate,
for example by terminating instances, disconnecting attached
volumes, and reclaiming CPU and storage resources.
</para>
</listitem>
</itemizedlist>
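The first use case above, the absence of log generation, can be sketched as a simple file-freshness check. The paths and threshold are illustrative; a real deployment would feed such alerts into its monitoring pipeline rather than echo them:

```shell
# Minimal sketch: flag a monitored log file that has not been
# written to within the last N minutes.
check_log_freshness() {
    logfile=$1
    max_minutes=$2
    # find prints the file only if it was modified within the window;
    # an empty result means the log has gone quiet.
    if [ -z "$(find "$logfile" -mmin "-$max_minutes" 2>/dev/null)" ]; then
        echo "ALERT: $logfile not updated in the last $max_minutes minutes"
        return 1
    fi
    return 0
}

# Example (hypothetical path):
#   check_log_freshness /var/log/keystone/keystone.log 15
```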
<para>
A cloud will host many virtual instances, and monitoring these
instances goes beyond hardware monitoring and log files which
may just contain CRUD events.
</para>
<para>
Security monitoring controls such as intrusion detection
software, antivirus software, and spyware detection and removal
utilities can generate logs that show when and how an attack or
intrusion took place. Deploying these tools on the cloud
machines provides value and protection. Cloud users, those
running instances on the cloud, may also want to run such tools
on their instances.
</para>
</section>
<section xml:id="forensics-and-incident-response-references">
<title>Bibliography</title>
<para>Siwczak, Piotr. Some Practical Considerations for Monitoring in the OpenStack Cloud. 2012. <link xlink:href="http://www.mirantis.com/blog/openstack-monitoring/">http://www.mirantis.com/blog/openstack-monitoring</link></para>
<para>blog.sflow.com, sflow: Host sFlow distributed agent. 2012. <link xlink:href="http://blog.sflow.com/2012/01/host-sflow-distributed-agent.html">http://blog.sflow.com/2012/01/host-sflow-distributed-agent.html</link></para>
<para>blog.sflow.com, sflow: LAN and WAN. 2009. <link xlink:href="http://blog.sflow.com/2009/09/lan-and-wan.html">http://blog.sflow.com/2009/09/lan-and-wan.html</link></para>
<para>blog.sflow.com, sflow: Rapidly detecting large flows sFlow vs. NetFlow/IPFIX. 2013. <link xlink:href="http://blog.sflow.com/2013/01/rapidly-detecting-large-flows-sflow-vs.html">http://blog.sflow.com/2013/01/rapidly-detecting-large-flows-sflow-vs.html</link></para>
</section>
</section>


@@ -1,440 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="hardening-the-virtualization-layers">
<?dbhtml stop-chunking?>
<title>Hardening the virtualization layers</title>
<para>
In the beginning of this chapter we discuss the use of both
physical and virtual hardware by instances, the associated
security risks, and some recommendations for mitigating those
risks. We conclude the chapter with a discussion of sVirt, an
open source project for integrating SELinux mandatory access
controls with the virtualization components.</para>
<section xml:id="hardening-the-virtualization-layers-physical-hardware-pci-passthrough">
<title>Physical hardware (PCI passthrough)</title>
<para>
Many hypervisors offer a functionality known as PCI
passthrough. This allows an instance to have direct access to
a piece of hardware on the node. For example, this could be
used to allow instances to access video cards or GPUs offering
the compute unified device architecture (CUDA) for high
performance computation. This feature carries two types of
security risks: direct memory access and hardware
infection.</para>
<para>
Direct memory access (DMA) is a feature that permits certain
hardware devices to access arbitrary physical memory addresses
in the host computer. Often video cards have this
capability. However, an instance should not be given arbitrary
physical memory access because this would give it full view of
both the host system and other instances running on the same
node. Hardware vendors use an input/output memory management
unit (IOMMU) to manage DMA access in these
situations. Therefore, cloud architects should ensure that the
hypervisor is configured to utilize this hardware
feature.</para>
<itemizedlist>
<listitem>
<para>KVM: <link
xlink:href="http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM">How
to assign devices with VT-d in KVM</link></para>
</listitem>
<listitem>
<para>Xen: <link xlink:href="http://wiki.xen.org/wiki/VTd_HowTo">VTd Howto</link>
</para>
</listitem>
</itemizedlist>
<note>
<para>
The IOMMU feature is marketed as VT-d by Intel and AMD-Vi by
AMD.</para>
</note>
<para>
A hardware infection occurs when an instance makes a malicious
modification to the firmware or some other part of a
device. As this device is used by other instances or the host
OS, the malicious code can spread into those systems. The end
result is that one instance can run code outside of its
security domain. This is a significant breach as it is harder
to reset the state of physical hardware than virtual hardware,
and can lead to additional exposure such as access to the
management network.</para>
<para>
Solutions to the hardware infection problem are domain
specific. The strategy is to identify how an instance can
modify hardware state then determine how to reset any
modifications when the instance is done using the
hardware. For example, one option could be to re-flash the
firmware after use. Clearly there is a need to balance
hardware longevity with security as some firmwares will fail
after a large number of writes. TPM technology, described in
<xref linkend="integrity-life-cycle-secure-bootstrapping"/>, provides
a solution for detecting unauthorized firmware
changes. Regardless of the strategy selected, it is important
to understand the risks associated with this kind of hardware
sharing so that they can be properly mitigated for a given
deployment scenario.
</para>
<para>
Additionally, due to the risk and complexities associated with
PCI passthrough, it should be disabled by default. If enabled
for a specific need, you will need to have appropriate
processes in place to ensure the hardware is clean before
re-issue.</para>
</section>
<section xml:id="hardening-the-virtualization-layers-virtual-hardware-qemu">
<title>Virtual hardware (QEMU)</title>
<para>
When running a virtual machine, virtual hardware is a software
layer that provides the hardware interface for the virtual
machine. Instances use this functionality to provide network,
storage, video, and other devices that may be needed. With
this in mind, most instances in your environment will
exclusively use virtual hardware, with a minority that will
require direct hardware access. The major open source
hypervisors use QEMU for this functionality. While QEMU fills
an important need for virtualization platforms, it has proven
to be a very challenging software project to write and
maintain. Much of the functionality in QEMU is implemented
with low-level code that is difficult for most developers to
comprehend. Furthermore, the hardware virtualized by QEMU
includes many legacy devices that have their own set of
quirks. Putting all of this together, QEMU has been the source
of many security problems, including hypervisor breakout
attacks.</para>
<para>
Therefore, it is important to take proactive steps to harden
QEMU. Three specific steps are recommended: minimizing the
code base, using compiler hardening, and using mandatory
access controls such as sVirt, SELinux, or AppArmor.
</para>
<para>
Additionally, ensure iptables has the default policy filtering
network traffic, and consider examining the existing rule set
to understand each rule and determine if the policy needs to be
expanded upon.</para>
<section xml:id="hardening-the-virtualization-layers-virtual-hardware-qemu-minimizing-the-qemu-code-base">
<title>Minimizing the QEMU code base</title>
<para>
The first recommendation is to minimize the QEMU code base
by removing unused components from the system. QEMU provides
support for many different virtual hardware devices, however
only a small number of devices are needed for a given
instance. The most common hardware devices are the virtio
devices. Some legacy instances will need access to specific
hardware, which can be specified using glance metadata:
</para>
<screen><prompt>$</prompt> <userinput>glance image-update \
--property hw_disk_bus=ide \
--property hw_cdrom_bus=ide \
--property hw_vif_model=e1000 \
f16-x86_64-openstack-sda</userinput></screen>
<para>
A cloud architect should decide what devices to make
available to cloud users. Anything that is not needed should
be removed from QEMU. This step requires recompiling QEMU
after modifying the options passed to the QEMU configure
script. For a complete list of up-to-date options simply run
<command>./configure --help</command> from within the QEMU
source directory. Decide what is needed for your deployment,
and disable the remaining options.</para>
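As a sketch, a minimal build for a deployment that only serves x86_64 guests and exposes no graphical devices might be configured as follows. Flag availability varies between QEMU versions, so confirm each option against ./configure --help before relying on it:

```shell
# Hypothetical minimal build: only the x86_64 system emulator, with
# graphical front ends disabled.
./configure --target-list=x86_64-softmmu \
            --disable-sdl \
            --disable-gtk \
            --disable-vnc
make
```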
</section>
<section xml:id="hardening-the-virtualization-layers-virtual-hardware-qemu-compiler-hardening">
<title>Compiler hardening</title>
<para>
The next step is to harden QEMU using compiler hardening
options. Modern compilers provide a variety of compile time
options to improve the security of the resulting
binaries. These features, which we will describe in more
detail below, include relocation read-only (RELRO), stack
canaries, never execute (NX), position independent
executable (PIE), and address space layout randomization
(ASLR).</para>
<para>
Many modern Linux distributions already build QEMU with
compiler hardening enabled, so you may want to verify your
existing executable before proceeding with the information
below. One tool that can assist you with this verification
is called <link
xlink:href="http://www.trapkit.de/tools/checksec.html"><literal>checksec.sh</literal></link>.</para>
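For example, checksec.sh can report on each of the hardening features for an already-installed binary; the path below is an example and varies by distribution:

```shell
# Report hardening features of the distribution QEMU binary:
checksec.sh --file /usr/bin/qemu-system-x86_64
# Look for "Full RELRO", "Canary found", "NX enabled", and
# "PIE enabled" in the output.
```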
<variablelist>
<varlistentry>
<term>RELocation Read-Only (RELRO)</term>
<listitem>
<para>
Hardens the data sections of an executable. Both full
and partial RELRO modes are supported by gcc. For QEMU
full RELRO is your best choice. This will make the
global offset table read-only and place various
internal data sections before the program data section
in the resulting executable.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Stack canaries</term>
<listitem>
<para>
Places values on the stack and verifies their presence
to help prevent buffer overflow attacks.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Never eXecute (NX)</term>
<listitem>
<para>
Also known as Data Execution Prevention (DEP), ensures
that data sections of the executable can not be
executed.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Position Independent Executable (PIE)</term>
<listitem>
<para>
Produces a position independent executable, which is
necessary for ASLR.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Address Space Layout Randomization (ASLR)</term>
<listitem>
<para>
This ensures that placement of both code and data
regions will be randomized. Enabled by the kernel (all
modern Linux kernels support ASLR), when the executable
is built with PIE.</para>
</listitem>
</varlistentry>
</variablelist>
<para>
The following compiler options are recommended for GCC when
compiling QEMU:
</para>
<programlisting>CFLAGS="-arch x86_64 -fstack-protector-all -Wstack-protector \
--param ssp-buffer-size=4 -pie -fPIE -ftrapv -D_FORTIFY_SOURCE=2 -O2 \
-Wl,-z,relro,-z,now"</programlisting>
<para>
We recommend testing your QEMU executable file after it is
compiled to ensure that the compiler hardening worked
properly.</para>
<para>
Most cloud deployments will not want to build software such
as QEMU by hand. It is better to use packaging to ensure
that the process is repeatable and to ensure that the end
result can be easily deployed throughout the cloud. The
references below provide some additional details on applying
compiler hardening options to existing packages.</para>
<itemizedlist>
<listitem>
<para>DEB packages: <link xlink:href="http://wiki.debian.org/HardeningWalkthrough">Hardening Walkthrough</link></para>
</listitem>
<listitem>
<para>RPM packages: <link xlink:href="http://fedoraproject.org/wiki/How_to_create_an_RPM_package">How to create an RPM package</link></para>
</listitem>
</itemizedlist>
</section>
<section xml:id="hardening-the-virtualization-layers-virtual-hardware-qemu-mandatory-access-controls">
<title>Mandatory access controls</title>
<para>
Compiler hardening makes it more difficult to attack the
QEMU process. However, if an attacker does succeed, we would
like to limit the impact of the attack. Mandatory access
controls accomplish this by restricting the privileges of the
QEMU process to only what is needed. This can be
accomplished using sVirt / SELinux or AppArmor. When using
sVirt, SELinux is configured to run each QEMU process under
a separate security context. AppArmor can be configured to
provide similar functionality. We provide more details on
sVirt and instance isolation in the <link
linkend="hardening-the-virtualization-layers-svirt-selinux-and-virtualization">
section below</link>.</para>
</section>
</section>
<section xml:id="hardening-the-virtualization-layers-svirt-selinux-and-virtualization">
<title>sVirt: SELinux and virtualization</title>
<para>
With unique kernel-level architecture and National Security
Agency (NSA) developed security mechanisms, KVM provides
foundational isolation technologies for multi-tenancy. With
developmental origins dating back to 2002, the Secure
Virtualization (sVirt) technology is the application of
SELinux against modern day virtualization. SELinux, which was
designed to apply separation control based upon labels, has
been extended to provide isolation between virtual machine
processes, devices, data files and system processes acting
upon their behalf.</para>
<para>
OpenStack's sVirt implementation aspires to protect hypervisor
hosts and virtual machines against two primary threat
vectors:
</para>
<variablelist>
<varlistentry>
<term>Hypervisor threats</term>
<listitem>
<para>
A compromised application running within a virtual
machine attacks the hypervisor to access underlying
resources. For example, when a virtual machine is able
to access the hypervisor OS, physical devices, or other
applications. This threat vector represents considerable
risk as a compromise on a hypervisor can infect the
physical hardware as well as exposing other virtual
machines and network segments.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Virtual Machine (multi-tenant) threats</term>
<listitem>
<para>
A compromised application running within a VM attacks
the hypervisor to access or control another virtual
machine and its resources. This is a threat vector
unique to virtualization and represents considerable
risk as a multitude of virtual machine file images
could be compromised due to vulnerability in a single
application. This virtual network attack is a major
concern as the administrative techniques for
protecting real networks do not directly apply to the
virtual environment.
</para>
</listitem>
</varlistentry>
</variablelist>
<para>
Each KVM-based virtual machine is a process which is labeled
by SELinux, effectively establishing a security boundary
around each virtual machine. This security boundary is
monitored and enforced by the Linux kernel, restricting the
virtual machine's access to resources outside of its boundary
such as host machine data files or other VMs.</para>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="583" contentwidth="1135"
fileref="static/sVirt Diagram 1.png" format="PNG"
scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/sVirt Diagram 1.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<para>
As shown above, sVirt isolation is provided regardless of the
guest Operating System running inside the virtual
machine&mdash;Linux or Windows VMs can be used. Additionally,
many Linux distributions provide SELinux within the operating
system, allowing the virtual machine to protect internal
virtual resources from threats.
</para>
<section xml:id="hardening-the-virtualization-layers-svirt-selinux-and-virtualization-labels-and-categories">
<title>Labels and categories</title>
<para>
KVM-based virtual machine instances are labelled with their
own SELinux data type, known as svirt_image_t. Kernel level
protections prevent unauthorized system processes, such as
malware, from manipulating the virtual machine image files
on disk. When virtual machines are powered off, images are
stored as svirt_image_t as shown below:</para>
<programlisting>system_u:object_r:svirt_image_t:SystemLow image1
system_u:object_r:svirt_image_t:SystemLow image2
system_u:object_r:svirt_image_t:SystemLow image3
system_u:object_r:svirt_image_t:SystemLow image4</programlisting>
<para>
The <literal>svirt_image_t</literal> label uniquely
identifies image files on disk, allowing for the SELinux
policy to restrict access. When a KVM-based Compute image is
powered on, sVirt appends a random numerical identifier to the image.
sVirt is capable of assigning numeric identifiers to a maximum of
524,288 virtual machines per hypervisor node, however most OpenStack
deployments are highly unlikely to encounter this limitation.</para>
<para>This example shows the sVirt category identifier:</para>
<programlisting>system_u:object_r:svirt_image_t:s0:c87,c520 image1
system_u:object_r:svirt_image_t:s0:c419,c172 image2</programlisting>
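On a running compute node, these labels can be inspected directly. The paths below are typical nova/libvirt locations and may differ in your deployment:

```shell
# Each running qemu process carries its own svirt security context
# and unique MCS category pair:
ps -eZ | grep qemu

# Backing disk files are labelled svirt_image_t (the path shown is
# the default nova instances directory):
ls -lZ /var/lib/nova/instances/*/disk
```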
</section>
<section xml:id="hardening-the-virtualization-selinux-users-roles">
<title>SELinux users and roles</title>
<para>
SELinux can also manage user roles. These can be viewed
through the <literal>-Z</literal> flag, or with the
<command>semanage</command> command. On the hypervisor, only
administrators should be able to access the system, and should
have an appropriate context around both the administrative
users and any other users that are on the system.
</para>
<itemizedlist>
<listitem>
<para>
SELinux users documentation:
<link xlink:href="http://selinuxproject.org/page/BasicConcepts#Users">SELinux.org Users and Roles Overview</link>
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="hardening-the-virtualization-layers-svirt-selinux-and-virtualization-booleans">
<title>Booleans</title>
<para>
To ease the administrative burden of managing SELinux, many
enterprise Linux platforms utilize SELinux Booleans to
quickly change the security posture of sVirt.</para>
<para>
Red Hat Enterprise Linux-based KVM deployments utilize the
following sVirt booleans:</para>
<informaltable rules="all" width="80%"><colgroup><col/><col/></colgroup>
<thead>
<tr>
<td><para><emphasis role="bold">sVirt SELinux Boolean</emphasis></para></td>
<td><para><emphasis role="bold">Description</emphasis></para></td>
</tr>
</thead>
<tbody>
<tr>
<td><para>virt_use_common</para></td>
<td><para>Allow virt to use serial/parallel communication ports.</para></td>
</tr>
<tr>
<td><para>virt_use_fusefs</para></td>
<td><para>Allow virt to read FUSE mounted files.</para></td>
</tr>
<tr>
<td><para>virt_use_nfs</para></td>
<td><para>Allow virt to manage NFS mounted files.</para></td>
</tr>
<tr>
<td><para>virt_use_samba</para></td>
<td><para>Allow virt to manage CIFS mounted files.</para></td>
</tr>
<tr>
<td><para>virt_use_sanlock</para></td>
<td><para>Allow confined virtual guests to interact with the sanlock.</para></td>
</tr>
<tr>
<td><para>virt_use_sysfs</para></td>
<td><para>Allow virt to manage device configuration (PCI).</para></td>
</tr>
<tr>
<td><para>virt_use_usb</para></td>
<td><para>Allow virt to use USB devices.</para></td>
</tr>
<tr>
<td><para>virt_use_xserver</para></td>
<td><para>Allow virtual machine to interact with the X Window System.</para></td>
</tr>
</tbody>
</informaltable>
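The booleans in the table can be reviewed and toggled with the standard SELinux tools; only enable the booleans your deployment actually requires. For example, to permit NFS-backed instance storage:

```shell
# Show the current sVirt-related boolean settings:
getsebool -a | grep ^virt_use

# Persistently (-P) allow virt to manage NFS mounted files:
setsebool -P virt_use_nfs on
```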
</section>
</section>
</section>


@@ -1,725 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="hypervisor-selection">
<?dbhtml stop-chunking?>
<title>Hypervisor selection</title>
<section xml:id="hypervisor-selection-hypervisors-in-openstack">
<title>Hypervisors in OpenStack</title>
<para>Whether OpenStack is deployed within private data centers or
as a public cloud service, the underlying virtualization
technology provides enterprise-level capabilities in the realms
of scalability, resource efficiency, and uptime. While such
high-level benefits are generally available across many
OpenStack-supported hypervisor technologies, there are
significant differences in the security architecture and
features for each hypervisor, particularly when considering the
security threat vectors which are unique to elastic OpenStack
environments. As applications consolidate into single
Infrastructure-as-a-Service (IaaS) platforms, instance isolation
at the hypervisor level becomes paramount. The requirement for
secure isolation holds true across commercial, government, and
military communities.</para>
<para>Within the OpenStack framework, you can choose among many
hypervisor platforms and corresponding OpenStack plug-ins to
optimize your cloud environment. In the context of this guide,
hypervisor selection considerations are highlighted as they
pertain to feature sets that are critical to security. However,
these considerations are not meant to be an exhaustive
investigation into the pros and cons of particular hypervisors.
NIST provides additional guidance in Special Publication
800-125, "<emphasis>Guide to Security for Full Virtualization
Technologies</emphasis>".</para>
</section>
<section xml:id="hypervisor-selection-selection-criteria">
<title>Selection criteria</title>
<para>As part of your hypervisor selection process, you must
consider a number of important factors to help increase your
security posture. Specifically, you must become familiar with
these areas:</para>
<itemizedlist>
<listitem>
<para>Team expertise</para>
</listitem>
<listitem>
<para>Product or project maturity</para>
</listitem>
<listitem>
<para>Common criteria</para>
</listitem>
<listitem>
<para>Certifications and attestations</para>
</listitem>
<listitem>
<para>Hardware concerns</para>
</listitem>
<listitem>
<para>Hypervisor vs. baremetal</para>
</listitem>
<listitem>
<para>Additional security features</para>
</listitem>
</itemizedlist>
<para>Additionally, the following security-related criteria are
highly encouraged to be evaluated when selecting a hypervisor
for OpenStack deployments:</para>
<itemizedlist>
<listitem>
<para>Has the hypervisor undergone Common Criteria
certification? If so, to what levels?</para>
</listitem>
<listitem>
<para>Is the underlying cryptography certified by a
third-party?</para>
</listitem>
</itemizedlist>
<section xml:id="team_expertise">
<title>Team expertise</title>
<para>Most likely, the most important aspect in hypervisor
selection is the expertise of your staff in managing and
maintaining a particular hypervisor platform. The more
familiar your team is with a given product, its configuration,
and its eccentricities, the fewer the configuration mistakes.
Additionally, having staff expertise spread across an
organization on a given hypervisor increases availability of
your systems, allows segregation of duties, and mitigates
problems in the event that a team member is
unavailable.</para>
</section>
<section xml:id="hypervisor-selection-selection-criteria-product-or-project-maturity">
<title>Product or project maturity</title>
<para>The maturity of a given hypervisor product or project is
critical to your security posture as well. Product maturity
has a number of effects once you have deployed your
cloud:</para>
<itemizedlist>
<listitem>
<para>Availability of expertise</para>
</listitem>
<listitem>
<para>Active developer and user communities</para>
</listitem>
<listitem>
<para>Timeliness and availability of updates</para>
</listitem>
<listitem>
<para>Incident response</para>
</listitem>
</itemizedlist>
<para>One of the biggest indicators of a hypervisor's maturity
is the size and vibrancy of the community that surrounds it.
As this concerns security, the quality of the community
affects the availability of expertise if you need additional
            cloud operators. It is also a sign of how widely the hypervisor
            is deployed, which in turn indicates the maturity and
            battle-readiness of its reference architectures and best
            practices.</para>
<para>Further, the quality of community, as it surrounds an open
source hypervisor like KVM or Xen, has a direct impact on the
timeliness of bug fixes and security updates. When
investigating both commercial and open source hypervisors, you
must look into their release and support cycles as well as the
time delta between the announcement of a bug or security issue
and a patch or response. Lastly, the supported capabilities of
OpenStack compute vary depending on the hypervisor chosen. See
the <link
xlink:href="https://wiki.openstack.org/wiki/HypervisorSupportMatrix"
>OpenStack Hypervisor Support Matrix</link> for OpenStack
compute feature support by hypervisor.</para>
</section>
<section xml:id="hypervisor-selection-selection-criteria-certifications-and-attestations">
<title>Certifications and attestations</title>
<para>One additional consideration when selecting a hypervisor
is the availability of various formal certifications and
attestations. While they may not be requirements for your
specific organization, these certifications and attestations
speak to the maturity, production readiness, and thoroughness
of the testing a particular hypervisor platform has been
subjected to.</para>
</section>
<section xml:id="hypervisor-selection-selection-criteria-common-criteria">
<title>Common criteria</title>
          <para>Common Criteria is an internationally standardized
            software evaluation process, used by governments and
            commercial companies to validate that software technologies
            perform as advertised. In the government sector, NSTISSP No. 11
mandates that U.S. Government agencies only procure software
which has been Common Criteria certified, a policy which has
            been in place since July 2002. Note that OpenStack itself has
            not undergone Common Criteria certification; however, many of
            the available hypervisors have.</para>
          <para>In addition to validating a technology's capabilities, the
            Common Criteria process evaluates <emphasis>how</emphasis>
            technologies are developed.</para>
<itemizedlist>
<listitem>
<para>How is source code management performed?</para>
</listitem>
<listitem>
<para>How are users granted access to build systems?</para>
</listitem>
<listitem>
<para>Is the technology cryptographically signed before
distribution?</para>
</listitem>
</itemizedlist>
<para>The KVM hypervisor has been Common Criteria certified
through the U.S. Government and commercial distributions,
which have been validated to separate the runtime environment
of virtual machines from each other, providing foundational
technology to enforce instance isolation. In addition to
virtual machine isolation, KVM has been Common Criteria
certified to</para>
<blockquote>
<para>"<emphasis>provide system-inherent separation mechanisms
to the resources of virtual machines. This separation
ensures that large software component used for
virtualizing and simulating devices executing for each
virtual machine cannot interfere with each other. Using
the SELinux multi-category mechanism, the virtualization
and simulation software instances are isolated. The
virtual machine management framework configures SELinux
multi-category settings transparently to the
administrator</emphasis>"</para>
</blockquote>
          <para>While many hypervisor vendors, such as Red Hat, Microsoft,
            and VMware, have achieved Common Criteria certification, their
            underlying certified feature sets differ. It is recommended to
            evaluate vendor claims to ensure they minimally satisfy the
            following requirements:</para>
<informaltable rules="all" width="80%">
<colgroup>
<col/>
<col/>
</colgroup>
<tbody>
<tr>
<td><para>Identification and Authentication</para></td>
<td><para>Identification and authentication using
pluggable authentication modules (PAM) based upon user
passwords. The quality of the passwords used can be
enforced through configuration options.</para></td>
</tr>
<tr>
<td><para>Audit</para></td>
<td><para>The system provides the capability to audit a
large number of events including individual system
calls as well as events generated by trusted
processes. Audit data is collected in regular files in
ASCII format. The system provides a program for the
purpose of searching the audit
records.</para><para>The system administrator can
define a rule base to restrict auditing to the events
they are interested in. This includes the ability to
restrict auditing to specific events, specific users,
specific objects or a combination of all of
this.</para><para>Audit records can be transferred to
a remote audit daemon.</para></td>
</tr>
<tr>
<td><para>Discretionary Access Control</para></td>
<td>
<para>Discretionary Access Control
(<glossterm>DAC</glossterm>) restricts access to
file system objects based on <glossterm
baseform="access control list">Access Control
Lists</glossterm> (ACLs) that include the standard
UNIX permissions for user, group and others. Access
control mechanisms also protect IPC objects from
unauthorized access.</para>
<para>The system includes the ext4 file system, which
supports POSIX ACLs. This allows defining access
rights to files within this type of file system down
to the granularity of a single user.</para>
</td>
</tr>
<tr>
<td><para>Mandatory Access Control</para></td>
<td><para>Mandatory Access Control (MAC) restricts access
to objects based on labels assigned to subjects and
objects. Sensitivity labels are automatically attached
to processes and objects. The access control policy
enforced using these labels is derived from the
<glossterm baseform="Bell-LaPadula model">Bell-LaPadula
access control model</glossterm>.</para><para>SELinux
categories are attached to virtual machines and its
resources. The access control policy enforced using
these categories grant virtual machines access to
resources if the category of the virtual machine is
identical to the category of the accessed
resource.</para><para>The TOE implements
non-hierarchical categories to control access to
virtual machines.</para></td>
</tr>
<tr>
<td><para>Role-Based Access Control</para></td>
<td><para>Role-based access control (RBAC) allows
separation of roles to eliminate the need for an
all-powerful system administrator.</para></td>
</tr>
<tr>
<td><para>Object Reuse</para></td>
<td><para>File system objects and memory and IPC objects
are cleared before they can be reused by a process
belonging to a different user.</para></td>
</tr>
<tr>
<td><para>Security Management</para></td>
<td><para>The management of the security critical
parameters of the system is performed by
administrative users. A set of commands that require
root privileges (or specific roles when RBAC is used)
are used for system management. Security parameters
are stored in specific files that are protected by the
access control mechanisms of the system against
unauthorized access by users that are not
administrative users.</para></td>
</tr>
<tr>
<td><para>Secure Communication</para></td>
<td><para>The system supports the definition of trusted
channels using SSH. Password based authentication is
supported. Only a restricted number of cipher suites
are supported for those protocols in the evaluated
configuration.</para></td>
</tr>
<tr>
<td><para>Storage Encryption</para></td>
<td><para>The system supports encrypted block devices to
provide storage confidentiality via
dm_crypt.</para></td>
</tr>
<tr>
<td><para>TSF Protection</para></td>
<td><para>While in operation, the kernel software and data
are protected by the hardware memory protection
mechanisms. The memory and process management
components of the kernel ensure a user process cannot
access kernel storage or storage belonging to other
processes.</para><para>Non-kernel TSF software and
data are protected by DAC and process isolation
mechanisms. In the evaluated configuration, the
reserved user ID root owns the directories and files
that define the TSF configuration. In general, files
and directories containing internal TSF data, such as
configuration files and batch job queues, are also
protected from reading by DAC
permissions.</para><para>The system and the hardware
and firmware components are required to be physically
protected from unauthorized access. The system kernel
mediates all access to the hardware mechanisms
themselves, other than program visible CPU instruction
functions.</para><para>In addition, mechanisms for
protection against stack overflow attacks are
provided.</para></td>
</tr>
</tbody>
</informaltable>
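As a concrete illustration of the audit capability described above, rules such as the following could be loaded into auditd on a Linux hypervisor host. This is a sketch only: the watched paths and filter key names are assumptions for a hypothetical deployment, not part of any certified configuration.

```shell
# Write an illustrative auditd rule set; on a real host these lines would
# live under /etc/audit/rules.d/ and be loaded with augenrules or auditctl.
cat > /tmp/openstack-audit.rules <<'EOF'
# Watch keystone configuration files for writes and attribute changes
-w /etc/keystone/ -p wa -k keystone-config
# Restrict auditing to a specific event (execve) for a specific user (euid 0)
-a always,exit -F arch=b64 -S execve -F euid=0 -k root-commands
EOF
wc -l < /tmp/openstack-audit.rules    # prints 4
```

The `-k` keys tag matching records so that `ausearch -k keystone-config` can later filter the audit trail, matching the requirement that administrators can restrict review to the events they are interested in.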
</section>
<section xml:id="hypervisor-selection-selection-criteria-cryptography-standards">
<title>Cryptography standards</title>
          <para>Several cryptographic algorithms are available within
            OpenStack for identification and authorization, data transfer,
            and protection of data at rest. When selecting a hypervisor,
            ensure that the virtualization layer supports the following
            recommended algorithms and implementation standards:</para>
<informaltable rules="all" width="80%">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
<th>Algorithm</th>
<th>Key length</th>
<th>Intended purpose</th>
<th>Security function</th>
<th>Implementation standard</th>
</tr>
</thead>
<tbody>
<tr>
<td>AES</td>
<td>128, 192, or 256 bits</td>
<td>Encryption / decryption</td>
<td>Protected data transfer, protection for data at
rest</td>
<td><link
xlink:href="http://www.ietf.org/rfc/rfc4253.txt"
>RFC 4253</link></td>
</tr>
<tr>
<td>TDES</td>
<td>168 bits</td>
<td>Encryption / decryption</td>
<td>Protected data transfer</td>
<td><link
xlink:href="http://www.ietf.org/rfc/rfc4253.txt"
>RFC 4253</link></td>
</tr>
<tr>
<td>RSA</td>
<td>1024, 2048, or 3072 bits</td>
<td>Authentication, key exchange</td>
<td>Identification and authentication, protected
data transfer</td>
<td><link
xlink:href="http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf"
>U.S. NIST FIPS PUB 186-3</link></td>
</tr>
<tr>
<td>DSA</td>
<td>L=1024, N=160 bits</td>
<td>Authentication, key exchange</td>
<td>Identification and authentication, protected
data transfer</td>
<td><link
xlink:href="http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf"
>U.S. NIST FIPS PUB 186-3</link></td>
</tr>
<tr>
<td>Serpent</td>
<td>128, 192, or 256 bits</td>
<td>Encryption / decryption</td>
<td>Protection of data at rest</td>
<td><link
xlink:href="http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf"
>http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf</link></td>
</tr>
<tr>
<td>Twofish</td>
<td>128, 192, or 256 bit</td>
<td>Encryption / decryption</td>
<td>Protection of data at rest</td>
<td><link
xlink:href="http://www.schneier.com/paper-twofish-paper.html"
>http://www.schneier.com/paper-twofish-paper.html</link></td>
</tr>
<tr>
<td>SHA-1</td>
<td>-</td>
<td>Message Digest</td>
<td>Protection of data at rest, protected data
transfer</td>
<td><link
xlink:href="http://csrc.nist.gov/publications/fips/fips180-3/fips180-3_final.pdf"
>U.S. NIST FIPS PUB 180-3</link></td>
</tr>
<tr>
<td>SHA-2 (224, 256, 384, or 512 bits)</td>
<td>-</td>
<td>Message Digest</td>
<td>Protection for data at rest, identification and
authentication</td>
<td><link
xlink:href="http://csrc.nist.gov/publications/fips/fips180-3/fips180-3_final.pdf"
>U.S. NIST FIPS PUB 180-3</link></td>
</tr>
</tbody>
</informaltable>
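The digest-length difference between SHA-1 and the SHA-2 family in the table can be observed directly with standard coreutils (an illustrative sketch; `sha1sum` and `sha256sum` are assumed to be installed, as they are on most Linux systems):

```shell
# Compare digest lengths across the SHA-1 and SHA-2 families; a longer
# digest provides greater collision resistance.
msg="hello"
sha1=$(printf '%s' "$msg" | sha1sum | cut -d' ' -f1)
sha256=$(printf '%s' "$msg" | sha256sum | cut -d' ' -f1)
echo "SHA-1:   $sha1 (${#sha1} hex chars = 160 bits)"
echo "SHA-256: $sha256 (${#sha256} hex chars = 256 bits)"
```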
<section xml:id="hypervisor-selection-selection-criteria-cryptography-standards-fips-140-2">
<title>FIPS 140-2</title>
            <para>In the United States, the National Institute of Standards
              and Technology (NIST) certifies cryptographic algorithms
              through a process known as the Cryptographic Module
              Validation Program (CMVP). NIST certifies algorithms for
              conformance against Federal Information Processing Standard
              140-2 (FIPS 140-2), which ensures:</para>
<blockquote>
<para><emphasis>Products validated as conforming to FIPS
140-2 are accepted by the Federal agencies of both
countries [United States and Canada] for the protection
of sensitive information (United States) or Designated
Information (Canada). The goal of the CMVP is to promote
the use of validated cryptographic modules and provide
Federal agencies with a security metric to use in
procuring equipment containing validated cryptographic
modules.</emphasis></para>
</blockquote>
<para>When evaluating base hypervisor technologies, consider
if the hypervisor has been certified against FIPS 140-2. Not
only is conformance against FIPS 140-2 mandated per U.S.
Government policy, formal certification indicates that a
given implementation of a cryptographic algorithm has been
reviewed for conformance against module specification,
cryptographic module ports and interfaces; roles, services,
and authentication; finite state model; physical security;
operational environment; cryptographic key management;
electromagnetic interference/electromagnetic compatibility
(EMI/EMC); self-tests; design assurance; and mitigation of
other attacks.</para>
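On Linux hosts, whether the running kernel is in FIPS mode can be checked through a procfs flag (a sketch; the path is standard on most distributions but may be absent on kernels built without FIPS support):

```shell
# Report the kernel FIPS mode flag if the interface exists
# (1 = FIPS mode enabled, 0 = disabled).
FIPS_FLAG=/proc/sys/crypto/fips_enabled
if [ -r "$FIPS_FLAG" ]; then
    echo "FIPS mode: $(cat "$FIPS_FLAG")"
else
    echo "FIPS status interface not present on this system"
fi
```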
</section>
</section>
<section xml:id="hypervisor-selection-selection-criteria-hardware-concerns">
<title>Hardware concerns</title>
<para>Further, when you evaluate a hypervisor platform, consider
the supportability of the hardware on which the hypervisor
will run. Additionally, consider the additional features
available in the hardware and how those features are supported
by the hypervisor you chose as part of the OpenStack
deployment. To that end, hypervisors each have their own
hardware compatibility lists (HCLs). When selecting compatible
hardware it is important to know in advance which
hardware-based virtualization technologies are important from
a security perspective.</para>
<informaltable rules="all" width="80%">
<colgroup>
<col/>
<col/>
<col/>
</colgroup>
<thead>
<tr>
            <th>Description</th>
            <th>Technology</th>
            <th>Explanation</th>
</tr>
</thead>
<tbody>
<tr>
<td>I/O MMU</td>
<td>VT-d / AMD-Vi</td>
<td>Required for protecting
PCI-passthrough</td>
</tr>
<tr>
<td>Intel Trusted Execution Technology</td>
<td>Intel TXT / SEM</td>
<td>Required for dynamic attestation
services</td>
</tr>
<tr>
<td><anchor
xml:id="PCI-SIG_I.2FO_virtualization_.28IOV.29"
/>PCI-SIG I/O virtualization</td>
<td>SR-IOV, MR-IOV, ATS</td>
<td>Required to allow secure sharing of PCI Express
devices</td>
</tr>
<tr>
<td>Network virtualization</td>
<td>VT-c</td>
<td>Improves performance of network I/O on
hypervisors</td>
</tr>
</tbody>
</informaltable>
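Whether a candidate compute node exposes hardware virtualization extensions at all can be checked from its CPU flags (an illustrative sketch; on a real host you would pass `/proc/cpuinfo`, and I/O MMU support additionally depends on chipset and firmware settings, which this check does not cover):

```shell
# Return success if a cpuinfo-style file advertises Intel VT-x (vmx)
# or AMD-V (svm) CPU flags.
has_hw_virt() {
    grep -qwE 'vmx|svm' "$1"
}

if [ -r /proc/cpuinfo ] && has_hw_virt /proc/cpuinfo; then
    echo "hardware virtualization extensions present"
else
    echo "no vmx/svm flags found (or /proc/cpuinfo unavailable)"
fi
```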
</section>
<section xml:id="hypervisor-selection-selection-criteria-hypervisor-vs-baremetal">
<title>Hypervisor vs. baremetal</title>
<para>It is important to recognize the difference between using
            LXC (Linux Containers) or baremetal systems versus using a
hypervisor like KVM. Specifically, the focus of this security
guide is largely based on having a hypervisor and
virtualization platform. However, should your implementation
require the use of a baremetal or LXC environment, you must
pay attention to the particular differences in regard to
deployment of that environment.</para>
<para>In particular, you must assure your end users that the
node has been properly sanitized of their data prior to
            re-provisioning. Additionally, prior to reusing a node, you
            must provide assurances that the hardware has not been
            tampered with or otherwise compromised.</para>
<note>
<para>While OpenStack has a baremetal project, a discussion of
the particular security implications of running baremetal is
beyond the scope of this book.</para>
</note>
<para>Finally, due to the time constraints around a book sprint,
the team chose to use KVM as the hypervisor in our example
implementations and architectures.</para>
<note>
<para>There is an OpenStack Security Note pertaining to the
<link
xlink:href="https://bugs.launchpad.net/ossn/+bug/1098582"
>use of LXC in Compute</link>.</para>
</note>
</section>
<section xml:id="hypervisor-selection-selection-criteria-hypervisor-memory-optimization">
<title>Hypervisor memory optimization</title>
<para>Many hypervisors use memory optimization techniques to
overcommit memory to guest virtual machines. This is a useful
feature that allows you to deploy very dense compute clusters.
One way to achieve this is through de-duplication or "sharing"
of memory pages. When two virtual machines have identical data
in memory, there are advantages to having them reference the
same memory.</para>
<para>Typically this is achieved through Copy-On-Write (COW)
mechanisms. These mechanisms have been shown to be vulnerable
to side-channel attacks where one VM can infer something about
the state of another and might not be appropriate for
multi-tenant environments where not all tenants are trusted or
share the same levels of trust.</para>
</section>
<section xml:id="hypervisor-selection-selection-criteria-kvm-kernel-samepage-merging">
<title>KVM Kernel Samepage Merging</title>
<para>Introduced into the Linux kernel in version 2.6.32, Kernel
Samepage Merging (KSM) consolidates identical memory pages
between Linux processes. As each guest VM under the KVM
hypervisor runs in its own process, KSM can be used to
optimize memory use between VMs.</para>
</section>
<section xml:id="hypervisor-selection-selection-criteria-xen-transparent-page-sharing">
          <title>Xen transparent page sharing</title>
<para>XenServer 5.6 includes a memory overcommitment feature
named Transparent Page Sharing (TPS). TPS scans memory in 4 KB
chunks for any duplicates. When found, the Xen Virtual Machine
Monitor (VMM) discards one of the duplicates and records the
reference of the second one.</para>
</section>
<section xml:id="hypervisor-selection-selection-criteria-security-considerations-for-memory-optimization">
<title>Security considerations for memory optimization</title>
<para>Traditionally, memory de-duplication systems are
            vulnerable to side-channel attacks. Both KSM and TPS have been
            demonstrated to be vulnerable to some form of attack. In
academic studies attackers were able to identify software packages
and versions running on neighboring virtual machines as well
as software downloads and other sensitive information through
analyzing memory access times on the attacker VM.</para>
<para>If a cloud deployment requires strong separation of
tenants, as is the situation with public clouds and some
private clouds, deployers should consider disabling TPS and
KSM memory optimizations.</para>
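On a KVM compute node, KSM can be disabled at runtime through sysfs (a sketch: it requires root, and the control file is only present on kernels built with KSM support):

```shell
# Stop KSM and un-merge any already-shared pages (writing 2 does both;
# writing 0 would merely stop further merging while leaving shared pages).
KSM_RUN=/sys/kernel/mm/ksm/run
if [ -w "$KSM_RUN" ]; then
    echo 2 > "$KSM_RUN"
    echo "KSM stopped and shared pages un-merged"
else
    echo "KSM control file not writable (not root, or KSM not built in)"
fi
```

To make the setting persistent, the same write would typically be issued from a boot-time unit or the ksmtuned service configuration.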
</section>
<section xml:id="hypervisor-selection-selection-criteria-additional-security-features">
<title>Additional security features</title>
          <para>Another consideration when selecting a hypervisor
            platform is the availability of specific security features,
            such as XenServer's Xen Security Modules (XSM), sVirt, Intel
            TXT, and AppArmor. The presence of these features increases
            your security profile and provides a good foundation.</para>
<para>The following table calls out these features by common
hypervisor platforms.</para>
<informaltable rules="all" width="80%">
<colgroup>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
<col/>
</colgroup>
<tbody>
<tr>
<td><para/></td>
<td><para>XSM</para></td>
<td><para>sVirt</para></td>
<td><para>TXT</para></td>
<td><para>AppArmor</para></td>
<td><para>cgroups</para></td>
<td><para>MAC Policy</para></td>
</tr>
<tr>
<td><para>KVM</para></td>
<td><para/></td>
<td><para>&CHECK;</para></td>
<td><para>&CHECK;</para></td>
<td><para>&CHECK;</para></td>
<td><para>&CHECK;</para></td>
<td><para>&CHECK;</para></td>
</tr>
<tr>
<td><para>Xen</para></td>
<td><para>&CHECK;</para></td>
<td><para/></td>
<td><para>&CHECK;</para></td>
<td><para/></td>
<td><para/></td>
<td><para>&CHECK;</para></td>
</tr>
<tr>
<td><para>ESXi</para></td>
<td><para/></td>
<td><para/></td>
<td><para>&CHECK;</para></td>
<td><para/></td>
<td><para/></td>
<td><para/></td>
</tr>
<tr>
<td><para>Hyper-V</para></td>
<td><para/></td>
<td><para/></td>
<td><para/></td>
<td><para/></td>
<td><para/></td>
<td><para/></td>
</tr>
</tbody>
</informaltable>
          <para>MAC Policy: Mandatory Access Control; may be implemented
            with SELinux or other operating system mechanisms</para>
<para>* Features in this table might not be applicable to all
hypervisors or directly mappable between hypervisors.</para>
</section>
</section>
<section xml:id="hypversisor-selection-references">
<title>Bibliography</title>
<itemizedlist>
<listitem>
<para>
          Irazoqui Apecechea, Inci, Eisenbarth, and Sunar. Fine Grain
          Cross-VM Attacks on Xen and VMware are possible!. 2014. <link xlink:href="https://eprint.iacr.org/2014/248.pdf"
          >https://eprint.iacr.org/2014/248.pdf</link>
        </para>
</listitem>
<listitem>
<para>
          Suzaki, Iijima, Yagi, and Artho. Memory Deduplication as
          a Threat to the Guest OS. 2011. <link xlink:href="https://staff.aist.go.jp/c.artho/papers/EuroSec2011-suzaki.pdf"
          >https://staff.aist.go.jp/c.artho/papers/EuroSec2011-suzaki.pdf</link>
</para>
</listitem>
<listitem>
        <para>KVM: Kernel-based Virtual Machine. Kernel Samepage Merging. 2010. <link xlink:href="http://www.linux-kvm.org/page/KSM">KVM:
          Kernel Samepage Merging</link></para>
</listitem>
<listitem>
<para>Xen Project, Xen Security Modules: XSM-FLASK. 2014. <link
xlink:href="http://wiki.xen.org/wiki/Xen_Security_Modules_:_XSM-FLASK"
>XSM: Xen Security Modules</link></para>
</listitem>
<listitem>
<para>SELinux Project, SVirt. 2011. <link xlink:href="http://selinuxproject.org/page/SVirt"
          >sVirt: Mandatory Access Control for Linux-based
virtualization</link></para>
</listitem>
<listitem>
<para>Intel.com, Trusted Compute Pools with Intel Trusted
Execution Technology (Intel TXT). <link xlink:href="http://www.intel.com/txt">TXT: Intel
Trusted Execution Technology</link></para>
</listitem>
<listitem>
<para>AppArmor.net, AppArmor Main Page. 2011. <link xlink:href="http://wiki.apparmor.net/index.php/Main_Page"
>AppArmor: Linux security module implementing
MAC</link></para>
</listitem>
<listitem>
        <para>Kernel.org, cgroups. 2004. <link xlink:href="https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt"
>cgroups: Linux kernel feature to control resource
usage</link></para>
</listitem>
<listitem>
<para>Computer Security Resource Centre. Guide to Security for
Full Virtualization Technologies. 2011. <link xlink:href="http://csrc.nist.gov/publications/nistpubs/800-125/SP800-125-final.pdf"
>Guide to Security for Full Virtualization Technologies</link></para>
</listitem>
<listitem>
<para>National Information Assurance Partnership, National Security
Telecommunications and Information Systems Security Policy. 2003. <link xlink:href="http://www.niap-ccevs.org/cc-scheme/nstissp_11_revised_factsheet.pdf"
>National Security Telecommunications and Information Systems Security Policy No. 11</link></para>
</listitem>
</itemizedlist>
</section>
</section>


@ -1,93 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="identity-authentication-methods">
<?dbhtml stop-chunking?>
<title>Authentication methods</title>
<section xml:id="identity-authentication-methods-internally-implemented-authentication-methods">
<title>Internally implemented authentication methods</title>
<para>The Identity service can store user credentials in an SQL
Database, or may use an LDAP-compliant directory server. The
Identity database may be separate from databases used by other
OpenStack services to reduce the risk of a compromise of the
stored credentials.</para>
<para>
When you use a user name and password to authenticate,
Identity does not enforce policies on password strength,
expiration, or failed authentication attempts as recommended
by NIST Special Publication 800-118 (draft). Organizations
that desire to enforce stronger password policies should
consider using Identity extensions or external authentication
services.</para>
<para>LDAP simplifies integration of Identity authentication
into an organization's existing directory service and user
account management processes.</para>
<para>Authentication and authorization policy in OpenStack may be
delegated to another service. A typical use case is an organization
that seeks to deploy a private cloud and already has a database of employees
and users in an LDAP system. Using this as the authentication authority,
requests to the Identity service are delegated to the LDAP system,
which will then authorize or deny based on its policies. Upon successful authentication,
the Identity service then generates a token that is used for access to
authorized services.</para>
    <para>Note that if the LDAP system has attributes defined for
      the user, such as admin, finance, or HR, these must be mapped
into roles and groups within Identity for use by the various
OpenStack services. The <filename>/etc/keystone/keystone.conf</filename>
file maps LDAP attributes to Identity attributes.</para>
<para>The Identity service <emphasis role="bold">MUST
NOT</emphasis> be allowed to write to LDAP services used for
authentication outside of the OpenStack deployment as this
would allow a sufficiently privileged keystone user to make
changes to the LDAP directory. This would allow privilege
escalation within the wider organization or facilitate
unauthorized access to other information and resources. In
such a deployment, user provisioning would be out of the realm
of the OpenStack deployment.</para>
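A read-only delegation of this kind can be sketched as a keystone.conf fragment. This is illustrative only: the option names follow the Identity LDAP driver, but the URL, suffix, and attribute values are assumptions for a hypothetical directory and must be adapted to your deployment.

```ini
[identity]
driver = ldap

[ldap]
url = ldap://ldap.example.com
suffix = dc=example,dc=com
user_tree_dn = ou=Users,dc=example,dc=com
user_objectclass = inetOrgPerson
user_id_attribute = cn
user_name_attribute = sn
user_mail_attribute = mail

; Keep the Identity service read-only against the corporate directory,
; so a compromised keystone cannot modify the wider organization's LDAP
user_allow_create = false
user_allow_update = false
user_allow_delete = false
```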
<note>
<para>There is an <link
xlink:href="https://bugs.launchpad.net/ossn/+bug/1168252"
>OpenStack Security Note (OSSN) regarding keystone.conf
permissions</link>.</para>
<para>There is an <link
xlink:href="https://bugs.launchpad.net/ossn/+bug/1155566"
>OpenStack Security Note (OSSN) regarding potential DoS
attacks</link>.</para>
</note>
</section>
<section xml:id="identity-authentication-methods-external-authentication-methods">
<title>External authentication methods</title>
<para>Organizations may desire to implement external
authentication for compatibility with existing authentication
services or to enforce stronger authentication policy
requirements. Although passwords are the most common form of
authentication, they can be compromised through numerous
methods, including keystroke logging and password compromise.
External authentication services can provide alternative forms
of authentication that minimize the risk from weak
passwords.</para>
<para>These include:</para>
<itemizedlist>
<listitem>
<para>Password policy enforcement: Requires user passwords
to conform to minimum standards for length, diversity of
characters, expiration, or failed login attempts.</para>
</listitem>
<listitem>
<para>Multi-factor authentication: The authentication
service requires the user to provide information based on
something they have, such as a one-time password token or
X.509 certificate, and something they know, such as a
password.</para>
</listitem>
<listitem>
<para>Kerberos</para>
</listitem>
</itemizedlist>
</section>
</section>


@ -1,69 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="identity-authentication">
<?dbhtml stop-chunking?>
<title>Authentication</title>
<para>Authentication is an integral part of any real world OpenStack
deployment and so careful thought should be given to this aspect
    of system design. A complete treatment of this topic is beyond
    the scope of this guide; however, some key topics are presented
    in the following sections.</para>
<para>At its most basic, authentication is the process of confirming
    identity: that a user is actually who they claim to be. A familiar
example is providing a username and password when logging in to a
system.</para>
<para>The OpenStack Identity service (keystone) supports multiple
methods of authentication, including user name &amp; password,
LDAP, and external authentication methods. Upon successful
    authentication, the Identity service provides the user with an
authorization token used for subsequent service requests.</para>
<para>Transport Layer Security (TLS) provides authentication
between services and persons using X.509 certificates. Although
the default mode for TLS is server-side only authentication,
certificates may also be used for client authentication.</para>
<section xml:id="identity-authentication-invalid-login-attempts">
<title>Invalid login attempts</title>
<para>The Identity service does not provide a method to limit
access to accounts after repeated unsuccessful login attempts.
A pattern of repetitive failed login attempts is generally an
indicator of brute-force attacks (refer to
<xref linkend="security-boundaries-and-threats-attack-types" />).
This type of attack is more prevalent in public cloud deployments.
</para>
    <para>Prevention is possible by using an external authentication
      system that locks out an account after a configured number
      of failed login attempts. The account may then only be
      unlocked with further side-channel intervention.</para>
<para>If prevention is not an option, detection can be used to
mitigate damage. Detection involves frequent review of access
control logs to identify unauthorized attempts to access
accounts. Possible remediation would include reviewing the
strength of the user password, or blocking the network source
of the attack through firewall rules. Firewall rules on the
keystone server that restrict the number of connections could
be used to reduce the attack effectiveness, and thus dissuade
the attacker.</para>
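For example, connection rate-limiting on the keystone server could be expressed as iptables rules (a sketch; port 5000, the 60-second window, and the 20-connection threshold are assumptions and must be tuned per deployment):

```shell
# Generate a rule fragment that drops a source address opening more than
# 20 new connections to the Identity API within 60 seconds; on the
# keystone host the result would be loaded with iptables-restore (as root).
cat > /tmp/keystone-ratelimit.rules <<'EOF'
-A INPUT -p tcp --dport 5000 -m state --state NEW -m recent --set --name keystone
-A INPUT -p tcp --dport 5000 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name keystone -j DROP
EOF
wc -l < /tmp/keystone-ratelimit.rules    # prints 2
```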
<para>In addition, it is useful to examine account activity for
unusual login times and suspicious actions, and take corrective
      actions such as disabling the account. This approach is often
      taken by credit card providers for fraud detection and alerting.
    </para>
</section>
<section xml:id="identity-multi-factor-authentication">
<title>Multi-factor authentication</title>
<para>Employ multi-factor authentication for network access to
privileged user accounts. The Identity service supports
external authentication services through the Apache web server
that can provide this functionality. Servers may also enforce
client-side authentication using certificates.</para>
<para>This recommendation provides insulation from brute force,
social engineering, and both spear and mass phishing attacks
that may compromise administrator passwords.</para>
</section>
</section>


@ -1,101 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="identity-authorization">
<?dbhtml stop-chunking?>
<title>Authorization</title>
<para>The Identity service supports the notion of groups and
roles. Users belong to groups while a group has a list of roles.
OpenStack services reference the roles of the user attempting to
    access the service. The OpenStack policy enforcement middleware
    considers the policy rule associated with each resource,
    together with the user's groups and roles, to determine whether
    access to the requested resource is allowed.</para>
  <para>The policy enforcement middleware enables fine-grained
    access control to OpenStack resources. Only admin users can
    provision new users and access management functionality. Cloud
    users are only able to spin up instances, attach volumes, and
    perform other operational tasks.</para>
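As an illustration, the rule base consulted by the policy enforcement middleware is conventionally a per-service policy.json file. The fragment below is hypothetical: the rule names echo the style of the Compute policy file, but the exact rule set varies by service and release.

```json
{
    "admin_api": "is_admin:True",
    "admin_or_owner": "is_admin:True or project_id:%(project_id)s",
    "default": "rule:admin_or_owner",
    "compute:start": "rule:admin_or_owner",
    "compute:create": "rule:admin_or_owner",
    "compute_extension:admin_actions": "rule:admin_api"
}
```

Here ordinary users can act only on resources in their own project, while rules referencing `admin_api` remain restricted to administrators.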
<section xml:id="identity-authorization-establish-formal-access-control-policies">
<title>Establish formal access control policies</title>
<para>Prior to configuring roles, groups, and users, document
your required access control policies for the OpenStack
installation. The policies should be consistent with any
regulatory or legal requirements for the organization. Future
modifications to the access control configuration should be done
consistently with the formal policies. The policies should
include the conditions and processes for creating, deleting,
disabling, and enabling accounts, and for assigning privileges
to the accounts. Periodically review the policies and ensure
that the configuration is in compliance with approved
policies.</para>
</section>
<section xml:id="identity-authorization-service-authorization">
<title>Service authorization</title>
<para>Cloud administrators must define a user with the role of
admin for each service, as described in the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/index.html"
><citetitle>OpenStack Cloud Administrator
Guide</citetitle></link>. This service
account provides the service with the authorization to
authenticate users.</para>
<para>The Compute and Object Storage services can be configured
to use the Identity service to store authentication information.
Another option for storing authentication information is the
"tempAuth" file; however, it should not be used in a production
environment because the passwords it contains appear in plain text.
</para>
<para>The Identity service supports client authentication for
TLS which may be enabled. TLS client authentication provides
an additional authentication factor, in addition to the
user name and password, that provides greater reliability on user
identification. It reduces the risk of unauthorized access
when user names and passwords may be compromised. However,
there is additional administrative overhead and cost to issue
certificates to users that may not be feasible in every
deployment.</para>
<note>
<para>We recommend that you use client authentication with TLS
for the authentication of services to the Identity
service.</para>
</note>
<para>The cloud administrator should protect sensitive
configuration files from unauthorized modification, including
<filename>/etc/keystone/keystone.conf</filename> and
X.509 certificates. This can be achieved with mandatory access
control frameworks such as SELinux.</para>
<para>Client authentication with TLS requires certificates be issued
to services. These certificates can be signed by an external or
internal certificate authority. OpenStack services check the validity
of certificate signatures against trusted CAs by default and
connections will fail if the signature is not valid or the CA is not
trusted. Cloud deployers may use self-signed certificates. In this case,
the validity check must be disabled or the certificate should be marked as trusted.
To disable validation of self-signed certificates, set
<code>insecure=True</code> in the
<code>[filter:authtoken]</code> section in the
<filename>/etc/nova/api.paste.ini</filename> file. Note that this
setting also disables certificate validation for other
components.</para>
</section>
<section xml:id="identity-authorization-administrative-users">
<title>Administrative users</title>
<para>We recommend that admin users authenticate using Identity
service and an external authentication service that supports
2-factor authentication, such as a certificate. This reduces
the risk from passwords that may be compromised. This
recommendation is in compliance with NIST 800-53 IA-2(1)
guidance in the use of multi-factor authentication for network
access to privileged accounts.</para>
</section>
<section xml:id="identity-authorization-end-users">
<title>End users</title>
<para>The Identity service can directly provide end-user
authentication, or can be configured to use external
authentication methods to conform to an organization's
security policies and requirements.</para>
</section>
</section>


@ -1,143 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="identity-checklist">
<?dbhtml stop-chunking?>
<title>Checklist</title>
<section xml:id="check-identity-01">
<title>Check-Identity-01: Is user and group ownership of
Identity configuration files set to keystone?</title>
<para>Configuration files contain critical parameters and
information required for smooth functioning of the
component. If an unprivileged user, either intentionally or
accidentally, modifies or deletes any of the parameters or the
file itself, it can cause severe availability issues, resulting
in a denial of service to the other end users. User and group
ownership of such critical configuration files must therefore
be set to the owner of the component.</para>
<para>Run the following commands:</para>
<screen><prompt>$</prompt> <userinput>stat -L -c "%U %G" /etc/keystone/keystone.conf | egrep "keystone keystone"</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%U %G" /etc/keystone/keystone-paste.ini | egrep "keystone keystone"</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%U %G" /etc/keystone/policy.json | egrep "keystone keystone"</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%U %G" /etc/keystone/logging.conf | egrep "keystone keystone"</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%U %G" /etc/keystone/ssl/certs/signing_cert.pem | egrep "keystone keystone"</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%U %G" /etc/keystone/ssl/private/signing_key.pem | egrep "keystone keystone"</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%U %G" /etc/keystone/ssl/certs/ca.pem | egrep "keystone keystone"</userinput></screen>
<para><emphasis role="bold">Pass:</emphasis> If user and
group ownership of all these config files is set to
keystone. The above commands show output of keystone
keystone.</para>
<para><emphasis role="bold">Fail:</emphasis> If the above
commands do not return any output, because the user or group
ownership is set to a user or group other than keystone.
</para>
<para>Recommended in:
<xref linkend="identity-authentication-methods-internally-implemented-authentication-methods"/>
</para>
</section>
<section xml:id="check-identity-02">
<title>Check-Identity-02: Are strict permissions set for
Identity configuration files?</title>
<para>As with the previous check, it is recommended to set
strict access permissions for such configuration files.
</para>
<para>Run the following commands:</para>
<screen><prompt>$</prompt> <userinput>stat -L -c "%a" /etc/keystone/keystone.conf</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%a" /etc/keystone/keystone-paste.ini</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%a" /etc/keystone/policy.json</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%a" /etc/keystone/logging.conf</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%a" /etc/keystone/ssl/certs/signing_cert.pem</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%a" /etc/keystone/ssl/private/signing_key.pem</userinput>
<prompt>$</prompt> <userinput>stat -L -c "%a" /etc/keystone/ssl/certs/ca.pem</userinput></screen>
<para><emphasis role="bold">Pass:</emphasis> If permissions
are set to 640 or stricter.</para>
<para><emphasis role="bold">Fail:</emphasis> If permissions
are not set to at least 640.</para>
<para>Recommended in:
<xref linkend="identity-authentication-methods-internally-implemented-authentication-methods"/>
</para>
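Remediation is a straightforward `chmod`. The sketch below demonstrates the chmod/stat round trip on a scratch file; on a real deployment you would run it as root against the keystone files listed above:

```shell
# Sketch: apply 640 permissions and verify, demonstrated on a scratch file.
# On a real system, target /etc/keystone/keystone.conf and the other
# files from the check above instead.
f=$(mktemp)
chmod 640 "$f"
perms=$(stat -L -c "%a" "$f")   # same stat invocation the check uses
echo "$perms"
rm -f "$f"
```

Note that `stat -c` is the GNU coreutils form used throughout this checklist; BSD `stat` takes different flags.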
</section>
<section xml:id="check-identity-03">
<title>Check-Identity-03: is SSL enabled for Identity?
</title>
<para>OpenStack components communicate with each other
using various protocols and the communication might involve
sensitive or confidential data. An attacker may try to
eavesdrop on the channel in order to get access to
sensitive information. Thus all the components must
communicate with each other using a secured communication
protocol like HTTPS.</para>
<para><emphasis role="bold">Pass:</emphasis> If value of
parameter <option>enable</option> under <literal>[ssl]</literal>
section in
<filename>/etc/keystone/keystone.conf</filename> is set to
<literal>True</literal>.</para>
<para><emphasis role="bold">Fail:</emphasis> If value of
parameter <option>enable</option> under <literal>[ssl]</literal>
section is not set to <literal>True</literal>.</para>
<para>Recommended in:
<xref linkend="secure_communication"/></para>
</section>
<section xml:id="check-identity-04">
<title>Check-Identity-04: Does Identity use strong hashing
algorithms for PKI tokens?</title>
<para>MD5 is a weak and deprecated hashing algorithm that
can be cracked by brute-force attack. Identity tokens are
sensitive and must be protected with a stronger
hashing algorithm to prevent unauthorized disclosure and
subsequent access.</para>
<para><emphasis role="bold">Pass:</emphasis> If value of
parameter <option>hash_algorithm</option> under
<literal>[token]</literal> section in
<filename>/etc/keystone/keystone.conf</filename> is set to
SHA256.</para>
<para><emphasis role="bold">Fail:</emphasis> If value of
parameter <option>hash_algorithm</option> under
<literal>[token]</literal> section is set to MD5.</para>
<para>Recommended in:
<xref linkend="identity-tokens"/></para>
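A passing configuration would contain a fragment like the following (illustrative; the exact value spelling, such as `sha256` versus `SHA256`, may vary by release):

```ini
[token]
hash_algorithm = sha256
```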
</section>
<section xml:id="check-identity-05">
<title>Check-Identity-05: Is value of parameter <option>
max_request_body_size</option> set to default (114688)?
</title>
<para>The parameter <option>max_request_body_size</option>
defines the maximum body size per request, in bytes. If the
maximum size is not defined, an attacker can craft an arbitrary
request of large size, causing the service to crash and
ultimately resulting in a denial of service attack. Assigning
the maximum value ensures that any malicious oversized
request gets blocked, ensuring continued availability of the
component.</para>
<para><emphasis role="bold">Pass:</emphasis> If value of
parameter <option>max_request_body_size</option> in
<filename>/etc/keystone/keystone.conf</filename> is set to
default (114688) or some reasonable value based on your
environment.</para>
<para><emphasis role="bold">Fail:</emphasis> If value of
parameter <option>max_request_body_size</option> is not
set.</para>
</section>
<section xml:id="check-identity-06">
<title>Check-Identity-06: is admin token disabled in
<filename>/etc/keystone/keystone.conf</filename>?</title>
<para>The admin token is generally used only to bootstrap the
Identity service. It is the most valuable Identity asset,
because it can be used to gain cloud administrator
privileges.</para>
<para><emphasis role="bold">Pass:</emphasis> If
<option>admin_token</option> under <literal>[DEFAULT]</literal>
section in <filename>/etc/keystone/keystone.conf</filename>
is disabled. And, AdminTokenAuthMiddleware under
<literal>[filter:admin_token_auth]</literal> is deleted
from <filename>/etc/keystone/keystone-paste.ini</filename>
</para>
<para><emphasis role="bold">Fail:</emphasis> If
<option>admin_token</option> under [DEFAULT] section is set
and AdminTokenAuthMiddleware exists in
<filename>keystone-paste.ini</filename>.
</para>
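A passing configuration might look like the following sketch (fragments from both files; the placeholder comments are ours):

```ini
# /etc/keystone/keystone.conf -- leave admin_token unset or commented out
[DEFAULT]
# admin_token =

# /etc/keystone/keystone-paste.ini -- delete the
# [filter:admin_token_auth] section, and remove admin_token_auth
# from any pipeline that lists it
```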
</section>
</section>


@ -1,38 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="identity-domains">
<?dbhtml stop-chunking?>
<title>Domains</title>
<para>Domains are high-level containers for projects, users and
groups. As such, they can be used to centrally manage all
keystone-based identity components. With the introduction of
account domains, server, storage and other resources can now be
logically grouped into multiple projects (previously called
tenants) which can themselves be grouped under a master
account-like container. In addition, multiple users can be
managed within an account domain and assigned roles that vary
for each project.</para>
<para>The Identity V3 API supports multiple domains. Users of
different domains may be represented in different authentication
back ends and even have different attributes that must be mapped
to a single set of roles and privileges, that are used in the
policy definitions to access the various service resources.</para>
<para>Where a rule may specify access to only admin users and
users belonging to the tenant, the mapping may be trivial. In
other scenarios the cloud administrator may need to approve the
mapping routines per tenant.</para>
<para>Domain-specific authentication drivers allow the Identity service
to be configured for multiple domains using domain-specific configuration
files. Enabling the drivers and setting the domain-specific
configuration file location occur in the <literal>[identity]</literal>
section of the <filename>keystone.conf</filename> file:</para>
<programlisting>[identity]
domain_specific_drivers_enabled = <replaceable>True</replaceable>
domain_config_dir = /etc/keystone/domains</programlisting>
<para>Any domains without a domain-specific configuration file
will use options in the primary <filename>keystone.conf</filename>
file.</para>
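Domain-specific files follow the naming convention `keystone.DOMAIN_NAME.conf` inside `domain_config_dir`. The following is a hypothetical example for a domain named `example.com` backed by LDAP; the URL and DN values are placeholders:

```ini
# /etc/keystone/domains/keystone.example.com.conf (hypothetical values)
[identity]
driver = keystone.identity.backends.ldap.Identity

[ldap]
url = ldap://ldap.example.com
user_tree_dn = ou=Users,dc=example,dc=com
```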
</section>


@ -1,469 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="section_identity-federated-keystone">
<?dbhtml stop-chunking?>
<title>Federated Identity</title>
<para><glossterm baseform="federated identity">Federated Identity</glossterm>
is a mechanism to establish trusts between Identity Providers and
Service Providers (SP), in this case, between Identity Providers and
the services provided by an OpenStack Cloud.</para>
<para>Federated Identity provides a way to securely use existing credentials
to access cloud resources such as servers, volumes, and databases,
across multiple endpoints provided in multiple authorized clouds using a
single set of credentials, without having to provision additional identities
or log in multiple times. The credential is maintained by the user's
Identity Provider.</para>
<para>Some important definitions:</para>
<variablelist>
<varlistentry>
<term><glossterm baseform="service provider">Service Provider</glossterm> (SP)</term>
<listitem><para>A system entity that provides services to principals or
other system entities, in this case, OpenStack Identity is the
Service Provider.</para></listitem>
</varlistentry>
<varlistentry>
<term><glossterm baseform="identity provider">Identity Provider</glossterm> (IdP)</term>
<listitem><para>A directory service, such as LDAP, RADIUS or Active
Directory, that allows users to log in with a user name and
password. It is a typical source of authentication tokens
(for example, passwords).</para></listitem>
</varlistentry>
<varlistentry>
<term><glossterm>SAML assertion</glossterm></term>
<listitem><para>Contains information about a user as provided
by an IdP. It is an indication that a user has been authenticated.
</para></listitem>
</varlistentry>
<varlistentry>
<term>Mapping</term>
<listitem><para>Adds a set of rules to map Federation protocol
attributes to Identity API objects. An Identity Provider has exactly
one mapping specified per protocol.</para></listitem>
</varlistentry>
<varlistentry>
<term>Protocol</term>
<listitem><para>Contains information that dictates which Mapping
rules to use for an incoming request made by an IdP. An IdP may support
multiple protocols. There are three major protocols for federated
identity: OpenID, SAML, and OAuth.</para></listitem>
</varlistentry>
<varlistentry>
<term><glossterm baseform="unscoped token">Unscoped token</glossterm></term>
<listitem><para>A token issued by the Identity service that is
not associated with a project or domain. The user can exchange
it for a scoped token by providing
a project ID or a domain ID.</para></listitem>
</varlistentry>
<varlistentry>
<term><glossterm baseform="scoped token">Scoped token</glossterm></term>
<listitem><para>Allows a user to use all OpenStack services
apart from the Identity service.</para></listitem>
</varlistentry>
</variablelist>
<section xml:id="section_identity-federated-overview">
<title>Why use Federated Identity?</title>
<itemizedlist>
<listitem><para>Provisioning new identities often incurs some security
risk. It is difficult to secure credential storage and to deploy it
with proper policies. A common identity store is useful as it can be
set up properly once and used in multiple places. With Federated Identity,
there is no longer a need to provision user entries in Identity service,
since the user entries already exist in the IdP's databases.</para>
<para>This does introduce new challenges around protecting that
identity. However, this is a worthwhile tradeoff given the greater
control, and fewer credential databases that come with a centralized
common identity store.</para>
</listitem>
<listitem><para>It is a burden on the clients to deal with multiple tokens
across multiple cloud service providers. Federated Identity provides
single sign on to the user, who can use the credentials provided and
maintained by the user's IdP to access many different services on
the Internet.</para></listitem>
<listitem><para>Users spend too much time logging in or going through
'Forget Password' workflows. Federated identity allows for single sign
on, which is easier and faster for users and requires fewer password
resets. The IdPs manage user identities and passwords so OpenStack
does not have to.</para></listitem>
<listitem><para>Too much time is spent administering identities in various
service providers.</para></listitem>
<listitem><para>The best test of interoperability in the cloud is the
ability to enable a user with one set of credentials in an IdP to access
multiple cloud services. Organizations, each using its own IdP can easily
allow their users to collaborate and quickly share the same cloud
services.</para></listitem>
<listitem><para>Removes a blocker to cloud brokering and multi-cloud
workload management. There is no need to build additional authentication
mechanisms to authenticate users, since the IdPs take care of authenticating
their own users using whichever technologies they deem appropriate.
In most organizations, multiple authentication technologies are already in use.</para></listitem>
</itemizedlist>
</section>
<section xml:id="section-identity-configuring-identity-for-federation">
<title>Configuring Identity service for Federation</title>
<para>Federated users are not mirrored in the Identity service back end
(for example, using the SQL driver). The external IdP is responsible for
authenticating users, and communicates the result of the authentication
to Identity service using SAML assertions. Identity service maps the
SAML assertions to keystone user groups and assignments created in
Identity service.</para>
<section xml:id="section_identity-enabling-federation">
<title>Enabling Federation</title>
<para>To enable Federation, perform the following steps:</para>
<procedure>
<step><para>Run the Identity service under Apache, instead of using
<command>keystone-all</command>.</para>
<substeps>
<step><para>Enable TLS support. Install <literal>mod_nss</literal>
according to your distribution, then apply the following patch
and restart HTTPD:</para>
<programlisting>--- /etc/httpd/conf.d/nss.conf.orig 2012-03-29 12:59:06.319470425 -0400
+++ /etc/httpd/conf.d/nss.conf 2012-03-29 12:19:38.862721465 -0400
@@ -17,7 +17,7 @@
# Note: Configurations that use IPv6 but not IPv4-mapped addresses need two
# Listen directives: "Listen [::]:8443" and "Listen 0.0.0.0:443"
#
-Listen 8443
+Listen 443
##
## SSL Global Context
@@ -81,7 +81,7 @@
## SSL Virtual Host Context
##
-&lt;virtualhost _default_:8443="">
+&lt;virtualhost _default_:443="">
# General setup for the virtual host
#DocumentRoot "/etc/httpd/htdocs"
&lt;/virtualhost>&lt;/virtualhost></programlisting>
</step>
<step><para>If you have a firewall in place, configure it to
allow TLS traffic. For example:</para>
<programlisting>-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT</programlisting>
<para>Note this needs to be added before your reject all rule which might be:</para>
<programlisting>-A INPUT -j REJECT --reject-with icmp-host-prohibited</programlisting>
</step>
<step><para>Copy the <filename>httpd/wsgi-keystone.conf</filename>
file to the appropriate location for your Apache server,
for example, <filename>/etc/httpd/conf.d/wsgi-keystone.conf
</filename> file.</para>
</step>
<step><para>Create the directory <literal>/var/www/cgi-bin/keystone/</literal>.
Then link the files <literal>main</literal> and <literal>admin</literal>
to the <filename>keystone.py</filename> file in this directory.</para>
<para>For a distribution-appropriate location, the file can instead be copied to
<literal>/usr/share/openstack/keystone/httpd/keystone.py</literal>.</para>
<note><para>This path is Ubuntu-specific. For other
distributions, replace it with the appropriate path.</para></note>
</step>
<step><para>If you are running with SELinux enabled ensure
that the file has the appropriate SELinux context to access
the linked file. For example, if you have the file in
<literal>/var/www/cgi-bin</literal> location, you can do
this by running:</para>
<screen><prompt>#</prompt> <userinput>restorecon /var/www/cgi-bin</userinput></screen>
<para>Adding it in a different location requires you set up
your SELinux policy accordingly.</para>
</step>
<step><para>Make sure you use either the SQL or the <literal>memcached</literal>
driver for tokens, otherwise the tokens will not be shared
between the processes of the Apache HTTPD server.</para>
<para>For SQL, in <filename>/etc/keystone/keystone.conf</filename>
, set:</para>
<programlisting language="ini">[token]
driver = keystone.token.backends.sql.Token</programlisting>
<para>For <literal>memcached</literal>, in
<filename>/etc/keystone/keystone.conf</filename>, set:</para>
<programlisting language="ini">[token]
driver = keystone.token.backends.memcache.Token</programlisting>
<para>In both cases, all servers that are storing tokens need
a shared back end. This means either that both point to the
same database server, or both point to a common memcached
instance.</para>
</step>
<step><para>Install Shibboleth:</para>
<screen><prompt>#</prompt> <userinput>apt-get install libapache2-mod-shib2</userinput></screen>
<note><para>The <literal>apt-get</literal> command is Ubuntu
specific. For other distributions, replace with appropriate
command.</para></note></step>
<step><para>Configure the Identity service virtual host and
adjust the config to properly handle SAML2 workflow.</para>
<para>Add <literal>WSGIScriptAlias</literal> directive to your vhost configuration:</para>
<programlisting>WSGIScriptAliasMatch ^(/v3/OS-FEDERATION/identity_providers/.*?/protocols/.*?/auth)$ /var/www/keystone/main/$1</programlisting>
</step>
<step><para>Add two <literal>&lt;Location></literal> directives
to the <filename>wsgi-keystone.conf</filename> file:</para>
<programlisting>&lt;Location /Shibboleth.sso>
SetHandler shib
&lt;/Location>
&lt;LocationMatch /v3/OS-FEDERATION/identity_providers/.*?/protocols/saml2/auth>
ShibRequestSetting requireSession 1
AuthType shibboleth
ShibRequireAll On
ShibRequireSession On
ShibExportAssertion Off
Require valid-user
&lt;/LocationMatch></programlisting>
<note><para>The option <literal>saml2</literal> may be different in your
deployment, but do not use a wildcard value. Otherwise every
Federated protocol will be handled by Shibboleth.</para>
<para>The <literal>ShibRequireSession</literal> rule is invalid
in Apache 2.4 or newer and should be dropped in that specific setup.</para></note>
</step>
<step><para>Enable the Identity service virtual host:</para>
<screen><prompt>#</prompt> <userinput>a2ensite wsgi-keystone.conf</userinput></screen>
</step>
<step><para>Enable the <literal>ssl</literal> and <literal>shib2</literal>
modules:</para>
<screen><prompt>#</prompt> <userinput>a2enmod ssl</userinput>
<prompt>#</prompt> <userinput>a2enmod shib2</userinput></screen>
</step>
<step><para>Restart Apache:</para>
<screen><prompt>#</prompt> <userinput>service apache2 restart</userinput></screen>
<note><para>The <literal>service apache2 restart</literal>
command is Ubuntu-specific. For other distributions, replace
with appropriate command.</para></note>
</step>
</substeps>
</step>
<step><para>Configure Apache to use a Federation capable authentication
method.</para>
<substeps>
<step><para>Once you have your Identity service virtual host ready,
configure Shibboleth and upload your metadata to the Identity Provider.</para>
<para>If new certificates are required, they can be easily created by executing:</para>
<screen><prompt>$</prompt> <userinput>shib-keygen -y <replaceable>NUMBER_OF_YEARS</replaceable></userinput></screen>
<para>The newly created file will be stored under
<filename>/etc/shibboleth/sp-key.pem</filename></para>
</step>
<step><para>Upload your Service Provider's metadata file to your
Identity Provider.</para>
</step>
<step><para>Configure your Service Provider by editing
<filename>/etc/shibboleth/shibboleth2.xml</filename>.</para>
<para>For more information, see <link xlink:href="https://wiki.shibboleth.net/confluence/display/SHIB2/Configuration">
<citetitle>Shibboleth Service Provider Configuration</citetitle></link>.</para>
</step>
<step><para>Identity service enforces <literal>external</literal>
authentication when environment variable <literal>REMOTE_USER</literal>
is present so make sure Shibboleth does not set the
<literal>REMOTE_USER</literal> environment variable. To do
so, scan through the <filename>/etc/shibboleth/shibboleth2.xml</filename>
configuration file and remove the <literal>REMOTE_USER</literal>
directives.</para>
</step>
<step><para>Examine your attributes map in the
<filename>/etc/shibboleth/attributes-map.xml</filename> file
and adjust your requirements if needed. For more information
see <link xlink:href="https://wiki.shibboleth.net/confluence/display/SHIB2/NativeSPAddAttribute">
<citetitle>Shibboleth Attributes</citetitle></link>.</para>
</step>
<step><para>Restart the Shibboleth daemon:</para>
<screen><prompt>#</prompt> <userinput>service shibd restart</userinput>
<prompt>#</prompt> <userinput>service apache2 restart</userinput></screen>
</step>
</substeps>
</step>
<step><para>Enable <literal>OS-FEDERATION</literal> extension:</para>
<substeps>
<step><para>Add the Federation extension driver to the
<literal>[federation]</literal> section in the <filename>keystone.conf</filename>
file. For example:</para>
<programlisting language="ini">[federation]
driver = keystone.contrib.federation.backends.sql.Federation</programlisting>
</step>
<step><para>Add the saml2 authentication method to the
<literal>[auth]</literal> section in <filename>keystone.conf</filename>
file:</para>
<programlisting language="ini">[auth]
methods = external,password,token,saml2
saml2 = keystone.auth.plugins.saml2.Saml2</programlisting>
<note><para>The <literal>external</literal> method should be
dropped to avoid any interference with some Apache and Shibboleth
SP setups, where a <literal>REMOTE_USER</literal> environment variable is
always set, even as an empty value.</para></note>
</step>
<step><para>Add the <literal>federation_extension</literal>
middleware to the <literal>api_v3</literal> pipeline in the
<filename>keystone-paste.ini</filename> file. For example:</para>
<programlisting language="ini">[pipeline:api_v3]
pipeline = access_log sizelimit url_normalize token_auth admin_token_auth
xml_body json_body ec2_extension s3_extension federation_extension
service_v3</programlisting>
</step>
<step><para>Create the Federation extension tables if using the provided SQL back end. For example:</para>
<screen><prompt>$</prompt> <userinput>keystone-manage db_sync --extension federation</userinput></screen>
</step>
</substeps>
</step>
</procedure>
<para>To test that the Identity Provider and the Identity
service are communicating, navigate to the protected URL and attempt
to sign in. Any response from keystone, even an error
response, indicates that the two are communicating.</para>
</section>
<section xml:id="section_identity-federated-config">
<title>Configuring Federation</title>
<para>Now that the Identity Provider and Identity service are communicating,
you can start to configure the <literal>OS-FEDERATION</literal> extension.</para>
<procedure>
<step><para>Create Identity groups and assign roles.</para>
<para>No new users will be added to the Identity back end, but the Identity
service requires group-based role assignments to authorize federated
users. The Federation mapping function will map the user into local
Identity service group objects, and hence to local role
assignments.</para>
<para>Thus, it is required to create the necessary Identity service
groups that correspond to the Identity Providers groups; additionally,
these groups should be assigned roles on one or more projects or
domains. For example, to map users presenting the SAML
attribute <literal>Employees</literal> to an
Identity service group <literal>devs</literal>, the
<literal>devs</literal> group must first be created in the
Identity service.</para>
<para>The Identity service administrator can create as many groups
as there are SAML attributes, whatever the mapping calls for.</para></step>
<step><para>Add Identity Providers, Mappings and Protocols.</para>
<para>To utilize Federation, create the following in the Identity service:
Identity Provider, Mapping, Protocol.</para></step>
</procedure>
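To make the Mapping concrete, a minimal rule set that places users presenting an `Employees` assertion attribute into the `devs` group might look like the sketch below. This is illustrative only: `DEVS_GROUP_ID` is a placeholder for the real group ID, and the exact matching semantics depend on the mapping engine of your release:

```json
{
    "rules": [
        {
            "local": [
                {"group": {"id": "DEVS_GROUP_ID"}}
            ],
            "remote": [
                {"type": "Employees"}
            ]
        }
    ]
}
```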
</section>
<section xml:id="section_identity-federated-authentication">
<title>Performing Federation authentication</title>
<procedure>
<step><para>Authenticate externally and generate an unscoped token in
Identity service.</para>
<para>To start Federated authentication a user must access the dedicated
URL with Identity Providers and Protocols identifiers stored within
a protected URL. The URL has a format of:
<literal>/v3/OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth</literal>.</para>
<para>This instance follows a standard SAML2 authentication procedure,
that is, the user will be redirected to the Identity Providers
authentication webpage and be prompted for credentials. After successfully
authenticating the user will be redirected to the Service Providers
endpoint. If using a web browser, a token will be returned in XML format.
As an alternative to using a web browser, you can use Enhanced Client
or Proxy (ECP), which is available in the <literal>keystoneclient</literal>
in the Identity service API.</para>
<para>In the returned unscoped token, a list of Identity service groups
the user belongs to will be included.</para>
<para>For example, the following URL would be considered protected
by <literal>mod_shib</literal> and Apache, as such a request made
to the URL would be redirected to the Identity Provider, to start
the SAML authentication procedure.</para>
<screen><prompt>#</prompt> <userinput>curl -X GET \
-D - http://localhost:5000/v3/OS-FEDERATION/identity_providers/{identity_provider}/protocols/{protocol}/auth</userinput></screen>
<note><para>It is assumed that the <literal>keystone</literal> service
is running on port <literal>5000</literal>.</para></note>
</step>
<step><para>Determine accessible resources.</para>
<para>By using the previously returned token, the user can issue
requests to list the projects and domains that are accessible.</para>
<itemizedlist>
<listitem><para>List projects a federated user can access:
<literal>GET /OS-FEDERATION/projects</literal>
</para></listitem>
<listitem><para>List domains a federated user can access:
<literal>GET /OS-FEDERATION/domains</literal></para></listitem>
</itemizedlist>
<para>For example,</para>
<screen><prompt>#</prompt> <userinput>curl -X GET \
-H "X-Auth-Token: &lt;unscoped token>" http://localhost:5000/v3/OS-FEDERATION/projects</userinput></screen>
<para>or</para>
<screen><prompt>#</prompt> <userinput>curl -X GET \
-H "X-Auth-Token: &lt;unscoped token>" http://localhost:5000/v3/OS-FEDERATION/domains</userinput></screen></step>
<step><para>Get a scoped token.</para>
<para>A federated user may request a scoped token, by using the
unscoped token. A project or domain may be specified by either
ID or name. An ID is sufficient to uniquely identify a project
or domain. For example,</para>
<screen><prompt>#</prompt> <userinput>curl -X POST \
-H "Content-Type: application/json" \
-d '{"auth":{"identity":{"methods":["saml2"],"saml2":{"id":"&lt;unscoped_token_id>"}},"scope":{"project":{"domain": {"name": "Default"},"name":"service"}}}}' \
-D - http://localhost:5000/v3/auth/tokens</userinput></screen></step>
</procedure>
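The JSON body of the scoped-token request above can also be built programmatically. The following is a minimal Python sketch, with hypothetical token and project values, that assembles the same payload structure sent with <literal>curl -d</literal>:

```python
import json

def build_scoped_token_request(unscoped_token_id, project_name, domain_name="Default"):
    """Build the v3 auth payload that exchanges an unscoped SAML2
    token for a token scoped to a project (named within a domain)."""
    return {
        "auth": {
            "identity": {
                "methods": ["saml2"],
                "saml2": {"id": unscoped_token_id},
            },
            "scope": {
                "project": {
                    "domain": {"name": domain_name},
                    "name": project_name,
                },
            },
        }
    }

# Serialize exactly as it would be sent in the POST body.
body = json.dumps(build_scoped_token_request("abc123", "service"))
```

The same structure, with a <literal>domain</literal> key instead of <literal>project</literal>, requests a domain-scoped token.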
</section>
</section>
<section xml:id="section_federated-config-options">
<title>Setting up the Identity service as an Identity Provider</title>
<section xml:id="section_configuration-options">
<title>Configuration options</title>
<para>Before attempting to federate multiple Identity service deployments,
you must set up certain configuration options in the <filename>keystone.conf</filename>
file.</para>
<para>Within the <filename>keystone.conf</filename> assign values to the
<literal>[saml]</literal> related fields, for example:</para>
<programlisting language="ini">[saml]
certfile=/etc/keystone/ssl/certs/ca.pem
keyfile=/etc/keystone/ssl/private/cakey.pem
idp_entity_id=https://keystone.example.com/v3/OS-FEDERATION/saml2/idp
idp_sso_endpoint=https://keystone.example.com/v3/OS-FEDERATION/saml2/sso
idp_metadata_path=/etc/keystone/saml2_idp_metadata.xml</programlisting>
<para>It is recommended that the following <literal>Organization</literal>
configuration options be set up.</para>
<programlisting language="ini">
idp_organization_name=example_company
idp_organization_display_name=Example Corp.
idp_organization_url=example.com</programlisting>
<para>It is also recommended that the following <literal>Contact</literal>
options be set.</para>
<programlisting language="ini">
idp_contact_company=example_company
idp_contact_name=John
idp_contact_surname=Smith
idp_contact_email=jsmith@example.com
idp_contact_telephone=555-55-5555
idp_contact_type=technical</programlisting>
</section>
<section xml:id="section-get-metadata">
<title>Generate metadata</title>
<para>In order to create a trust between the Identity Provider and the
Service Provider, metadata must be exchanged. To create metadata for
your Identity service, run the <command>keystone-manage</command>
command and pipe the output to a file. For example:</para>
<screen><prompt>$</prompt> <userinput>keystone-manage saml_idp_metadata > /etc/keystone/saml2_idp_metadata.xml</userinput></screen>
<note><para>The file location should match the value of the configuration
option <option>idp_metadata_path</option> that was assigned in the list
of <literal>[saml]</literal> updates.</para></note>
</section>
<section xml:id="section_service-provider-region">
<title>Create a region for the Service Provider</title>
<para>Create a new region for the service provider, for example, create
a new region with an <literal>ID</literal> of <replaceable>BETA</replaceable>,
and <literal>URL</literal> of <replaceable>https://beta.com/Shibboleth.sso/SAML2/POST</replaceable>.
This URL will be used when creating a SAML assertion for <replaceable>BETA</replaceable>,
and signed by the current keystone Identity Provider.</para>
<screen><prompt>$</prompt> <userinput>curl -s -X PUT \
-H "X-Auth-Token: $OS_TOKEN" \
-H "Content-Type: application/json" \
-d '{"region": {"url": "<replaceable>http://beta.com/Shibboleth.sso/SAML2/POST</replaceable>"}}' \
http://localhost:5000/v3/regions/<replaceable>BETA</replaceable> | python -mjson.tool</userinput></screen>
</section>
<section xml:id="section-testing-federated-identity">
<title>Testing it all out</title>
<para>Lastly, if a scoped token and a Service Provider region are presented
to keystone, the result will be a full SAML Assertion, signed by the IdP
keystone, specifically intended for the Service Provider keystone.</para>
<screen><prompt>$</prompt> <userinput>curl -s -X POST \
-H "Content-Type: application/json" \
-d '{"auth": {"scope": {"region": {"id": "<replaceable>BETA</replaceable>"}}, "identity": {"token": {"id": "<replaceable>d793d935b9c343f783955cf39ee7dc3c</replaceable>"}, "methods": ["token"]}}}' \
http://localhost:5000/v3/auth/OS-FEDERATION/saml2</userinput></screen>
<para>At this point the SAML Assertion can be sent to the Service Provider
keystone, and a valid OpenStack token, issued by a Service Provider keystone,
will be returned.</para>
</section>
</section>
<section xml:id="section-federated-future">
<title>Future</title>
<para>Currently, <literal>keystoneclient</literal> provides the
Enhanced Client or Proxy (ECP) non-browser flow from an API
perspective: if you are using <literal>keystoneclient</literal>,
you can create a client instance and use the SAML authorization
plugin. There is presently no dashboard support. With upcoming
OpenStack releases, Federated Identity should be supported by both
the CLI and the dashboard.</para>
</section>
</section>


@ -1,28 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="identity-policies">
<?dbhtml stop-chunking?>
<title>Policies</title>
<para>Each OpenStack service has a policy file in JSON format,
called <filename>policy.json</filename>. The policy
file specifies rules, and the rule that governs each resource. A
resource could be API access, the ability to attach to a volume,
or to fire up instances.</para>
<para>The policies can be updated by the cloud administrator to
further control access to the various resources. The middleware
could also be further customized. Note that your users must be
assigned to groups/roles that you refer to in your
policies.</para>
<para>Below is a snippet of the Block Storage service
<filename>policy.json</filename> file.</para>
<programlisting language="json"><xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/samples/policy.json" parse="text"/></programlisting>
<para>Note the <emphasis role="bold">default</emphasis> rule
specifies that the user must be either an admin or the owner of
the volume. It essentially says only the owner of a volume or
the admin may create/delete/update volumes. Certain other
operations such as managing volume types are accessible only to
admin users.</para>
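The admin-or-owner logic of the default rule can be illustrated with a toy evaluator. This is a simplified sketch of the rule's semantics, not the actual oslo.policy implementation:

```python
def admin_or_owner(credentials, target):
    """Simplified version of a common policy.json default rule:
    allow if the caller has the admin role, or owns the target."""
    is_admin = "admin" in credentials.get("roles", [])
    is_owner = credentials.get("project_id") == target.get("project_id")
    return is_admin or is_owner

# The owner of a volume may act on it; so may any admin.
owner = {"roles": ["member"], "project_id": "p1"}
admin = {"roles": ["admin"], "project_id": "p2"}
other = {"roles": ["member"], "project_id": "p2"}
volume = {"project_id": "p1"}
```

A rule such as managing volume types would check only <literal>is_admin</literal>, matching the admin-only operations described above.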
</section>


@ -1,34 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="identity-tokens">
<?dbhtml stop-chunking?>
<title>Tokens</title>
<para>Once a user is authenticated, a token is generated for
authorization and access to an OpenStack environment. A
token can have a variable life span; however, since the release
of OpenStack Icehouse, the default expiry value has been
reduced to one hour. The recommended approach is to set the
expiry to the lowest value that still allows enough time for
internal services to complete tasks. If a token expires
before tasks complete, the cloud may become unresponsive or stop
providing services. An example of time expended during use is
the time needed by the Compute service to transfer a disk image
onto the hypervisor for local caching.</para>
<para>The following example shows a PKI token. Note that token id
values are typically 3500 bytes. In this example, the value has
been truncated.</para>
<programlisting language="json"><xi:include href="http://git.openstack.org/cgit/openstack/openstack-manuals/plain/doc/common/samples/token.json" parse="text"/></programlisting>
<para>The token is often passed within the structure of a larger
context of an Identity service response. These responses also
provide a catalog of the various OpenStack services. Each service
is listed with its name and its endpoints for internal, admin,
and public access.</para>
<para>The Identity service supports token revocation. This
manifests as an API to revoke a token and to list revoked tokens.
Individual OpenStack services that cache tokens can query for
the revoked tokens, remove them from their cache, and append
them to their list of cached revoked tokens.</para>
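The cache-and-revoke behavior described above can be sketched as follows. This is a toy model of a service-side token cache, not keystonemiddleware's implementation:

```python
class TokenCache:
    """Toy sketch of a service-side token cache that honors a
    revocation list fetched from the Identity service."""

    def __init__(self):
        self._valid = set()
        self._revoked = set()

    def add(self, token_id):
        # Never re-accept a token that is known to be revoked.
        if token_id not in self._revoked:
            self._valid.add(token_id)

    def apply_revocations(self, revoked_ids):
        # Remove revoked tokens from the cache and remember them
        # in the cached revocation list.
        for token_id in revoked_ids:
            self._valid.discard(token_id)
            self._revoked.add(token_id)

    def is_valid(self, token_id):
        return token_id in self._valid
```

A real service would periodically refresh the revocation list from the Identity service API rather than being handed it directly.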
</section>


@ -1,432 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="integrity-life-cycle">
<?dbhtml stop-chunking?>
<title>Integrity life-cycle</title>
<para>We define integrity life cycle as a deliberate process that
provides assurance that we are always running the expected
software with the expected configurations throughout the cloud.
This process begins with secure bootstrapping and is maintained
through configuration management and security monitoring. This
chapter provides recommendations on how to approach the integrity
life-cycle process.</para>
<section xml:id="integrity-life-cycle-secure-bootstrapping">
<title>Secure bootstrapping</title>
<para>Nodes in the cloud&mdash;including compute, storage, network,
service, and hybrid nodes&mdash;should have an automated
provisioning process. This ensures that nodes are provisioned
consistently and correctly. This also facilitates security
patching, upgrading, bug fixing, and other critical changes.
Since this process installs new software that runs at the
highest privilege levels in the cloud, it is important to verify
that the correct software is installed. This includes the
earliest stages of the boot process.</para>
<para>There are a variety of technologies that enable verification
of these early boot stages. These typically require hardware
support such as the trusted platform module (TPM), Intel Trusted
Execution Technology (TXT), dynamic root of trust measurement
(DRTM), and Unified Extensible Firmware Interface (UEFI) secure
boot. In this book, we will refer to all of these collectively
as <emphasis>secure boot technologies</emphasis>. We recommend
using secure boot, while acknowledging that many of the pieces
necessary to deploy this require advanced technical skills in
order to customize the tools for each environment. Utilizing
secure boot will require deeper integration and customization
than many of the other recommendations in this guide. TPM
technology, while common in business-class laptops and
desktops for several years, is only now becoming available in
servers together with supporting BIOS. Proper planning is
essential to a successful secure boot deployment.</para>
<para>A complete tutorial on secure boot deployment is beyond the
scope of this book. Instead, here we provide a framework for how
to integrate secure boot technologies with the typical node
provisioning process. For additional details, cloud architects
should refer to the related specifications and software
configuration manuals.</para>
<section xml:id="integrity-life-cycle-secure-bootstrapping-node-provisioning">
<title>Node provisioning</title>
<para>Nodes should use Preboot eXecution Environment (PXE) for
provisioning. This significantly reduces the effort required
for redeploying nodes. The typical process involves the node
receiving various boot stages&mdash;that is, progressively more
complex software to execute&mdash;from a server.</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="203" contentwidth="274"
fileref="static/node-provisioning-pxe.png" format="PNG"
scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/node-provisioning-pxe.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>We recommend using a separate, isolated network within the
management security domain for provisioning. This network will
handle all PXE traffic, along with the subsequent boot stage
downloads depicted above. Note that the node boot process begins
with two insecure operations: DHCP and TFTP. Then the boot process
uses TLS to download the remaining information required to deploy
the node. This may be an operating system installer, a basic
install managed by
<link xlink:href="http://www.opscode.com/chef/">Chef</link>
or <link xlink:href="https://puppetlabs.com/">Puppet</link>,
or even a complete file system image that is written directly
to disk.</para>
<para>While utilizing TLS during the PXE boot process is
somewhat more challenging, common PXE firmware projects, such
as iPXE, provide this support. Typically this involves
building the PXE firmware with knowledge of the allowed TLS
certificate chain(s) so that it can properly validate the
server certificate. This raises the bar for an attacker by
limiting the number of insecure, plain text network
operations.</para>
</section>
<section xml:id="integrity-life-cycle-secure-bootstrapping-runtime-verfication">
<title>Verified boot</title>
<para>In general, there are two different strategies for
verifying the boot process. Traditional <emphasis>secure
boot</emphasis> will validate the code run at each step in
the process, and stop the boot if code is incorrect.
<emphasis>Boot attestation</emphasis> will record which code
is run at each step, and provide this information to another
machine as proof that the boot process completed as expected.
In both cases, the first step is to measure each piece of code
before it is run. In this context, a measurement is
effectively a SHA-1 hash of the code, taken before it is
executed. The hash is stored in a platform configuration
register (PCR) in the TPM.</para>
<note><para>SHA-1 is used here because this is what the TPM
chips support.</para></note>
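The measure-then-extend operation described above can be sketched in a few lines. This is a simplified model of the TPM extend semantics, not an interface to real TPM hardware:

```python
import hashlib

def extend_pcr(pcr_value, code):
    """TPM-style extend: the new PCR value is the SHA-1 of the old
    value concatenated with the measurement (the SHA-1 hash of the
    code, taken before it is executed)."""
    measurement = hashlib.sha1(code).digest()
    return hashlib.sha1(pcr_value + measurement).digest()

# PCRs start as 20 zero bytes; each boot stage extends the register,
# so the final value depends on every stage and their order.
pcr = b"\x00" * 20
for stage in (b"bios", b"bootloader", b"kernel"):
    pcr = extend_pcr(pcr, stage)
```

Because each extend chains over the previous value, substituting or reordering any boot stage produces a different final PCR value, which is what validation against a known-good baseline detects.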
<para>Each TPM has at least 24 PCRs. The TCG Generic Server
Specification, v1.0, March 2005, defines the PCR assignments
for boot-time integrity measurements. The table below shows a
typical PCR configuration. The context indicates if the values
are determined based on the node hardware (firmware) or the
software provisioned onto the node. Some values are influenced
by firmware versions, disk sizes, and other low-level
information. Therefore, it is important to have good practices
in place around configuration management to ensure that each
system deployed is configured exactly as desired.</para>
<informaltable rules="all" width="80%">
<colgroup>
<col/>
<col/>
<col/>
</colgroup>
<tbody>
<tr>
<td><para><emphasis role="bold"
>Register</emphasis></para></td>
<td><para><emphasis role="bold">What is
measured</emphasis></para></td>
<td><para><emphasis role="bold"
>Context</emphasis></para></td>
</tr>
<tr>
<td><para>PCR-00</para></td>
<td><para>Core Root of Trust Measurement (CRTM), BIOS
code, Host platform extensions</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-01</para></td>
<td><para>Host platform configuration</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-02</para></td>
<td><para>Option ROM code</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-03</para></td>
<td><para>Option ROM configuration and data</para></td>
<td><para>Hardware</para></td>
</tr>
<tr>
<td><para>PCR-04</para></td>
<td><para>Initial Program Loader (IPL) code. For example,
master boot record.</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-05</para></td>
<td><para>IPL code configuration and data</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-06</para></td>
<td><para>State transition and wake events</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-07</para></td>
<td><para>Host platform manufacturer control</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-08</para></td>
<td><para>Platform specific, often kernel, kernel
extensions, and drivers</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-09</para></td>
<td><para>Platform specific, often Initramfs</para></td>
<td><para>Software</para></td>
</tr>
<tr>
<td><para>PCR-10 to PCR-23</para></td>
<td><para>Platform specific</para></td>
<td><para>Software</para></td>
</tr>
</tbody>
</informaltable>
<para>At the time of this writing, very few clouds are using
secure boot technologies in a production environment. As a
result, these technologies are still somewhat immature. We
recommend planning carefully in terms of hardware selection.
For example, ensure that you have a TPM and Intel TXT support.
Then verify how the node hardware vendor populates the PCR
values; for example, which values will be available for
validation. Typically, the PCR values listed under the software
context in the table above are the ones that a cloud architect
has direct control over. But even these may change as the
software in the cloud is upgraded. Configuration management
should be linked into the PCR policy engine to ensure that the
validation is always up to date.</para>
<para>Each manufacturer must provide the BIOS and firmware code
for their servers. Different servers, hypervisors, and
operating systems will choose to populate different PCRs. In
most real world deployments, it will be impossible to validate
every PCR against a known good quantity ("golden
measurement"). Experience has shown that, even within a single
vendor's product line, the measurement process for a given PCR
may not be consistent. We recommend establishing a baseline
for each server and monitoring the PCR values for unexpected
changes. Third-party software may be available to assist in
the TPM provisioning and monitoring process, depending upon
your chosen hypervisor solution.</para>
<para>The initial program loader (IPL) code will most likely be
the PXE firmware, assuming the node deployment strategy
outlined above. Therefore, the secure boot or boot attestation
process can measure all of the early stage boot code, such as
BIOS, firmware, the PXE firmware, and the kernel image.
Ensuring that each node has the correct versions of
these pieces installed provides a solid foundation on which to
build the rest of the node software stack.</para>
<para>Depending on the strategy selected, in the event of a
failure the node will either fail to boot or it can report the
failure back to another entity in the cloud. For secure boot,
the node will fail to boot and a provisioning service within
the management security domain must recognize this and log the
event. For boot attestation, the node will already be running
when the failure is detected. In this case the node should be
immediately quarantined by disabling its network access. Then
the event should be analyzed for the root cause. In either
case, policy should dictate how to proceed after a failure. A
cloud may automatically attempt to re-provision a node a
certain number of times. Or it may immediately notify a cloud
administrator to investigate the problem. The right policy
here will be deployment and failure mode specific.</para>
</section>
<section xml:id="integrity-life-cycle-secure-bootstrapping-node-hardening">
<title>Node hardening</title>
<para>At this point we know that the node has booted with the
correct kernel and underlying components. There are many paths
for hardening a given operating system deployment. The
specifics on these steps are outside of the scope of this
book. We recommend following the guidance from a hardening
guide specific to your operating system. For example, the
<link xlink:href="http://iase.disa.mil/stigs/">security
technical implementation guides</link> (STIG) and the <link
xlink:href="http://www.nsa.gov/ia/mitigation_guidance/security_configuration_guides/"
>NSA guides</link> are useful starting places.</para>
<para>The nature of the nodes makes additional hardening
possible. We recommend the following additional steps for
production nodes:</para>
<itemizedlist>
<listitem>
<para>Use a read-only file system where possible. Ensure
that writeable file systems do not permit execution. This
can be handled through the mount options provided in
<filename>/etc/fstab</filename>.</para>
</listitem>
<listitem>
<para>Use a mandatory access control policy to contain the
instances, the node services, and any other critical
processes and data on the node. See the discussions on
sVirt / SELinux and AppArmor below.</para>
</listitem>
<listitem>
<para>Remove any unnecessary software packages. This should
result in a very stripped down installation because a
compute node has a relatively small number of
dependencies.</para>
</listitem>
</itemizedlist>
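The mount-option hardening in the first step above can be audited with a short script. This is a hypothetical sketch that checks fstab entries for writable file systems that permit execution; the sample fstab content is illustrative only:

```python
def writable_exec_mounts(fstab_text):
    """Return mount points that are writable but not mounted noexec,
    a simplified audit of the hardening rule above."""
    flagged = []
    for line in fstab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) < 4:
            continue
        device, mountpoint, fstype, options = fields[:4]
        opts = options.split(",")
        # Read-only mounts and noexec mounts both satisfy the rule.
        if "ro" not in opts and "noexec" not in opts:
            flagged.append(mountpoint)
    return flagged

FSTAB = """\
# device  mountpoint  type  options  dump pass
/dev/sda1  /      ext4  ro,defaults        0 1
/dev/sda2  /var   ext4  rw,noexec,nosuid   0 2
/dev/sda3  /tmp   ext4  rw,defaults        0 2
"""
```

Running this against a node's <filename>/etc/fstab</filename> highlights entries that need <literal>ro</literal> or <literal>noexec</literal> added.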
<para>Finally, the node kernel should have a mechanism to
validate that the rest of the node starts in a known good
state. This provides the necessary link from the boot
validation process to validating the entire system. The steps
for doing this will be deployment specific. As an example, a
kernel module could verify a hash over the blocks comprising
the file system before mounting it using <link
xlink:href="https://code.google.com/p/cryptsetup/wiki/DMVerity"
>dm-verity</link>.</para>
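The block-hash verification idea can be sketched as follows. This is a simplification in the spirit of dm-verity, hashing each block individually rather than building the real on-disk hash-tree format:

```python
import hashlib

BLOCK_SIZE = 4096

def hash_blocks(data, block_size=BLOCK_SIZE):
    """Hash each fixed-size block of a device image individually,
    producing the trusted reference list."""
    return [
        hashlib.sha256(data[i:i + block_size]).hexdigest()
        for i in range(0, len(data), block_size)
    ]

def verify_block(data, index, trusted_hashes, block_size=BLOCK_SIZE):
    """Validate one block against the trusted hashes before it is
    presented to the system."""
    block = data[index * block_size:(index + 1) * block_size]
    return hashlib.sha256(block).hexdigest() == trusted_hashes[index]

# A small example "device image" spanning three blocks.
image = b"A" * 8192 + b"B" * 1000
trusted = hash_blocks(image)
```

dm-verity itself arranges the block hashes into a Merkle tree so that only a single root hash needs to be trusted by the kernel.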
</section>
</section>
<section xml:id="integrity-life-cycle-runtime-verification">
<title>Runtime verification</title>
<para>Once the node is running, we need to ensure that it remains
in a good state over time. Broadly speaking, this includes both
configuration management and security monitoring. The goals for
each of these areas are different. By checking both, we achieve
higher assurance that the system is operating as desired. We
discuss configuration management in the management section, and
security monitoring below.</para>
<section xml:id="integrity-life-cycle-runtime-verification-intrusion-detection-system">
<title>Intrusion detection system</title>
<para>Host-based intrusion detection tools are also useful for
automated validation of the cloud internals. There are a wide
variety of host-based intrusion detection tools available.
Some are open source projects that are freely available, while
others are commercial. Typically these tools analyze data from
a variety of sources and produce security alerts based on rule
sets and/or training. Typical capabilities include log
analysis, file integrity checking, policy monitoring, and
rootkit detection. More advanced, often custom, tools can
validate that in-memory process images match the on-disk
executable and validate the execution state of a running
process.</para>
<para>One critical policy decision for a cloud architect is what
to do with the output from a security monitoring tool. There
are effectively two options. The first is to alert a human to
investigate and/or take corrective action. This could be done
by including the security alert in a log or events feed for
cloud administrators. The second option is to have the cloud
take some form of remedial action automatically, in addition
to logging the event. Remedial actions could include anything
from re-installing a node to performing a minor service
configuration. However, automated remedial action can be
challenging due to the possibility of false positives.</para>
<para>False positives occur when the security monitoring tool
produces a security alert for a benign event. Due to the
nature of security monitoring tools, false positives will most
certainly occur from time to time. Typically a cloud
administrator can tune security monitoring tools to reduce the
false positives, but this may also reduce the overall
detection rate at the same time. These classic trade-offs must
be understood and accounted for when setting up a security
monitoring system in the cloud.</para>
<para>The selection and configuration of a host-based intrusion
detection tool is highly deployment specific. We recommend
starting by exploring the following open source projects which
implement a variety of host-based intrusion detection and file
monitoring features.</para>
<itemizedlist>
<listitem>
<para><link xlink:href="http://www.ossec.net/"
>OSSEC</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://la-samhna.de/samhain/"
>Samhain</link></para>
</listitem>
<listitem>
<para><link
xlink:href="http://sourceforge.net/projects/tripwire/"
>Tripwire</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://aide.sourceforge.net/"
>AIDE</link></para>
</listitem>
</itemizedlist>
<para>Network intrusion detection tools complement the
host-based tools. OpenStack does not have a built-in network
IDS, but OpenStack Networking provides a plug-in
mechanism to enable different technologies
through the Networking API. This plug-in architecture will allow
tenants to develop API extensions to insert and configure
their own advanced networking services like a firewall, an
intrusion detection system, or a VPN between the VMs.</para>
<para>Similar to host-based tools, the selection and
configuration of a network-based intrusion detection tool is
deployment specific. <link xlink:href="http://www.snort.org/"
>Snort</link> is the leading open source network
intrusion detection tool, and a good starting place to learn
more.</para>
<para>There are a few important security considerations for
network and host-based intrusion detection systems.</para>
<itemizedlist>
<listitem>
<para>It is important to consider the placement of the
Network IDS on the cloud (for example, adding it to the
network boundary and/or around sensitive networks). The
placement depends on your network environment but make
sure to monitor the impact the IDS may have on your
services depending on where you choose to add it.
Encrypted traffic, such as TLS, cannot generally be
inspected for content by a Network IDS. However, the
Network IDS may still provide some benefit in identifying
anomalous unencrypted traffic on the network.</para>
</listitem>
<listitem>
<para>In some deployments it may be required to add
host-based IDS on sensitive components on security domain
bridges. A host-based IDS may detect anomalous activity
by compromised or unauthorized processes on the component.
The IDS should transmit alert and log information on the
Management network.</para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="integrity-life-cycle-server-hardening">
<title>Server hardening</title>
<para>
Servers in the cloud, including undercloud and overcloud
infrastructure, should implement hardening best practices. Because
OS and server hardening is common practice, applicable best practices,
including but not limited to logging, user account restrictions,
and regular updates, are not covered here but should be
applied to all infrastructure.
</para>
<section xml:id="integrity-life-cycle-file-integrity-management">
<title>File integrity management (FIM)</title>
<para>
File integrity management (FIM) is the method of ensuring that
files such as sensitive system or application configuration
files are not corrupted or changed to allow unauthorized access
or malicious behavior. This can be done through a utility such
as Samhain that will create a checksum hash of the specified
resource and then validate that hash at regular intervals, or
through a tool such as dm-verity that can take a hash of block
devices and will validate those hashes as they are accessed by
the system before they are presented to the user.
</para>
<para>
These should be put in place to monitor and report on changes
to system, hypervisor, and application configuration files such
as <filename>/etc/pam.d/system-auth</filename> and <filename>
/etc/keystone.conf</filename>, as well as kernel modules (such
as virtio). Best practice is to use the <command>lsmod</command>
command to show what is regularly being loaded on a system to
help determine what should or should not be included in FIM
checks.
</para>
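The baseline-and-revalidate cycle described above can be sketched in a few lines. This is a toy illustration of what a FIM tool such as Samhain automates, not its actual implementation:

```python
import hashlib
import os

def checksum(path):
    """SHA-256 checksum of a monitored file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline(paths):
    """Record a trusted hash for each monitored file, as a FIM tool
    would at initialization time."""
    return {path: checksum(path) for path in paths}

def detect_changes(baseline):
    """Return the monitored files whose current hash no longer
    matches the recorded baseline."""
    return [p for p, h in baseline.items() if checksum(p) != h]
```

In practice the baseline itself must be stored and validated out-of-band, since an attacker who can alter monitored files could otherwise alter the baseline too.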
</section>
</section>
</section>


@ -1,44 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="introduction-to-case-studies">
<?dbhtml stop-chunking?>
<title>Introduction to case studies</title>
<para>This guide refers to two running case studies, which are
introduced here and referred to at the end of each chapter.</para>
<section xml:id="introduction-to-case-studies-case-study-alice-the-private-cloud-builder">
<title>Case study: Alice, the private cloud builder</title>
<para>Alice is a technical manager overseeing a new OpenStack
deployment for the US government in support of Healthcare.gov. The
load on the cloud is expected to be variable, with moderate usage
increasing to heavy usage during annual enrollment periods. She is
aware that her private cloud will need to be certified against
FISMA through the FedRAMP accreditation process required for all
federal agencies, departments, and contractors as well as being
under HIPAA purview. These compliance frameworks will place the
burden of effort around logging, reporting, and policy. While
technical controls will require Alice to use Public Key
Infrastructure to encrypt wire-level communication, and SELinux
for Mandatory Access Controls, Alice will invest in tool
development to automate the reporting. Additionally, comprehensive
documentation is expected covering application and network
architecture, controls, and other details. The FedRAMP
classification of Alice's system is High per FIPS-199. Alice will
leverage existing authentication/authorization infrastructure in
the form of Microsoft Active Directory, and an existing enterprise
SIEM deployment that she will use to build new views and correlation
rules to better monitor the state of her cloud.</para>
</section>
<section xml:id="introduction-to-case-studies-case-study-bob-the-public-cloud-provider">
<title>Case study: Bob, the public cloud provider</title>
<para>Bob is a lead architect for a company that deploys a large
greenfield public cloud. This cloud provides IaaS for the masses
and enables any consumer with a valid credit card access to
utility computing and storage, but the primary focus is
enterprise customers. Data privacy concerns are a big priority
for Bob as they are seen as a major barrier to large-scale
adoption of the cloud by organizations.</para>
</section>
</section>


@ -1,257 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="introduction-to-openstack">
<?dbhtml stop-chunking?>
<title>Introduction to OpenStack</title>
<para>This guide provides security insight into <glossterm>OpenStack</glossterm>
deployments. The intended audience is cloud architects, deployers,
and administrators. In addition, cloud users will find the guide
both educational and helpful in provider selection, while auditors
will find it useful as a reference document to support their
compliance certification efforts. This guide is also recommended
for anyone interested in cloud security.</para>
<para>Each OpenStack deployment embraces a wide variety of
technologies, spanning Linux distributions, database systems,
messaging queues, OpenStack components themselves, access control
policies, logging services, security monitoring tools, and much
more. It should come as no surprise that the security issues
involved are equally diverse, and their in-depth analysis would
require several guides. We strive to find a balance, providing
enough context to understand OpenStack security issues and their
handling, and providing external references for further information.
The guide can be read from start to finish or sampled as
necessary, like a reference.</para>
<para>We briefly introduce the kinds of clouds: private, public, and
hybrid before presenting an overview of the OpenStack components
and their related security concerns in the remainder of the
chapter.</para>
<section xml:id="introduction-to-openstack-cloud-types">
<title>Cloud types</title>
<para>OpenStack is a key enabler in adoption of cloud technology
and has several common deployment use cases. These are commonly
known as Public, Private, and Hybrid models. The following
sections use the National Institute of Standards and Technology
(NIST) <link
xlink:href="http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf"
>definition of cloud</link> to introduce these different types
of cloud as they apply to OpenStack.</para>
<section xml:id="introduction-to-openstack-cloud-types-public-cloud">
<title>Public cloud</title>
<para>According to NIST, a public cloud is one in which the
infrastructure is open to the general public for consumption.
OpenStack public clouds are typically run by a service
provider and can be consumed by individuals, corporations, or
any paying customer. A public cloud provider may expose a full
        set of features such as software-defined networking and block
        storage, in addition to multiple instance types. Due to the
nature of public clouds, they are exposed to a higher degree
of risk. As a consumer of a public cloud you should validate
that your selected provider has the necessary certifications,
attestations, and other regulatory considerations. As a public
cloud provider, depending on your target customers, you may be
subject to one or more regulations. Additionally, even if not
required to meet regulatory requirements, a provider should
ensure tenant isolation as well as protecting management
infrastructure from external attacks.</para>
</section>
<section xml:id="introduction-to-openstack-cloud-types-private-cloud">
<title>Private cloud</title>
<para>At the opposite end of the spectrum is the private cloud.
As NIST defines it, a private cloud is provisioned for
exclusive use by a single organization comprising multiple
consumers, such as business units. It may be owned, managed,
and operated by the organization, a third-party, or some
combination of them, and it may exist on or off premises.
Private cloud use cases are diverse, as such, their individual
security concerns vary.</para>
</section>
<section xml:id="introduction-to-openstack-cloud-types-community-cloud">
<title>Community cloud</title>
<para>NIST defines a community cloud as one whose
infrastructure is provisioned for the exclusive use by a
specific community of consumers from organizations that have
shared concerns. For example, mission, security requirements,
policy, and compliance considerations. It may be owned,
managed, and operated by one or more of the organizations in
the community, a third-party, or some combination of them, and
it may exist on or off premises.</para>
</section>
<section xml:id="introduction-to-openstack-cloud-types-hybrid-cloud">
<title>Hybrid cloud</title>
<para>A hybrid cloud is defined by NIST as a composition of two
or more distinct cloud infrastructures, such as private, community, or
public, that remain unique entities, but are bound together by
standardized or proprietary technology that enables data and
application portability, such as cloud bursting for load
balancing between clouds. For example an online retailer may
have their advertising and catalogue presented on a public
cloud that allows for elastic provisioning. This would enable
them to handle seasonal loads in a flexible, cost-effective
fashion. Once a customer begins to process their order, they
are transferred to the more secure private cloud back end that
is PCI compliant.</para>
<para>For the purposes of this document, we treat Community and
Hybrid similarly, dealing explicitly only with the extremes of
Public and Private clouds from a security perspective. Your
        security measures depend on where your deployment falls along
        the private/public continuum.</para>
</section>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview">
<title>OpenStack service overview</title>
    <para>OpenStack embraces a modular architecture to provide a set
      of core services, with scalability and elasticity as key design
      tenets. This chapter briefly reviews OpenStack
components, their use cases and security considerations.</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="374" contentwidth="976"
fileref="static/marketecture-diagram.png" format="PNG"
scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/marketecture-diagram.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<section xml:id="introduction-to-openstack-openstack-service-overview-compute">
<title>Compute</title>
<para>OpenStack <glossterm>Compute</glossterm> service
(<glossterm>nova</glossterm>) provides services to
support the management of virtual machine instances at scale,
instances that host multi-tiered applications, dev/test
        environments, "Big Data" Hadoop clusters, and high
        performance computing workloads.</para>
<para>The Compute service facilitates this management through an
abstraction layer that interfaces with supported hypervisors,
which we address later on in more detail.</para>
<para>Later in the guide, we focus generically on the
virtualization stack as it relates to hypervisors.</para>
<para>For information about the current state of feature
support, see <link
xlink:href="https://wiki.openstack.org/wiki/HypervisorSupportMatrix"
>OpenStack Hypervisor Support Matrix</link>.</para>
<para>The security of Compute is critical for an OpenStack
deployment. Hardening techniques should include support for
strong instance isolation, secure communication between
Compute sub-components, and resiliency of public-facing
<glossterm>API</glossterm> endpoints.</para>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview-object-storage">
<title>Object Storage</title>
<para>The OpenStack <glossterm>Object Storage</glossterm>
service (<glossterm>swift</glossterm>) provides
support for storing and retrieving arbitrary data in the
cloud. The Object Storage service provides both a native API
and an Amazon Web Services S3 compatible API. The service
provides a high degree of resiliency through data replication
and can handle petabytes of data.</para>
<para>It is important to understand that object storage differs
from traditional file system storage. It is best used for
static data such as media files (MP3s, images, videos),
virtual machine images, and backup files.</para>
<para>Object security should focus on access control and
encryption of data in transit and at rest. Other concerns may
relate to system abuse, illegal or malicious content storage,
and cross authentication attack vectors.</para>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview-block-storage">
<title>Block Storage</title>
<para>The OpenStack <glossterm>Block Storage</glossterm>
service (<glossterm>cinder</glossterm>) provides
persistent block storage for compute instances. The Block
Storage service is responsible for managing the life-cycle of
block devices, from the creation and attachment of volumes to
instances, to their release.</para>
<para>Security considerations for block storage are similar to
that of object storage.</para>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview-networking">
<title>Networking</title>
<para>The OpenStack <glossterm>Networking</glossterm> service
(<glossterm>neutron</glossterm>, previously
called quantum) provides various networking services to cloud
users (tenants) such as IP address management,
<glossterm>DNS</glossterm>, <glossterm>DHCP</glossterm>,
load balancing, and security groups (network access rules,
like firewall policies). It provides a framework for software
defined networking (SDN) that allows for pluggable integration
with various networking solutions.</para>
<para>OpenStack Networking allows cloud tenants to manage their
guest network configurations. Security concerns with the
networking service include network traffic isolation,
availability, integrity and confidentiality.</para>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview-dashboard">
<title>Dashboard</title>
<para>The OpenStack <glossterm>dashboard</glossterm>
(<glossterm>horizon</glossterm>) provides a
web-based interface for both cloud administrators and cloud
tenants. Through this interface administrators and tenants can
provision, manage, and monitor cloud resources. Horizon is
commonly deployed in a public facing manner with all the usual
security concerns of public web portals.</para>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview-identity-service">
<title>Identity service</title>
<para>The OpenStack <glossterm baseform="Identity Service">Identity</glossterm>
service (<glossterm>keystone</glossterm>) is a <emphasis
role="bold">shared service</emphasis> that provides
authentication and authorization services throughout the
entire cloud infrastructure. The Identity service has
pluggable support for multiple forms of authentication.</para>
<para>Security concerns here pertain to trust in authentication,
management of authorization tokens, and secure
communication.</para>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview-image-service">
<title>Image service</title>
<para>The OpenStack <glossterm>Image service</glossterm>
(<glossterm>glance</glossterm>) provides disk image
management services. The Image service provides image
discovery, registration, and delivery services to the
Compute service, as needed.</para>
<para>Trusted processes for managing the life cycle of disk
images are required, as are all the previously mentioned
issues with respect to data security.</para>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview-data-processing-service">
<title>Data processing service</title>
<para>
The <glossterm>Data processing service</glossterm> for OpenStack
(<glossterm>sahara</glossterm>) provides a platform for the
provisioning, management, and usage of clusters running popular
processing frameworks.
</para>
<para>
Security considerations for data processing should focus on data
privacy and secure communications to provisioned clusters.
</para>
</section>
<section xml:id="introduction-to-openstack-openstack-service-overview-other-supporting-technology">
<title>Other supporting technology</title>
<para>OpenStack relies on messaging for internal communication
between several of its services. By default, OpenStack uses
message queues based on the Advanced Message Queue Protocol
(<glossterm baseform="Advanced Message Queuing Protocol (AMQP)">AMQP
</glossterm>). Similar to most OpenStack services, it supports
pluggable components. Today the implementation back end could be
<glossterm>RabbitMQ</glossterm>,
<glossterm>Qpid</glossterm>, or
<glossterm>ZeroMQ</glossterm>.</para>
<para>As most management commands flow through the message
queuing system, it is a primary security concern for any
OpenStack deployment. Message queuing security is discussed
in detail later in this guide.</para>
      <para>Several OpenStack components use databases, though this is
        not always explicitly called out. Securing access to the
        databases and their contents is yet another security concern,
        and is consequently discussed in more detail later in this
        guide.</para>
</section>
</section>
</section>


@ -1,152 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="introduction-to-ssl-tls">
<?dbhtml stop-chunking?>
<title>Introduction to TLS and SSL</title>
<para>
There are a number of situations where there is a security
requirement to assure the confidentiality or integrity of
network traffic in an OpenStack deployment. This is generally
achieved using cryptographic measures, such as the Transport
Layer Security (TLS) protocol.
</para>
<para>
In a typical deployment all traffic transmitted over public
networks is secured, but security best practice dictates that
internal traffic must also be secured. It is insufficient to rely
on security domain separation for protection. If an attacker
gains access to the hypervisor or host resources, compromises an
API endpoint, or any other service, they must not be able to
easily inject or capture messages, commands, or otherwise affect
the management capabilities of the cloud.
</para>
<para>
All domains should be secured with TLS, including the management
domain services and intra-service communications. TLS provides the
mechanisms to ensure authentication, non-repudiation,
confidentiality, and integrity of user communications to the
OpenStack services and between the OpenStack services themselves.
</para>
<para>
Due to the published vulnerabilities in the Secure Sockets Layer
    (SSL) protocols, we strongly recommend that TLS be used in preference
    to SSL, and that SSL be disabled in all cases, unless compatibility
    with obsolete browsers or libraries is required.</para>
<para>
Public Key Infrastructure (PKI) is the framework for securing
communication in a network. It consists of a set of systems and
processes to ensure traffic can be sent securely while validating
the identities of the parties. The core components of PKI are:</para>
<variablelist>
<varlistentry>
<term>End entity</term>
<listitem>
<para>User, process, or system that is the subject of a
certificate.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Certification Authority (<glossterm>CA</glossterm>)</term>
<listitem>
<para>Defines certificate policies, management, and issuance
of certificates.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Registration Authority (RA)</term>
<listitem>
<para>An optional system to which a CA delegates certain
management functions.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Repository</term>
<listitem>
<para>
Where the end entity certificates and certificate
revocation lists are stored and looked up - sometimes
referred to as the <emphasis role="italic">certificate
bundle</emphasis>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Relying party</term>
<listitem>
<para>The endpoint that is trusting that the CA is valid.</para>
</listitem>
</varlistentry>
</variablelist>
<para>PKI builds the framework on which to provide encryption algorithms, cipher modes, and protocols for securing data and authentication. We strongly recommend securing all services with Public Key Infrastructure (PKI), including the use of TLS for API endpoints. It is impossible for the encryption or signing of transports or messages alone to solve all these problems. Hosts themselves must be secure and implement policy, namespaces, and other controls to protect their private credentials and keys. However, the challenges of key management and protection do not reduce the necessity of these controls, or lessen their importance.</para>
<section xml:id="introduction-to-ssl-tls-certificate-authorities">
<title>Certification authorities</title>
    <para>Many organizations have an established Public Key Infrastructure with their own certification authority (CA), certificate policies, and management, which they should use to issue certificates for internal OpenStack users or services. Organizations in which the public security domain is Internet facing will additionally need certificates signed by a widely recognized public CA. For cryptographic communications over the management network, we recommend not using a public CA; instead, we expect and recommend that most deployments deploy their own internal CA.</para>
<para>It is recommended that the OpenStack cloud architect consider using separate PKI deployments for internal systems and customer facing services. This allows the cloud deployer to maintain control of their PKI infrastructure and among other things makes requesting, signing and deploying certificates for internal systems easier. Advanced configurations may use separate PKI deployments for different security domains. This allows deployers to maintain cryptographic separation of environments, ensuring that certificates issued to one are not recognized by another.</para>
    <para>Certificates used to support TLS on internet facing cloud endpoints (or customer interfaces where the customer is not expected to have installed anything other than standard operating system provided certificate bundles) should be provisioned using certificate authorities that are installed in the operating system certificate bundle. Typical well-known vendors include Verisign and Thawte, but many others exist.</para>
<para>There are many management, policy, and technical challenges around creating and signing certificates. This is an area where cloud architects or operators may wish to seek the advice of industry leaders and vendors in addition to the guidance recommended here.</para>
</section>
<section xml:id="introduction-to-ssl-tls-ssl-tls-libraries">
<title>TLS libraries</title>
    <para>Various components, services, and applications within the OpenStack ecosystem, as well as OpenStack's dependencies, can be configured to use TLS libraries. The TLS and HTTP services within OpenStack are typically implemented using OpenSSL, which has a module that has been validated for FIPS 140-2. However, keep in mind that each application or service can still introduce weaknesses in how it uses the OpenSSL libraries.</para>
</section>
<section xml:id="introduction-to-ssl-tls-cryptographic-algorithms-cipher-modes-and-protocols">
<title>Cryptographic algorithms, cipher modes, and protocols</title>
    <para>We recommend only using TLSv1.2. TLSv1.0 and TLSv1.1 may be used for broad client compatibility, but we recommend caution and enabling these protocols only if you have a strong requirement to do so. Older protocol versions should not be used at all: SSLv2 is deprecated, and SSLv3 suffers from the attack known as POODLE. When using TLSv1.2 and in control of both the clients and the server, the cipher suite should be limited to <literal>ECDHE-ECDSA-AES256-GCM-SHA384</literal>. Where you do not control both ends and are using TLSv1 or later, the more general <literal>HIGH:!aNULL:!eNULL:!DES:!3DES</literal> is a reasonable cipher selection.</para>
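The protocol and cipher guidance above can be sketched with Python's standard ssl module. This is an illustrative sketch only, not an OpenStack configuration; the function name and certificate paths are ours.

```python
import ssl

def make_strict_server_context(certfile=None, keyfile=None):
    """Server-side TLS context following the strict guidance: TLSv1.2
    as the protocol floor and, where both ends are under your control,
    a single cipher suite. Certificate paths are illustrative."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Refuse TLSv1.0/1.1 (SSLv2/SSLv3 are already disabled in
    # modern OpenSSL builds).
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE-ECDSA-AES256-GCM-SHA384")
    # For mixed client populations, the broader selection would be:
    #   ctx.set_ciphers("HIGH:!aNULL:!eNULL:!DES:!3DES")
    if certfile:
        ctx.load_cert_chain(certfile=certfile, keyfile=keyfile)
    return ctx
```

Note that with OpenSSL 1.1.1 or later, TLSv1.3 cipher suites remain enabled independently of `set_ciphers`; cap `ctx.maximum_version` if you need to exclude them.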
    <para>As this book does not intend to be a thorough reference on cryptography, we do not wish to be prescriptive about which specific algorithms or cipher modes you should enable or disable in your OpenStack services. However, there are some authoritative references we recommend for further information:</para>
<itemizedlist><listitem>
<para><link xlink:href="http://www.nsa.gov/ia/programs/suiteb_cryptography/index.shtml">National Security Agency, Suite B Cryptography</link></para>
</listitem>
<listitem>
<para><link xlink:href="https://www.owasp.org/index.php/Guide_to_Cryptography">OWASP Guide to Cryptography</link></para>
</listitem>
<listitem>
<para><link xlink:href="https://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet">OWASP Transport Layer Protection Cheat Sheet</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.ieee-security.org/TC/SP2013/papers/4977a511.pdf">SoK: SSL and HTTPS: Revisiting past challenges and evaluating certificate trust model enhancements</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.cs.utexas.edu/~shmat/shmat_ccs12.pdf">The Most Dangerous Code in the World: Validating SSL Certificates in Non-Browser Software</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.openssl.org/docs/fips/fipsnotes.html">OpenSSL and FIPS 140-2</link></para>
</listitem>
</itemizedlist>
</section>
<section xml:id="introduction-to-ssl-tls-summary">
<title>Summary</title>
<para>
Given the complexity of the OpenStack components and the
number of deployment possibilities, you must take care to
ensure that each component gets the appropriate configuration
of TLS certificates, keys, and CAs. Subsequent sections discuss
the following services:
</para>
<itemizedlist>
<listitem>
<para>Compute API endpoints</para>
</listitem>
<listitem>
<para>Identity API endpoints</para>
</listitem>
<listitem>
<para>Networking API endpoints</para>
</listitem>
<listitem>
<para>Storage API endpoints</para>
</listitem>
<listitem>
<para>Messaging server</para>
</listitem>
<listitem>
<para>Database server</para>
</listitem>
<listitem>
<para>Dashboard</para>
</listitem>
</itemizedlist>
</section>
</section>


@ -1,25 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="key-management">
<?dbhtml stop-chunking?>
<title>Key management</title>
  <para>To address the often-mentioned concerns of tenant data privacy and cloud provider liability, there is growing interest within the OpenStack community in making data encryption more ubiquitous. It is relatively easy for an end user to encrypt data before saving it to the cloud, and this is a viable path for tenant objects such as media files and database archives. In some instances, client-side encryption is used to encrypt data held by the virtualization technologies, which requires client interaction, such as presenting keys, to decrypt data for future use. Securing that data seamlessly, and keeping it accessible without burdening clients with managing and interactively providing their keys, calls for a key management service within OpenStack. Providing encryption and key management services as part of OpenStack eases data-at-rest security adoption, addresses customer concerns about privacy or misuse of data, and can reduce a provider's liability when handling tenant data during an incident investigation in multi-tenant public clouds.</para>
<para>The volume encryption and ephemeral disk encryption features rely on a key management service (for example, Barbican) for the creation and secure storage of keys. The key manager is pluggable to facilitate deployments that need a third-party Hardware Security Module (HSM) or the use of the Key Management Interchange Protocol (KMIP), which is supported by an open-source project called PyKMIP.</para>
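To illustrate what "pluggable" means here, the following sketch defines a minimal key-manager interface with a toy in-memory back end. The class and method names are ours for illustration only; they are not the actual Barbican or OpenStack key-manager API. A real deployment would plug in Barbican, an HSM, or a KMIP server (for example via PyKMIP) behind the same kind of interface.

```python
import abc
import secrets
import uuid

class KeyManager(abc.ABC):
    """Illustrative pluggable key-manager interface."""

    @abc.abstractmethod
    def create_key(self, length_bits: int = 256) -> str:
        """Create a key; return an opaque identifier, never the key."""

    @abc.abstractmethod
    def get_key(self, key_id: str) -> bytes:
        """Retrieve key material for an authorized caller."""

    @abc.abstractmethod
    def delete_key(self, key_id: str) -> None:
        """Destroy the key material."""

class InMemoryKeyManager(KeyManager):
    """Toy back end: keys live only in process memory."""

    def __init__(self):
        self._store = {}

    def create_key(self, length_bits: int = 256) -> str:
        key_id = str(uuid.uuid4())
        self._store[key_id] = secrets.token_bytes(length_bits // 8)
        return key_id

    def get_key(self, key_id: str) -> bytes:
        return self._store[key_id]

    def delete_key(self, key_id: str) -> None:
        del self._store[key_id]
```

The point of the indirection is that callers (volume encryption, ephemeral disk encryption) hold only key identifiers, while the back end controls where key material actually lives.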
<section xml:id="key-management-references">
<title>Bibliography:</title>
<itemizedlist>
<listitem>
<para>OpenStack.org, Welcome to Barbican's Developer Documentation!. 2014. <link xlink:href="http://docs.openstack.org/developer/barbican">Barbican developer documentation</link></para>
</listitem>
<listitem>
<para>oasis-open.org, OASIS Key Management Interoperability Protocol (KMIP). 2014. <link xlink:href="https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=kmip">KMIP</link></para>
</listitem>
<listitem>
        <para>OpenKMIP, PyKMIP library. <link xlink:href="https://github.com/OpenKMIP/PyKMIP">PyKMIP</link></para>
</listitem>
</itemizedlist>
</section>
</section>


@ -1,206 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="management-interfaces">
<?dbhtml stop-chunking?>
<title>Management interfaces</title>
<para>It is necessary for administrators to perform command and
control over the cloud for various operational functions. It is
important these command and control facilities are understood and
secured.</para>
<para>OpenStack provides several management interfaces for operators and tenants:</para>
<itemizedlist><listitem>
<para>OpenStack dashboard (horizon)</para>
</listitem>
<listitem>
<para>OpenStack API</para>
</listitem>
<listitem>
<para>Secure shell (SSH)</para>
</listitem>
<listitem>
<para>OpenStack management utilities such as
<systemitem class="service">nova-manage</systemitem> and
<systemitem class="service">glance-manage</systemitem></para>
</listitem>
<listitem>
<para>Out-of-band management interfaces, such as IPMI</para>
</listitem>
</itemizedlist>
<section xml:id="management-interfaces-dashboard">
<title>Dashboard</title>
<para>
The OpenStack dashboard (horizon) provides administrators and
tenants with a web-based graphical interface to provision and
access cloud-based resources. The dashboard communicates with
the back-end services through calls to the OpenStack API.
</para>
<section xml:id="management-interfaces-dashboard-capabilities">
<title>Capabilities</title>
<itemizedlist><listitem>
          <para>As a cloud administrator, the dashboard provides an overall view of the size and state of your cloud. You can create users and tenants/projects, assign users to tenants/projects, and set limits on the resources available to them.</para>
</listitem>
<listitem>
          <para>The dashboard provides tenant users with a self-service portal to provision their own resources within the limits set by administrators.</para>
</listitem>
<listitem>
<para>The dashboard provides GUI support for routers and load-balancers. For example, the dashboard now implements all of the main Networking features.</para>
</listitem>
<listitem>
<para>It is an extensible <glossterm>Django</glossterm> web application that allows easy plug-in of third-party products and services, such as billing, monitoring, and additional management tools.</para>
</listitem>
<listitem>
<para>The dashboard can also be branded for service providers and other commercial vendors.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="management-interfaces-dashboard-security-considerations">
<title>Security considerations</title>
<itemizedlist><listitem>
<para>The dashboard requires cookies and JavaScript to be enabled in the web browser.</para>
</listitem>
<listitem>
<para>The web server that hosts the dashboard should be configured for TLS to ensure data is encrypted.</para>
</listitem>
<listitem>
<para>Both the horizon web service and the OpenStack API it
uses to communicate with the back end are susceptible to
web attack vectors such as denial of service and must be
monitored.</para>
</listitem>
<listitem>
<para>
It is now possible (though there are numerous
deployment/security implications) to upload an image
file directly from a user's hard disk to OpenStack Image
Service through the dashboard. For multi-gigabyte images it is
still strongly recommended that the upload be done using
the <command>glance</command> CLI.
</para>
</listitem>
<listitem>
<para>
            You can create and manage security groups through the
            dashboard. Security groups allow L3-L4 packet filtering in
            security policies to protect virtual machines.
</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="management-interfaces-dashboard-references">
<title>References</title>
<para><link xlink:href="https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse"><citetitle>Icehouse Release Notes</citetitle></link></para>
</section>
</section>
<section xml:id="management-interfaces-openstack-api">
<title>OpenStack API</title>
<para>The OpenStack API is a RESTful web service endpoint to
access, provision and automate cloud-based resources. Operators
and users typically access the API through command-line
utilities (for example, <command>nova</command> or
<command>glance</command>), language-specific libraries, or
third-party tools.</para>
<section xml:id="management-interfaces-openstack-api-capabilities">
<title>Capabilities</title>
<itemizedlist><listitem>
<para>To the cloud administrator, the API provides an
overall view of the size and state of the cloud deployment
and allows the creation of users, tenants/projects,
assigning users to tenants/projects, and specifying
resource quotas on a per tenant/project basis.</para>
</listitem>
<listitem>
<para>The API provides a tenant interface for provisioning, managing, and accessing their resources.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="management-interfaces-openstack-api-security-considerations">
<title>Security considerations</title>
<itemizedlist><listitem>
<para>The API service should be configured for TLS to ensure data is encrypted.</para>
</listitem>
<listitem>
<para>As a web service, OpenStack API is susceptible to familiar web site attack vectors such as denial of service attacks.</para>
</listitem>
</itemizedlist>
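On the client side, the TLS recommendation amounts to always verifying the endpoint's certificate and hostname. A minimal sketch with Python's standard ssl module follows; the CA-bundle path shown in the comment is hypothetical.

```python
import ssl

# The TLS context an API client should use when talking to an
# OpenStack endpoint. create_default_context() enables certificate
# verification and hostname checking, so a man-in-the-middle
# presenting an untrusted certificate fails the handshake instead
# of silently connecting.
ctx = ssl.create_default_context()
assert ctx.check_hostname is True
assert ctx.verify_mode == ssl.CERT_REQUIRED

# When the endpoint certificate is signed by an internal CA (as
# recommended for the management network), trust only that CA;
# the path here is hypothetical:
# ctx = ssl.create_default_context(cafile="/etc/ssl/internal-ca.pem")
```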
</section>
</section>
<section xml:id="management-interfaces-secure-shell-ssh">
<title>Secure shell (SSH)</title>
<para>It has become industry practice to use secure shell (SSH) access for the management of Linux and Unix systems. SSH uses secure cryptographic primitives for communication. With the scope and importance of SSH in typical OpenStack deployments, it is important to understand best practices for deploying SSH.</para>
<section xml:id="management-interfaces-secure-shell-ssh-host-key-fingerprints">
<title>Host key fingerprints</title>
      <para>Often overlooked is the need for key management for SSH hosts. As most or all hosts in an OpenStack deployment provide an SSH service, it is important to have confidence in connections to these hosts. It cannot be overstated: failing to provide a reasonably secure and accessible method to verify SSH host key fingerprints leaves a deployment ripe for abuse and exploitation.</para>
<para>All SSH daemons have private host keys and, upon connection, offer a host key fingerprint. This host key fingerprint is the hash of an unsigned public key. It is important these host key fingerprints are known in advance of making SSH connections to those hosts. Verification of host key fingerprints is instrumental in detecting man-in-the-middle attacks.</para>
      <para>Typically, when an SSH daemon is installed, host keys are generated. The hosts must have sufficient entropy during host key generation; insufficient entropy can make it possible to eavesdrop on SSH sessions.</para>
      <para>Once the SSH host key is generated, the host key fingerprint should be stored in a secure and queryable location. One particularly convenient solution is DNS, using SSHFP resource records as defined in RFC 4255. For this to be secure, DNSSEC must be deployed.</para>
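An SSHFP record essentially publishes a digest of the host's public key blob. The helper below sketches that computation with Python's standard library; the function name is ours.

```python
import base64
import hashlib

def sshfp_digest(pubkey_line: str) -> str:
    """Return the hex digest stored in an SSHFP DNS record
    (fingerprint type 2, SHA-256, per RFC 6594; RFC 4255 defines the
    original SHA-1 type) for an OpenSSH public-key line of the form
    "ssh-ed25519 AAAA... comment"."""
    # The second whitespace-separated field is the base64-encoded
    # public key blob; the fingerprint is a digest of that blob.
    blob = base64.b64decode(pubkey_line.split()[1])
    return hashlib.sha256(blob).hexdigest()
```

The result should match the fingerprint field emitted by `ssh-keygen -r <hostname>` on the host itself; as noted above, publishing it in DNS is only trustworthy when the zone is protected by DNSSEC.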
</section>
</section>
<section xml:id="management-interfaces-management-utilities">
<title>Management utilities</title>
<para>The OpenStack Management Utilities are open-source Python
command-line clients that make API calls. There is a client for
each OpenStack service (for example, <systemitem
class="service">nova</systemitem>, <systemitem
class="service">glance</systemitem>). In addition to the
standard CLI client, most of the services have a management
command-line utility which makes direct calls to the database. These
dedicated management utilities are slowly being
deprecated.</para>
<section xml:id="management-interfaces-management-utilities-security-considerations">
<title>Security considerations</title>
<itemizedlist><listitem>
          <para>The dedicated management utilities (*-manage) in some cases connect directly to the database.</para>
</listitem>
<listitem>
          <para>Ensure that the .rc file, which contains your credential information, is secured.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="management-interfaces-management-utilities-references">
<title>References</title>
<para><citetitle>OpenStack End User Guide</citetitle> section <link xlink:href="http://docs.openstack.org/user-guide/cli.html">command-line clients overview</link></para>
<para><citetitle>OpenStack End User Guide</citetitle> section <link xlink:href="http://docs.openstack.org/user-guide/common/cli_set_environment_variables_using_openstack_rc.html">Download and source the OpenStack RC file</link></para>
</section>
</section>
<section xml:id="management-interfaces-out-of-band-management-interface">
<title>Out-of-band management interface</title>
    <para>OpenStack management relies on out-of-band management
      interfaces such as the IPMI protocol to access nodes running
      OpenStack components. IPMI is a widely used specification to
      remotely manage, diagnose, and reboot servers, whether the
      operating system is running or the system has crashed.</para>
<section xml:id="management-interfaces-out-of-band-management-interface-security-consideration">
<title>Security considerations</title>
<itemizedlist><listitem>
<para>Use strong passwords and safeguard them, or use client-side TLS authentication.</para>
</listitem>
<listitem>
        <para>Ensure that the network interfaces are on their own private network (management or a separate network). Segregate management domains with firewalls or other network gear.</para>
</listitem>
<listitem>
<para>If you use a web interface to interact with the
<glossterm>BMC</glossterm>/IPMI, always use the TLS
interface, such as HTTPS or port 443. This TLS interface
should <emphasis role="bold">NOT</emphasis> use
self-signed certificates, as is often default, but should
have trusted certificates using the correctly defined
fully qualified domain names (FQDNs).</para>
</listitem>
<listitem>
<para>Monitor the traffic on the management network. The
anomalies might be easier to track than on the busier
compute nodes.</para>
</listitem>
</itemizedlist>
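    <para>As one way to implement the segregation recommended above, a
    firewall on a gateway between networks can drop all traffic to the
    IPMI port (UDP 623) unless it originates from the management
    network. The subnet shown is an illustrative example only:</para>
    <screen><prompt>#</prompt> <userinput>iptables -A FORWARD -p udp --dport 623 ! -s 192.168.0.0/24 -j DROP</userinput></screen>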
    <para>Out-of-band management interfaces also often include graphical machine console access. These interfaces can often be encrypted, although encryption is not necessarily enabled by default. Consult your system software documentation for instructions on encrypting these interfaces.</para>
</section>
<section xml:id="management-interfaces-out-of-band-management-interface-references">
<title>References</title>
<para><link xlink:href="https://isc.sans.edu/diary/IPMI%3A+Hacking+servers+that+are+turned+%22off%22/13399">Hacking servers that are turned off</link></para>
</section>
</section>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="messaging-security">
<?dbhtml stop-chunking?>
<title>Messaging security</title>
<para>
This section discusses security hardening approaches for the
three most common message queuing solutions used in OpenStack:
RabbitMQ, Qpid, and ZeroMQ.
</para>
<section xml:id="messaging-security-messaging-transport-security">
<title>Messaging transport security</title>
<para>
AMQP based solutions (Qpid and RabbitMQ) support
transport-level security using TLS. ZeroMQ messaging does not
natively support TLS, but transport-level security is possible
using labelled IPsec or CIPSO network labels.
</para>
<para>
We highly recommend enabling transport-level cryptography for
your message queue. Using TLS for the messaging client
connections provides protection of the communications from
tampering and eavesdropping in-transit to the messaging
server. Below is guidance on how TLS is typically configured
for the two popular messaging servers Qpid and RabbitMQ. When
configuring the trusted certificate authority (CA) bundle that
your messaging server uses to verify client connections, it is
recommended that this be limited to only the CA used for your
nodes, preferably an internally managed CA. The bundle of
trusted CAs will determine which client certificates will be
    authorized and pass the client-server verification step when
    setting up the TLS connection. Note, when installing the
certificate and key files, ensure that the file permissions
are restricted, for example using <command>chmod 0600</command>,
and the ownership is
restricted to the messaging server daemon user to prevent
unauthorized access by other processes and users on the
messaging server.
</para>
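  <para>For example, assuming the RabbitMQ daemon runs as the
  <literal>rabbitmq</literal> user and the key file path matches the
  configuration below, the permissions and ownership can be restricted
  as follows:</para>
  <screen><prompt>#</prompt> <userinput>chown rabbitmq:rabbitmq /etc/ssl/rabbit-server-key.pem</userinput>
<prompt>#</prompt> <userinput>chmod 0600 /etc/ssl/rabbit-server-key.pem</userinput></screen>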
<section xml:id="messaging-security-messaging-transport-security-rabbitmq-server-ssl-configuration">
<title>RabbitMQ server SSL configuration</title>
<para>The following lines should be added to the system-wide
RabbitMQ configuration file, typically
<filename>/etc/rabbitmq/rabbitmq.config</filename>:
</para>
<programlisting>[
{rabbit, [
{tcp_listeners, [] },
{ssl_listeners, [{"&lt;IP address or hostname of management network interface&gt;", 5671}] },
{ssl_options, [{cacertfile,"/etc/ssl/cacert.pem"},
{certfile,"/etc/ssl/rabbit-server-cert.pem"},
{keyfile,"/etc/ssl/rabbit-server-key.pem"},
{verify,verify_peer},
{fail_if_no_peer_cert,true}]}
]}
].</programlisting>
<para>Note, the <literal>tcp_listeners</literal> option is set
    to <literal>[]</literal> to prevent it from listening on a
    non-SSL port. The <literal>ssl_listeners</literal> option
should be restricted to only listen on the management network
for the services.</para>
<para>For more information on RabbitMQ SSL configuration see:</para>
<itemizedlist><listitem>
<para>
<link xlink:href="http://www.rabbitmq.com/configure.html">RabbitMQ Configuration</link>
</para>
</listitem>
<listitem>
<para><link xlink:href="http://www.rabbitmq.com/ssl.html">RabbitMQ SSL</link></para>
</listitem>
</itemizedlist>
</section>
<section xml:id="messaging-security-messaging-transport-security-qpid-server-ssl-configuration">
<title>Qpid server SSL configuration</title>
<para>The Apache Foundation has a messaging security guide for Qpid. See:</para>
<itemizedlist><listitem>
<para><link xlink:href="http://qpid.apache.org/releases/qpid-0.32/cpp-broker/book/chap-Messaging_User_Guide-Security.html#sect-Messaging_User_Guide-Security-Encryption_using_SSL">Apache Qpid SSL</link></para>
</listitem>
</itemizedlist>
</section>
</section>
<section xml:id="messaging-security-queue-authentication-and-access-control">
<title>Queue authentication and access control</title>
<para>
RabbitMQ and Qpid offer authentication and access control
mechanisms for controlling access to queues. ZeroMQ offers no
such mechanisms.
</para>
<para>
Simple Authentication and Security Layer (SASL) is a framework
for authentication and data security in Internet
protocols. Both RabbitMQ and Qpid offer SASL and other
pluggable authentication mechanisms beyond simple user names
and passwords that allow for increased authentication
security. While RabbitMQ supports SASL, support in OpenStack
does not currently allow for requesting a specific SASL
authentication mechanism. RabbitMQ support in OpenStack allows
for either user name and password authentication over an
unencrypted connection or user name and password in
conjunction with X.509 client certificates to establish the
secure TLS connection.
</para>
<para>
We recommend configuring X.509 client certificates on all the
OpenStack service nodes for client connections to the
messaging queue and where possible (currently only Qpid)
perform authentication with X.509 client certificates. When
using user names and passwords, accounts should be created
per-service and node for finer grained auditability of access
to the queue.
</para>
<para>
Before deployment, consider the TLS libraries that the queuing
servers use. Qpid uses Mozilla's NSS library, whereas RabbitMQ
uses Erlang's TLS module which uses OpenSSL.</para>
<section xml:id="messaging-security-queue-authentication-and-access-control-authentication-configuration-example-rabbitmq">
<title>Authentication configuration example: RabbitMQ</title>
<para>On the RabbitMQ server, delete the default
<literal>guest</literal> user:</para>
    <screen><prompt>#</prompt> <userinput>rabbitmqctl delete_user guest</userinput></screen>
<para>On the RabbitMQ server, for each OpenStack service or
node that communicates with the message queue set up user
accounts and privileges:</para>
<screen><prompt>#</prompt> <userinput>rabbitmqctl add_user compute01 <replaceable>RABBIT_PASS</replaceable></userinput>
<prompt>#</prompt> <userinput>rabbitmqctl set_permissions compute01 ".*" ".*" ".*"</userinput></screen>
<para>Replace <replaceable>RABBIT_PASS</replaceable> with a suitable password.</para>
<para>For additional configuration information see:</para>
<itemizedlist><listitem>
<para><link xlink:href="http://www.rabbitmq.com/access-control.html">RabbitMQ Access Control</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.rabbitmq.com/authentication.html">RabbitMQ Authentication</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.rabbitmq.com/plugins.html">RabbitMQ Plugins</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://hg.rabbitmq.com/rabbitmq-auth-mechanism-ssl/file/rabbitmq_v3_1_3/README">RabbitMQ SASL External Auth</link></para>
</listitem>
</itemizedlist>
</section>
<section xml:id="messaging-security-queue-authentication-and-access-control-openstack-service-configuration-rabbitmq">
<title>OpenStack service configuration: RabbitMQ</title>
<programlisting language="ini">[DEFAULT]
rpc_backend=nova.openstack.common.rpc.impl_kombu
rabbit_use_ssl=True
rabbit_host=
rabbit_port=5671
rabbit_user=compute01
rabbit_password=<replaceable>RABBIT_PASS</replaceable>
kombu_ssl_keyfile=/etc/ssl/node-key.pem
kombu_ssl_certfile=/etc/ssl/node-cert.pem
kombu_ssl_ca_certs=/etc/ssl/cacert.pem</programlisting>
</section>
<section xml:id="messaging-security-queue-authentication-and-access-control-authentication-configuration-example-qpid">
<title>Authentication configuration example: Qpid</title>
<para>For configuration information see:</para>
<itemizedlist><listitem>
<para><link xlink:href="http://qpid.apache.org/releases/qpid-0.32/cpp-broker/book/chap-Messaging_User_Guide-Security.html#sect-Messaging_User_Guide-Security-User_Authentication">Apache Qpid Authentication</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://qpid.apache.org/releases/qpid-0.32/cpp-broker/book/chap-Messaging_User_Guide-Security.html#sect-Messaging_User_Guide-Security-Authorization">Apache Qpid Authorization</link></para>
</listitem>
</itemizedlist>
</section>
<section xml:id="messaging-security-queue-authentication-and-access-control-openstack-service-configuration-qpid">
<title>OpenStack service configuration: Qpid</title>
<programlisting language="ini">
[DEFAULT]
rpc_backend=nova.openstack.common.rpc.impl_qpid
qpid_protocol=ssl
qpid_hostname=&lt;IP or hostname of management network interface of messaging server&gt;
qpid_port=5671
qpid_username=compute01
qpid_password=<replaceable>QPID_PASS</replaceable></programlisting>
<para>Optionally, if using SASL with Qpid specify the SASL mechanisms in use by adding:</para>
<programlisting language="ini">qpid_sasl_mechanisms=&lt;space separated list of SASL mechanisms to use for auth&gt;</programlisting>
</section>
</section>
<section xml:id="messaging-security-message-queue-process-isolation-and-policy">
<title>Message queue process isolation and policy</title>
<para>
Each project provides a number of services which send and
    consume messages. Each binary which sends a message is
    also expected to consume messages from the queue, even if only replies.
</para>
<para>
Message queue service processes should be isolated from each
other and other processes on a machine.
</para>
<section xml:id="messaging-security-message-queue-process-isolation-and-policy-namespaces">
<title>Namespaces</title>
<para>
Network namespaces are highly recommended for all services
      running on OpenStack Compute hypervisors. This helps
      prevent the bridging of network traffic between VM
      guests and the management network.
</para>
<para>
When using ZeroMQ messaging, each host must run at least one
ZeroMQ message receiver to receive messages from the network
and forward messages to local processes through IPC. It is
possible and advisable to run an independent message
receiver per project within an IPC namespace, along with
other services within the same project.</para>
</section>
<section xml:id="messaging-security-message-queue-process-isolation-and-policy-network-policy">
<title>Network policy</title>
<para>
Queue servers should only accept connections from the
management network. This applies to all
implementations. This should be implemented through
configuration of services and optionally enforced through
global network policy.
</para>
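    <para>For example, a host firewall rule on the messaging server
      can restrict the TLS listener port to the management network
      (the subnet shown is an illustrative example):</para>
    <screen><prompt>#</prompt> <userinput>iptables -A INPUT -p tcp --dport 5671 ! -s 192.168.0.0/24 -j DROP</userinput></screen>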
<para>
When using ZeroMQ messaging, each project should run a
separate ZeroMQ receiver process on a port dedicated to
services belonging to that project. This is equivalent to
the AMQP concept of control exchanges.
</para>
</section>
<section xml:id="messaging-security-message-queue-process-isolation-and-policy-mandatory-access-controls">
<title>Mandatory access controls</title>
<para>
Use both mandatory access controls (MACs) and discretionary
access controls (DACs) to restrict the configuration for
      processes to only those processes. This restriction keeps
      these processes isolated from other processes
      that run on the same machine(s).
</para>
</section>
</section>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="networking-architecture">
<?dbhtml stop-chunking?>
<title>Networking architecture</title>
<para>
OpenStack Networking is a standalone service that often deploys
several processes across a number of nodes. These processes
interact with each other and other OpenStack services. The main
process of the OpenStack Networking service is <systemitem
class="service">neutron-server</systemitem>, a Python daemon that
exposes the OpenStack Networking API and passes tenant requests to
a suite of plug-ins for additional processing.</para>
<para>
The OpenStack Networking components are:</para>
<variablelist>
<varlistentry>
<term>neutron server (<systemitem
class="service">neutron-server</systemitem> and <systemitem
class="service">neutron-*-plugin</systemitem>)</term>
<listitem>
<para>
This service runs on the network node to service the
Networking API and its extensions. It also enforces the
network model and IP addressing of each port. The
neutron-server and plugin agents require access to a
database for persistent storage and access to a message
queue for inter-communication.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>plugin agent (<systemitem class="service">neutron-*-agent</systemitem>)
</term>
<listitem>
<para>Runs on each compute node to manage local virtual
switch (vswitch) configuration. The plug-in that you use
        determines which agents run. This service requires message
queue access and depends on the plugin used.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>DHCP agent (<systemitem class="service">neutron-dhcp-agent</systemitem>)
</term>
<listitem>
<para>
Provides DHCP services to tenant networks. This agent is
the same across all plug-ins and is responsible for
maintaining DHCP configuration. The <systemitem
class="service">neutron-dhcp-agent</systemitem> requires
message queue access.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>L3 agent (<systemitem class="service">neutron-l3-agent</systemitem>)
</term>
<listitem>
<para>
Provides L3/NAT forwarding for external network access of
VMs on tenant networks. Requires message queue
access. <emphasis>Optional depending on
plug-in.</emphasis></para>
</listitem>
</varlistentry>
<varlistentry>
<term>network provider services (SDN server/services)</term>
<listitem>
<para>
Provides additional networking services to tenant
networks. These SDN services may interact with
<systemitem class="service">neutron-server</systemitem>,
<systemitem class="service">neutron-plugin</systemitem>,
and plugin-agents through communication channels
such as REST APIs.</para>
</listitem>
</varlistentry>
</variablelist>
<para>
The following figure shows an architectural and networking
flow diagram of the OpenStack Networking components:</para>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="319" contentwidth="536"
fileref="static/sdn-connections.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/sdn-connections.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<section xml:id="networking-architecture-openstack-networking-service-placement-on-physical-servers">
<title>OpenStack Networking service placement on physical servers</title>
<para>
This guide focuses on a standard architecture
that includes a <emphasis>cloud controller</emphasis> host, a
<emphasis>network</emphasis> host, and a set of
<emphasis>compute</emphasis> hypervisors for running
VMs.</para>
<section xml:id="networking-architecture-openstack-networking-service-placement-on-physical-servers-network-connectivity-of-physical-servers">
<title>Network connectivity of physical servers</title>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="364" contentwidth="536"
fileref="static/1aa-network-domains-diagram.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/1aa-network-domains-diagram.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<para>A standard OpenStack Networking setup has up to four
distinct physical data center networks:</para>
<variablelist>
<varlistentry>
<term>Management network</term>
<listitem>
<para>
            Used for internal communication between OpenStack
            components. The IP addresses on this network should be
            reachable only within the data center, and the network is
            considered the Management Security Domain.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Guest network</term>
<listitem>
<para>
Used for VM data communication within the cloud
deployment. The IP addressing requirements of this
network depend on the OpenStack Networking plug-in in
use and the network configuration choices of the
virtual networks made by the tenant. This network is
considered the Guest Security Domain.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>External network</term>
<listitem>
<para>
Used to provide VMs with Internet access in some
deployment scenarios. The IP addresses on this network
should be reachable by anyone on the Internet. This network
is considered to be in the Public Security Domain.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>API network</term>
<listitem>
<para>
Exposes all OpenStack APIs, including the OpenStack
Networking API, to tenants. The IP addresses on this
network should be reachable by anyone on the
Internet. This may be the same network as the external
network, as it is possible to create a subnet for the
            external network that uses IP allocation ranges covering
            less than the full range of IP addresses in an IP
            block. This network is considered the Public Security
Domain.</para>
</listitem>
</varlistentry>
</variablelist>
<para>
For additional information see the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/ch_networking.html">Networking
chapter</link> in the <citetitle>OpenStack Cloud
Administrator Guide</citetitle>.</para>
</section>
</section>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="networking-services-security-best-practices">
<?dbhtml stop-chunking?>
<title>Networking services security best practices</title>
<para>This section discusses OpenStack Networking configuration best practices as they apply to tenant network security within your OpenStack deployment.</para>
<section xml:id="networking-services-security-best-practices-tenant-network-services-workflow">
<title>Tenant network services workflow</title>
    <para>OpenStack Networking provides users with self-service configuration of network resources. It is important that cloud architects and operators evaluate their design use cases when providing users the ability to create, update, and destroy available network resources.</para>
</section>
<section xml:id="networking-services-security-best-practices-networking-resource-policy-engine">
<title>Networking resource policy engine</title>
    <para>A policy engine and its configuration file,
    <filename>policy.json</filename>, within OpenStack Networking
    provide finer-grained authorization of
    users on tenant networking methods and objects. The OpenStack Networking policy
definitions affect network availability, network security and overall OpenStack
security. Cloud architects and operators should carefully evaluate their policy
towards user and tenant access to administration of network resources. For a more detailed
explanation of OpenStack Networking policy definition, please
refer to the <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/section_networking_auth.html">Authentication
and authorization section</link> in the <citetitle>OpenStack
Cloud Administrator Guide</citetitle>.</para>
<note><para>It is important to review the default networking
resource policy, as this policy can be modified to suit your
security posture.</para></note>
<para>If your deployment of OpenStack provides multiple external
access points into different security domains it is important
that you limit the tenant's ability to attach multiple vNICs to
multiple external access points&mdash;this would bridge these
security domains and could lead to unforeseen security
    compromise. It is possible to mitigate this risk by utilizing the
host aggregates functionality provided by OpenStack Compute or
through splitting the tenant VMs into multiple tenant projects
with different virtual network configurations.</para>
</section>
<section xml:id="networking-services-security-best-practices-security-groups">
<title>Security groups</title>
<para>
The OpenStack Networking service provides security group
functionality using a mechanism that is more flexible and
powerful than the security group capabilities built into
OpenStack Compute. Thus, <filename>nova.conf</filename> should
always disable built-in security groups and proxy all security
group calls to the OpenStack Networking API when using OpenStack
Networking. Failure to do so results in conflicting security
policies being simultaneously applied by both services.
To proxy security groups to OpenStack Networking,
use the following configuration values:</para>
<itemizedlist><listitem>
<para><option>firewall_driver</option> must be set to
<literal>nova.virt.firewall.NoopFirewallDriver</literal> so
that <systemitem class="service">nova-compute</systemitem>
does not perform iptables-based filtering itself.</para>
</listitem>
<listitem>
<para><option>security_group_api</option> must be set to
<literal>neutron</literal> so that all security group
requests are proxied to the OpenStack Networking
service.</para>
</listitem>
</itemizedlist>
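    <para>For example, the two values above are set in
    <filename>nova.conf</filename> on the compute nodes:</para>
    <programlisting language="ini">[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
security_group_api=neutron</programlisting>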
<para>A security group is a container for security group rules. Security groups and their rules allow administrators and tenants the ability to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a virtual interface port. When a virtual interface port is created in OpenStack Networking it is associated with a security group. If a security group is not specified, the port will be associated with a 'default' security group. By default this group will drop all ingress traffic and allow all egress. Rules can be added to this group in order to change the behaviour.</para>
<para>When using the OpenStack Compute API to modify security groups, the updated security group applies to all virtual interface ports on an instance. This is due to the OpenStack Compute security group APIs being instance-based rather than port-based, as found in OpenStack Networking.</para>
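    <para>As an illustration, a rule permitting ingress SSH from a
    single administrative subnet (the address range shown is an
    example only) can be added to the <literal>default</literal>
    group with the <command>neutron</command> client:</para>
    <screen><prompt>$</prompt> <userinput>neutron security-group-rule-create --direction ingress \
  --protocol tcp --port-range-min 22 --port-range-max 22 \
  --remote-ip-prefix 192.168.0.0/24 default</userinput></screen>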
</section>
<section xml:id="networking-services-security-best-practices-quotas">
<title>Quotas</title>
<para>Quotas provide the ability to limit the number of network
resources available to tenants. You can enforce default quotas
for all tenants. The
    <filename>/etc/neutron/neutron.conf</filename> file includes these
    quota options:</para>
<programlisting language="ini">[QUOTAS]
# resource name(s) that are supported in quota features
quota_items = network,subnet,port
# default number of resource allowed per tenant, minus for unlimited
#default_quota = -1
# number of networks allowed per tenant, and minus means unlimited
quota_network = 10
# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 10
# number of ports allowed per tenant, and minus means unlimited
quota_port = 50
# number of security groups allowed per tenant, and minus means unlimited
quota_security_group = 10
# number of security group rules allowed per tenant, and minus means unlimited
quota_security_group_rule = 100
# default driver to use for quota checks
quota_driver = neutron.quota.ConfDriver</programlisting>
<para>
      OpenStack Networking also supports per-tenant quota limits
      through a quota extension API. To enable per-tenant quotas,
you must set the <literal>quota_driver</literal> option in
<filename>neutron.conf</filename>.</para>
<programlisting language="ini">quota_driver = neutron.db.quota_db.DbQuotaDriver</programlisting>
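    <para>With the database quota driver enabled, per-tenant limits
    can then be adjusted through the API, for example with the
    <command>neutron</command> client (the tenant ID and limit values
    are placeholders):</para>
    <screen><prompt>$</prompt> <userinput>neutron quota-update --tenant-id <replaceable>TENANT_ID</replaceable> --network 5 --port 30</userinput></screen>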
</section>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="networking-services">
<?dbhtml stop-chunking?>
<title>Networking services</title>
  <para>In the initial architectural phases of designing your OpenStack network infrastructure it is important to ensure appropriate expertise is available to assist with the design of the physical networking infrastructure, and to identify proper security controls and auditing mechanisms.</para>
<para>OpenStack Networking adds a layer of virtualized network services which gives tenants the capability to architect their own virtual networks. Currently, these virtualized services are not as mature as their traditional networking counterparts. Consider the current state of these virtualized services before adopting them as it dictates what controls you may have to implement at the virtualized and traditional network boundaries.</para>
<section xml:id="networking-services-l2-isolation-using-vlans-and-tunneling">
<title>L2 isolation using VLANs and tunneling</title>
<para>OpenStack Networking can employ two different mechanisms for traffic segregation on a per tenant/network combination: VLANs (IEEE 802.1Q tagging) or L2 tunnels using GRE encapsulation. The scope and scale of your OpenStack deployment determines which method you should utilize for traffic segregation or isolation.</para>
<section xml:id="networking-services-l2-isolation-using-vlans-and-tunneling-vlans">
<title>VLANs</title>
<para>VLANs are realized as packets on a specific physical network containing IEEE 802.1Q headers with a specific VLAN ID (VID) field value. VLAN networks sharing the same physical network are isolated from each other at L2, and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094.</para>
<para>VLAN configuration complexity depends on your OpenStack design requirements. In order to allow OpenStack Networking to efficiently use VLANs, you must allocate a VLAN range (one for each tenant) and turn each compute node physical switch port into a VLAN trunk port.</para>
<note>
        <para>If you intend for your network to support more than 4094 tenants, VLANs are probably not the correct option for you, as multiple 'hacks' are required to extend the VLAN tags beyond this limit.</para>
</note>
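      <para>With the ML2 plug-in, the VLAN range allocation described
      above is configured in <filename>ml2_conf.ini</filename>; the
      physical network name and VID range below are examples
      only:</para>
      <programlisting language="ini">[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999</programlisting>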
</section>
<section xml:id="networking-services-l2-isolation-using-vlans-and-tunneling-l2-tunneling">
<title>L2 tunneling</title>
<para>Network tunneling encapsulates each tenant/network combination with a unique "tunnel-id" that is used to identify the network traffic belonging to that combination. The tenant's L2 network connectivity is independent of physical locality or underlying network design. By encapsulating traffic inside IP packets, that traffic can cross Layer-3 boundaries, removing the need for preconfigured VLANs and VLAN trunking. Tunneling adds a layer of obfuscation to network data traffic, reducing the visibility of individual tenant traffic from a monitoring point of view.</para>
<para>OpenStack Networking currently supports both GRE and VXLAN encapsulation.</para>
<para>The choice of technology to provide L2 isolation is dependent upon the scope and size of tenant networks that will be created in your deployment. If your environment has limited VLAN ID availability or will have a large number of L2 networks, it is our recommendation that you utilize tunneling.</para>
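      <para>For example, VXLAN tunneling can be enabled with the ML2
      plug-in as follows; the driver selection and VNI range are
      illustrative:</para>
      <programlisting language="ini">[ml2]
type_drivers = vxlan
tenant_network_types = vxlan

[ml2_type_vxlan]
vni_ranges = 1:1000</programlisting>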
</section>
</section>
<section xml:id="networking-services-network-services">
<title>Network services</title>
<para>The choice of tenant network isolation affects how the network security and control boundary is implemented for tenant services. The following additional network services are either available or currently under development to enhance the security posture of the OpenStack network architecture.</para>
<section xml:id="networking-services-network-services-access-control">
<title>Access control lists</title>
<para>OpenStack Compute supports tenant network traffic access controls directly when deployed with the legacy nova-network service, or may defer access control to the OpenStack Networking service.</para>
<para>Note, legacy nova-network security groups are applied to all virtual interface ports on an instance using iptables.</para>
<para>Security groups allow administrators and tenants the ability to specify the type of traffic, and direction (ingress/egress) that is allowed to pass through a virtual interface port. Security groups rules are stateful L2-L4 traffic filters.</para>
<para>When using the Networking service, we recommend that you enable security groups in this service and disable it in the Compute service.</para>
</section>
<section xml:id="networking-services-network-services-l3-routing-and-nat">
<title>L3 routing and NAT</title>
<para>OpenStack Networking routers can connect multiple L2 networks, and can also provide a <emphasis>gateway</emphasis> that connects one or more private L2 networks to a shared <emphasis>external</emphasis> network, such as a public network for access to the Internet.</para>
      <para>The L3 router provides basic Network Address Translation (NAT) capabilities on <emphasis>gateway</emphasis> ports that uplink the router to external networks. This router SNATs (Source NAT) all traffic by default, and supports floating IPs, which create a static one-to-one mapping from a public IP on the external network to a private IP on one of the other subnets attached to the router.</para>
<para>It is our recommendation to leverage per tenant L3 routing and Floating IPs for more granular connectivity of tenant VMs.</para>
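      <para>For example, a floating IP can be allocated from an
      external network and associated with a tenant VM port using the
      <command>neutron</command> client (the network name and IDs are
      placeholders):</para>
      <screen><prompt>$</prompt> <userinput>neutron floatingip-create <replaceable>EXTERNAL_NETWORK</replaceable></userinput>
<prompt>$</prompt> <userinput>neutron floatingip-associate <replaceable>FLOATINGIP_ID</replaceable> <replaceable>PORT_ID</replaceable></userinput></screen>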
</section>
<section xml:id="networking-services-network-services-quality-of-service-qos">
<title>Quality of Service (QoS)</title>
      <para>The ability to set QoS on the virtual interface ports of tenant instances is a current deficiency for OpenStack Networking. The application of QoS for traffic shaping and rate-limiting at the physical network edge device is insufficient due to the dynamic nature of workloads in an OpenStack deployment and cannot be leveraged in the traditional way. QoS-as-a-Service (QoSaaS) is currently in development for the OpenStack Networking Icehouse release as an experimental feature. QoSaaS plans to provide the following services:</para>
<itemizedlist><listitem>
<para>Traffic shaping through DSCP markings</para>
</listitem>
<listitem>
<para>Rate-limiting on a per port/network/tenant basis.</para>
</listitem>
<listitem>
<para>Port mirroring (through open source or third-party plug-ins)</para>
</listitem>
<listitem>
<para>Flow analysis (through open source or third-party plug-ins)</para>
</listitem>
</itemizedlist>
<para>Tenant traffic port mirroring or Network Flow monitoring is currently not an exposed feature in OpenStack Networking. There are third-party plug-in extensions that do provide port mirroring on a per port/network/tenant basis. If Open vSwitch is used on the networking hypervisor, it is possible to enable sFlow and port mirroring, however it will require some operational effort to implement.</para>
</section>
<section xml:id="networking-services-network-services-load-balancing">
<title>Load balancing</title>
<para>Another feature in OpenStack Networking is Load-Balancer-as-a-Service (LBaaS). The LBaaS reference implementation is based on HAProxy. There are third-party plug-ins in development for extensions in OpenStack Networking to provide extensive L4-L7 functionality for virtual interface ports.</para>
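As a hedged sketch, a simple HTTP load balancer can be created with the LBaaS v1 neutron client commands; the pool name, member address, and subnet ID below are placeholders:

```
# Create a pool of backend members on a tenant subnet
$ neutron lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
    --protocol HTTP --subnet-id SUBNET_ID

# Add a backend server to the pool
$ neutron lb-member-create --address 10.0.0.5 --protocol-port 80 web-pool

# Expose the pool through a virtual IP
$ neutron lb-vip-create --name web-vip --protocol HTTP \
    --protocol-port 80 --subnet-id SUBNET_ID web-pool
```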
</section>
<section xml:id="networking-services-network-services-firewalls">
<title>Firewalls</title>
<para>FW-as-a-Service (FWaaS) is considered an experimental feature for
the Kilo release of OpenStack Networking. FWaaS addresses the need to
manage and leverage the rich set of security features provided by
typical firewall products, which are generally far more comprehensive
than what is currently provided by security groups. Both Freescale and
Intel developed third-party plug-ins as extensions in OpenStack
Networking to support this component in the Kilo release. Documentation
for administration of FWaaS is located at <link
xlink:href="http://docs.openstack.org/admin-guide-cloud/content/fwaas.html"
>Firewall-as-a-Service in the OpenStack Cloud Administrator Guide</link>.</para>
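As a hedged sketch of the FWaaS v1 workflow, a firewall is built from rules that are grouped into a policy; the rule values, IDs, and names below are illustrative only:

```
# Create a rule that denies inbound SSH
$ neutron firewall-rule-create --protocol tcp --destination-port 22 \
    --action deny

# Group rules into a policy, then apply the policy as a firewall
$ neutron firewall-policy-create --firewall-rules RULE_ID deny-ssh-policy
$ neutron firewall-create POLICY_ID --name tenant-fw
```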
<para>During the design of an OpenStack Networking infrastructure it is important
that you understand the current features and limitations of available network
services. Understanding the boundaries of your virtual and physical
networks will assist in adding required security controls in your environment.</para>
</section>
</section>
<section xml:id="networking-services-network-services-extension">
<title>Network services extensions</title>
<para>A list of known plug-ins provided by the open source
community or by SDN companies that work with OpenStack Networking
is available at <link
xlink:href="https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers"
>OpenStack neutron plug-ins and drivers wiki page</link>.</para>
</section>
<section xml:id="networking-services-network-services-limitations">
<title>Networking services limitations</title>
<para>OpenStack Networking has the following known limitations:</para>
<variablelist>
<varlistentry>
<term>Overlapping IP addresses</term>
<listitem>
<para>
If nodes that run either <systemitem
class="service">neutron-l3-agent</systemitem> or
<systemitem
class="service">neutron-dhcp-agent</systemitem> use
overlapping IP addresses, those nodes must use Linux
network namespaces. By default, the DHCP and L3 agents
use Linux network namespaces. However, if the host does
not support these namespaces, run the DHCP and L3 agents
on different hosts.</para>
<para>
If network namespace support is not present, a further
limitation of the L3 agent is that only a single logical
router is supported.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>Multi-host DHCP-agent</term>
<listitem>
<para>
OpenStack Networking supports multiple L3 and DHCP
agents with load balancing. However, tight coupling of
the location of the virtual machine is not
supported.</para>
</listitem>
</varlistentry>
<varlistentry>
<term>No IPv6 support for L3 agents</term>
<listitem>
<para>
The <systemitem class="service">neutron-l3-agent</systemitem>, used
by many plug-ins to implement L3 forwarding, supports only IPv4 forwarding.</para>
</listitem>
</varlistentry>
</variablelist>
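<para>Namespace behavior is controlled through the agent configuration files. The following fragment shows the relevant option, which defaults to True on kernels with namespace support:</para>

```
# /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini
# Namespaces are enabled by default; set to False only when the host
# kernel lacks network namespace support (and note the single-router
# limitation of the L3 agent in that case).
use_namespaces = True
```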
</section>
</section>


@ -1,16 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="privacy">
<?dbhtml stop-chunking?>
<title>Privacy</title>
<para>Privacy is an increasingly important element of a compliance program. Businesses are being held to a higher standard by their customers, who have increased interest in understanding how their data is treated from a privacy perspective.</para>
<para>An OpenStack deployment will likely need to demonstrate compliance with an organization's privacy policy, with the U.S.-E.U. Safe Harbor framework, with the ISO/IEC 29100:2011 privacy framework, or with other privacy-specific guidelines. In the U.S., the AICPA has <link xlink:href="http://www.aicpa.org/interestareas/informationtechnology/resources/privacy/generallyacceptedprivacyprinciples/">defined 10 privacy areas of focus</link>; OpenStack deployments within a commercial environment may desire to attest to some or all of these principles.</para>
<para>To aid OpenStack architects in the protection of personal data, it is recommended that OpenStack architects review the NIST publication 800-122, titled "<emphasis>Guide to Protecting the Confidentiality of Personally Identifiable Information (PII)</emphasis>." This guide steps through the process of protecting:</para>
<blockquote>
<para>"<emphasis>any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth, mother's maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information</emphasis>"</para>
</blockquote>
<para>Comprehensive privacy management requires significant preparation, thought, and investment. Additional complications are introduced when building global OpenStack clouds; for example, navigating the differences between U.S. and the more restrictive E.U. privacy laws. In addition, extra care needs to be taken when dealing with sensitive PII that may include information such as credit card numbers or medical records. This sensitive data is subject not only to privacy laws but also to regulatory and governmental requirements. By deferring to established best practices, including those published by governments, a holistic privacy management policy may be created and practiced for OpenStack deployments.</para>
</section>


@ -1,203 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="secure-reference-architectures">
<?dbhtml stop-chunking?>
<title>Secure reference architectures</title>
<para>We recommend using SSL/TLS on both public networks and
management networks in
<xref linkend="tls-proxies-and-http-services"/>.
However, if actually deploying SSL/TLS everywhere is too
difficult, we recommend evaluating your OpenStack SSL/TLS needs and
following one of the architectures discussed here.</para>
<para>The first thing one should do when evaluating their OpenStack
SSL/TLS needs is to identify the threats. You can divide these
threats into external and internal attacker categories, but the
lines tend to get blurred since certain components of OpenStack
operate on both the public and management networks.</para>
<para>For publicly facing services, the threats are pretty
straightforward. Users will be authenticating against horizon and
keystone with their username and password. Users will also be
accessing the API endpoints for other services using their
keystone tokens. If this network traffic is unencrypted, passwords
and tokens can be intercepted by an attacker using a
man-in-the-middle attack. The attacker can then use these valid
credentials to perform malicious operations. All real deployments
should be using SSL/TLS to protect publicly facing services.</para>
<para>For services that are deployed on management networks, the
threats aren't so clear due to the bridging of security domains with
network security. There is always the chance that an administrator
with access to the management network decides to do something
malicious. SSL/TLS isn't going to help in this situation if the
attacker is allowed to access the private key. Not everyone on the
management network would be allowed to access the private key of
course, so there is still value in using SSL/TLS to protect yourself
from internal attackers. Even if everyone that is allowed to access
your management network is 100% trusted, there is still a threat
that an unauthorized user gains access to your internal network by
exploiting a misconfiguration or software vulnerability. One must
keep in mind that you have users running their own code on instances
in the OpenStack Compute nodes, which are deployed on the management
network. If a vulnerability allows them to break out of the
hypervisor, they will have access to your management network. Using
SSL/TLS on the management network can minimize the damage that an
attacker can cause.</para>
<section xml:id="ssl-tls-proxy-in-front">
<title>SSL/TLS proxy in front</title>
<para>It is generally accepted that it is best to encrypt
sensitive data as early as possible and decrypt it as late as
possible. Despite this best practice, it is common to use an
SSL/TLS proxy in front of the OpenStack services and unencrypted
communication afterwards, as shown below:</para>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="450" contentwidth="540"
fileref="static/secure-arch-ref-1.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/secure-arch-ref-1.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>Some of the concerns with the use of SSL/TLS proxies as
pictured above:</para>
<itemizedlist><listitem>
<para>
Native SSL/TLS in OpenStack services does not perform or scale
as well as SSL proxies (particularly for Python
implementations like Eventlet).
</para>
</listitem><listitem>
<para>
Native SSL/TLS in OpenStack services is not as well scrutinized
or audited as more proven solutions.
</para>
</listitem><listitem>
<para>
Native SSL/TLS configuration is difficult (not well
documented, tested, or consistent across services).
</para>
</listitem><listitem>
<para>
Privilege separation: OpenStack service processes should not
have direct access to the private keys used for SSL/TLS.
</para>
</listitem><listitem>
<para>
Traffic inspection needs for load balancing.
</para>
</listitem>
</itemizedlist>
<para>All of the above are valid concerns, but none of them
prevent SSL/TLS from being used on the management network. Let's
consider the next deployment model.</para>
</section>
<section xml:id="ssl-tls-proxy-on-same-physical-hosts-as-api-endpoints">
<title>SSL/TLS on same physical hosts as API endpoints</title>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="450" contentwidth="540"
fileref="static/secure-arch-ref-2.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/secure-arch-ref-2.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>This is very similar to the
<link linkend="ssl-tls-proxy-in-front">"SSL/TLS in front model"
</link> but the SSL/TLS proxy is on the same physical system as
the API endpoint. The API endpoint would be configured to only
listen on the local network interface. All remote communication
with the API endpoint would go through the SSL/TLS proxy. With
this deployment model, we address a number of the bullet points in
<link linkend="ssl-tls-proxy-in-front">"SSL/TLS in front model"
</link>. A proven SSL implementation that performs well would be
used. The same SSL proxy software would be used for all services,
so SSL configuration for the API endpoints would be consistent.
The OpenStack service processes would not have direct access to
the private keys used for SSL/TLS, as you would run the SSL
proxies as a different user and restrict access using permissions
(and additionally mandatory access controls using something like
SELinux). We would ideally have the API endpoints listen on a Unix
socket such that we could restrict access to it using permissions
and mandatory access controls as well. Unfortunately, based on our
testing, this does not currently work in Eventlet. It is a good
future development goal.</para>
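A minimal HAProxy fragment for this model might look like the following, assuming the Identity API has been configured to listen only on the loopback interface; the addresses and certificate path are placeholders:

```
frontend keystone-tls
    bind 192.168.0.10:5000 ssl crt /etc/haproxy/certs/keystone.pem
    mode http
    default_backend keystone-local

backend keystone-local
    mode http
    # The API endpoint listens only on the local interface
    server keystone 127.0.0.1:5000 check
```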
</section>
<section xml:id="ssl-tls-over-load-balancer">
<title>SSL/TLS over load balancer</title>
<para>What about high availability or load balanced deployments
that need to inspect traffic? The previous deployment model
(<link linkend="ssl-tls-proxy-on-same-physical-hosts-as-api-endpoints"
>SSL/TLS on same physical hosts as API endpoints</link>) would not
allow for deep packet inspection since the traffic is encrypted.
If the traffic only needs to be inspected for basic routing
purposes, it might not be necessary for the load balancer to have
access to the unencrypted traffic. HAProxy has the ability to
extract the SSL/TLS session ID during the handshake, which can
then be used to achieve session affinity
(<link
xlink:href="http://blog.exceliance.fr/2011/07/04/maintain-affinity-based-on-ssl-session-id/"
>configuration details here</link>).
HAProxy can also use the
TLS Server Name Indication (SNI) extension to determine where
traffic should be routed to
(<link
xlink:href="http://blog.exceliance.fr/2012/04/13/enhanced-ssl-load-balancing-with-server-name-indication-sni-tls-extension/"
>configuration details here</link>). These features likely cover
some of the most common load balancer needs. HAProxy would be able
to just pass the HTTPS traffic straight through to the API
endpoint systems in this case:</para>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="450" contentwidth="540"
fileref="static/secure-arch-ref-3.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/secure-arch-ref-3.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
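A hedged sketch of SNI-based routing in HAProxy, passing the encrypted traffic straight through in TCP mode; the hostnames and backend addresses are assumptions:

```
frontend tls-in
    bind *:443
    mode tcp
    # Wait for the TLS ClientHello so the SNI value can be read
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend keystone-api if { req_ssl_sni -i identity.example.com }
    default_backend horizon

backend keystone-api
    mode tcp
    server keystone1 192.168.0.11:443 check

backend horizon
    mode tcp
    server horizon1 192.168.0.12:443 check
```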
</section>
<section xml:id="cryptographic-seperation-of-external-and-internal-environments">
<title>Cryptographic separation of external and internal environments</title>
<para>What if you want cryptographic separation of your external and
internal environments? A public cloud provider would likely want
their public facing services (or proxies) to use certificates that
are issued by a CA that chains up to a trusted Root CA that is
distributed in popular web browser software for SSL/TLS. For the
internal services, one might want to instead use their own PKI to
issue certificates for SSL/TLS. This cryptographic separation can be
accomplished by terminating SSL at the network boundary, then
re-encrypting using the internally issued certificates. The traffic
will be unencrypted for a brief period on the public facing SSL/TLS
proxy, but it will never be transmitted over the network in the
clear. The same re-encryption approach that is used to achieve
cryptographic separation can also be used if deep packet inspection
is really needed on a load balancer. Here is what this deployment
model would look like:</para>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="450" contentwidth="540"
fileref="static/secure-arch-ref-4.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/secure-arch-ref-4.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
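A minimal sketch of this re-encryption model in HAProxy terms, terminating the publicly trusted certificate at the edge and re-encrypting toward an internally issued certificate; all addresses and paths are placeholders:

```
frontend public-api
    # Terminate the publicly trusted certificate at the boundary
    bind 203.0.113.10:443 ssl crt /etc/haproxy/certs/public.pem
    mode http
    default_backend internal-api

backend internal-api
    mode http
    # Re-encrypt using the internal PKI; verify against the internal CA
    server api1 192.168.0.20:443 ssl verify required \
        ca-file /etc/haproxy/certs/internal-ca.pem
```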
<para>As with most things, there are trade-offs. The main
trade-off is going to be between security and performance.
Encryption has a cost, but so does being hacked. The security and
performance requirements are going to be different for every
deployment, so how SSL/TLS is used will ultimately be an
individual decision.</para>
</section>
</section>


@ -1,87 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="securing-openstack-networking-services">
<?dbhtml stop-chunking?>
<title>Securing OpenStack Networking services</title>
<para>
To secure OpenStack Networking, you must understand how the
workflow process for tenant instance creation needs to be mapped
to security domains.</para>
<para>
There are four main services that interact with OpenStack
Networking. In a typical OpenStack deployment these services map
to the following security domains:</para>
<itemizedlist><listitem>
<para>OpenStack dashboard: Public and management</para>
</listitem>
<listitem>
<para>OpenStack Identity: Management</para>
</listitem>
<listitem>
<para>OpenStack compute node: Management and guest</para>
</listitem>
<listitem>
<para>OpenStack network node: Management, guest, and possibly
public, depending upon the neutron plug-in in use.</para>
</listitem>
<listitem>
<para>SDN services node: Management, guest, and possibly
public, depending upon the product used.</para>
</listitem>
</itemizedlist>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="454" contentwidth="682"
fileref="static/1aa-logical-neutron-flow.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/1aa-logical-neutron-flow.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject>
</para>
<para>To isolate sensitive data communication between the
OpenStack Networking services and other OpenStack core
services, configure these communication channels to only allow
communication over an isolated management network.</para>
<section xml:id="securing-openstack-networking-services-openstack-networking-service-configuration">
<title>OpenStack Networking service configuration</title>
<section xml:id="securing-openstack-networking-services-openstack-networking-service-configuration-restrict-bind-address-of-the-api-server-neutron-server">
<title>Restrict bind address of the API server: neutron-server</title>
<para>
To restrict the interface or IP address on which the
OpenStack Networking API service binds a network socket for
incoming client connections, specify the bind_host and
bind_port in the neutron.conf file as shown:</para>
<programlisting language="ini">
# Address to bind the API server
bind_host = <replaceable>IP ADDRESS OF SERVER</replaceable>
# Port to bind the API server to
bind_port = 9696</programlisting>
</section>
<section xml:id="securing-openstack-networking-services-openstack-networking-service-configuration-restrict-db-and-rpc-communication-of-the-openstack">
<title>Restrict DB and RPC communication of the OpenStack
Networking services</title>
<para>
Various components of the OpenStack Networking services use
either the messaging queue or database connections to
communicate with other components in OpenStack
Networking.</para>
<para>
It is recommended that you follow the guidelines provided in
<xref linkend="database-access-control-database-authentication-and-access-control"/> for all components which require direct
DB connections.</para>
<para>
It is recommended that you follow the guidelines provided in
<xref linkend="messaging-security-queue-authentication-and-access-control"/> for all components which require RPC
communication.</para>
</section>
</section>
</section>


@ -1,258 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="security-boundaries-and-threats">
<?dbhtml stop-chunking?>
<title>Security boundaries and threats</title>
<para>A cloud can be abstracted as a collection of logical components by virtue of their function,
users, and shared security concerns, which we call security domains. Threat actors and vectors
are classified based on their motivation and access to resources. Our goal is to provide you a
sense of the security concerns with respect to each domain depending on your risk/vulnerability
protection objectives.</para>
<section xml:id="security-boundaries-and-threats-security-domains">
<title>Security domains</title>
<para>A security domain comprises users, applications, servers or networks that share common
trust requirements and expectations within a system. Typically they have the same
authentication and authorization (AuthN/Z) requirements and users.</para>
<para>Although you may desire to break these domains down further (we later discuss where this
may be appropriate), we generally refer to four distinct security domains which form the bare
minimum that is required to deploy any OpenStack cloud securely. These security domains
are:</para>
<orderedlist>
<listitem>
<para>Public</para>
</listitem>
<listitem>
<para>Guest</para>
</listitem>
<listitem>
<para>Management</para>
</listitem>
<listitem>
<para>Data</para>
</listitem>
</orderedlist>
<para>We selected these security domains because they can be mapped independently or combined to
represent the majority of the possible areas of trust within a given OpenStack deployment. For
example, some deployment topologies may consist of a combination of guest and data domains onto
one physical network while other topologies have these domains separated. In each case, the
cloud operator should be aware of the appropriate security concerns. Security domains should be
mapped out against your specific OpenStack deployment topology. The domains and their trust
requirements depend upon whether the cloud instance is public, private, or hybrid.</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="298" contentwidth="338" fileref="static/untrusted_trusted.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/untrusted_trusted.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<section xml:id="security-boundaries-and-threats-security-domains-public">
<title>Public</title>
<para>The public security domain is an entirely untrusted area of the cloud infrastructure. It
can refer to the Internet as a whole or simply to networks over which you have no authority.
Any data that transits this domain with confidentiality or integrity requirements should be
protected using compensating controls.</para>
<para>This domain should always be considered <emphasis>untrusted</emphasis>.</para>
</section>
<section xml:id="security-boundaries-and-threats-security-domains-guest">
<title>Guest</title>
<para>Typically used for compute instance-to-instance traffic, the guest security domain
handles compute data generated by instances on the cloud but not services that support the
operation of the cloud, such as API calls.</para>
<para>Public and private cloud providers that do not have stringent controls on
instance use or allow unrestricted internet access to VMs should consider this domain to
be <emphasis>untrusted</emphasis>. Private cloud providers may want to consider this network
as internal and <emphasis>trusted</emphasis>, but only if the proper controls are implemented to assert that the instances and all associated tenants can be trusted.</para>
</section>
<section xml:id="security-boundaries-and-threats-security-domains-management">
<title>Management</title>
<para>The management security domain is where services interact. Sometimes referred to as the
"control plane", the networks in this domain transport confidential data such as
configuration parameters, user names, and passwords. Command and Control traffic typically
resides in this domain, which necessitates strong integrity requirements. Access to this
domain should be highly restricted and monitored. At the same time, this domain should still
employ all of the security best practices described in this guide.</para>
<para>In most deployments this domain is considered <emphasis>trusted</emphasis>. However,
when considering an OpenStack deployment, there are many systems that bridge this domain
with others, potentially reducing the level of trust you can place on this domain. See <xref
linkend="security-boundaries-and-threats-bridging-security-domains"/> for more information.</para>
</section>
<section xml:id="security-boundaries-and-threats-security-domains-data">
<title>Data</title>
<para>The data security domain is concerned primarily with information pertaining to the
storage services within OpenStack. Most of the data transmitted across this network requires high
levels of integrity and confidentiality. In some cases, depending on the type of deployment there may
also be strong availability requirements.</para>
<para>The trust level of this network is heavily dependent on deployment decisions and as such
we do not assign this any default level of trust.</para>
</section>
</section>
<section xml:id="security-boundaries-and-threats-bridging-security-domains">
<title>Bridging security domains</title>
<para>A <emphasis>bridge</emphasis> is a component that exists inside more than one security
domain. Any component that bridges security domains with different trust levels or
authentication requirements must be carefully configured. These bridges are often the weak
points in network architecture. A bridge should always be configured to meet the security
requirements of the highest trust level of any of the domains it is bridging. In many cases
the security controls for bridges should be a primary concern due to the likelihood of
attack.</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="266" contentwidth="222"
fileref="static/bridging_security_domains_1.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/bridging_security_domains_1.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>The diagram above shows a compute node bridging the data and management domains, as such
the compute node should be configured to meet the security requirements of the management
domain. Similarly, the API Endpoint in this diagram is bridging the untrusted public domain and
the management domain, which should be configured to protect against attacks from the public
domain propagating through to the management domain.</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="418" contentwidth="559"
fileref="static/bridging_domains_clouduser.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/bridging_domains_clouduser.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>In some cases deployers may want to consider securing a bridge to a higher standard than
any of the domains in which it resides. Given the above example of an API endpoint, an
adversary could potentially target the API endpoint from the public domain, leveraging it in
the hopes of compromising or gaining access to the management domain.</para>
<para>The design of OpenStack is such that separation of security domains is difficult. Because
core services usually bridge at least two domains, special consideration must be given when
applying security controls to them.</para>
</section>
<section xml:id="security-boundaries-and-threats-threat-classifcation-actors-and-attack-vectors">
<title>Threat classification, actors and attack vectors</title>
<para>Most types of cloud deployment, public or private, are exposed to some form of attack. In
this chapter we categorize attackers and summarize potential types of attacks in each security
domain.</para>
<section xml:id="security-boundaries-and-threats-threat-classifcation-actors-and-attack-vectors-threat-actors">
<title>Threat actors</title>
<para>A threat actor is an abstract way to refer to a class of adversary that you may attempt
to defend against. The more capable the actor, the more expensive the security controls that
are required for successful attack mitigation and prevention. Security is a tradeoff between
cost, usability and defense. In some cases it will not be possible to secure a cloud
deployment against all of the threat actors we describe here. Those deploying an OpenStack
cloud will have to decide where the balance lies for their deployment and usage.</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Intelligence services</emphasis> &#x2014; Considered by this
guide as the most capable adversary. Intelligence Services and other state actors can
bring tremendous resources to bear on a target. They have capabilities beyond that of
any other actor. It is very difficult to defend against these actors without incredibly
stringent controls in place, both human and technical.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Serious organized crime</emphasis> &#x2014; Highly capable and
financially driven groups of attackers, able to fund in-house exploit development and
target research. In recent years, the rise of organizations such as the Russian Business
Network, a massive cyber-criminal enterprise, has demonstrated how cyber attacks have
become a commodity. Industrial espionage falls within the serious organized crime group.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Highly capable groups</emphasis> &#x2014; This refers to
'Hacktivist' type organizations who are not typically commercially funded but can pose a
serious threat to service providers and cloud operators.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Motivated individuals</emphasis> &#x2014; Acting alone, these
attackers come in many guises, such as rogue or malicious employees, disaffected
customers, or small-scale industrial espionage.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Script kiddies</emphasis> &#x2014; Attackers relying on automated
vulnerability scanning and exploitation in non-targeted attacks. While often only a nuisance,
compromise by one of these actors presents a major risk to an organization's reputation.</para>
</listitem>
</itemizedlist>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="403" contentwidth="472" fileref="static/threat_actors.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/threat_actors.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
</section>
<section xml:id="security-boundaries-and-threats-threat-classifcation-actors-and-attack-vectors-public-and-private-cloud-considerations">
<title>Public and private cloud considerations</title>
<para>Private clouds are typically deployed by enterprises or institutions inside their
networks and behind their firewalls. Enterprises will have strict policies on what data is
allowed to exit their network and may even have different clouds for specific purposes.
Users of a private cloud are typically employees of the organization that owns the cloud and
are able to be held accountable for their actions. Employees often attend training sessions
before accessing the cloud and will likely take part in regular scheduled security awareness
training. Public clouds by contrast cannot make any assertions about their users, cloud
use-cases or user motivations. This immediately pushes the guest security domain into a
completely <emphasis>untrusted</emphasis> state for public cloud providers.</para>
<para>A notable difference in the attack surface of public clouds is that they must provide
internet access to their services. Instance connectivity, access to files over the internet
and the ability to interact with the cloud controlling fabric such as the API endpoints and
dashboard are must-haves for the public cloud.</para>
<para>Privacy concerns for public and private cloud users are typically diametrically opposed.
The data generated and stored in private clouds is normally owned by the operator of the
cloud, who is able to deploy technologies such as data loss prevention (DLP) protection,
file inspection, deep packet inspection and prescriptive firewalling. In contrast, privacy
is one of the primary barriers for the adoption of public cloud infrastructures, as many of
the previously mentioned controls do not exist.</para>
</section>
<section xml:id="security-boundaries-and-threats-threat-classifcation-actors-and-attack-vectors-outbound-attacks-and-reputational-risk">
<title>Outbound attacks and reputational risk</title>
<para>Careful consideration should be given to potential outbound abuse from a cloud
      deployment. Whether public or private, clouds tend to have a large amount of resources available. An
      attacker who has established a point of presence within the cloud, either through hacking or
      entitled access, such as a rogue employee, can bring these resources to bear against the
internet at large. Clouds with compute services make for ideal DDoS and brute force engines.
The issue is more pressing for public clouds as their users are largely unaccountable, and
can quickly spin up numerous disposable instances for outbound attacks. Major damage can be
inflicted upon a company's reputation if it becomes known for hosting malicious software or
launching attacks on other networks. Methods of prevention include egress security groups,
outbound traffic inspection, customer education and awareness, and fraud and abuse
mitigation strategies.</para>
</section>
<section xml:id="security-boundaries-and-threats-threat-classifcation-actors-and-attack-vectors-attack-types">
<title>Attack types</title>
<para>The diagram shows the types of attacks that may be expected from the actors described in
      the previous section. Note that there will always be exceptions to this diagram, but in
      general it describes the sorts of attack that could be typical for each actor.</para>
<para>
<figure xml:id="security-boundaries-and-threats-attack-types">
<title>Attack types</title>
<mediaobject>
<imageobject role="html">
<imagedata contentdepth="642" contentwidth="616" fileref="static/high-capability.png"
format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/high-capability.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</mediaobject>
</figure>
</para>
<para>The prescriptive defense for each form of attack is beyond the scope of this document.
The above diagram can assist you in making an informed decision about which types of
threats, and threat actors, should be protected against. For commercial public cloud
deployments this might include prevention against serious crime. For those deploying private
clouds for government use, more stringent protective mechanisms should be in place,
      including carefully protected facilities and supply chains. In contrast, those standing up
      basic development or test environments will likely require less restrictive controls (middle
      of the spectrum).</para>
</section>
</section>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="security-services-for-instances">
<?dbhtml stop-chunking?>
<title>Security services for instances</title>
<section xml:id="security-services-for-instances-entropy-to-instances">
<title>Entropy to instances</title>
<para>We consider entropy to refer to the quality and source of
random data that is available to an instance. Cryptographic
technologies typically rely heavily on randomness, requiring a
high quality pool of entropy to draw from. It is typically hard
for a virtual machine to get enough entropy to support these
operations, which is referred to as entropy starvation. Entropy
starvation can manifest in instances as something seemingly
unrelated. For example, slow boot time may be caused by the
instance waiting for ssh key generation. Entropy starvation
may also motivate users to employ poor quality entropy sources
from within the instance, making applications running in the
cloud less secure overall.</para>
<para>Fortunately, a cloud architect may address these issues by
providing a high quality source of entropy to the cloud
instances. This can be done by having enough hardware random
number generators (HRNG) in the cloud to support the instances.
In this case, "enough" is somewhat domain specific. For
everyday operations, a modern HRNG is likely to produce enough
entropy to support 50-100 compute nodes. High bandwidth HRNGs,
such as the RdRand instruction available with Intel Ivy Bridge
and newer processors could potentially handle more nodes. For a
given cloud, an architect needs to understand the application
requirements to ensure that sufficient entropy is available.
</para>
<para>The Virtio RNG is a random number generator that uses
<filename>/dev/random</filename> as the source of entropy by
      default; however, it can be configured to use a hardware RNG or a
tool such as the entropy gathering daemon (<link
xlink:href="http://egd.sourceforge.net/">EGD</link>) to provide
a way to fairly and securely distribute entropy through a
distributed system. The Virtio RNG is enabled using the
<literal>hw_rng</literal> property of the metadata used to
create the instance.</para>
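As a sketch of how this is wired up, an operator might tag an image so that instances booted from it receive a virtio RNG device. The image and flavor names below are placeholders, and the property names follow Kilo-era metadata conventions, which may differ between releases:

```console
$ glance image-update trusted-guest --property hw_rng_model=virtio
$ nova flavor-key m1.secure set hw_rng:allowed=True hw_rng:rate_bytes=24 hw_rng:rate_period=5000
```

Inside a running instance, `/proc/sys/kernel/random/entropy_avail` gives a rough indication of whether the entropy pool is being replenished.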
</section>
<section xml:id="security-services-for-instances-scheduling-instances-to-nodes">
<title>Scheduling instances to nodes</title>
<para>Before an instance is created, a host for the image
instantiation must be selected. This selection is performed by
      the <systemitem class="service">nova-scheduler</systemitem>,
      which determines how to dispatch compute and volume requests.
</para>
<para>The <literal>FilterScheduler</literal> is the default
scheduler for OpenStack Compute, although other schedulers
exist (see the section <link
xlink:href="http://docs.openstack.org/kilo/config-reference/content/section_compute-scheduler.html"
>Scheduling</link> in the <citetitle>OpenStack Configuration
Reference</citetitle>). This works in collaboration with
'filter hints' to decide where an instance should be started.
This process of host selection allows administrators to fulfill
many different security and compliance requirements. Depending
on the cloud deployment type for example, one could choose to
have tenant instances reside on the same hosts whenever possible
if data isolation was a primary concern. Conversely one could
attempt to have instances for a tenant reside on as many different hosts
as possible for availability or fault tolerance reasons.</para>
<para>Scheduler filters may be used to segregate customer data, or
even discard machines of the cloud that cannot be attested as
secure. This generally applies to all OpenStack projects
offering a scheduler. When building a cloud, you may choose to
implement scheduling filters for a variety of security-related
purposes.</para>
    <para>Scheduler filters fall under four main categories:</para>
<variablelist>
<varlistentry><term>Resource based filters</term>
        <listitem><para>These filters will create an instance based on
            the utilization of the hypervisor hosts, and can trigger
            on free or used properties such as RAM, IO, or CPU
            utilization.</para></listitem>
</varlistentry>
<varlistentry><term>Image based filters</term>
        <listitem><para>These filters delegate instance creation based
            on the image used, such as the operating system of the VM
            or the type of image.</para></listitem>
</varlistentry>
<varlistentry><term>Environment based filters</term>
        <listitem><para>These filters create an instance based on
            external details, such as a specific IP range, across
            availability zones, or on the same host as another instance.
            </para></listitem>
</varlistentry>
<varlistentry><term>Custom criteria</term>
        <listitem><para>These filters delegate instance creation
            based on user- or administrator-provided criteria such as
            trusts or metadata parsing.</para></listitem>
</varlistentry>
</variablelist>
<para>Multiple filters can be applied at once, such as the
<literal>ServerGroupAffinity</literal> filter to ensure an
instance is created on a member of a specific set of hosts and
<literal>ServerGroupAntiAffinity</literal> filter to ensure that
same instance is not created on another
specific set of hosts. These filters should be analyzed
carefully to ensure they do not conflict with each other and
result in rules that prevent the creation of instances.</para>
<para><inlinemediaobject><imageobject role="html">
<imagedata contentdepth="400" contentwidth="550" fileref="static/filteringWorkflow1.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/filteringWorkflow1.png" format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>The <literal>GroupAffinity</literal> and
<literal>GroupAntiAffinity</literal> filters conflict and should
not both be enabled at the same time.</para>
<para>The <literal>DiskFilter</literal> filter is capable of
oversubscribing disk space. While not normally an issue, this
can be a concern on storage devices that are thinly
provisioned, and this filter should be used with well-tested
quotas applied.</para>
<para>We recommend you disable filters that parse things that are
provided by users or are able to be manipulated such as
metadata.</para>
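For reference, the active filters are selected through the Compute scheduler configuration. A minimal sketch, assuming the Kilo-era option names and an illustrative filter selection:

```ini
# /etc/nova/nova.conf (illustrative filter selection)
[DEFAULT]
scheduler_driver = nova.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters = RetryFilter,RamFilter,ComputeFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter
```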
</section>
<section xml:id="security-services-for-instances-trusted-images">
<title>Trusted images</title>
    <para>In a cloud environment, users work with either pre-installed images or images they upload themselves. In both cases, users should be able to ensure the image they are utilizing has not been tampered with. This requires a method of validation, such as a checksum for the known good image, as well as verification of a running instance. While there are current best practices around these actions, there are also several gaps in the process.</para>
<section xml:id="security-services-for-instances-trusted-images-image-creation-process">
<title>Image creation process</title>
<para>The OpenStack Documentation provides guidance on how to
create and upload an image to the Image service. Additionally
it is assumed that you have a process by which you install and
harden operating systems. Thus, the following items will
provide additional guidance on how to ensure your images are
transferred securely into OpenStack. There are a variety of
options for obtaining images. Each has specific steps that
help validate the image's provenance.</para>
<para>The first option is to obtain boot media from a trusted source.</para>
      <screen><prompt>$</prompt> <userinput>mkdir -p /tmp/download_directory</userinput>
<prompt>$</prompt> <userinput>cd /tmp/download_directory</userinput>
<prompt>$</prompt> <userinput>wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/ubuntu-12.04.2-server-amd64.iso</userinput>
<prompt>$</prompt> <userinput>wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS</userinput>
<prompt>$</prompt> <userinput>wget http://mirror.anl.gov/pub/ubuntu-iso/CDs/precise/SHA256SUMS.gpg</userinput>
<prompt>$</prompt> <userinput>gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xFBB75451</userinput>
<prompt>$</prompt> <userinput>gpg --verify SHA256SUMS.gpg SHA256SUMS</userinput>
<prompt>$</prompt> <userinput>sha256sum -c SHA256SUMS 2&gt;&amp;1 | grep OK</userinput></screen>
<para>The second option is to use the <link
xlink:href="http://docs.openstack.org/image-guide/content/"
><citetitle>OpenStack Virtual Machine Image Guide</citetitle>
</link>. In this case, you will want to follow your
organizations OS hardening guidelines or those provided by
a trusted third-party such as the <link
xlink:href="http://iase.disa.mil/stigs/os/unix-linux/Pages/index.aspx"
>Linux STIGs</link>.</para>
<para>The final option is to use an automated image builder. The following example uses the Oz image builder. The OpenStack community has recently created a newer tool worth investigating: disk-image-builder. We have not evaluated this tool from a security perspective.</para>
      <para>Example of RHEL 6 CCE-26976-1, which will help implement NIST 800-53 section <emphasis>AC-19(d)</emphasis>, in Oz.</para>
<programlisting>&lt;template&gt;
&lt;name&gt;centos64&lt;/name&gt;
&lt;os&gt;
&lt;name&gt;RHEL-6&lt;/name&gt;
&lt;version&gt;4&lt;/version&gt;
&lt;arch&gt;x86_64&lt;/arch&gt;
&lt;install type='iso'&gt;
&lt;iso&gt;http://trusted_local_iso_mirror/isos/x86_64/RHEL-6.4-x86_64-bin-DVD1.iso&lt;/iso&gt;
&lt;/install&gt;
&lt;rootpw&gt;CHANGE THIS TO YOUR ROOT PASSWORD&lt;/rootpw&gt;
&lt;/os&gt;
&lt;description&gt;RHEL 6.4 x86_64&lt;/description&gt;
&lt;repositories&gt;
&lt;repository name='epel-6'&gt;
&lt;url&gt;http://download.fedoraproject.org/pub/epel/6/$basearch&lt;/url&gt;
&lt;signed&gt;no&lt;/signed&gt;
&lt;/repository&gt;
&lt;/repositories&gt;
&lt;packages&gt;
&lt;package name='epel-release'/&gt;
&lt;package name='cloud-utils'/&gt;
&lt;package name='cloud-init'/&gt;
&lt;/packages&gt;
&lt;commands&gt;
&lt;command name='update'&gt;
yum update
yum clean all
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
echo -n &gt; /etc/udev/rules.d/70-persistent-net.rules
echo -n &gt; /lib/udev/rules.d/75-persistent-net-generator.rules
chkconfig --level 0123456 autofs off
service autofs stop
&lt;/command&gt;
&lt;/commands&gt;
&lt;/template&gt;</programlisting>
<para>It is recommended to avoid the manual image building process as it is complex and prone to error. Additionally, using an automated system like Oz for image building or a configuration management utility like Chef or Puppet for post-boot image hardening gives you the ability to produce a consistent image as well as track compliance of your base image to its respective hardening guidelines over time.</para>
<para>If subscribing to a public cloud service, you should check with the cloud provider for an outline of the process used to produce their default images. If the provider allows you to upload your own images, you will want to ensure that you are able to verify that your image was not modified before using it to create an instance. To do this, refer to the following section on Image Provenance.</para>
</section>
<section xml:id="security-services-for-instances-trusted-images-image-provenance-and-validation">
<title>Image provenance and validation</title>
<para>Unfortunately, it is not currently possible to force Compute to validate an image hash immediately prior to starting an instance. To understand the situation, we begin with a brief overview of how images are handled around the time of image launch.</para>
      <para>Images are transferred from the glance service to the nova service on a node. This transfer should be protected by running over TLS. Once the image is on the node, it is verified with a basic checksum and then its disk is expanded based on the size of the instance being launched. If, at a later time, the same image is launched with the same instance size on this node, it will be launched from the same expanded image. Since this expanded image is not re-verified before launching, it could be tampered with and the user would not have any way of knowing, beyond a manual inspection of the files in the resulting image.</para>
      <para>We hope that future versions of Compute and/or the Image service will offer support for validating the image hash before each instance launch. An even more powerful alternative would be to allow users to sign an image and then have the signature validated when the instance is launched.</para>
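The manual check described above can be sketched as follows. This is an illustration of the principle only, not a supported Compute feature; the file path and contents are made up:

```shell
# Record a checksum when the image is first obtained, and re-verify
# it before each later use. (Illustrative only; the path is a placeholder.)
echo "example image contents" > /tmp/cached_image.img
EXPECTED=$(sha256sum /tmp/cached_image.img | awk '{print $1}')

# ...later, before trusting the cached image again:
ACTUAL=$(sha256sum /tmp/cached_image.img | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "image checksum OK"
else
    echo "image has been modified - do not launch" >&2
fi
```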
</section>
</section>
<section xml:id="security-services-for-instances-instance-migrations">
<title>Instance migrations</title>
<para>
OpenStack and the underlying virtualization layers provide for
the live migration of images between OpenStack nodes, allowing
you to seamlessly perform rolling upgrades of your OpenStack
compute nodes without instance downtime. However, live
migrations also carry significant risk. To understand the risks
involved, the following are the high-level steps performed
during a live migration:
</para>
<orderedlist>
<listitem><para>Start instance on destination host</para> </listitem>
<listitem><para>Transfer memory</para> </listitem>
<listitem><para>Stop the guest &amp; sync disks</para> </listitem>
<listitem><para>Transfer state</para> </listitem>
<listitem><para>Start the guest</para> </listitem>
</orderedlist>
<section xml:id="security-services-for-instances-instance-migrations-live-migration-risks">
<title>Live migration risks</title>
      <para>At various stages of the live migration process the contents of an instance's run-time memory and disk are transmitted over the network in plain text. Thus there are several risks that need to be addressed when using live migration. The following non-exhaustive list details some of these risks:</para>
<itemizedlist><listitem>
<para><emphasis>Denial of Service (DoS)</emphasis>: If something fails during the migration process, the instance could be lost.</para>
</listitem>
<listitem>
<para><emphasis>Data exposure</emphasis>: Memory or disk transfers must be handled securely.</para>
</listitem>
<listitem>
<para><emphasis>Data manipulation</emphasis>: If memory or disk transfers are not handled securely, then an attacker could manipulate user data during the migration.</para>
</listitem>
<listitem>
<para><emphasis>Code injection</emphasis>: If memory or disk transfers are not handled securely, then an attacker could manipulate executables, either on disk or in memory, during the migration.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="security-services-for-instances-instance-migrations-live-migration-mitigations">
<title>Live migration mitigations</title>
      <para>There are several methods to mitigate some of the risks associated with live migrations; the following list details some of them:</para>
<itemizedlist><listitem>
<para>Disable live migration</para>
</listitem>
<listitem>
<para>Isolated migration network</para>
</listitem>
<listitem>
<para>Encrypted live migration</para>
</listitem>
</itemizedlist>
<section xml:id="security-services-for-instances-instance-migrations-live-migration-mitigations-disable-live-migration">
<title>Disable live migration</title>
<para>At this time, live migration is enabled in OpenStack
by default. Live migrations can be disabled by adding the
following lines to the nova <filename>policy.json</filename>
file:</para>
<programlisting>"compute_extension:admin_actions:migrate": "!",
"compute_extension:admin_actions:migrateLive": "!",</programlisting>
</section>
<section xml:id="security-services-for-instances-instance-migrations-live-migration-mitigations-migration-network">
<title>Migration network</title>
<para>As a general practice, live migration traffic should be restricted to the management security domain, see <xref linkend="security-boundaries-and-threats-security-domains-management" />. With live migration traffic, due to its plain text nature and the fact that you are transferring the contents of disk and memory of a running instance, it is recommended you further separate live migration traffic onto a dedicated network. Isolating the traffic to a dedicated network can reduce the risk of exposure.</para>
</section>
<section xml:id="security-services-for-instances-instance-migrations-live-migration-mitigations-encrypted-live-migration">
<title>Encrypted live migration</title>
<para>If there is a sufficient business case for keeping live migration enabled, then libvirtd can provide encrypted tunnels for the live migrations. However, this feature is not currently exposed in either the OpenStack Dashboard or nova-client commands, and can only be accessed through manual configuration of libvirtd. The live migration process then changes to the following high-level steps:</para>
<orderedlist>
          <listitem><para>Instance data is copied from the hypervisor to libvirtd.</para></listitem>
          <listitem><para>An encrypted tunnel is created between the libvirtd processes on the source and destination hosts.</para></listitem>
          <listitem><para>The destination libvirtd host copies the instance back to the underlying hypervisor.</para></listitem>
</orderedlist>
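A sketch of the relevant libvirtd configuration is shown below. The certificate paths are illustrative, and the exact option names should be checked against the libvirt documentation for your release:

```ini
# /etc/libvirt/libvirtd.conf (sketch; paths are illustrative)
listen_tls = 1
tls_no_verify_certificate = 0
key_file = "/etc/pki/libvirt/private/serverkey.pem"
cert_file = "/etc/pki/libvirt/servercert.pem"
ca_file = "/etc/pki/CA/cacert.pem"
```

On the Compute side, the live migration URI would also need to use a `qemu+tls://` scheme rather than the default `qemu+tcp://`.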
</section>
</section>
</section>
<section xml:id="security-services-for-instances-monitoring-reporting">
<title>Monitoring, alerting, and reporting</title>
    <para>As an OpenStack virtual machine is a server image able to
      be replicated across hosts, logging best practices apply
      similarly to physical and virtual hosts. Operating
system-level and application-level events should be logged,
including access events to hosts and data, user additions and
removals, changes in privilege, and others as dictated by the
environment. Ideally, you can configure these logs to export to
a log aggregator that collects log events, correlates them for
analysis, and stores them for reference or further action. One
      common tool set for this is the
      <link xlink:href="http://www.elasticsearch.com"><citetitle>ELK
      stack (Elasticsearch, Logstash, and Kibana)</citetitle>
      </link>.</para>
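As one minimal example of exporting logs, a single rsyslog rule can forward everything to a central collector. The collector address is a placeholder and rsyslog is assumed to be the local syslog daemon:

```
# /etc/rsyslog.d/50-forward.conf
# "@@" forwards over TCP; a single "@" would use UDP.
*.* @@logs.example.com:514
```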
<para>These logs should be reviewed at a regular cadence such as
a live view by a network operations center (NOC), or if the
environment is not large enough to necessitate a NOC, then logs
should undergo a regular log review process.</para>
    <para>Often, an interesting event triggers an alert that is
      sent to a responder for action. Frequently this alert takes the
      form of an email with the messages of interest. An interesting
      event could be a significant failure, or a known health indicator
of a pending failure. Two common utilities for managing alerts
are <link xlink:href="http://www.nagios.org">
<citetitle>Nagios</citetitle></link> and
<link xlink:href="http://www.zabbix.com"><citetitle>Zabbix
</citetitle></link>.</para>
</section>
<section xml:id="security-services-for-instances-updates">
<title>Updates and patches</title>
    <para>A hypervisor runs independent virtual machines. This
      hypervisor can run in an operating system or directly on the
      hardware (called bare metal). Updates to the hypervisor are not
      propagated down to the virtual machines. For example, if a
      deployment is using XenServer and has a set of Debian virtual
      machines, an update to XenServer will not update anything
      running on the Debian virtual machines.</para>
<para>Therefore, we recommend that clear ownership of virtual
machines be assigned, and that those owners be responsible for
the hardening, deployment, and continued functionality of the
virtual machines. We also recommend that updates be deployed on
a regular schedule. These patches should be tested in an
environment as closely resembling production as possible to
ensure both stability and resolution of the issue behind the
patch.</para>
</section>
<section xml:id="security-services-for-instances-firewalls">
<title>Firewalls and other host-based security controls</title>
<para>Most common operating systems include host-based firewalls
for additional security. While we recommend that virtual
machines run as few applications as possible (to the point of
being single-purpose instances, if possible), all applications
running on a virtual machine should be profiled to determine
what system resources the application needs access to, the
lowest level of privilege required for it to run, and what the
expected network traffic is that will be going into and coming
from the virtual machine. This expected traffic should be added
to the host-based firewall as allowed traffic (or whitelisted),
along with any necessary logging and management communication
such as SSH or RDP. All other traffic should be explicitly
denied in the firewall configuration.</para>
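A default-deny ruleset implementing such a whitelist might look like the following iptables-restore fragment; port 8080 stands in for the profiled application port and the file path is illustrative:

```
# /etc/sysconfig/iptables (sketch; ports are illustrative)
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 8080 -j ACCEPT
-A INPUT -j LOG --log-prefix "dropped: "
COMMIT
```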
<para>On Linux virtual machines, the application profile above
can be used in conjunction with a tool like <link
xlink:href="http://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01">
<citetitle>audit2allow</citetitle></link> to build an SELinux
policy that will further protect sensitive system information
on most Linux distributions. SELinux uses a combination of
users, policies and security contexts to compartmentalize the
resources needed for an application to run, and segmenting it
from other system resources that are not needed.</para>
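The usual audit2allow workflow is roughly as follows; the module name is arbitrary and the commands assume the audit daemon has been logging AVC denials while the application ran in permissive mode:

```console
$ ausearch -m avc -ts recent | audit2allow -M myapp_local
$ semodule -i myapp_local.pp
```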
<para>OpenStack provides security groups for both hosts and the
network to add defense in depth to the virtual machines in a
given project. These are similar to host-based firewalls as
they allow or deny incoming traffic based on port, protocol,
      and address; however, security group rules are applied to
incoming traffic only, while host-based firewall rules are
able to be applied to both incoming and outgoing traffic. It
is also possible for host and network security group rules to
conflict and deny legitimate traffic. We recommend ensuring
that security groups are configured correctly for the
networking being used. See <xref
linkend="networking-services-security-best-practices-security-groups"
/> in this guide for more detail.
</para>
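For illustration, a minimal security group that admits only the profiled traffic might be created as follows. The group name, ports, and CIDRs are placeholders, and the commands follow the Kilo-era novaclient syntax:

```console
$ nova secgroup-create app-tier "profiled application instances"
$ nova secgroup-add-rule app-tier tcp 22 22 192.0.2.0/24
$ nova secgroup-add-rule app-tier tcp 443 443 0.0.0.0/0
```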
</section>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="system-documentation-requirements">
<?dbhtml stop-chunking?>
<title>System documentation requirements</title>
<section xml:id="system-documentation-requirements-system-roles-and-types">
<title>System roles and types</title>
<para>The two broadly defined types of
nodes that generally make up an OpenStack installation are:</para>
<itemizedlist>
<listitem>
<para>Infrastructure nodes. The nodes that run the cloud
related services such as the OpenStack Identity service, the
message queuing service, storage, networking, and other
services required to support the operation of the
cloud.</para>
</listitem>
<listitem>
<para>Compute, storage, or other
resource nodes. Provide storage capacity or
virtual machines for your cloud.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="system-documentation-requirements-system-inventory">
<title>System inventory</title>
<para>Documentation should provide a general description of the
OpenStack environment and cover all systems used (production,
development, test, etc.). Documenting system components,
networks, services, and software often provides the bird's-eye
view needed to thoroughly cover and consider security concerns,
attack vectors and possible security domain bridging points. A
system inventory may need to capture ephemeral resources such as
virtual machines or virtual disk volumes that would otherwise be
persistent resources in a traditional IT system.</para>
<section xml:id="system-documentation-requirements-system-inventory-hardware-inventory">
<title>Hardware inventory</title>
<para>
Clouds without stringent compliance requirements for written
documentation might benefit from having a Configuration
Management Database (<glossterm>CMDB</glossterm>). CMDBs are
normally used for hardware asset tracking and overall
life-cycle management. By leveraging a CMDB, an organization
can quickly identify cloud infrastructure hardware such as
compute nodes, storage nodes, or network devices. A CMDB can
assist in identifying assets that exist on the network which
may have vulnerabilities due to inadequate maintenance,
inadequate protection or being displaced and forgotten. An
OpenStack provisioning system can provide some basic CMDB
functions if the underlying hardware supports the necessary
auto-discovery features.
</para>
</section>
<section xml:id="system-documentation-requirements-system-inventory-software-inventory">
<title>Software inventory</title>
<para>
As with hardware, all software components within the OpenStack
deployment should be documented. Examples include:
</para>
<itemizedlist>
<listitem>
        <para>System databases, such as MySQL or MongoDB</para>
</listitem>
<listitem>
<para>OpenStack software components, such as Identity or
Compute</para>
</listitem>
<listitem>
<para>Supporting components, such as load-balancers, reverse
proxies, DNS or DHCP services</para>
</listitem>
</itemizedlist>
<para>
An authoritative list of software components may be critical
when assessing the impact of a compromise or vulnerability in
a library, application or class of software.
</para>
</section>
</section>
<section xml:id="system-documentation-requirements-network-topology">
<title>Network topology</title>
<para>A network topology should be provided with highlights
specifically calling out the data flows and bridging points
between the security domains. Network ingress and egress points
should be identified along with any OpenStack logical system
boundaries. Multiple diagrams may be needed to provide complete
visual coverage of the system. A network topology document
should include virtual networks created on behalf of tenants by
the system along with virtual machine instances and gateways
created by OpenStack.</para>
</section>
<section xml:id="system-documentation-requirements-services-protocols-and-ports">
<title>Services, protocols and ports</title>
    <para>Maintaining information about organizational assets is a best practice, so it is beneficial to create a table that records the services, protocols, and ports in use in the OpenStack deployment. The table can be generated from a CMDB or constructed manually, and can be customized to include an overview of all services running within the cloud infrastructure. This level of detail is valuable because the information can immediately inform, guide, and assist with validating security requirements. Standard security tasks such as firewall configuration, resolving service port conflicts, security remediation, and compliance become easier to maintain when concise information is available. An example of this type of table is provided below:</para>
<para><inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="166" contentwidth="967"
fileref="static/services-protocols-ports.png" format="PNG"
scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%"
fileref="static/services-protocols-ports.png" format="PNG"
scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>Referencing a table of services, protocols and ports can
help in understanding the relationship between OpenStack
components. It is highly recommended that OpenStack deployments
have information similar to this on record.</para>
</section>
</section>

<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="tls-proxies-and-http-services">
<?dbhtml stop-chunking?>
<title>TLS proxies and HTTP services</title>
<para>OpenStack endpoints are HTTP services providing APIs to both end-users on public networks
and to other OpenStack services on the management network. It is highly recommended that all of
these requests, both internal and external, operate over TLS. To achieve this goal, API services
must be deployed behind a TLS proxy that can establish and terminate TLS sessions. The following
table offers a non-exhaustive list of open source software that can be used for this
purpose:</para>
<itemizedlist><listitem>
<para><link xlink:href="http://www.apsis.ch/pound">Pound</link></para>
</listitem>
<listitem>
<para><link xlink:href="https://github.com/bumptech/stud">Stud</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://nginx.org/">nginx</link></para>
</listitem>
<listitem>
<para><link xlink:href="http://www.apache.org/">Apache httpd</link></para>
</listitem>
</itemizedlist>
<para>In cases where software termination offers insufficient performance, hardware accelerators
may be worth exploring as an alternative option. It is important to be mindful of the size of
requests that will be processed by any chosen TLS proxy.</para>
<section xml:id="tls-proxies-and-http-services-examples">
<title>Examples</title>
<para>Below we provide sample recommended configuration settings for enabling TLS in some of
the more popular web servers/TLS terminators.</para>
    <para>Before we delve into the configurations, we briefly discuss the cipher configuration element and its format. A more exhaustive treatment of the available ciphers and the OpenSSL cipher list format can be found in the OpenSSL <link xlink:href="https://www.openssl.org/docs/apps/ciphers.html">ciphers</link> documentation.</para>
<programlisting>
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
</programlisting>
<para>or</para>
<programlisting>
ciphers = "kEECDH:kEDH:kRSA:HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
</programlisting>
<para>Cipher string options are separated by ":", while "!" negates the immediately following element. Element order indicates preference unless overridden by qualifiers such as HIGH. Let us take a closer look at the elements in the above sample strings.</para>
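To make the list format concrete, the following short Python sketch (ours, for illustration only; it is not part of any OpenStack tooling) splits a cipher string into its preferred and negated elements:

```python
def split_cipher_string(cipher_string):
    """Return (selected, excluded) elements of an OpenSSL-style cipher string."""
    selected, excluded = [], []
    for element in cipher_string.split(":"):
        if element.startswith("!"):
            excluded.append(element[1:])  # "!" negates the element that follows
        else:
            selected.append(element)
    return selected, excluded

selected, excluded = split_cipher_string(
    "kEECDH:kEDH:kRSA:HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM")
print(selected)  # ['kEECDH', 'kEDH', 'kRSA', 'HIGH'], in preference order
print(excluded)  # ['RC4', 'MD5', 'aNULL', 'eNULL', 'EXP', 'LOW', 'MEDIUM']
```

Note that this only illustrates the syntax; OpenSSL itself expands aliases such as HIGH into concrete cipher suites.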
<variablelist>
<varlistentry>
<term><code>kEECDH:kEDH</code></term>
<listitem>
<para>Ephemeral Elliptic Curve Diffie-Hellman (abbreviated as EECDH and ECDHE).</para>
<para>Ephemeral Diffie-Hellman (abbreviated either as EDH or DHE) uses prime field groups.</para>
<para>Both approaches provide
<link xlink:href="http://en.wikipedia.org/wiki/Forward_secrecy">
Perfect Forward Secrecy (PFS)</link>. See
<xref linkend="tls-proxies-and-http-services-perfect-forward-secrecy"/>
for additional discussion on properly configuring PFS.
</para>
<para>Ephemeral elliptic curves require the server to be configured with a named curve, and provide better security than prime field groups at lower computational cost. However, prime field groups are more widely implemented, and thus typically both are included in the list.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><code>kRSA</code></term>
<listitem>
<para>Cipher suites using <link xlink:href="http://en.wikipedia.org/wiki/RSA_%28cryptosystem%29">RSA</link> key exchange.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><code>HIGH</code></term>
<listitem>
<para>Selects the highest-security ciphers available during negotiation. These typically have keys of length 128 bits or longer.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><code>!RC4</code></term>
<listitem>
<para>No RC4. RC4 has serious flaws in the context of TLS. See <link xlink:href="http://cr.yp.to/streamciphers/rc4biases-20130708.pdf">On the Security of RC4 in TLS and WPA</link>.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><code>!MD5</code></term>
<listitem>
<para>No MD5. MD5 is not collision resistant, and thus not acceptable for Message Authentication Codes (MAC) or signatures.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><code>!aNULL:!eNULL</code></term>
<listitem>
<para>Disallows clear text.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><code>!EXP</code></term>
<listitem>
<para>Disallows export encryption algorithms, which by design tend to be weak, typically using 40 and 56 bit keys.</para>
<para>US export restrictions on cryptographic systems have been lifted, so export-grade ciphers no longer need to be supported.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><code>!LOW:!MEDIUM</code></term>
<listitem>
<para>Disallows low (56 or 64 bit keys) and medium (128 bit keys) strength ciphers because of their vulnerability to brute force attacks (for example, 2-DES). This rule still permits Triple Data Encryption Standard (Triple DES), also known as the Triple Data Encryption Algorithm (TDEA), and the Advanced Encryption Standard (AES), each of which has keys of 128 bits or more and is thus more secure.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><code>Protocols</code></term>
<listitem>
<para>Protocols are enabled and disabled through <code>SSL_CTX_set_options</code>. We recommend disabling SSLv2 and SSLv3 and enabling only TLS.</para>
</listitem>
</varlistentry>
</variablelist>
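One way to sanity-check a cipher string before deploying it is to load it into an OpenSSL-backed TLS library and inspect which suites it actually enables. The following sketch uses Python's ssl module purely as an illustration (the <command>openssl ciphers -v</command> command gives the same information):

```python
import ssl

# Load the recommended cipher string into an SSL context and list the
# cipher suites that it enables.
CIPHERS = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers(CIPHERS)  # raises ssl.SSLError if no suite matches
enabled = [cipher["name"] for cipher in ctx.get_ciphers()]

assert enabled, "cipher string selected no suites"
assert not any("RC4" in name for name in enabled)
print("%d suites enabled, for example %s" % (len(enabled), enabled[0]))
```

The exact list depends on the OpenSSL version backing the library, which is precisely why checking it in the deployment environment is useful.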
<section xml:id="tls-proxies-and-http-services-examples-pound">
<title>Pound</title>
<para>This Pound example enables <literal>AES-NI</literal> acceleration, which helps to
improve performance on systems with processors that support this feature.</para>
<programlisting>## see pound(8) for details
daemon 1
######################################################################
## global options:
User "swift"
Group "swift"
#RootJail "/chroot/pound"
## Logging: (goes to syslog by default)
## 0 no logging
## 1 normal
## 2 extended
## 3 Apache-style (common log format)
LogLevel 0
## turn on dynamic scaling (off by default)
# Dyn Scale 1
## check backend every X secs:
Alive 30
## client timeout
#Client 10
## allow 10 second proxy connect time
ConnTO 10
## use hardware-acceleration card supported by openssl(1):
SSLEngine "aesni"
# poundctl control socket
Control "/var/run/pound/poundctl.socket"
######################################################################
## listen, redirect and ... to:
## redirect all swift requests on port 443 to local swift proxy
ListenHTTPS
Address 0.0.0.0
Port 443
Cert "/etc/pound/cert.pem"
## Certs to accept from clients
## CAlist "CA_file"
## Certs to use for client verification
## VerifyList "Verify_file"
## Request client cert - don't verify
## Ciphers "AES256-SHA"
## HTTP/1.1 handling for HTTPS clients (0: allow HTTP/1.1):
NoHTTPS11 0
## allow PUT and DELETE also (by default only GET, POST and HEAD)?:
xHTTP 1
Service
BackEnd
Address 127.0.0.1
Port 80
End
End
End</programlisting>
</section>
<section xml:id="tls-proxies-and-http-services-examples-stud">
<title>Stud</title>
<para>The <emphasis role="italic">ciphers</emphasis> line can be tweaked
based on your needs; however, this is a reasonable starting place.</para>
<programlisting># SSL x509 certificate file.
pem-file = "
# SSL protocol.
tls = on
ssl = off
# List of allowed SSL ciphers.
# OpenSSL's high-strength ciphers which require authentication
# NOTE: forbids clear text, use of RC4 or MD5 or LOW and MEDIUM strength ciphers
ciphers = "HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM"
# Enforce server cipher list order
prefer-server-ciphers = on
# Number of worker processes
workers = 4
# Listen backlog size
backlog = 1000
# TCP socket keepalive interval in seconds
keepalive = 3600
# Chroot directory
chroot = ""
# Set uid after binding a socket
user = "www-data"
# Set gid after binding a socket
group = "www-data"
# Quiet execution, report only error messages
quiet = off
# Use syslog for logging
syslog = on
# Syslog facility to use
syslog-facility = "daemon"
# Run as daemon
daemon = off
# Report client address using SENDPROXY protocol for haproxy
# Disabling this until we upgrade to HAProxy 1.5
write-proxy = off</programlisting>
</section>
<section xml:id="tls-proxies-and-http-services-examples-nginx">
<title>nginx</title>
<para>This nginx example requires TLS v1.1 or v1.2 for maximum security. The <option>ssl_ciphers</option> line can be tweaked based on your needs; however, this
is a reasonable starting place.</para>
<programlisting>server {
listen : ssl;
ssl_certificate ;
ssl_certificate_key ;
ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM;
ssl_session_tickets off;
server_name _;
keepalive_timeout 5;
location / {
}
}</programlisting>
</section>
<section xml:id="tls-proxies-and-http-services-examples-apache">
<title>Apache</title>
<programlisting>&lt;VirtualHost &lt;ip address&gt;:80&gt;
ServerName &lt;site FQDN&gt;
RedirectPermanent / https://&lt;site FQDN&gt;/
&lt;/VirtualHost&gt;
&lt;VirtualHost &lt;ip address&gt;:443&gt;
ServerName &lt;site FQDN&gt;
SSLEngine On
    SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
SSLCertificateFile /path/&lt;site FQDN&gt;.crt
SSLCACertificateFile /path/&lt;site FQDN&gt;.crt
SSLCertificateKeyFile /path/&lt;site FQDN&gt;.key
WSGIScriptAlias / &lt;WSGI script location&gt;
WSGIDaemonProcess horizon user=&lt;user&gt; group=&lt;group&gt; processes=3 threads=10
Alias /static &lt;static files location&gt;
&lt;Directory &lt;WSGI dir&gt;&gt;
# For http server 2.2 and earlier:
Order allow,deny
Allow from all
# Or, in Apache http server 2.4 and later:
# Require all granted
&lt;/Directory&gt;
&lt;/VirtualHost&gt;</programlisting>
<para>The following configures a Compute API SSL endpoint in Apache; you must
pair it with a short WSGI script.</para>
<programlisting>&lt;VirtualHost &lt;ip address&gt;:8447&gt;
ServerName &lt;site FQDN&gt;
SSLEngine On
SSLProtocol +TLSv1 +TLSv1.1 +TLSv1.2
SSLCipherSuite HIGH:!RC4:!MD5:!aNULL:!eNULL:!EXP:!LOW:!MEDIUM
SSLCertificateFile /path/&lt;site FQDN&gt;.crt
SSLCACertificateFile /path/&lt;site FQDN&gt;.crt
SSLCertificateKeyFile /path/&lt;site FQDN&gt;.key
SSLSessionTickets Off
WSGIScriptAlias / &lt;WSGI script location&gt;
WSGIDaemonProcess osapi user=&lt;user&gt; group=&lt;group&gt; processes=3 threads=10
&lt;Directory &lt;WSGI dir&gt;&gt;
# For http server 2.2 and earlier:
Order allow,deny
Allow from all
# Or, in Apache http server 2.4 and later:
# Require all granted
&lt;/Directory&gt;
&lt;/VirtualHost&gt;</programlisting>
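The WSGI script referenced by <code>WSGIScriptAlias</code> is supplied by the service being deployed; the sketch below is only a hypothetical stand-in (for the Compute API the real entry point comes from the nova package) showing the shape of what mod_wsgi loads:

```python
# Hypothetical stand-in for the service's real WSGI entry point. mod_wsgi
# imports this module and calls the conventional `application` callable
# once per request.
def application(environ, start_response):
    body = b"OK\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

mod_wsgi looks for a module-level callable named <code>application</code> by default; everything else here is illustrative.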
</section>
</section>
<section xml:id="tls-proxies-and-http-services-HTTP-strict-transport-security">
<title>HTTP strict transport security</title>
<para>We recommend that all production deployments use HTTP strict transport security (HSTS).
This header prevents browsers from making insecure connections after they have made a single
secure one. If you have deployed your HTTP services on a public or an untrusted domain, HSTS
is especially important. To enable HSTS, configure your web server to send a header like this
with all requests:</para>
<screen><computeroutput>Strict-Transport-Security: max-age=31536000; includeSubDomains</computeroutput></screen>
<para>Start with a short timeout of 1 day during testing, and raise it to one year after
testing has shown that you have not introduced problems for users. Note that once this header
is set to a large timeout, it is (by design) very difficult to disable.</para>
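As an illustration of the rollout advice above, this small helper (ours, not part of any web server) builds the header value in one place, so the max-age can be raised from the short testing value to one year with a single change:

```python
# Build the Strict-Transport-Security header value; start with a short
# max-age during testing, then raise it to one year once proven safe.
ONE_DAY = 86400
ONE_YEAR = 31536000

def hsts_header(max_age_seconds, include_subdomains=True):
    value = "max-age=%d" % max_age_seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

print(hsts_header(ONE_DAY))   # short timeout while testing
print(hsts_header(ONE_YEAR))  # production value once proven safe
```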
</section>
<section xml:id="tls-proxies-and-http-services-perfect-forward-secrecy">
<title>Perfect forward secrecy</title>
<para>
Configuring TLS servers for perfect forward secrecy requires
careful planning around key size, session IDs, and session
tickets. In addition, for multi-server deployments, shared
state is also an important consideration. The example
configurations for Apache and Nginx above disable the session
tickets options to help mitigate some of these concerns.
Real-world deployments may desire to enable this feature for
improved performance. This can be done securely, but would
require special consideration around key management. Such
configurations are beyond the scope of this guide. We suggest
reading
<link xlink:href="https://www.imperialviolet.org/2013/06/27/botchingpfs.html">
How to botch TLS forward secrecy by ImperialViolet</link> as
a starting place for understanding the problem space.
</para>
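As a sketch of these settings in code (assuming Python's ssl module as a stand-in for the TLS terminator, not a configuration this guide prescribes), restricting key exchange to ephemeral suites and disabling session tickets looks like:

```python
import ssl

# Restrict key exchange to ephemeral (EC)DH suites and disable session
# tickets, mirroring the Apache and nginx examples above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("kEECDH:kEDH:!aNULL:!eNULL:!MD5:!RC4")
ctx.options |= ssl.OP_NO_TICKET  # no session tickets without a key-rotation plan

assert ctx.options & ssl.OP_NO_TICKET
print("enabled suites:", len(ctx.get_ciphers()))
```

Re-enabling tickets safely would require the key-management planning described above.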
</section>
</section>


@ -1,73 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="understanding-the-audit-process">
<?dbhtml stop-chunking?>
<title>Understanding the audit process</title>
<para>Information system security compliance relies on the
completion of two foundational processes:</para>
<orderedlist>
<listitem>
<para>
<emphasis role="bold">Implementation and operation of
security controls.</emphasis> Aligning the information
system with in-scope standards and regulations involves
internal tasks which must be conducted before a formal
assessment. Auditors may be involved at this state to
conduct gap analysis, provide guidance, and increase the
likelihood of successful certification.</para>
</listitem>
<listitem>
<para>
<emphasis role="bold">Independent verification and
validation.</emphasis> Demonstration to a neutral
third-party that system security controls are implemented
and operating effectively, in compliance with in-scope
standards and regulations, is required before many
information systems achieve certified status. Many
certifications require periodic audits to ensure continued
certification, considered part of an overarching continuous
monitoring practice.</para>
</listitem>
</orderedlist>
<section xml:id="understanding-the-audit-process-determining-audit-scope">
<title>Determining audit scope</title>
<para>Determining audit scope, specifically what controls are needed and how to design or modify an OpenStack deployment to satisfy them, should be the initial planning step.</para>
<para>When scoping OpenStack deployments for compliance purposes, consider prioritizing controls around sensitive services, such as command and control functions and the base virtualization technology. Compromises of these facilities may impact an OpenStack environment in its entirety.</para>
<para>Scope reduction helps ensure OpenStack architects establish high quality security controls which are tailored to a particular deployment; however, it is paramount to ensure these practices do not omit areas or features from security hardening. A common example is applicable to PCI-DSS guidelines, where payment related infrastructure may be scrutinized for security issues while supporting services are ignored and left vulnerable to attack.</para>
<para>When addressing compliance, you can increase efficiency and reduce work effort by identifying common areas and criteria that apply across multiple certifications. Many of the audit principles and guidelines discussed in this book will assist in identifying these controls; additionally, a number of external entities provide comprehensive lists. The following are some examples:</para>
<para>The <link xlink:href="https://cloudsecurityalliance.org/research/ccm/">Cloud Security Alliance Cloud Controls Matrix</link> (CCM) assists both cloud providers and consumers in assessing the overall security of a cloud provider. The CSA CCM provides a controls framework that maps to many industry-accepted standards and regulations including ISO 27001/2, ISACA, COBIT, PCI, NIST, Jericho Forum, and NERC CIP.</para>
<para>The <link xlink:href="https://fedorahosted.org/scap-security-guide/">SCAP Security Guide</link> is another useful reference. This is still an emerging source, but we anticipate that this will grow into a tool with controls mappings that are more focused on the US federal government certifications and recommendations. For example, the SCAP Security Guide currently has some mappings for security technical implementation guides (STIGs) and NIST-800-53.</para>
<para>These control mappings will help identify common control criteria across certifications, and provide visibility to both auditors and auditees on problem areas within control sets for particular compliance certifications and attestations.</para>
</section>
<section xml:id="understanding-the-audit-process-internal-audit">
<title>Internal audit</title>
<para>Once a cloud is deployed, it is time for an internal audit. This is the time to compare the controls you identified above with the design, features, and deployment strategies utilized in your cloud. The goal is to understand how each control is handled and where gaps exist. Document all of the findings for future reference.</para>
<para>When auditing an OpenStack cloud it is important to appreciate the multi-tenant environment inherent in the OpenStack architecture. Some critical areas for concern include data disposal, hypervisor security, node hardening, and authentication mechanisms.</para>
</section>
<section xml:id="understanding-the-audit-process-prepare-for-external-audit">
<title>Prepare for external audit</title>
<para>Once the internal audit results look good, it is time to prepare for an external audit. There are several key actions to take at this stage; these are outlined below:</para>
<itemizedlist><listitem>
<para>Maintain good records from your internal audit. These will prove useful during the external audit so you can be prepared to answer questions about mapping the compliance controls to a particular deployment.</para>
</listitem>
<listitem>
<para>Deploy automated testing tools to ensure that the cloud remains compliant over time.</para>
</listitem>
<listitem>
<para>Select an auditor.</para>
</listitem>
</itemizedlist>
<para>Selecting an auditor can be challenging. Ideally, you are looking for someone with experience in cloud compliance audits. OpenStack experience is another big plus. Often it is best to consult with people who have been through this process for referrals. Cost can vary greatly depending on the scope of the engagement and the audit firm considered.</para>
</section>
<section xml:id="understanding-the-audit-process-external-audit">
<title>External audit</title>
<para>This is the formal audit process. Auditors will test security controls in scope for a specific certification and require evidence proving that these controls were also in place during the audit window (for example, SOC 2 audits generally evaluate security controls over a 6-12 month period). Any control failures are logged and will be documented in the external auditor's final report. Depending on the type of OpenStack deployment, these reports may be viewed by customers, so it is important to avoid control failures. This is why audit preparation is so important.</para>
</section>
<section xml:id="understanding-the-audit-process-compliance-maintenance">
<title>Compliance maintenance</title>
<para>The process doesn't end with a single external audit. Most certifications require continual compliance activities, which means repeating the audit process periodically. We recommend integrating automated compliance verification tools into a cloud to ensure that it is compliant at all times. This should be done in addition to other security monitoring tools. Remember that the goal is both security <emphasis>and</emphasis> compliance. Failing on either of these fronts will significantly complicate future audits.</para>
</section>
</section>


@ -1,232 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE section [
<!ENTITY % openstack SYSTEM "openstack.ent">
%openstack;
]>
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xi="http://www.w3.org/2001/XInclude"
xmlns:xlink="http://www.w3.org/1999/xlink"
version="5.0"
xml:id="why-and-how-we-wrote-this-book">
<?dbhtml stop-chunking?>
<title>Why and how we wrote this book</title>
<para>As OpenStack adoption continues to grow and the product matures,
security has become a priority. The OpenStack Security Group has
recognized the need for a comprehensive and authoritative security
guide. The <emphasis role="bold">OpenStack Security Guide</emphasis>
has been written to provide an overview of security best practices,
guidelines, and recommendations for increasing the security of an
OpenStack deployment. The authors bring their expertise from deploying
and securing OpenStack in a variety of environments.</para>
<para>This guide augments the
<link xlink:href="http://docs.openstack.org/ops/">
<citetitle>OpenStack Operations Guide</citetitle></link> and can be
referenced to harden existing OpenStack deployments or to evaluate the
security controls of OpenStack cloud providers.</para>
<section xml:id="why-and-how-we-wrote-this-book-objectives">
<title>Objectives</title>
<itemizedlist>
<listitem>
<para>Identify the security domains in OpenStack</para>
</listitem>
<listitem>
<para>Provide guidance to secure your OpenStack deployment</para>
</listitem>
<listitem>
<para>Highlight security concerns and potential mitigations in present
day OpenStack</para>
</listitem>
<listitem>
<para>Discuss upcoming security features</para>
</listitem>
<listitem>
<para>Provide a community-driven facility for knowledge capture and
dissemination</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="why-and-how-we-wrote-this-book-how">
<title>How</title>
<para>As with the OpenStack Operations Guide, we followed the book sprint
methodology. The book sprint process allows for rapid development and
production of large bodies of written work. Coordinators from the
OpenStack Security Group re-enlisted the services of Adam Hyde as
facilitator. Corporate support was obtained and the project was formally
announced during the OpenStack summit in Portland, Oregon.</para>
<para>The team converged in Annapolis, MD due to the close proximity of
some key members of the group. This was a remarkable collaboration
between public sector intelligence community members, Silicon Valley
startups and some large, well-known technology companies. The book sprint
ran during the last week in June 2013 and the first edition was created
in five days.</para>
<para>
<inlinemediaobject>
<imageobject role="html">
<imagedata contentdepth="450" contentwidth="540"
fileref="static/group.png" format="PNG" scalefit="1"/>
</imageobject>
<imageobject role="fo">
<imagedata contentdepth="100%" fileref="static/group.png"
format="PNG" scalefit="1" width="100%"/>
</imageobject>
</inlinemediaobject></para>
<para>The team included:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Bryan D. Payne</emphasis>, Nebula</para>
<para>Dr. Bryan D. Payne is the Director of Security Research at Nebula
and co-founder of the OpenStack Security Group (OSSG). Prior to
joining Nebula, he worked at Sandia National Labs, the National
Security Agency, BAE Systems, and IBM Research. He graduated with
a Ph.D. in Computer Science from the Georgia Tech College of
Computing, specializing in systems security.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Robert Clark</emphasis>, HP</para>
<para>Robert Clark is the Lead Security Architect for HP Cloud
Services and co-founder of the OpenStack Security Group (OSSG).
Prior to being recruited by HP, he worked in the UK Intelligence
Community. Robert has a strong background in threat modeling,
security architecture and virtualization technology. Robert has a
master's degree in Software Engineering from the University of
Wales.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Keith Basil</emphasis>, Red Hat</para>
<para>Keith Basil is a Principal Product Manager for Red Hat OpenStack
and is focused on Red Hat's OpenStack product management, development
and strategy. Within the US public sector, Basil brings previous
experience from the design of an authorized, secure, high-performance
cloud architecture for Federal civilian agencies and
contractors.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Cody Bunch</emphasis>, Rackspace</para>
<para>Cody Bunch is a Private Cloud architect with Rackspace. Cody has
co-authored an update to "The OpenStack Cookbook" as well as books
on VMware automation.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Malini Bhandaru</emphasis>, Intel</para>
<para>Malini Bhandaru is a security architect at Intel. She has a
varied background, having worked on platform power and performance
at Intel, speech products at Nuance, remote monitoring and management
at ComBrio, and web commerce at Verizon. She has a Ph.D. in Artificial
Intelligence from the University of Massachusetts, Amherst.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Gregg Tally</emphasis>, Johns Hopkins
University Applied Physics Laboratory</para>
<para>Gregg Tally is the Chief Engineer at JHU/APL's Cyber Systems
Group within the Asymmetric Operations Department. He works primarily
in systems security engineering. Previously, he has worked at SPARTA,
McAfee, and Trusted Information Systems where he was involved in
cyber security research projects.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Eric Lopez</emphasis>, VMware</para>
<para>Eric Lopez is Senior Solution Architect at VMware's Networking
and Security Business Unit where he helps customers implement
OpenStack and VMware NSX (formerly known as Nicira's Network
Virtualization Platform). Prior to joining VMware (through the
company's acquisition of Nicira), he worked for Q1 Labs, Symantec,
Vontu, and Brightmail. He has a B.S. in Electrical Engineering/Computer
Science and Nuclear Engineering from U.C. Berkeley and an MBA from the
University of San Francisco.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Shawn Wells</emphasis>, Red Hat</para>
<para>Shawn Wells is the Director, Innovation Programs at Red Hat,
focused on improving the process of adopting, contributing to, and
managing open source technologies within the U.S. Government.
Additionally, Shawn is an upstream maintainer of the SCAP Security
Guide project which forms virtualization and operating system
hardening policy with the U.S. Military, NSA, and DISA. Formerly an
NSA civilian, Shawn developed SIGINT collection systems utilizing
large distributed computing infrastructures.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Ben de Bont</emphasis>, HP</para>
<para>Ben de Bont is the CSO for HP Cloud Services. Prior to his
current role Ben led the information security group at MySpace and
the incident response team at MSN Security. Ben holds a master's
degree in Computer Science from the Queensland University of
Technology.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Nathanael Burton</emphasis>, National
Security Agency</para>
<para>Nathanael Burton is a Computer Scientist at the National Security
Agency. He has worked for the Agency for over 10 years working on
distributed systems, large-scale hosting, open source initiatives,
operating systems, security, storage, and virtualization technology.
He has a B.S. in Computer Science from Virginia Tech.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Vibha Fauver</emphasis></para>
<para>Vibha Fauver, GWEB, CISSP, PMP, has over fifteen years of
experience in Information Technology. Her areas of specialization
include software engineering, project management and
information security. She has a B.S. in Computer &amp; Information
Science and a M.S. in Engineering Management with specialization and
a certificate in Systems Engineering.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Eric Windisch</emphasis>,
Cloudscaling</para>
<para>Eric Windisch is a Principal Engineer at Cloudscaling where he
has been contributing to OpenStack for over two years. Eric has been
in the trenches of hostile environments, building tenant isolation
and infrastructure security through more than a decade of experience
in the web hosting industry. He has been building cloud computing
infrastructure and automation since 2007.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Andrew Hay</emphasis>, CloudPassage</para>
<para>Andrew Hay is the Director of Applied Security Research at
CloudPassage, Inc. where he leads the security research efforts for
the company and its server security products purpose-built for
dynamic public, private, and hybrid cloud hosting environments.</para>
</listitem>
<listitem>
<para><emphasis role="bold">Adam Hyde</emphasis></para>
<para>Adam facilitated this Book Sprint. He also founded the Book
Sprint methodology and is the most experienced Book Sprint facilitator
around. Adam founded FLOSS Manuals&mdash;a community of some 3,000
individuals developing Free Manuals about Free Software. He is also
the founder and project manager for Booktype, an open source project
for writing, editing, and publishing books online and in print.</para>
</listitem>
</itemizedlist>
<para>During the sprint we also had help from Anne Gentle, Warren Wang,
Paul McMillan, Brian Schott and Lorin Hochstein.</para>
<para>This book was produced in a five-day book sprint. A book
sprint is an intensely collaborative, facilitated process that
brings together a group to produce a book in three to five days,
following a specific methodology founded and developed by Adam
Hyde. For more information visit the book
sprint web page at <link xlink:href="http://www.booksprints.net">
http://www.booksprints.net</link>.</para>
<para>After initial publication, the following added new content:</para>
<itemizedlist>
<listitem>
<para><emphasis role="bold">Rodney D. Beede</emphasis>, Seagate
Technology</para>
<para>Rodney D. Beede is the Cloud Security Engineer for
Seagate Technology. He contributed the missing chapter on
securing OpenStack Object Storage (swift). He holds a M.S.
in Computer Science from the University of Colorado.</para>
</listitem>
</itemizedlist>
</section>
<section xml:id="why-and-how-we-wrote-this-book-how-to-contribute-to-this-book">
<title>How to contribute to this book</title>
<para>The initial work on this book was conducted in an overly
air-conditioned room that served as our group office for the
entirety of the documentation sprint.</para>
<para>Learn more about how to contribute to the OpenStack
docs:
<link xlink:href="http://wiki.openstack.org/Documentation/HowTo">
http://wiki.openstack.org/Documentation/HowTo</link>.</para>
</section>
</section>
