Updating Compute chapter to RST

+ Concatenated all Compute chapter sections into one file
+ Broke all files out
+ Stripped DocBook formatting
+ Added RST formatting
+ Updated formatting
+ Added diagram to figures/
+ Updated line lengths and file references

Change-Id: Ib9409f9df3da1bd1a6fc1d5780d327b671969faa
Partial-Bug: #1463111
Nathaniel Dillon 2015-07-20 21:26:24 -07:00 committed by Michael McCune
parent a42fec7a2e
commit 9843d36e40
8 changed files with 1142 additions and 0 deletions


@ -1,3 +1,25 @@
=======
Compute
=======
The OpenStack Compute service (nova) is one of the more complex OpenStack
services. It runs in many locations throughout the cloud and interacts with a
variety of internal services. There are a variety of configuration options
when using the OpenStack Compute service, and these can be
deployment-specific. In this chapter we will call out general best practices
around Compute security as well as specific known configurations that can lead
to security issues. In general, the :file:`nova.conf` file and the
:file:`/var/lib/nova` locations should be secured. Controls like centralized
logging, the :file:`policy.json` file, and a mandatory access control framework
should be implemented. Additionally, there are environmental considerations to
keep in mind, depending on what functionality is desired for your cloud.
.. toctree::
:maxdepth: 2
compute/hypervisor-selection.rst
compute/hardening-the-virtualization-layers.rst
compute/hardening-deployments.rst
compute/vulnerability-awareness.rst
compute/how-to-select-virtual-consoles.rst
compute/case-studies.rst


@ -0,0 +1,51 @@
============
Case studies
============
Earlier in :doc:`../introduction/introduction-to-case-studies` we
introduced the Alice and Bob case studies where Alice is deploying a
private government cloud and Bob is deploying a public cloud, each with
different security requirements. Here we discuss how Alice and Bob
would ensure that their instances are properly isolated. First we consider
hypervisor selection, and then techniques for hardening QEMU and applying
mandatory access controls.
Alice's private cloud
~~~~~~~~~~~~~~~~~~~~~
Alice chooses Xen for the hypervisor in her cloud due to a strong internal
knowledge base and a desire to use the Xen security modules (XSM) for
fine-grained policy enforcement.
Alice is willing to apply a relatively large amount of resources to software
packaging and maintenance. She will use these resources to build a highly
customized version of QEMU that has many components removed, thereby reducing
the attack surface. She will also ensure that all compiler hardening options
are enabled for QEMU. Alice accepts that these decisions will increase
long-term maintenance costs.
Alice writes XSM policies (for Xen) and SELinux policies (for Linux domain 0
and device domains) to provide stronger isolation between the instances. Alice
also uses the Intel TXT support in Xen to measure the hypervisor launch in the
TPM.
Bob's public cloud
~~~~~~~~~~~~~~~~~~
Bob is very concerned about instance isolation since the users in a public
cloud represent anyone with a credit card, meaning they are inherently
untrusted. Bob has just started hiring the team that will deploy the cloud, so
he can tailor his candidate search for specific areas of expertise. With this
in mind, Bob chooses a hypervisor based on its technical features,
certifications, and community support. KVM has an EAL 4+ common criteria
rating, with a labeled security protection profile (LSPP) to provide added
assurance for instance isolation. This, combined with the strong support for
KVM within the OpenStack community, drives Bob's decision to use KVM.
Bob weighs the added cost of repackaging QEMU and decides that he cannot commit
those resources to the project. Fortunately, his Linux distribution has already
enabled the compiler hardening options, so he decides to use that QEMU package.
Finally, Bob leverages sVirt to manage the SELinux policies associated with the
virtualization stack.


@ -0,0 +1,103 @@
=============================
Hardening Compute deployments
=============================
One of the main security concerns with any OpenStack deployment is the security
and controls around sensitive files, such as the :file:`nova.conf` file.
Normally contained in the :file:`/etc` directory, this configuration file
contains many sensitive options including configuration details and service
passwords. All such sensitive files should be given strict file level
permissions, and monitored for changes through file integrity monitoring (FIM)
tools such as inotify or Samhain. These utilities take a hash of the
target file in a known good state, then periodically take a new hash of the
file and compare it to the known good hash. An alert can be created if the
file is found to have been modified unexpectedly.
The permissions of a file can be examined by changing into the directory that
contains the file and running the ``ls -lh`` command. This will show the
permissions, owner, and group that have access to the file, as well as other
information such as the last time the file was modified and when it was
created.
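For example, a quick check of the :file:`nova.conf` permissions on a compute
node might look like this (the owner, group, mode, and timestamp shown are
illustrative and will vary by distribution and deployment):
.. code:: console
$ cd /etc/nova
$ ls -lh nova.conf
-rw-r----- 1 root nova 9.2K Jul 20 21:26 nova.conf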
The :file:`/var/lib/nova` directory is used to hold details about the instances
on a given Compute host. This directory should be considered sensitive as well,
with strictly enforced file permissions. Additionally, it should be backed up
regularly as it contains information and metadata for the instances associated
with that host.
If your deployment does not require full virtual machine backups, we recommend
excluding the :file:`/var/lib/nova/instances` directory, as it will be as large
as the combined space of the VMs running on that node. If your deployment does
require full VM backups, you will need to ensure this directory is backed up
successfully.
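As a sketch, assuming an rsync-based backup job (the backup host and paths
here are hypothetical), the instances directory can be excluded like this:
.. code:: console
# Back up Compute state but skip the large per-instance disk files
# rsync -a --exclude='instances' /var/lib/nova/ backup-host:/backup/nova/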
Monitoring is a critical component of IT infrastructure, and we recommend the
`Compute logfiles
<http://docs.openstack.org/kilo/config-reference/content/section_nova-logs.html>`__
be monitored and analyzed so that meaningful alerts can be created.


@ -0,0 +1,309 @@
===================================
Hardening the virtualization layers
===================================
At the beginning of this section we discuss the use of both physical and
virtual hardware by instances, the associated security risks, and some
recommendations for mitigating those risks. We conclude the section with a
discussion of sVirt, an open source project for integrating SELinux mandatory
access controls with the virtualization components.
Physical hardware (PCI passthrough)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many hypervisors offer a functionality known as PCI passthrough. This allows an
instance to have direct access to a piece of hardware on the node. For example,
this could be used to allow instances to access video cards or GPUs offering
the compute unified device architecture (CUDA) for high performance
computation. This feature carries two types of security risks: direct memory
access and hardware infection.
Direct memory access (DMA) is a feature that permits certain hardware devices
to access arbitrary physical memory addresses in the host computer. Often
video cards have this capability. However, an instance should not be given
arbitrary physical memory access because this would give it full view of both
the host system and other instances running on the same node. Hardware vendors
use an input/output memory management unit (IOMMU) to manage DMA access in
these situations. Therefore, cloud architects should ensure that the hypervisor
is configured to utilize this hardware feature.
KVM:
`How to assign devices with VT-d in KVM
<http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM>`__
Xen:
`Xen VTd Howto <http://wiki.xen.org/wiki/VTd_HowTo>`__
.. note::
The IOMMU feature is marketed as VT-d by Intel and AMD-Vi by AMD.
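For example, on a Linux host running KVM, the IOMMU usually must also be
enabled on the kernel command line. A minimal sketch for an Intel system
(file locations and the grub command vary by distribution; AMD systems use
``amd_iommu=on``):
.. code:: console
# Append the parameter in /etc/default/grub, then regenerate the config
GRUB_CMDLINE_LINUX="... intel_iommu=on"
# grub2-mkconfig -o /boot/grub2/grub.cfg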
A hardware infection occurs when an instance makes a malicious modification to
the firmware or some other part of a device. As this device is used by other
instances or the host OS, the malicious code can spread into those systems. The
end result is that one instance can run code outside of its security domain.
This is a significant breach as it is harder to reset the state of physical
hardware than virtual hardware, and can lead to additional exposure such as
access to the management network.
.. TODO (elmiko) fixup link to management chapter to point to integrity
life cycle secure bootstrapping
Solutions to the hardware infection problem are domain specific. The strategy
is to identify how an instance can modify hardware state then determine how to
reset any modifications when the instance is done using the hardware. For
example, one option could be to re-flash the firmware after use. Clearly there
is a need to balance hardware longevity with security, as some firmware will
fail after a large number of writes. TPM technology, described in
:doc:`../management`, provides a solution for detecting
unauthorized firmware changes. Regardless of the strategy selected, it is
important to understand the risks associated with this kind of hardware sharing
so that they can be properly mitigated for a given deployment scenario.
Additionally, due to the risk and complexities associated with PCI passthrough,
it should be disabled by default. If enabled for a specific need, you will need
to have appropriate processes in place to ensure the hardware is clean before
re-issue.
Virtual hardware (QEMU)
~~~~~~~~~~~~~~~~~~~~~~~
When running a virtual machine, virtual hardware is a software layer that
provides the hardware interface for the virtual machine. Instances use this
functionality to provide network, storage, video, and other devices that may be
needed. With this in mind, most instances in your environment will exclusively
use virtual hardware, with a minority that will require direct hardware access.
The major open source hypervisors use QEMU for this functionality. While QEMU
fills an important need for virtualization platforms, it has proven to be a
very challenging software project to write and maintain. Much of the
functionality in QEMU is implemented with low-level code that is difficult for
most developers to comprehend. Furthermore, the hardware virtualized by QEMU
includes many legacy devices that have their own set of quirks. Putting all of
this together, QEMU has been the source of many security problems, including
hypervisor breakout attacks.
Therefore, it is important to take proactive steps to harden QEMU. Three
specific steps are recommended: minimizing the code base, using compiler
hardening, and using mandatory access controls such as sVirt, SELinux, or
AppArmor.
Additionally, ensure that iptables has a default policy that filters network
traffic, and consider examining the existing rule set to understand each rule
and determine whether the policy needs to be expanded upon.
Minimizing the QEMU code base
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The first recommendation is to minimize the QEMU code base by removing unused
components from the system. QEMU provides support for many different virtual
hardware devices, however only a small number of devices are needed for a given
instance. The most common hardware devices are the virtio devices. Some legacy
instances will need access to specific hardware, which can be specified using
glance metadata:
.. code:: console
$ glance image-update \
--property hw_disk_bus=ide \
--property hw_cdrom_bus=ide \
--property hw_vif_model=e1000 \
f16-x86_64-openstack-sda
A cloud architect should decide what devices to make available to cloud users.
Anything that is not needed should be removed from QEMU. This step requires
recompiling QEMU after modifying the options passed to the QEMU configure
script. For a complete list of up-to-date options, simply run
``./configure --help`` from within the QEMU source directory. Decide
what is needed for your deployment, and disable the remaining options.
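As a sketch, a minimized build might keep only the x86_64 system emulator and
disable unused front ends (the exact flag names vary between QEMU versions, so
confirm them against ``./configure --help`` for your release):
.. code:: console
$ ./configure --target-list=x86_64-softmmu \
--disable-vnc --disable-sdl --disable-curses
$ make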
Compiler hardening
~~~~~~~~~~~~~~~~~~
The next step is to harden QEMU using compiler hardening options. Modern
compilers provide a variety of compile time options to improve the security of
the resulting binaries. These features, which we will describe in more detail
below, include relocation read-only (RELRO), stack canaries, never execute
(NX), position independent executable (PIE), and address space layout
randomization (ASLR).
Many modern Linux distributions already build QEMU with compiler hardening
enabled, so you may want to verify your existing executable before
proceeding with the information below. One tool that can assist you with this
verification is called
`checksec.sh <http://www.trapkit.de/tools/checksec.html>`__.
RELocation Read-Only (RELRO)
Hardens the data sections of an executable. Both full and partial RELRO
modes are supported by gcc. For QEMU full RELRO is your best choice.
This will make the global offset table read-only and place various
internal data sections before the program data section in the resulting
executable.
Stack canaries
Places values on the stack and verifies their presence to help prevent
buffer overflow attacks.
Never eXecute (NX)
Also known as Data Execution Prevention (DEP), ensures that data sections
of the executable can not be executed.
Position Independent Executable (PIE)
Produces a position independent executable, which is necessary for ASLR.
Address Space Layout Randomization (ASLR)
This ensures that placement of both code and data regions will be
randomized. Enabled by the kernel (all modern Linux kernels support ASLR),
when the executable is built with PIE.
The following compiler options are recommended for GCC when compiling QEMU:
.. code:: console
CFLAGS="-arch x86_64 -fstack-protector-all -Wstack-protector \
--param ssp-buffer-size=4 -pie -fPIE -ftrapv -D_FORTIFY_SOURCE=2 -O2 \
-Wl,-z,relro,-z,now"
We recommend testing your QEMU executable file after it is compiled to ensure
that the compiler hardening worked properly.
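A minimal sketch of such a check using checksec.sh (the binary path and the
output columns shown here are illustrative and vary between versions):
.. code:: console
$ ./checksec.sh --file /usr/bin/qemu-system-x86_64
RELRO           STACK CANARY   NX          PIE
Full RELRO      Canary found   NX enabled  PIE enabled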
Most cloud deployments will not want to build software such as QEMU by hand. It
is better to use packaging to ensure that the process is repeatable and to
ensure that the end result can be easily deployed throughout the cloud. The
references below provide some additional details on applying compiler hardening
options to existing packages.
DEB packages:
`Hardening Walkthrough <http://wiki.debian.org/HardeningWalkthrough>`__
RPM packages:
`How to create an RPM package
<http://fedoraproject.org/wiki/How_to_create_an_RPM_package>`__
Mandatory access controls
~~~~~~~~~~~~~~~~~~~~~~~~~
Compiler hardening makes it more difficult to attack the QEMU process. However,
if an attacker does succeed, we would like to limit the impact of the attack.
Mandatory access controls accomplish this by restricting the privileges of the
QEMU process to only what is needed. This can be accomplished using sVirt / SELinux
or AppArmor. When using sVirt, SELinux is configured to run each QEMU process
under a separate security context. AppArmor can be configured to provide
similar functionality. We provide more details on sVirt and instance isolation
below in
:ref:`hardening-the-virtualization-layers-svirt-selinux-and-virtualization`.
.. _hardening-the-virtualization-layers-svirt-selinux-and-virtualization:
sVirt: SELinux and virtualization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With unique kernel-level architecture and National Security Agency (NSA)
developed security mechanisms, KVM provides foundational isolation technologies
for multi-tenancy. With developmental origins dating back to 2002, the Secure
Virtualization (sVirt) technology is the application of SELinux against modern
day virtualization. SELinux, which was designed to apply separation control
based upon labels, has been extended to provide isolation between virtual
machine processes, devices, data files, and system processes acting on their
behalf.
OpenStack's sVirt implementation aspires to protect hypervisor hosts and
virtual machines against two primary threat vectors:
Hypervisor threats
A compromised application running within a virtual machine attacks the
hypervisor to access underlying resources. For example, when a virtual
machine is able to access the hypervisor OS, physical devices, or other
applications. This threat vector represents considerable risk as a
compromise on a hypervisor can infect the physical hardware as well as
expose other virtual machines and network segments.
Virtual Machine (multi-tenant) threats
A compromised application running within a VM attacks the hypervisor to
access or control another virtual machine and its resources. This is a
threat vector unique to virtualization and represents considerable risk as
a multitude of virtual machine file images could be compromised due to a
vulnerability in a single application. This virtual network attack is a
major concern as the administrative techniques for protecting real
networks do not directly apply to the virtual environment.
Each KVM-based virtual machine is a process which is labeled by SELinux,
effectively establishing a security boundary around each virtual machine. This
security boundary is monitored and enforced by the Linux kernel, restricting
the virtual machine's access to resources outside of its boundary such as host
machine data files or other VMs.
.. image:: ../figures/sVirt_Diagram_1.png
As shown above, sVirt isolation is provided regardless of the guest operating
system running inside the virtual machine; Linux or Windows VMs can be
used. Additionally, many Linux distributions provide SELinux within the
operating system, allowing the virtual machine to protect internal virtual
resources from threats.
Labels and categories
~~~~~~~~~~~~~~~~~~~~~
KVM-based virtual machine instances are labeled with their own SELinux data
type, known as ``svirt_image_t``. Kernel-level protections prevent unauthorized
system processes, such as malware, from manipulating the virtual machine image
files on disk. When virtual machines are powered off, images are stored as
``svirt_image_t`` as shown below:
.. code::
system_u:object_r:svirt_image_t:SystemLow image1
system_u:object_r:svirt_image_t:SystemLow image2
system_u:object_r:svirt_image_t:SystemLow image3
system_u:object_r:svirt_image_t:SystemLow image4
The ``svirt_image_t`` label uniquely identifies image files on disk, allowing for
the SELinux policy to restrict access. When a KVM-based Compute image is
powered on, sVirt appends a random numerical identifier to the image. sVirt is
capable of assigning numeric identifiers to a maximum of 524,288 virtual
machines per hypervisor node, however most OpenStack deployments are highly
unlikely to encounter this limitation.
This example shows the sVirt category identifier:
.. code::
system_u:object_r:svirt_image_t:s0:c87,c520 image1
system_u:object_r:svirt_image_t:s0:c419,c172 image2
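While an instance is running, the same category pair appears on its QEMU
process, which can be confirmed from the process list. A hedged illustration
(the process name, PID, and categories will differ on your hosts):
.. code:: console
$ ps -eZ | grep qemu
system_u:system_r:svirt_t:s0:c87,c520 5356 ? 00:00:00 qemu-kvm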
SELinux users and roles
~~~~~~~~~~~~~~~~~~~~~~~
SELinux can also manage user roles. These can be viewed through the ``-Z`` flag,
or with the ``semanage`` command. On the hypervisor, only administrators should
be able to access the system, and there should be an appropriate context around
both the administrative users and any other users that are on the system.
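For example, the SELinux context of the current user and the configured
login-to-SELinux-user mappings can be inspected as follows (output varies by
policy and distribution):
.. code:: console
$ id -Z
staff_u:staff_r:staff_t:s0-s0:c0.c1023
# semanage login -l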
SELinux users documentation:
`SELinux.org Users and Roles Overview
<http://selinuxproject.org/page/BasicConcepts#Users>`__
Booleans
~~~~~~~~
To ease the administrative burden of managing SELinux, many enterprise Linux
platforms utilize SELinux Booleans to quickly change the security posture of
sVirt.
Red Hat Enterprise Linux-based KVM deployments utilize the following sVirt
booleans:
.. list-table::
:header-rows: 1
:widths: 10 20
* - sVirt SELinux Boolean
- Description
* - virt_use_comm
- Allow virt to use serial/parallel communication ports.
* - virt_use_fusefs
- Allow virt to read FUSE mounted files.
* - virt_use_nfs
- Allow virt to manage NFS mounted files.
* - virt_use_samba
- Allow virt to manage CIFS mounted files.
* - virt_use_sanlock
- Allow confined virtual guests to interact with sanlock.
* - virt_use_sysfs
- Allow virt to manage device configuration (PCI).
* - virt_use_usb
- Allow virt to use USB devices.
* - virt_use_xserver
- Allow virtual machine to interact with the X Window System.
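These booleans are inspected and toggled with the standard SELinux tools. As
an illustration, using ``virt_use_nfs``:
.. code:: console
$ getsebool virt_use_nfs
virt_use_nfs --> off
# setsebool -P virt_use_nfs on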


@ -0,0 +1,90 @@
==============================
How to select virtual consoles
==============================
One decision a cloud architect will need to make regarding Compute service
configuration is whether to use :term:`VNC <Virtual Network Computing (VNC)>`
or :term:`SPICE`. Below we provide some details on the differences between
these options.
Virtual Network Computing (VNC)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OpenStack can be configured to provide remote desktop console access to
instances for tenants and/or administrators using the Virtual Network
Computing (VNC) protocol.
Capabilities
------------
#. The OpenStack dashboard (horizon) can provide a VNC console for instances
directly on the web page using the HTML5 noVNC client. This requires the
*nova-novncproxy* service to bridge from the public network to the
management network.
#. The ``nova`` command-line utility can return a URL for the VNC console for
access by the *nova* Java VNC client. This requires the *nova-xvpvncproxy*
service to bridge from the public network to the management network.
Security considerations
-----------------------
#. The *nova-novncproxy* and *nova-xvpvncproxy* services by default open
public-facing ports that are token authenticated.
#. By default, the remote desktop traffic is not encrypted. TLS can be enabled
to encrypt the VNC traffic. Please refer to :ref:`introduction-to-ssl-tls`
for appropriate recommendations.
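A minimal sketch of the related :file:`nova.conf` options on a compute node
(option names reflect Kilo-era defaults; the addresses and URL are
hypothetical):
.. code:: ini
[DEFAULT]
vnc_enabled = True
# Bind instance VNC servers to a management-network address
vncserver_listen = 172.16.0.10
vncserver_proxyclient_address = 172.16.0.10
# Public URL the noVNC proxy hands to clients; use HTTPS when TLS is enabled
novncproxy_base_url = https://cloud.example.com:6080/vnc_auto.html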
Bibliography
------------
#. blog.malchuk.ru, OpenStack VNC Security. 2013. `Secure Connections to VNC
ports <http://blog.malchuk.ru/2013/05/21/47>`__
#. OpenStack Mailing List, [OpenStack] nova-novnc SSL configuration - Havana.
2014.
`OpenStack nova-novnc SSL Configuration
<http://lists.openstack.org/pipermail/openstack/2014-February/005357.html>`__
#. Redhat.com/solutions, Using SSL Encryption with OpenStack nova-novncproxy.
2014.
`OpenStack nova-novncproxy SSL encryption <https://access.redhat.com/solutions/514143>`__
Simple Protocol for Independent Computing Environments (SPICE)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As an alternative to VNC, OpenStack provides remote desktop access to guest
virtual machines using the Simple Protocol for Independent Computing
Environments (SPICE) protocol.
Capabilities
------------
#. SPICE is supported by the OpenStack dashboard (horizon) directly on the
instance web page. This requires the *nova-spicehtml5proxy* service.
#. The ``nova`` command-line utility can return a URL for the SPICE console,
for access by a SPICE HTML5 client.
Limitations
-----------
#. Although SPICE has many advantages over VNC, the spice-html5 browser
integration currently does not allow administrators to take advantage of most
of these benefits. To take advantage of SPICE features like multi-monitor
support and USB pass through, administrators are advised to use a standalone
SPICE client within the management network.
Security considerations
-----------------------
#. The *nova-spicehtml5proxy* service by default opens public-facing ports that
are token authenticated.
#. The functionality and integration are still evolving. We will assess the
features in the next release and make recommendations.
#. As is the case for VNC, at this time we recommend using SPICE from the
management network, in addition to limiting use to a few individuals.
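A comparable sketch of the :file:`nova.conf` options for SPICE (again with
Kilo-era option names under the ``[spice]`` section; addresses and URL are
hypothetical):
.. code:: ini
[spice]
enabled = True
server_listen = 172.16.0.10
server_proxyclient_address = 172.16.0.10
html5proxy_base_url = https://cloud.example.com:6082/spice_auto.html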
Bibliography
------------
#. OpenStack Configuration Reference - Havana. SPICE Console. `SPICE Console
<http://docs.openstack.org/havana/config-reference/content/spice-console.html>`__.
#. bugzilla.redhat.com, Bug 913607 - RFE: Support Tunnelling SPICE over
websockets. 2013. `RedHat bug 913607 <https://bugzilla.redhat.com/show_bug.cgi?id=913607>`_.


@ -0,0 +1,505 @@
====================
Hypervisor selection
====================
Hypervisors in OpenStack
~~~~~~~~~~~~~~~~~~~~~~~~
Whether OpenStack is deployed within private data centers or as a public cloud
service, the underlying virtualization technology provides enterprise-level
capabilities in the realms of scalability, resource efficiency, and uptime.
While such high-level benefits are generally available across many
OpenStack-supported hypervisor technologies, there are significant differences
in the security architecture and features for each hypervisor, particularly
when considering the security threat vectors which are unique to elastic
OpenStack environments. As applications consolidate into single
Infrastructure-as-a-Service (IaaS) platforms, instance isolation at the
hypervisor level becomes paramount. The requirement for secure isolation holds
true across commercial, government, and military communities.
Within the OpenStack framework, you can choose among many hypervisor platforms
and corresponding OpenStack plug-ins to optimize your cloud environment. In the
context of this guide, hypervisor selection considerations are highlighted as
they pertain to feature sets that are critical to security. However, these
considerations are not meant to be an exhaustive investigation into the pros
and cons of particular hypervisors. NIST provides additional guidance in
Special Publication 800-125, "*Guide to Security for Full Virtualization
Technologies*".
Selection criteria
~~~~~~~~~~~~~~~~~~
As part of your hypervisor selection process, you must consider a number of
important factors to help increase your security posture. Specifically, you
must become familiar with these areas:
* Team expertise
* Product or project maturity
* Common criteria
* Certifications and attestations
* Hardware concerns
* Hypervisor vs. baremetal
* Additional security features
Additionally, we highly encourage you to evaluate the following
security-related criteria when selecting a hypervisor for OpenStack
deployments:
* Has the hypervisor undergone Common Criteria certification? If so, to what
levels?
* Is the underlying cryptography certified by a third-party?
Team expertise
~~~~~~~~~~~~~~
Likely the most important aspect in hypervisor selection is the expertise
of your staff in managing and maintaining a particular hypervisor platform. The
more familiar your team is with a given product, its configuration, and its
eccentricities, the fewer the configuration mistakes. Additionally, having
staff expertise spread across an organization on a given hypervisor increases
availability of your systems, allows segregation of duties, and mitigates
problems in the event that a team member is unavailable.
Product or project maturity
~~~~~~~~~~~~~~~~~~~~~~~~~~~
The maturity of a given hypervisor product or project is critical to your
security posture as well. Product maturity has a number of effects once you
have deployed your cloud:
* Availability of expertise
* Active developer and user communities
* Timeliness and availability of updates
* Incident response
One of the biggest indicators of a hypervisor's maturity is the size and
vibrancy of the community that surrounds it. As this concerns security, the
quality of the community affects the availability of expertise if you need
additional cloud operators. It is also a sign of how widely deployed the
hypervisor is, in turn leading to the battle readiness of any reference
architectures and best practices.
Further, the quality of community, as it surrounds an open source hypervisor
like KVM or Xen, has a direct impact on the timeliness of bug fixes and
security updates. When investigating both commercial and open source
hypervisors, you must look into their release and support cycles as well as
the time delta between the announcement of a bug or security issue and a patch
or response. Lastly, the supported capabilities of OpenStack Compute vary
depending on the hypervisor chosen. See the `OpenStack Hypervisor Support
Matrix <https://wiki.openstack.org/wiki/HypervisorSupportMatrix>`__ for
OpenStack Compute feature support by hypervisor.
Certifications and attestations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
One additional consideration when selecting a hypervisor is the availability of
various formal certifications and attestations. While they may not be
requirements for your specific organization, these certifications and
attestations speak to the maturity, production readiness, and thoroughness of
the testing a particular hypervisor platform has been subjected to.
Common criteria
~~~~~~~~~~~~~~~
Common Criteria is an internationally standardized software evaluation process,
used by governments and commercial companies to validate that software
technologies perform as advertised. In the government sector, NSTISSP No. 11
mandates that
U.S. Government agencies only procure software which has been Common Criteria
certified, a policy which has been in place since July 2002. It should be
specifically noted that OpenStack has not undergone Common Criteria
certification, however many of the available hypervisors have.
In addition to validating a technology's capabilities, the Common Criteria
process evaluates *how* technologies are developed.
* How is source code management performed?
* How are users granted access to build systems?
* Is the technology cryptographically signed before distribution?
The KVM hypervisor has been Common Criteria certified through the U.S.
Government and commercial distributions, which have been validated to separate
the runtime environment of virtual machines from each other, providing
foundational technology to enforce instance isolation. In addition to virtual
machine isolation, KVM has been Common Criteria certified to
"*provide system-inherent separation mechanisms to the resources of virtual
machines. This separation ensures that large software component used for
virtualizing and simulating devices executing for each virtual machine
cannot interfere with each other. Using the SELinux multi-category
mechanism, the virtualization and simulation software instances are
isolated. The virtual machine management framework configures SELinux
multi-category settings transparently to the administrator*"
While many hypervisor vendors, such as Red Hat, Microsoft, and VMware, have
achieved Common Criteria certification, their underlying certified feature sets
differ. It is recommended to evaluate vendor claims to ensure they minimally
satisfy the following requirements:
.. list-table::
:widths: 20 80
:header-rows: 1
* - Identification and Authentication
- Identification and authentication using pluggable authentication modules
(PAM) based upon user passwords. The quality of the passwords used can
be enforced through configuration options.
* - Audit
- The system provides the capability to audit a large number of events
including individual system calls as well as events generated by trusted
processes. Audit data is collected in regular files in ASCII format. The
system provides a program for the purpose of searching the audit records.
The system administrator can define a rule base to restrict auditing to
the events they are interested in. This includes the ability to restrict
auditing to specific events, specific users, specific objects or a
combination of all of this.
Audit records can be transferred to a remote audit daemon.
* - Discretionary Access Control
- :term:`DAC` restricts access to
file system objects based on :term:`ACLs <ACL>`
that include the standard UNIX permissions for user,
group, and others. Access control mechanisms also protect IPC objects
from unauthorized access.
The system includes the ext4 file system, which supports POSIX ACLs.
This allows defining access rights to files within this type of file
system down to the granularity of a single user.
* - Mandatory Access Control
- Mandatory Access Control (MAC) restricts access to objects based on
labels assigned to subjects and objects. Sensitivity labels are
automatically attached to processes and objects. The access control
policy enforced using these labels is derived from the
:term:`Bell-LaPadula model`.
SELinux categories are attached to virtual machines and their resources.
The access control policy enforced using these categories grants virtual
machines access to resources if the category of the virtual machine is
identical to the category of the accessed resource.
The TOE (target of evaluation) implements non-hierarchical categories to
control access to virtual machines.
* - Role-Based Access Control
- Role-based access control (RBAC) allows separation of roles to eliminate
the need for an all-powerful system administrator.
* - Object Reuse
- File system objects and memory and IPC objects are cleared before they
can be reused by a process belonging to a different user.
* - Security Management
- The management of the security critical parameters of the system is
performed by administrative users. A set of commands that require root
privileges (or specific roles when RBAC is used) are used for system
management. Security parameters are stored in specific files that are
protected by the access control mechanisms of the system against
unauthorized access by users that are not administrative users.
* - Secure Communication
- The system supports the definition of trusted channels using SSH.
Password based authentication is supported. Only a restricted number of
cipher suites are supported for those protocols in the evaluated
configuration.
* - Storage Encryption
- The system supports encrypted block devices to provide storage
confidentiality via dm_crypt.
* - TSF Protection
- While in operation, the kernel software and data are protected by the
hardware memory protection mechanisms. The memory and process management
components of the kernel ensure a user process cannot access kernel
storage or storage belonging to other processes.
Non-kernel TSF software and data are protected by DAC and process
isolation mechanisms. In the evaluated configuration, the reserved user
ID root owns the directories and files that define the TSF
configuration. In general, files and directories containing internal TSF
data, such as configuration files and batch job queues, are also
protected from reading by DAC permissions.
The system and the hardware and firmware components are required to be
physically protected from unauthorized access. The system kernel
mediates all access to the hardware mechanisms themselves, other than
program visible CPU instruction functions.
In addition, mechanisms for protection against stack overflow attacks
are provided.
Cryptography standards
~~~~~~~~~~~~~~~~~~~~~~
Several cryptography algorithms are available within OpenStack for
identification and authorization, data transfer, and protection of data at
rest. When selecting a hypervisor, we recommend ensuring it supports the
following algorithms and implementation standards:
.. list-table::
:header-rows: 1
:widths: 15 10 20 50 20
* - Algorithm
- Key length
- Intended purpose
- Security function
- Implementation standard
* - AES
- 128, 192, or 256 bits
- Encryption / decryption
- Protected data transfer, protection for data at rest
- `RFC 4253 <http://www.ietf.org/rfc/rfc4253.txt>`__
* - TDES
- 168 bits
- Encryption / decryption
- Protected data transfer
- `RFC 4253 <http://www.ietf.org/rfc/rfc4253.txt>`__
* - RSA
- 1024, 2048, or 3072 bits
- Authentication, key exchange
- Identification and authentication, protected data transfer
- `U.S. NIST FIPS PUB 186-3
<http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf>`__
* - DSA
- L=1024, N=160 bits
- Authentication, key exchange
- Identification and authentication, protected data transfer
- `U.S. NIST FIPS PUB 186-3
<http://csrc.nist.gov/publications/fips/fips186-3/fips_186-3.pdf>`__
* - Serpent
- 128, 192, or 256 bits
- Encryption / decryption
- Protection of data at rest
- `http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf
<http://www.cl.cam.ac.uk/~rja14/Papers/serpent.pdf>`__
* - Twofish
- 128, 192, or 256 bits
- Encryption / decryption
- Protection of data at rest
- `http://www.schneier.com/paper-twofish-paper.html
<http://www.schneier.com/paper-twofish-paper.html>`__
* - SHA-1
- -
- Message Digest
- Protection of data at rest, protected data transfer
- `U.S. NIST FIPS PUB 180-3
<http://csrc.nist.gov/publications/fips/fips180-3/fips180-3_final.pdf>`__
* - SHA-2 (224, 256, 384, or 512 bits)
- -
- Message Digest
- Protection for data at rest, identification and authentication
- `U.S. NIST FIPS PUB 180-3
<http://csrc.nist.gov/publications/fips/fips180-3/fips180-3_final.pdf>`__
FIPS 140-2
~~~~~~~~~~
In the United States, the National Institute of Standards and Technology (NIST)
certifies cryptographic algorithms through a process known as the Cryptographic
Module Validation Program (CMVP). NIST certifies algorithms for conformance
against Federal Information Processing Standard 140-2 (FIPS 140-2), which
ensures:
*Products validated as conforming to FIPS 140-2 are accepted by the Federal
agencies of both countries [United States and Canada] for the protection of
sensitive information (United States) or Designated Information (Canada).
The goal of the CMVP is to promote the use of validated cryptographic
modules and provide Federal agencies with a security metric to use in
procuring equipment containing validated cryptographic modules.*
When evaluating base hypervisor technologies, consider if the hypervisor has
been certified against FIPS 140-2. Not only is conformance against FIPS 140-2
mandated per U.S. Government policy, but formal certification also indicates that a
given implementation of a cryptographic algorithm has been reviewed for
conformance against module specification, cryptographic module ports and
interfaces; roles, services, and authentication; finite state model; physical
security; operational environment; cryptographic key management;
electromagnetic interference/electromagnetic compatibility (EMI/EMC);
self-tests; design assurance; and mitigation of other attacks.
Hardware concerns
~~~~~~~~~~~~~~~~~
Further, when you evaluate a hypervisor platform, consider the supportability
of the hardware on which the hypervisor will run. Additionally, consider the
additional features available in the hardware and how those features are
supported by the hypervisor you chose as part of the OpenStack deployment. To
that end, hypervisors each have their own hardware compatibility lists (HCLs).
When selecting compatible hardware it is important to know in advance which
hardware-based virtualization technologies are important from a security
perspective.
.. list-table::
:header-rows: 1
:widths: 20 20 20
* - Description
- Technology
- Explanation
* - I/O MMU
- VT-d / AMD-Vi
- Required for protecting PCI-passthrough
* - Intel Trusted Execution Technology
- Intel TXT / SEM
- Required for dynamic attestation services
* - PCI-SIG I/O virtualization
- SR-IOV, MR-IOV, ATS
- Required to allow secure sharing of PCI Express devices
* - Network virtualization
- VT-c
- Improves performance of network I/O on hypervisors
Hypervisor vs. baremetal
~~~~~~~~~~~~~~~~~~~~~~~~
It is important to recognize the difference between using LXC (Linux
Containers) or baremetal systems versus using a hypervisor like KVM.
the focus of this security guide is largely based on having a hypervisor and
virtualization platform. However, should your implementation require the use of
a baremetal or LXC environment, you must pay attention to the particular
differences in regard to deployment of that environment.
In particular, you must assure your end users that the node has been properly
sanitized of their data prior to re-provisioning. Additionally, prior to
reusing a node, you must provide assurances that the hardware has not been
tampered with or otherwise compromised.
.. note::
While OpenStack has a baremetal project, a discussion of the particular
security implications of running baremetal is beyond the scope of this book.
Finally, due to the time constraints around a book sprint, the team chose to
use KVM as the hypervisor in our example implementations and architectures.
.. note::
There is an OpenStack Security Note pertaining to the `Use of LXC in
Compute <https://bugs.launchpad.net/ossn/+bug/1098582>`__.
Hypervisor memory optimization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Many hypervisors use memory optimization techniques to overcommit memory to
guest virtual machines. This is a useful feature that allows you to deploy very
dense compute clusters. One way to achieve this is through de-duplication or
"sharing" of memory pages. When two virtual machines have identical data in
memory, there are advantages to having them reference the same memory.
Typically this is achieved through Copy-On-Write (COW) mechanisms. These
mechanisms have been shown to be vulnerable to side-channel attacks where one
VM can infer something about the state of another and might not be appropriate
for multi-tenant environments where not all tenants are trusted or share the
same levels of trust.
KVM Kernel Samepage Merging
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Introduced into the Linux kernel in version 2.6.32, Kernel Samepage Merging
(KSM) consolidates identical memory pages between Linux processes. As each
guest VM under the KVM hypervisor runs in its own process, KSM can be used to
optimize memory use between VMs.
Xen transparent page sharing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
XenServer 5.6 includes a memory overcommitment feature named Transparent Page
Sharing (TPS). TPS scans memory in 4 KB chunks for any duplicates. When found,
the Xen Virtual Machine Monitor (VMM) discards one of the duplicates and
records the reference of the second one.
Security considerations for memory optimization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Traditionally, memory de-duplication systems are vulnerable to side channel
attacks. Both KSM and TPS have been demonstrated to be vulnerable to some form
of attack. In academic studies, attackers were able to identify software
packages and versions running on neighboring virtual machines, as well as
software downloads and other sensitive information, by analyzing memory access
times on the attacker VM.
If a cloud deployment requires strong separation of tenants, as is the
situation with public clouds and some private clouds, deployers should consider
disabling TPS and KSM memory optimizations.
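For example, on a Linux/KVM host KSM can be disabled at runtime through sysfs
(a sketch; making the change persistent across reboots varies by
distribution):
.. code:: console
# Stop KSM and unmerge all currently shared pages
# echo 2 > /sys/kernel/mm/ksm/run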
Additional security features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Another thing to look into when selecting a hypervisor platform is the
availability of specific security features. In particular, we are referring to
features like the Xen Security Modules (XSM) in Xen, sVirt, Intel TXT, and
AppArmor. The presence of these features increases your security profile as
well as providing a good foundation.
The following table calls out these features by common hypervisor platforms.
.. list-table::
:header-rows: 1
* -
- XSM
- sVirt
- TXT
- AppArmor
- cgroups
- MAC Policy
* - KVM
-
- X
- X
- X
- X
- X
* - Xen
- X
-
- X
-
-
-
* - ESXi
-
-
- X
-
-
-
* - Hyper-V
-
-
-
-
-
-
MAC Policy: Mandatory Access Control; may be implemented with SELinux or other
operating system mechanisms.
\* Features in this table might not be applicable to all hypervisors or
directly mappable between hypervisors.
Bibliography
~~~~~~~~~~~~
* Sunar, Eisenbarth, Inci, Gorka Irazoqui Apecechea. Fine Grain Cross-VM
Attacks on Xen and VMware are possible!. 2014.
`https://eprint.iacr.org/2014/248.pdf
<https://eprint.iacr.org/2014/248.pdf>`__
* Artho, Yagi, Iijima, Kuniyasu Suzaki. Memory Deduplication as a Threat to
the Guest OS. 2011.
`https://staff.aist.go.jp/c.artho/papers/EuroSec2011-suzaki.pdf
<https://staff.aist.go.jp/c.artho/papers/EuroSec2011-suzaki.pdf>`__
* KVM: Kernel-based Virtual Machine. Kernel Samepage Merging. 2010.
`http://www.linux-kvm.org/page/KSM <http://www.linux-kvm.org/page/KSM>`__
* Xen Project, Xen Security Modules: XSM-FLASK. 2014.
`http://wiki.xen.org/wiki/Xen_Security_Modules_:_XSM-FLASK
<http://wiki.xen.org/wiki/Xen_Security_Modules_:_XSM-FLASK>`__
* SELinux Project, SVirt. 2011.
`http://selinuxproject.org/page/SVirt
<http://selinuxproject.org/page/SVirt>`__
* Intel.com, Trusted Compute Pools with Intel Trusted Execution Technology
(Intel TXT).
`http://www.intel.com/txt <http://www.intel.com/txt>`__
* AppArmor.net, AppArmor Main Page. 2011.
`http://wiki.apparmor.net/index.php/Main_Page
<http://wiki.apparmor.net/index.php/Main_Page>`__
* Kernel.org, CGroups. 2004.
`https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt
<https://www.kernel.org/doc/Documentation/cgroups/cgroups.txt>`__
* Computer Security Resource Centre. Guide to Security for Full Virtualization
Technologies. 2011.
`http://csrc.nist.gov/publications/nistpubs/800-125/SP800-125-final.pdf
<http://csrc.nist.gov/publications/nistpubs/800-125/SP800-125-final.pdf>`__
* National Information Assurance Partnership, National Security
Telecommunications and Information Systems Security Policy. 2003.
`http://www.niap-ccevs.org/cc-scheme/nstissp_11_revised_factsheet.pdf
<http://www.niap-ccevs.org/cc-scheme/nstissp_11_revised_factsheet.pdf>`__


@ -0,0 +1,62 @@
=======================
Vulnerability awareness
=======================
OpenStack vulnerability management team
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We recommend keeping up to date on security issues and advisories as they are
published. The `OpenStack Security Portal
<https://security.openstack.org/>`__ is the central portal where advisories,
notices, meetings, and processes can be coordinated. Additionally, the
`OpenStack Vulnerability Management Team (VMT) portal
<https://security.openstack.org/#openstack-vulnerability-management-team>`__
coordinates remediation within the OpenStack project, as well as the process of
investigating reported bugs which are responsibly disclosed (privately) to the
VMT by marking the bug as 'This bug is a security vulnerability'. Further
detail is outlined in the `VMT process page
<https://security.openstack.org/vmt-process.html#process>`__ and results in
an OpenStack Security Advisory (OSSA). The OSSA outlines the issue and the
fix, as well as linking to both the original bug and the location where the
patch is hosted.
OpenStack security notes
~~~~~~~~~~~~~~~~~~~~~~~~
Reported security bugs that are found to be the result of a misconfiguration,
or that are not strictly part of OpenStack, are drafted into OpenStack Security
Notes (OSSNs). These include configuration issues, such as ensuring Identity
provider mappings are set correctly, as well as non-OpenStack but critical
issues, such as the Bash bug (Shellshock), GHOST, or VENOM vulnerabilities,
that affect the platform OpenStack utilizes. The
current set of OSSNs is in the `Security Note wiki
<https://wiki.openstack.org/wiki/Security_Notes>`__.
OpenStack-dev mailing list
~~~~~~~~~~~~~~~~~~~~~~~~~~
All bugs, OSSAs, and OSSNs are publicly disseminated through the openstack-dev
mailing list with the [security] topic in the subject line. We recommend
subscribing to this list, as well as setting up mail filtering rules that
ensure OSSNs, OSSAs, and other important advisories are not missed. The
openstack-dev mailing list is managed through
`http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
<http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev>`__.
The openstack-dev list has a high traffic rate, and filtering is discussed in
the thread
`http://lists.openstack.org/pipermail/openstack-dev/2013-November/019233.html
<http://lists.openstack.org/pipermail/openstack-dev/2013-November/019233.html>`__.
Hypervisor mailing lists
~~~~~~~~~~~~~~~~~~~~~~~~
When implementing OpenStack, one of the core decisions is which hypervisor to
utilize. We recommend staying informed of advisories pertaining to the
hypervisor(s) you have chosen. Several common hypervisor security lists are
below:
Xen:
`http://xenbits.xen.org/xsa/ <http://xenbits.xen.org/xsa/>`__
VMWare:
`http://blogs.vmware.com/security/ <http://blogs.vmware.com/security/>`__
Others (KVM, and more):
`http://seclists.org/oss-sec <http://seclists.org/oss-sec>`__

Binary file added: figures/sVirt_Diagram_1.png (63 KiB, not shown).