Added keystone admin guides to documentation

Currently, the identity administrator guide docs are part of the
general openstack-manuals repository. This change migrates those docs
into the keystone documentation so that they can also be reviewed
effectively by keystone developers.

Partial-Bug #1694460
Depends-On: Ia750cb049c0f53a234ea70ce1f2bbbb7a2aa9454

Change-Id: Id121ae1dd5bce993b4ad1219b592527ef0047063
Samriddhi Jain 2017-05-31 20:01:23 +05:30
parent d6160630b0
commit aba9267323
17 changed files with 2545 additions and 0 deletions


@ -0,0 +1,74 @@
Authentication middleware with user name and password
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can also configure Identity authentication middleware using the
``admin_user`` and ``admin_password`` options.
.. note::
The ``admin_token`` option is deprecated and no longer used for
configuring auth_token middleware.
For services that have a separate paste-deploy ``.ini`` file, you can
configure the authentication middleware in the ``[keystone_authtoken]``
section of the main configuration file, such as ``nova.conf``. In
Compute, for example, you can remove the middleware parameters from
``api-paste.ini``, as follows:
.. code-block:: ini
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
Then set the following values in ``nova.conf``:
.. code-block:: ini
[DEFAULT]
# ...
auth_strategy=keystone
[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_user = admin
admin_password = SuperSekretPassword
admin_tenant_name = service
.. note::
The middleware parameters in the paste config take priority. You
must remove them to use the values in the ``[keystone_authtoken]``
section.
.. note::
Comment out any ``auth_host``, ``auth_port``, and
``auth_protocol`` options because the ``identity_uri`` option
replaces them.
This sample paste config filter makes use of the ``admin_user`` and
``admin_password`` options:
.. code-block:: ini
[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
auth_token = 012345SECRET99TOKEN012345
admin_user = admin
admin_password = keystone123
.. note::
Using this option requires an admin project/role relationship. The
admin user is granted access to the admin role on the admin project.
.. note::
Comment out any ``auth_host``, ``auth_port``, and
``auth_protocol`` options because the ``identity_uri`` option
replaces them.


@ -0,0 +1,128 @@
:orphan:
Caching layer
~~~~~~~~~~~~~
OpenStack Identity supports a caching layer that is above the
configurable subsystems (for example, token). OpenStack Identity uses the
`oslo.cache <https://docs.openstack.org/developer/oslo.cache/>`__
library which allows flexible cache back ends. The majority of the
caching configuration options are set in the ``[cache]`` section of the
``/etc/keystone/keystone.conf`` file. However, each section that has
the capability to be cached usually has a caching boolean value that
toggles caching.
To enable caching for only the token back end, set the values as follows:
.. code-block:: ini
[cache]
enabled=true
[catalog]
caching=false
[domain_config]
caching=false
[federation]
caching=false
[resource]
caching=false
[revoke]
caching=false
[role]
caching=false
[token]
caching=true
.. note::
Since the Newton release, subsystem caching and the global toggle are
enabled by default. As a result, all subsystems that support caching do
so by default.
Caching for tokens and token validation
----------------------------------------
All types of tokens benefit from caching, including Fernet tokens. Although
Fernet tokens do not need to be persisted, they should still be cached for
optimal token validation performance.
The token system has a separate ``cache_time`` configuration option
that can be set to a value above or below the global ``expiration_time``
default, allowing for different caching behavior from the other systems
in OpenStack Identity. This option is set in the ``[token]`` section of
the configuration file.
The token revocation list cache time is handled by the configuration
option ``revocation_cache_time`` in the ``[token]`` section. The
revocation list is refreshed whenever a token is revoked. It typically
sees significantly more requests than specific token retrievals or token
validation calls.
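For illustration, both options live in the ``[token]`` section; the values
below are arbitrary examples, not recommendations:
.. code-block:: ini
[token]
caching = true
cache_time = 300
revocation_cache_time = 3600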
Here is a list of actions that are affected by the cached time: getting
a new token, revoking tokens, validating tokens, checking v2 tokens, and
checking v3 tokens.
The delete token API calls invalidate the cache for the tokens being
acted upon, as well as invalidating the cache for the revoked token list
and the validate/check token calls.
Token caching is configurable independently of the ``revocation_list``
caching. Expiration checks have been lifted from the token drivers to the
token manager, which ensures that cached tokens still raise a
``TokenNotFound`` exception when they have expired.
For cache consistency, all token IDs are transformed into the short
token hash at the provider and token driver level. Some methods have
access to the full ID (PKI Tokens), and some methods do not. Cache
invalidation is inconsistent without token ID normalization.
Caching for non-token resources
-------------------------------
Various other keystone components have a separate ``cache_time`` configuration
option that can be set to a value above or below the global
``expiration_time`` default, allowing for different caching behavior
from the other systems in Identity service. This option can be set in various
sections (for example, ``[role]`` and ``[resource]``) of the configuration
file.
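As an illustration only (the option names are those described above; the
values are arbitrary examples), role and resource caching could be tuned as
follows:
.. code-block:: ini
[resource]
caching = true
cache_time = 300
[role]
caching = true
cache_time = 300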
The create, update, and delete actions for domains, projects and roles
will perform proper invalidations of the cached methods listed above.
For more information about the different back ends (and configuration
options), see:
- `dogpile.cache.memory <https://dogpilecache.readthedocs.io/en/latest/api.html#memory-backend>`__
.. note::
The memory back end is not suitable for use in a production
environment.
- `dogpile.cache.memcached <https://dogpilecache.readthedocs.io/en/latest/api.html#memcached-backends>`__
- `dogpile.cache.redis <https://dogpilecache.readthedocs.io/en/latest/api.html#redis-backends>`__
- `dogpile.cache.dbm <https://dogpilecache.readthedocs.io/en/latest/api.html#file-backends>`__
Configure the Memcached back end example
----------------------------------------
The following example shows how to configure the memcached back end:
.. code-block:: ini
[cache]
enabled = true
backend = dogpile.cache.memcached
backend_argument = url:127.0.0.1:11211
You need to specify the URL to reach the ``memcached`` instance with the
``backend_argument`` parameter.


@ -0,0 +1,237 @@
====================
Certificates for PKI
====================
PKI stands for Public Key Infrastructure. Tokens are documents
cryptographically signed using the X.509 standard. In order to work
correctly, token generation requires a public/private key pair. The
public key must be signed in an X.509 certificate, and the certificate
used to sign it must be available as a Certificate Authority (CA)
certificate. These files can be generated using the
:command:`keystone-manage` utility, or they can be generated externally. The
files need to be in the locations specified by the top-level Identity service
configuration file ``/etc/keystone/keystone.conf``, as described below.
Additionally, the private key should only be readable by
the system user that will run the Identity service.
.. warning::
The certificates can be world readable, but the private key cannot
be. The private key should only be readable by the account that is
going to sign tokens. When generating files with the
:command:`keystone-manage pki_setup` command, your best option is to run
as the pki user. If you run :command:`keystone-manage` as root, you can
append ``--keystone-user`` and ``--keystone-group`` parameters
to set the user name and group keystone is going to run under.
The values that specify where to read the certificates are under the
``[signing]`` section of the configuration file. The configuration
values are:
- ``certfile``
Location of certificate used to verify tokens. Default is
``/etc/keystone/ssl/certs/signing_cert.pem``.
- ``keyfile``
Location of private key used to sign tokens. Default is
``/etc/keystone/ssl/private/signing_key.pem``.
- ``ca_certs``
Location of certificate for the authority that issued
the above certificate. Default is
``/etc/keystone/ssl/certs/ca.pem``.
- ``ca_key``
Location of the private key used by the CA. Default is
``/etc/keystone/ssl/private/cakey.pem``.
- ``key_size``
Default is ``2048``.
- ``valid_days``
Default is ``3650``.
- ``cert_subject``
Certificate subject (auto-generated certificate) for token signing.
Default is ``/C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com``.
When generating certificates with the :command:`keystone-manage pki_setup`
command, the ``ca_key``, ``key_size``, and ``valid_days`` configuration
options are used.
If the :command:`keystone-manage pki_setup` command is not used to generate
certificates, or you are providing your own certificates, these values
do not need to be set.
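Putting the defaults listed above together, a ``[signing]`` section that uses
only the documented default values would look like this:
.. code-block:: ini
[signing]
certfile = /etc/keystone/ssl/certs/signing_cert.pem
keyfile = /etc/keystone/ssl/private/signing_key.pem
ca_certs = /etc/keystone/ssl/certs/ca.pem
ca_key = /etc/keystone/ssl/private/cakey.pem
key_size = 2048
valid_days = 3650
cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=www.example.com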
If ``provider=keystone.token.providers.uuid.Provider`` in the
``[token]`` section of the keystone configuration file, a typical token
looks like ``53f7f6ef0cc344b5be706bcc8b1479e1``. If
``provider=keystone.token.providers.pki.Provider``, a typical token is a
much longer string, such as::
MIIKtgYJKoZIhvcNAQcCoIIKpzCCCqMCAQExCTAHBgUrDgMCGjCCCY8GCSqGSIb3DQEHAaCCCYAEggl8eyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0wNS0z
MFQxNTo1MjowNi43MzMxOTgiLCAiZXhwaXJlcyI6ICIyMDEzLTA1LTMxVDE1OjUyOjA2WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogbnVs
bCwgImVuYWJsZWQiOiB0cnVlLCAiaWQiOiAiYzJjNTliNGQzZDI4NGQ4ZmEwOWYxNjljYjE4MDBlMDYiLCAibmFtZSI6ICJkZW1vIn19LCAic2VydmljZUNhdGFsb2ciOiBbeyJlbmRw
b2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6ODc3NC92Mi9jMmM1OWI0ZDNkMjg0ZDhmYTA5ZjE2OWNiMTgwMGUwNiIsICJyZWdpb24iOiAiUmVnaW9u
T25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4Nzc0L3YyL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2IiwgImlkIjogIjFmYjMzYmM5M2Y5
ODRhNGNhZTk3MmViNzcwOTgzZTJlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6ODc3NC92Mi9jMmM1OWI0ZDNkMjg0ZDhmYTA5ZjE2OWNiMTgwMGUwNiJ9XSwg
ImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJjb21wdXRlIiwgIm5hbWUiOiAibm92YSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3
LjEwMDozMzMzIiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjMzMzMiLCAiaWQiOiAiN2JjMThjYzk1NWFiNDNkYjhm
MGU2YWNlNDU4NjZmMzAiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDozMzMzIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogInMzIiwgIm5hbWUi
OiAiczMifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4xMDA6OTI5MiIsICJyZWdpb24iOiAiUmVnaW9uT25lIiwgImludGVybmFsVVJMIjog
Imh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo5MjkyIiwgImlkIjogIjczODQzNTJhNTQ0MjQ1NzVhM2NkOTVkN2E0YzNjZGY1IiwgInB1YmxpY1VSTCI6ICJodHRwOi8vMTkyLjE2OC4yNy4x
MDA6OTI5MiJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpbWFnZSIsICJuYW1lIjogImdsYW5jZSJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6
Ly8xOTIuMTY4LjI3LjEwMDo4Nzc2L3YxL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2IiwgInJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDov
LzE5Mi4xNjguMjcuMTAwOjg3NzYvdjEvYzJjNTliNGQzZDI4NGQ4ZmEwOWYxNjljYjE4MDBlMDYiLCAiaWQiOiAiMzQ3ZWQ2ZThjMjkxNGU1MGFlMmJiNjA2YWQxNDdjNTQiLCAicHVi
bGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4Nzc2L3YxL2MyYzU5YjRkM2QyODRkOGZhMDlmMTY5Y2IxODAwZTA2In1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBl
IjogInZvbHVtZSIsICJuYW1lIjogImNpbmRlciJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4NzczL3NlcnZpY2VzL0FkbWluIiwg
InJlZ2lvbiI6ICJSZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjg3NzMvc2VydmljZXMvQ2xvdWQiLCAiaWQiOiAiMmIwZGMyYjNlY2U4NGJj
YWE1NDAzMDMzNzI5YzY3MjIiLCAicHVibGljVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDo4NzczL3NlcnZpY2VzL0Nsb3VkIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0
eXBlIjogImVjMiIsICJuYW1lIjogImVjMiJ9LCB7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly8xOTIuMTY4LjI3LjEwMDozNTM1Ny92Mi4wIiwgInJlZ2lvbiI6ICJS
ZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjUwMDAvdjIuMCIsICJpZCI6ICJiNTY2Y2JlZjA2NjQ0ZmY2OWMyOTMxNzY2Yjc5MTIyOSIsICJw
dWJsaWNVUkwiOiAiaHR0cDovLzE5Mi4xNjguMjcuMTAwOjUwMDAvdjIuMCJ9XSwgImVuZHBvaW50c19saW5rcyI6IFtdLCAidHlwZSI6ICJpZGVudGl0eSIsICJuYW1lIjogImtleXN0
b25lIn1dLCAidXNlciI6IHsidXNlcm5hbWUiOiAiZGVtbyIsICJyb2xlc19saW5rcyI6IFtdLCAiaWQiOiAiZTVhMTM3NGE4YTRmNDI4NWIzYWQ3MzQ1MWU2MDY4YjEiLCAicm9sZXMi
OiBbeyJuYW1lIjogImFub3RoZXJyb2xlIn0sIHsibmFtZSI6ICJNZW1iZXIifV0sICJuYW1lIjogImRlbW8ifSwgIm1ldGFkYXRhIjogeyJpc19hZG1pbiI6IDAsICJyb2xlcyI6IFsi
YWRiODM3NDVkYzQzNGJhMzk5ODllNjBjOTIzYWZhMjgiLCAiMzM2ZTFiNjE1N2Y3NGFmZGJhNWUwYTYwMWUwNjM5MmYiXX19fTGB-zCB-AIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYD
VQQIEwVVbnNldDEOMAwGA1UEBxMFVW5zZXQxDjAMBgNVBAoTBVVuc2V0MRgwFgYDVQQDEw93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEgYCAHLpsEs2R
nouriuiCgFayIqCssK3SVdhOMINiuJtqv0sE-wBDFiEj-Prcudqlz-n+6q7VgV4mwMPszz39-rwp+P5l4AjrJasUm7FrO-4l02tPLaaZXU1gBQ1jUG5e5aL5jPDP08HbCWuX6wr-QQQB
SrWY8lF3HrTcJT23sZIleg==
Signing certificate issued by an external CA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can use a signing certificate issued by an external CA instead of one
generated by :command:`keystone-manage`. However, a certificate issued by an
external CA must satisfy the following conditions:
- All certificate and key files must be in Privacy Enhanced Mail (PEM)
format
- Private key files must not be protected by a password
When using a signing certificate issued by an external CA, you do not
need to specify ``key_size``, ``valid_days``, and ``ca_password`` as
they will be ignored.
The basic workflow for using a signing certificate issued by an external
CA involves:
#. Request a signing certificate from the external CA.
#. Convert the certificate and private key to PEM format, if needed.
#. Install the external signing certificate.
Request a signing certificate from an external CA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
One way to request a signing certificate from an external CA is to first
generate a PKCS #10 Certificate Request Syntax (CRS) using OpenSSL CLI.
Create a certificate request configuration file. For example, create the
``cert_req.conf`` file, as follows:
.. code-block:: ini
[ req ]
default_bits = 4096
default_keyfile = keystonekey.pem
default_md = sha256
prompt = no
distinguished_name = distinguished_name
[ distinguished_name ]
countryName = US
stateOrProvinceName = CA
localityName = Sunnyvale
organizationName = OpenStack
organizationalUnitName = Keystone
commonName = Keystone Signing
emailAddress = keystone@openstack.org
Then generate a CRS with OpenSSL CLI. **Do not encrypt the generated
private key. You must use the -nodes option.**
For example:
.. code-block:: console
$ openssl req -newkey rsa:1024 -keyout signing_key.pem -keyform PEM \
-out signing_cert_req.pem -outform PEM -config cert_req.conf -nodes
If everything is successful, you should end up with
``signing_cert_req.pem`` and ``signing_key.pem``. Send
``signing_cert_req.pem`` to your CA to request a token signing certificate
and make sure to ask for the certificate in PEM format. Also, make sure your
trusted CA certificate chain is in PEM format.
Install an external signing certificate
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Assuming you have the following already:
- ``signing_cert.pem``
(Keystone token) signing certificate in PEM format
- ``signing_key.pem``
Corresponding (non-encrypted) private key in PEM format
- ``cacert.pem``
Trust CA certificate chain in PEM format
Copy the above to your certificate directory. For example:
.. code-block:: console
# mkdir -p /etc/keystone/ssl/certs
# cp signing_cert.pem /etc/keystone/ssl/certs/
# cp signing_key.pem /etc/keystone/ssl/certs/
# cp cacert.pem /etc/keystone/ssl/certs/
# chmod -R 700 /etc/keystone/ssl/certs
.. note::
Make sure the certificate directory is only accessible by root.
.. note::
The procedure of copying the key and cert files may be improved if
done after first running :command:`keystone-manage pki_setup` since this
command also creates other needed files, such as the ``index.txt``
and ``serial`` files.
Also, when copying the necessary files to a different server for
replicating the functionality, the entire directory of files is
needed, not just the key and cert files.
If your certificate directory path is different from the default
``/etc/keystone/ssl/certs``, make sure it is reflected in the
``[signing]`` section of the configuration file.
Switching out expired signing certificates
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following procedure details how to switch out expired signing
certificates with no cloud outages.
#. Generate a new signing key.
#. Generate a new certificate request.
#. Sign the new certificate with the existing CA to generate a new
``signing_cert``.
#. Append the new ``signing_cert`` to the old ``signing_cert``. Ensure the
old certificate is in the file first.
#. Remove all signing certificates from all your hosts to force OpenStack
Compute to download the new ``signing_cert``.
#. Replace the old signing key with the new signing key. Move the new
signing certificate above the old certificate in the ``signing_cert``
file.
#. After the old certificate reads as expired, you can safely remove the
old signing certificate from the file.
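As a rough sketch of step 4 (the file names here are hypothetical), appending
the new certificate after the old one is a simple concatenation:
.. code-block:: console
$ cat old_signing_cert.pem new_signing_cert.pem > signing_cert.pem
For step 6, once the new signing key is in place, reverse the order of the
arguments so that the new certificate appears above the old one.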


@ -0,0 +1,354 @@
=================
Identity concepts
=================
Authentication
The process of confirming the identity of a user. To confirm an incoming
request, OpenStack Identity validates a set of credentials users
supply. Initially, these credentials are a user name and password, or a
user name and API key. When OpenStack Identity validates user credentials,
it issues an authentication token. Users provide the token in
subsequent requests.
Credentials
Data that confirms the identity of the user. For example, user
name and password, user name and API key, or an authentication
token that the Identity service provides.
Domain
An Identity service API v3 entity. Domains are a collection of
projects and users that define administrative boundaries for
managing Identity entities. Domains can represent an
individual, company, or operator-owned space. They expose
administrative activities directly to system users. Users can be
granted the administrator role for a domain. A domain
administrator can create projects, users, and groups in a domain
and assign roles to users and groups in a domain.
Endpoint
A network-accessible address, usually a URL, through which you can
access a service. If you are using an extension for templates, you
can create an endpoint template that represents the templates of
all consumable services that are available across the regions.
Group
An Identity service API v3 entity. Groups are a collection of
users owned by a domain. A group role, granted to a domain
or project, applies to all users in the group. Adding or removing
users to or from a group grants or revokes their role and
authentication to the associated domain or project.
OpenStackClient
A command-line interface for several OpenStack services including
the Identity API. For example, a user can run the
:command:`openstack service create` and
:command:`openstack endpoint create` commands to register services
in their OpenStack installation.
Project
A container that groups or isolates resources or identity objects.
Depending on the service operator, a project might map to a
customer, account, organization, or tenant.
Region
An Identity service API v3 entity. Represents a general division
in an OpenStack deployment. You can associate zero or more
sub-regions with a region to make a tree-like structured hierarchy.
Although a region does not have a geographical connotation, a
deployment can use a geographical name for a region, such as ``us-east``.
Role
A personality with a defined set of user rights and privileges to
perform a specific set of operations. The Identity service issues
a token to a user that includes a list of roles. When a user calls
a service, that service interprets the user role set, and
determines to which operations or resources each role grants
access.
Service
An OpenStack service, such as Compute (nova), Object Storage
(swift), or Image service (glance), that provides one or more
endpoints through which users can access resources and perform
operations.
Token
An alpha-numeric text string that enables access to OpenStack APIs
and resources. A token may be revoked at any time and is valid for
a finite duration. While OpenStack Identity supports token-based
authentication in this release, it intends to support additional
protocols in the future. OpenStack Identity is an integration
service that does not aspire to be a full-fledged identity store
and management solution.
User
A digital representation of a person, system, or service that uses
OpenStack cloud services. The Identity service validates that
incoming requests are made by the user who claims to be making the
call. Users have a login and can access resources by using
assigned tokens. Users can be directly assigned to a particular
project and behave as if they are contained in that project.
User management
~~~~~~~~~~~~~~~
Identity user management examples:
* Create a user named ``alice``:
.. code-block:: console
$ openstack user create --password-prompt --email alice@example.com alice
* Create a project named ``acme``:
.. code-block:: console
$ openstack project create acme --domain default
* Create a domain named ``emea``:
.. code-block:: console
$ openstack --os-identity-api-version=3 domain create emea
* Create a role named ``compute-user``:
.. code-block:: console
$ openstack role create compute-user
.. note::
Individual services assign meaning to roles, typically through
limiting or granting access to users with the role to the
operations that the service supports. Role access is typically
configured in the service's ``policy.json`` file. For example,
to limit Compute access to the ``compute-user`` role, edit the
Compute service's ``policy.json`` file to require this role for
Compute operations.
The Identity service assigns a project and a role to a user. You might
assign the ``compute-user`` role to the ``alice`` user in the ``acme``
project:
.. code-block:: console
$ openstack role add --project acme --user alice compute-user
A user can have different roles in different projects. For example, Alice
might also have the ``admin`` role in the ``Cyberdyne`` project. A user
can also have multiple roles in the same project.
The ``/etc/[SERVICE_CODENAME]/policy.json`` file controls the
tasks that users can perform for a given service. For example, the
``/etc/nova/policy.json`` file specifies the access policy for the
Compute service, the ``/etc/glance/policy.json`` file specifies
the access policy for the Image service, and the
``/etc/keystone/policy.json`` file specifies the access policy for
the Identity service.
The default ``policy.json`` files in the Compute, Identity, and
Image services recognize only the ``admin`` role. Any user with
any role in a project can access all operations that do not require the
``admin`` role.
To restrict users from performing operations in, for example, the
Compute service, you must create a role in the Identity service and
then modify the ``/etc/nova/policy.json`` file so that this role
is required for Compute operations.
For example, the following line in the ``/etc/cinder/policy.json``
file does not restrict which users can create volumes:
.. code-block:: none
"volume:create": "",
If the user has any role in a project, they can create volumes in that
project.
To restrict the creation of volumes to users who have the
``compute-user`` role in a particular project, you add ``"role:compute-user"``:
.. code-block:: none
"volume:create": "role:compute-user",
To restrict all Compute service requests to require this role, the
resulting file looks like:
.. code-block:: json
{
"admin_or_owner": "role:admin or project_id:%(project_id)s",
"default": "rule:admin_or_owner",
"compute:create": "role:compute-user",
"compute:create:attach_network": "role:compute-user",
"compute:create:attach_volume": "role:compute-user",
"compute:get_all": "role:compute-user",
"compute:unlock_override": "rule:admin_api",
"admin_api": "role:admin",
"compute_extension:accounts": "rule:admin_api",
"compute_extension:admin_actions": "rule:admin_api",
"compute_extension:admin_actions:pause": "rule:admin_or_owner",
"compute_extension:admin_actions:unpause": "rule:admin_or_owner",
"compute_extension:admin_actions:suspend": "rule:admin_or_owner",
"compute_extension:admin_actions:resume": "rule:admin_or_owner",
"compute_extension:admin_actions:lock": "rule:admin_or_owner",
"compute_extension:admin_actions:unlock": "rule:admin_or_owner",
"compute_extension:admin_actions:resetNetwork": "rule:admin_api",
"compute_extension:admin_actions:injectNetworkInfo": "rule:admin_api",
"compute_extension:admin_actions:createBackup": "rule:admin_or_owner",
"compute_extension:admin_actions:migrateLive": "rule:admin_api",
"compute_extension:admin_actions:migrate": "rule:admin_api",
"compute_extension:aggregates": "rule:admin_api",
"compute_extension:certificates": "role:compute-user",
"compute_extension:cloudpipe": "rule:admin_api",
"compute_extension:console_output": "role:compute-user",
"compute_extension:consoles": "role:compute-user",
"compute_extension:createserverext": "role:compute-user",
"compute_extension:deferred_delete": "role:compute-user",
"compute_extension:disk_config": "role:compute-user",
"compute_extension:evacuate": "rule:admin_api",
"compute_extension:extended_server_attributes": "rule:admin_api",
"compute_extension:extended_status": "role:compute-user",
"compute_extension:flavorextradata": "role:compute-user",
"compute_extension:flavorextraspecs": "role:compute-user",
"compute_extension:flavormanage": "rule:admin_api",
"compute_extension:floating_ip_dns": "role:compute-user",
"compute_extension:floating_ip_pools": "role:compute-user",
"compute_extension:floating_ips": "role:compute-user",
"compute_extension:hosts": "rule:admin_api",
"compute_extension:keypairs": "role:compute-user",
"compute_extension:multinic": "role:compute-user",
"compute_extension:networks": "rule:admin_api",
"compute_extension:quotas": "role:compute-user",
"compute_extension:rescue": "role:compute-user",
"compute_extension:security_groups": "role:compute-user",
"compute_extension:server_action_list": "rule:admin_api",
"compute_extension:server_diagnostics": "rule:admin_api",
"compute_extension:simple_tenant_usage:show": "rule:admin_or_owner",
"compute_extension:simple_tenant_usage:list": "rule:admin_api",
"compute_extension:users": "rule:admin_api",
"compute_extension:virtual_interfaces": "role:compute-user",
"compute_extension:virtual_storage_arrays": "role:compute-user",
"compute_extension:volumes": "role:compute-user",
"compute_extension:volume_attachments:index": "role:compute-user",
"compute_extension:volume_attachments:show": "role:compute-user",
"compute_extension:volume_attachments:create": "role:compute-user",
"compute_extension:volume_attachments:delete": "role:compute-user",
"compute_extension:volumetypes": "role:compute-user",
"volume:create": "role:compute-user",
"volume:get_all": "role:compute-user",
"volume:get_volume_metadata": "role:compute-user",
"volume:get_snapshot": "role:compute-user",
"volume:get_all_snapshots": "role:compute-user",
"network:get_all_networks": "role:compute-user",
"network:get_network": "role:compute-user",
"network:delete_network": "role:compute-user",
"network:disassociate_network": "role:compute-user",
"network:get_vifs_by_instance": "role:compute-user",
"network:allocate_for_instance": "role:compute-user",
"network:deallocate_for_instance": "role:compute-user",
"network:validate_networks": "role:compute-user",
"network:get_instance_uuids_by_ip_filter": "role:compute-user",
"network:get_floating_ip": "role:compute-user",
"network:get_floating_ip_pools": "role:compute-user",
"network:get_floating_ip_by_address": "role:compute-user",
"network:get_floating_ips_by_project": "role:compute-user",
"network:get_floating_ips_by_fixed_address": "role:compute-user",
"network:allocate_floating_ip": "role:compute-user",
"network:deallocate_floating_ip": "role:compute-user",
"network:associate_floating_ip": "role:compute-user",
"network:disassociate_floating_ip": "role:compute-user",
"network:get_fixed_ip": "role:compute-user",
"network:add_fixed_ip_to_instance": "role:compute-user",
"network:remove_fixed_ip_from_instance": "role:compute-user",
"network:add_network_to_project": "role:compute-user",
"network:get_instance_nw_info": "role:compute-user",
"network:get_dns_domains": "role:compute-user",
"network:add_dns_entry": "role:compute-user",
"network:modify_dns_entry": "role:compute-user",
"network:delete_dns_entry": "role:compute-user",
"network:get_dns_entries_by_address": "role:compute-user",
"network:get_dns_entries_by_name": "role:compute-user",
"network:create_private_dns_domain": "role:compute-user",
"network:create_public_dns_domain": "role:compute-user",
"network:delete_dns_domain": "role:compute-user"
}
Service management
~~~~~~~~~~~~~~~~~~
The Identity service provides identity, token, catalog, and policy
services. It consists of:
* keystone Web Server Gateway Interface (WSGI) service
Can be run in a WSGI-capable web server such as Apache httpd to provide
the Identity service. The service and administrative APIs are run as
separate instances of the WSGI service.
* Identity service functions
Each has a pluggable back end that allows different ways to use the
particular service. Most support standard back ends like LDAP or SQL.
* keystone-all
Starts both the service and administrative APIs in a single process.
Using federation with keystone-all is not supported. keystone-all is
deprecated in favor of the WSGI service and will be removed in the
Newton release.
The Identity service also maintains a user that corresponds to each
service, such as a user named ``nova`` for the Compute service, and a
special service project called ``service``.
For information about how to create services and endpoints, see the
`OpenStack Administrator Guide <https://docs.openstack.org/admin-guide/
cli-manage-services.html>`__.
Groups
~~~~~~
A group is a collection of users in a domain. Administrators can
create groups and add users to them. A role can then be assigned to
the group, rather than individual users. Groups were introduced with
the Identity API v3.
The Identity API v3 provides the following group-related operations:
* Create a group
* Delete a group
* Update a group (change its name or description)
* Add a user to a group
* Remove a user from a group
* List group members
* List groups for a user
* Assign a role on a project to a group
* Assign a role on a domain to a group
* Query role assignments to groups
.. note::
The Identity service server might not allow all operations. For
example, if you use the Identity server with the LDAP Identity
back end and group updates are disabled, a request to create,
delete, or update a group fails.
Here are a couple of examples:
* Group A is granted Role A on Project A. If User A is a member of Group
A, when User A gets a token scoped to Project A, the token also
includes Role A.
* Group B is granted Role B on Domain B. If User B is a member of
Group B, when User B gets a token scoped to Domain B, the token also
includes Role B.
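A short OpenStackClient sketch of the first example (``support``, ``acme``,
and ``Member`` are example names, not defaults):
.. code-block:: console
$ openstack --os-identity-api-version=3 group create support
$ openstack --os-identity-api-version=3 group add user support alice
$ openstack --os-identity-api-version=3 role add --group support --project acme Member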


@ -0,0 +1,69 @@
=============================
Domain-specific configuration
=============================
The Identity service supports domain-specific Identity drivers.
The drivers allow a domain to have its own LDAP or SQL back end.
By default, domain-specific drivers are disabled.
Domain-specific Identity configuration options can be stored in
domain-specific configuration files, or in the Identity SQL
database using API REST calls.
.. note::
Storing and managing configuration options in an SQL database was
experimental in Kilo, and was added to the Identity service in the
Liberty release.
Enable drivers for domain-specific configuration files
------------------------------------------------------
To enable domain-specific drivers, set these options in the
``/etc/keystone/keystone.conf`` file:
.. code-block:: ini
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains
When you enable domain-specific drivers, Identity looks in the
``domain_config_dir`` directory for configuration files that are named
``keystone.DOMAIN_NAME.conf``. A domain without a domain-specific
configuration file uses options in the primary configuration file.
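For example, a hypothetical ``emea`` domain backed by LDAP might have an
``/etc/keystone/domains/keystone.emea.conf`` file similar to this sketch:
.. code-block:: ini
[identity]
driver = ldap
[ldap]
url = ldap://emea-ldap.example.org
suffix = dc=example,dc=org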
Enable drivers for storing configuration options in SQL database
----------------------------------------------------------------
To enable domain-specific drivers and store configuration options in the
SQL database, set these options in the ``/etc/keystone/keystone.conf`` file:
.. code-block:: ini
[identity]
domain_specific_drivers_enabled = True
domain_configurations_from_database = True
Any domain-specific configuration options specified through the
Identity v3 API will override domain-specific configuration files in the
``/etc/keystone/domains`` directory.
Migrate domain-specific configuration files to the SQL database
---------------------------------------------------------------
You can use the ``keystone-manage`` command to migrate configuration
options in domain-specific configuration files to the SQL database:
.. code-block:: console
# keystone-manage domain_config_upload --all
To upload options from a specific domain-configuration file, specify the
domain name:
.. code-block:: console
# keystone-manage domain_config_upload --domain-name DOMAIN_NAME


@ -0,0 +1,41 @@
=====================================
External authentication with Identity
=====================================
When Identity runs in ``apache-httpd``, you can use external
authentication methods that differ from the authentication provided by
the identity store back end. For example, you can use an SQL identity
back end together with X.509 authentication and Kerberos, instead of
using the user name and password combination.
Use HTTPD authentication
~~~~~~~~~~~~~~~~~~~~~~~~
Web servers, like the Apache HTTP Server, support many methods of authentication.
Identity can allow the web server to perform the authentication. The web
server then passes the authenticated user to Identity by using the
``REMOTE_USER`` environment variable. This user must already exist in
the Identity back end to get a token from the controller. To use this
method, Identity should run on ``apache-httpd``.
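As a generic illustration only (not a keystone-specific requirement), any
Apache authentication module that sets ``REMOTE_USER`` can be used. The
following sketch assumes ``mod_auth_basic`` with a hypothetical htpasswd file:
.. code-block:: none
<VirtualHost _default_:5000>
<Location />
AuthType Basic
AuthName "keystone"
AuthBasicProvider file
AuthUserFile /etc/httpd/keystone.htpasswd
Require valid-user
</Location>
(...)
</VirtualHost>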
Use X.509
~~~~~~~~~
The following Apache configuration snippet authenticates the user based
on a valid X.509 certificate from a known CA:
.. code-block:: none
<VirtualHost _default_:5000>
SSLEngine on
SSLCertificateFile /etc/ssl/certs/ssl.cert
SSLCertificateKeyFile /etc/ssl/private/ssl.key
SSLCACertificatePath /etc/ssl/allowed_cas
SSLCARevocationPath /etc/ssl/allowed_cas
SSLUserName SSL_CLIENT_S_DN_CN
SSLVerifyClient require
SSLVerifyDepth 10
(...)
</VirtualHost>


@ -0,0 +1,345 @@
===================================
Fernet - Frequently Asked Questions
===================================
The following questions have been asked periodically since the initial release
of the fernet token format in Kilo.
What are the different types of keys?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
A key repository is required by keystone in order to create fernet tokens.
These keys are used to encrypt and decrypt the information that makes up the
payload of the token. Each key in the repository can have one of three states.
The state of the key determines how keystone uses a key with fernet tokens. The
different types are as follows:
Primary key:
There is only ever one primary key in a key repository. The primary key is
allowed to encrypt and decrypt tokens. This key is always named as the
highest index in the repository.
Secondary key:
A secondary key was at one point a primary key, but has been demoted in place
of another primary key. It is only allowed to decrypt tokens. Since it was
the primary at some point in time, its existence in the key repository is
justified. Keystone needs to be able to decrypt tokens that were created with
old primary keys.
Staged key:
The staged key is a special key that shares some similarities with secondary
keys. There can only ever be one staged key in a repository and it must
exist. Just like secondary keys, staged keys have the ability to decrypt
tokens. Unlike secondary keys, staged keys have never been a primary key. In
fact, they are opposites since the staged key will always be the next primary
key. This helps clarify the name because they are the next key staged to be
the primary key. This key is always named as ``0`` in the key repository.
So, how does a staged key help me and why do I care about it?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The fernet keys have a natural lifecycle. Each key starts as a staged key, is
promoted to be the primary key, and then demoted to be a secondary key. New
tokens can only be encrypted with a primary key. Secondary and staged keys are
never used to encrypt tokens. The staged key is a special key given the order of
events and the attributes of each type of key. The staged key is the only key
in the repository that has not had a chance to encrypt any tokens yet, but it
is still allowed to decrypt tokens. As an operator, this gives you the chance
to perform a key rotation on one keystone node, and distribute the new key set
over a span of time. This does not require the distribution to take place in an
ultra short period of time. Tokens encrypted with a primary key can be
decrypted, and validated, on other nodes where that key is still staged.
Where do I put my key repository?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The key repository is specified using the ``key_repository`` option in the
keystone configuration file. The keystone process should be able to read and
write to this location but it should be kept secret otherwise. Currently,
keystone only supports file-backed key repositories.
.. code-block:: ini
[fernet_tokens]
key_repository = /etc/keystone/fernet-keys/
What is the recommended way to rotate and distribute keys?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The :command:`keystone-manage` command line utility includes a key rotation
mechanism. This mechanism will initialize and rotate keys but does not make
an effort to distribute keys across keystone nodes. The distribution of keys
across a keystone deployment is best handled through configuration management
tooling. Use :command:`keystone-manage fernet_rotate` to rotate the key
repository.
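For example, assuming keystone runs as the ``keystone`` system user and group,
a rotation on the chosen node might look like:
.. code-block:: console
# keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone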
Do fernet tokens still expire?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Yes, fernet tokens can expire just like any other keystone token format.
Why should I choose fernet tokens over UUID tokens?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Even though fernet tokens operate very similarly to UUID tokens, they do not
require persistence. The keystone token database no longer suffers bloat as a
side effect of authentication. Pruning expired tokens from the token database
is no longer required when using fernet tokens. Because fernet tokens do not
require persistence, they do not have to be replicated. As long as each
keystone node shares the same key repository, fernet tokens can be created and
validated instantly across nodes.
Why should I choose fernet tokens over PKI or PKIZ tokens?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The arguments for using fernet over PKI and PKIZ remain the same as for UUID, in
addition to the fact that fernet tokens are much smaller than PKI and PKIZ
tokens. PKI and PKIZ tokens still require persistent storage and can sometimes
cause issues due to their size. This issue is mitigated when switching to
fernet because fernet tokens are kept under a 250 byte limit. PKI and PKIZ
tokens typically exceed 1600 bytes in length. The length of a PKI or PKIZ token
is dependent on the size of the deployment. Bigger service catalogs will result
in longer token lengths. This pattern does not exist with fernet tokens because
the contents of the encrypted payload are kept to a minimum.
Should I rotate and distribute keys from the same keystone node every rotation?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No, but the relationship between rotation and distribution should be lock-step.
Once you rotate keys on one keystone node, the key repository from that node
should be distributed to the rest of the cluster. Once you confirm that each
node has the same key repository state, you could rotate and distribute from
any other node in the cluster.
If the rotation and distribution are not lock-step, a single keystone node in
the deployment will create tokens with a primary key that no other node has as
a staged key. This will cause tokens generated from one keystone node to fail
validation on other keystone nodes.
How do I add new keystone nodes to a deployment?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The keys used to create fernet tokens should be treated like super secret
configuration files, similar to an SSL secret key. Before a node is allowed to
join an existing cluster, issuing and validating tokens, it should have the
same key repository as the rest of the nodes in the cluster.
How should I approach key distribution?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Remember that key distribution is only required in multi-node keystone
deployments. If you only have one keystone node serving requests in your
deployment, key distribution is unnecessary.
Key distribution is a problem best approached from the deployment's current
configuration management system. Since not all deployments use the same
configuration management systems, it makes sense to explore options around what
is already available for managing keys, while keeping the secrecy of the keys
in mind. Many configuration management tools can leverage something like
``rsync`` to manage key distribution.
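As a minimal sketch, assuming a hypothetical second node named ``keystone2``
and the default key repository path, a distribution step could be as simple as:
.. code-block:: console
# rsync -a --delete /etc/keystone/fernet-keys/ keystone2:/etc/keystone/fernet-keys/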
Key rotation is a single operation that promotes the current staged key to
primary, creates a new staged key, and prunes old secondary keys. It is easiest
to do this on a single node and verify the rotation took place properly before
distributing the key repository to the rest of the cluster. The concept behind
the staged key breaks the expectation that key rotation and key distribution
have to be done in a single step. With the staged key, we have time to inspect
the new key repository before syncing state with the rest of the cluster. Key
distribution should be an operation that can run in succession until it
succeeds. The following might help illustrate the isolation between key
rotation and key distribution.
#. Ensure all keystone nodes in the deployment have the same key repository.
#. Pick a keystone node in the cluster to rotate from.
#. Rotate keys.
#. Was it successful?
#. If no, investigate issues with the particular keystone node you
rotated keys on. Fernet keys are small and the operation for
rotation is trivial. There should not be much room for error in key
rotation. It is possible that the user does not have the ability to
write new keys to the key repository. Log output from
``keystone-manage fernet_rotate`` should give more information into
specific failures.
#. If yes, you should see a new staged key. The old staged key should
be the new primary. Depending on the ``max_active_keys`` limit you
might have secondary keys that were pruned. At this point, the node
that you rotated on will be creating fernet tokens with a primary
key that all other nodes should have as the staged key. This is why
we checked the state of all key repositories in Step one. All other
nodes in the cluster should be able to decrypt tokens created with
the new primary key. At this point, we are ready to distribute the
new key set.
#. Distribute the new key repository.
#. Was it successful?
#. If yes, you should be able to confirm that all nodes in the cluster
have the same key repository that was introduced in Step 3. All
nodes in the cluster will be creating tokens with the primary key
that was promoted in Step 3. No further action is required until the
next scheduled key rotation.
#. If no, try distributing again. Remember that we already rotated the
repository and performing another rotation at this point will
result in tokens that cannot be validated across certain hosts.
Specifically, the hosts that did not get the latest key set. You
should be able to distribute keys until it is successful. If certain
nodes have issues syncing, it could be permission or network issues
and those should be resolved before subsequent rotations.
How long should I keep my keys around?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The fernet tokens that keystone creates are only as secure as the keys creating
them. With staged keys the penalty of key rotation is low, allowing you to err
on the side of security and rotate weekly, daily, or even hourly. Ultimately,
this should be less time than it takes an attacker to break an ``AES256`` key
and a ``SHA256 HMAC``.
Is a fernet token still a bearer token?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Yes, and they follow exactly the same validation path as UUID tokens, with the
exception of being written to, and read from, a back end. If someone
compromises your fernet token, they have the power to do all the operations you
are allowed to do.
What if I need to revoke all my tokens?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To invalidate every token issued from keystone and start fresh, remove the
current key repository, create a new key set, and redistribute it to all nodes
in the cluster. This will render every token issued from keystone invalid,
regardless of whether the token has actually expired. When a client goes to
re-authenticate, the new token will have been created with a new fernet key.
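A sketch of that procedure on one node, assuming the default key repository
path and the ``keystone`` system user and group (redistribute the repository
to the other nodes afterwards):
.. code-block:: console
# rm -rf /etc/keystone/fernet-keys/
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone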
What can an attacker do if they compromise a fernet key in my deployment?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If any key used in the key repository is compromised, an attacker will be able
to build their own tokens. If they know the ID of an administrator on a
project, they could generate administrator tokens for the project. They will be
able to generate their own tokens until the compromised key has been removed
from the repository.
I rotated keys and now tokens are invalidating early, what did I do?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Using fernet tokens requires some awareness around token expiration and the key
lifecycle. You do not want to rotate so often that secondary keys are removed
that might still be needed to decrypt unexpired tokens. If this happens, you
will not be able to decrypt the token because the key that was used to encrypt
it is now gone. Only remove keys that you know are not being used to encrypt or
decrypt tokens.
For example, your token is valid for 24 hours and we want to rotate keys every
six hours. We will need to make sure tokens that were created at 08:00 AM on
Monday are still valid at 07:00 AM on Tuesday, assuming they were not
prematurely revoked. To accomplish this, we will want to make sure we set
``max_active_keys=6`` in our keystone configuration file. This will allow us to
hold all keys that might still be required to validate a previous token, but
keeps the key repository limited to only the keys that are needed.
The number of ``max_active_keys`` for a deployment can be determined by
dividing the token lifetime, in hours, by the frequency of rotation in hours
and adding two. Better illustrated as::
token_expiration = 24
rotation_frequency = 6
max_active_keys = (token_expiration / rotation_frequency) + 2
The reason for adding two additional keys to the count is to include the staged
key and a buffer key. This can be shown based on the previous example. We
initially set up the key repository at 6:00 AM on Monday, and the initial state
looks like:
.. code-block:: console
$ ls -la /etc/keystone/fernet-keys/
drwx------ 2 keystone keystone 4096 .
drwxr-xr-x 3 keystone keystone 4096 ..
-rw------- 1 keystone keystone 44 0 (staged key)
-rw------- 1 keystone keystone 44 1 (primary key)
All tokens created after 6:00 AM are encrypted with key ``1``. At 12:00 PM we
will rotate keys again, resulting in:
.. code-block:: console
$ ls -la /etc/keystone/fernet-keys/
drwx------ 2 keystone keystone 4096 .
drwxr-xr-x 3 keystone keystone 4096 ..
-rw------- 1 keystone keystone 44 0 (staged key)
-rw------- 1 keystone keystone 44 1 (secondary key)
-rw------- 1 keystone keystone 44 2 (primary key)
We are still able to validate tokens created between 6:00 - 11:59 AM because
the ``1`` key still exists as a secondary key. All tokens issued after 12:00 PM
will be encrypted with key ``2``. At 6:00 PM we do our next rotation, resulting
in:
.. code-block:: console
$ ls -la /etc/keystone/fernet-keys/
drwx------ 2 keystone keystone 4096 .
drwxr-xr-x 3 keystone keystone 4096 ..
-rw------- 1 keystone keystone 44 0 (staged key)
-rw------- 1 keystone keystone 44 1 (secondary key)
-rw------- 1 keystone keystone 44 2 (secondary key)
-rw------- 1 keystone keystone 44 3 (primary key)
It is still possible to validate tokens issued from 6:00 AM - 5:59 PM because
keys ``1`` and ``2`` exist as secondary keys. Every token issued until 11:59 PM
will be encrypted with key ``3``, and at 12:00 AM we do our next rotation:
.. code-block:: console
$ ls -la /etc/keystone/fernet-keys/
drwx------ 2 keystone keystone 4096 .
drwxr-xr-x 3 keystone keystone 4096 ..
-rw------- 1 keystone keystone 44 0 (staged key)
-rw------- 1 keystone keystone 44 1 (secondary key)
-rw------- 1 keystone keystone 44 2 (secondary key)
-rw------- 1 keystone keystone 44 3 (secondary key)
-rw------- 1 keystone keystone 44 4 (primary key)
Just like before, we can still validate tokens issued from 6:00 AM the previous
day until 5:59 AM today because keys ``1`` - ``4`` are present. At 6:00 AM,
tokens issued from the previous day will start to expire and we do our next
scheduled rotation:
.. code-block:: console
$ ls -la /etc/keystone/fernet-keys/
drwx------ 2 keystone keystone 4096 .
drwxr-xr-x 3 keystone keystone 4096 ..
-rw------- 1 keystone keystone 44 0 (staged key)
-rw------- 1 keystone keystone 44 1 (secondary key)
-rw------- 1 keystone keystone 44 2 (secondary key)
-rw------- 1 keystone keystone 44 3 (secondary key)
-rw------- 1 keystone keystone 44 4 (secondary key)
-rw------- 1 keystone keystone 44 5 (primary key)
Tokens will naturally expire after 6:00 AM, but we will not be able to remove
key ``1`` until the next rotation because it encrypted all tokens from 6:00 AM
to 12:00 PM the day before. Once we do our next rotation, which is at 12:00 PM,
the ``1`` key will be pruned from the repository:
.. code-block:: console
$ ls -la /etc/keystone/fernet-keys/
drwx------ 2 keystone keystone 4096 .
drwxr-xr-x 3 keystone keystone 4096 ..
-rw------- 1 keystone keystone 44 0 (staged key)
-rw------- 1 keystone keystone 44 2 (secondary key)
-rw------- 1 keystone keystone 44 3 (secondary key)
-rw------- 1 keystone keystone 44 4 (secondary key)
-rw------- 1 keystone keystone 44 5 (secondary key)
-rw------- 1 keystone keystone 44 6 (primary key)
If keystone were to receive a token that was created between 6:00 AM and 12:00
PM the day before, encrypted with the ``1`` key, it would not be valid because
it was already expired. This makes it possible for us to remove the ``1`` key
from the repository without negative validation side-effects.


@ -0,0 +1,453 @@
.. _integrate-identity-with-ldap:
============================
Integrate Identity with LDAP
============================
The OpenStack Identity service supports integration with existing LDAP
directories for authentication and authorization services. LDAP back
ends require initialization before configuring the OpenStack Identity
service to work with them. For more information, see `Setting up LDAP
for use with Keystone <https://wiki.openstack.org/wiki/OpenLDAP>`__.
When the OpenStack Identity service is configured to use LDAP back ends,
you can split authentication (using the *identity* feature) and
authorization (using the *assignment* feature).
The *identity* feature enables administrators to manage users and groups
by each domain or for the OpenStack Identity service as a whole.
The *assignment* feature enables administrators to manage project role
authorization using the OpenStack Identity service SQL database, while
providing user authentication through the LDAP directory.
.. _identity_ldap_server_setup:
Identity LDAP server set up
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. important::
For the OpenStack Identity service to access LDAP servers, you must
enable the ``authlogin_nsswitch_use_ldap`` boolean value for SELinux
on the server running the OpenStack Identity service. To enable and
make the option persistent across reboots, set the following boolean
value as the root user:
.. code-block:: console
# setsebool -P authlogin_nsswitch_use_ldap on
The Identity configuration is split into two separate back ends: identity
(back end for users and groups), and assignment (back end for domains,
projects, roles, and role assignments). To configure Identity, set options
in the ``/etc/keystone/keystone.conf`` file. See
:ref:`integrate-identity-backend-ldap` for Identity back end configuration
examples. Modify these examples as needed.
**To define the destination LDAP server**
#. Define the destination LDAP server in the
``/etc/keystone/keystone.conf`` file:
.. code-block:: ini
[ldap]
url = ldap://localhost
user = cn=Manager,dc=example,dc=org
password = samplepassword
suffix = dc=example,dc=org
**Additional LDAP integration settings**
Set these options in the ``/etc/keystone/keystone.conf`` file for a
single LDAP server, or ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf``
files for multiple back ends. Example configurations appear below each
setting summary:
**Query option**
.. hlist::
:columns: 1
* Use ``query_scope`` to control the scope level of data presented
through LDAP (search only the first level or search an entire
sub-tree).
* Use ``page_size`` to control the maximum results per page. A value
of zero disables paging.
* Use ``alias_dereferencing`` to control the LDAP dereferencing
option for queries.
.. code-block:: ini
[ldap]
query_scope = sub
page_size = 0
alias_dereferencing = default
chase_referrals =
**Debug**
Use ``debug_level`` to set the LDAP debugging level for LDAP calls.
A value of zero means that debugging is not enabled.
.. code-block:: ini
[ldap]
debug_level = 0
.. warning::
This value is a bitmask, consult your LDAP documentation for
possible values.
**Connection pooling**
Use ``use_pool`` to enable LDAP connection pooling. Configure the
connection pool size, maximum retry, reconnect trials, timeout (-1
indicates indefinite wait) and lifetime in seconds.
.. code-block:: ini
[ldap]
use_pool = true
pool_size = 10
pool_retry_max = 3
pool_retry_delay = 0.1
pool_connection_timeout = -1
pool_connection_lifetime = 600
**Connection pooling for end user authentication**
Use ``use_auth_pool`` to enable LDAP connection pooling for end user
authentication. Configure the connection pool size and lifetime in
seconds.
.. code-block:: ini
[ldap]
use_auth_pool = false
auth_pool_size = 100
auth_pool_connection_lifetime = 60
When you have finished the configuration, restart the OpenStack Identity
service.
.. warning::
During the service restart, authentication and authorization are
unavailable.
.. _integrate-identity-backend-ldap:
Integrate Identity back end with LDAP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Identity back end contains information for users, groups, and group
member lists. Integrating the Identity back end with LDAP allows
administrators to use users and groups in LDAP.
.. important::
For the OpenStack Identity service to access LDAP servers, you must
define the destination LDAP server in the
``/etc/keystone/keystone.conf`` file. For more information,
see :ref:`identity_ldap_server_setup`.
**To integrate one Identity back end with LDAP**
#. Enable the LDAP Identity driver in the ``/etc/keystone/keystone.conf``
file. This allows LDAP as an identity back end:
.. code-block:: ini
[identity]
#driver = sql
driver = ldap
#. Create the organizational units (OU) in the LDAP directory, and define
the corresponding location in the ``/etc/keystone/keystone.conf``
file:
.. code-block:: ini
[ldap]
user_tree_dn = ou=Users,dc=example,dc=org
user_objectclass = inetOrgPerson
group_tree_dn = ou=Groups,dc=example,dc=org
group_objectclass = groupOfNames
.. note::
These schema attributes are extensible for compatibility with
various schemas. For example, this entry maps to the person
attribute in Active Directory:
.. code-block:: ini
user_objectclass = person
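If the organizational units do not already exist, you can create them with
standard LDAP tooling. A minimal sketch using ``ldapadd``, assuming the
example bind DN and suffix shown earlier, follows:
.. code-block:: console
$ ldapadd -x -D "dc=Manager,dc=example,dc=org" -W <<EOF
dn: ou=Users,dc=example,dc=org
objectClass: organizationalUnit
ou: Users

dn: ou=Groups,dc=example,dc=org
objectClass: organizationalUnit
ou: Groups
EOF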
#. A read-only implementation is recommended for LDAP integration. These
permissions are applied to object types in the
``/etc/keystone/keystone.conf`` file:
.. code-block:: ini
[ldap]
user_allow_create = False
user_allow_update = False
user_allow_delete = False
group_allow_create = False
group_allow_update = False
group_allow_delete = False
Restart the OpenStack Identity service.
.. warning::
During service restart, authentication and authorization are
unavailable.
**To integrate multiple Identity back ends with LDAP**
#. Set the following options in the ``/etc/keystone/keystone.conf``
file:
#. Enable the LDAP driver:
.. code-block:: ini
[identity]
#driver = sql
driver = ldap
#. Enable domain-specific drivers:
.. code-block:: ini
[identity]
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains
#. Restart the OpenStack Identity service.
.. warning::
During service restart, authentication and authorization are
unavailable.
#. List the domains using the dashboard, or the OpenStackClient CLI. Refer
to the `Command List
<https://docs.openstack.org/developer/python-openstackclient/command-list.html>`__
for a list of OpenStackClient commands.
#. Create domains using OpenStack dashboard, or the OpenStackClient CLI.
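For example, assuming admin credentials are loaded in the environment, you
could list the existing domains and create a new one (the domain name below
is only a placeholder):
.. code-block:: console
$ openstack domain list
$ openstack domain create --description "Domain backed by LDAP" mydomain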
#. For each domain, create a domain-specific configuration file in the
``/etc/keystone/domains`` directory. Use the file naming convention
``keystone.DOMAIN_NAME.conf``, where DOMAIN\_NAME is the domain name
assigned in the previous step.
.. note::
The options set in the
``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` file will
override options in the ``/etc/keystone/keystone.conf`` file.
#. Define the destination LDAP server in the
``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` file. For example:
.. code-block:: ini
[ldap]
url = ldap://localhost
user = dc=Manager,dc=example,dc=org
password = samplepassword
suffix = dc=example,dc=org
#. Create the organizational units (OU) in the LDAP directories, and define
their corresponding locations in the
``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` file. For example:
.. code-block:: ini
[ldap]
user_tree_dn = ou=Users,dc=example,dc=org
user_objectclass = inetOrgPerson
group_tree_dn = ou=Groups,dc=example,dc=org
group_objectclass = groupOfNames
.. note::
These schema attributes are extensible for compatibility with
various schemas. For example, this entry maps to the person
attribute in Active Directory:
.. code-block:: ini
user_objectclass = person
#. A read-only implementation is recommended for LDAP integration. These
permissions are applied to object types in the
``/etc/keystone/domains/keystone.DOMAIN_NAME.conf`` file:
.. code-block:: ini
[ldap]
user_allow_create = False
user_allow_update = False
user_allow_delete = False
group_allow_create = False
group_allow_update = False
group_allow_delete = False
#. Restart the OpenStack Identity service.
.. warning::
During service restart, authentication and authorization are
unavailable.
**Additional LDAP integration settings**
Set these options in the ``/etc/keystone/keystone.conf`` file for a
single LDAP server, or ``/etc/keystone/domains/keystone.DOMAIN_NAME.conf``
files for multiple back ends. Example configurations appear below each
setting summary:
Filters
Use filters to control the scope of data presented through LDAP.
.. code-block:: ini
[ldap]
user_filter = (memberof=cn=openstack-users,ou=workgroups,dc=example,dc=org)
group_filter =
Identity attribute mapping
Mask account status values (and include any additional attribute
mappings) for compatibility with various directory services.
Superfluous accounts are filtered with ``user_filter``.
Set the attribute ignore options (``user_attribute_ignore`` and
``group_attribute_ignore``) to a list of attributes that are stripped
off the user or group on update.
For example, you can mask Active Directory account status attributes
in the ``/etc/keystone/keystone.conf`` file:
.. code-block:: ini
[ldap]
user_id_attribute = cn
user_name_attribute = sn
user_mail_attribute = mail
user_pass_attribute = userPassword
user_enabled_attribute = userAccountControl
user_enabled_mask = 2
user_enabled_invert = false
user_enabled_default = 512
user_default_project_id_attribute =
user_additional_attribute_mapping =
group_id_attribute = cn
group_name_attribute = ou
group_member_attribute = member
group_desc_attribute = description
group_additional_attribute_mapping =
Enabled emulation
An alternative way to determine whether a user is enabled is to check
whether that user is a member of a designated emulation group.
When you use enabled emulation, set ``user_enabled_emulation_dn`` to the
DN of the group entry that holds the enabled users.
.. code-block:: ini
[ldap]
user_enabled_emulation = false
user_enabled_emulation_dn = false
When you have finished configuration, restart the OpenStack Identity
service.
.. warning::
During service restart, authentication and authorization are
unavailable.
Secure the OpenStack Identity service connection to an LDAP back end
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Identity service supports the use of TLS to encrypt LDAP traffic.
Before configuring this, you must first verify where your certificate
authority file is located. For more information, see the
`OpenStack Security Guide SSL introduction <https://docs.openstack.org/
security-guide/secure-communication/introduction-to-ssl-and-tls.html>`_.
Once you verify the location of your certificate authority file:
**To configure TLS encryption on LDAP traffic**
#. Open the ``/etc/keystone/keystone.conf`` configuration file.
#. Find the ``[ldap]`` section.
#. In the ``[ldap]`` section, set the ``use_tls`` configuration key to
``True``. Doing so will enable TLS.
#. Configure the Identity service to use your certificate authorities file.
To do so, set the ``tls_cacertfile`` configuration key in the ``ldap``
section to the certificate authorities file's path.
.. note::
You can also set the ``tls_cacertdir`` (also in the ``ldap``
section) to the directory where all certificate authorities files
are kept. If both ``tls_cacertfile`` and ``tls_cacertdir`` are set,
then the latter will be ignored.
#. Specify what client certificate checks to perform on incoming TLS
sessions from the LDAP server. To do so, set the ``tls_req_cert``
configuration key in the ``[ldap]`` section to ``demand``, ``allow``, or
``never``:
.. hlist::
:columns: 1
* ``demand`` - The LDAP server always receives certificate
requests. The session terminates if no certificate
is provided, or if the certificate provided cannot be verified
against the existing certificate authorities file.
* ``allow`` - The LDAP server always receives certificate
requests. The session will proceed as normal even if a certificate
is not provided. If a certificate is provided but it cannot be
verified against the existing certificate authorities file, the
certificate will be ignored and the session will proceed as
normal.
* ``never`` - A certificate will never be requested.
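Taken together, the resulting ``[ldap]`` options might look like the
following sketch (the certificate path is only an example):
.. code-block:: ini
[ldap]
use_tls = True
tls_cacertfile = /etc/ssl/certs/ca-bundle.pem
tls_req_cert = demand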
On distributions that include openstack-config, you can configure TLS
encryption on LDAP traffic by running the following commands instead.
.. code-block:: console
# openstack-config --set /etc/keystone/keystone.conf \
ldap use_tls True
# openstack-config --set /etc/keystone/keystone.conf \
ldap tls_cacertfile ``CA_FILE``
# openstack-config --set /etc/keystone/keystone.conf \
ldap tls_req_cert ``CERT_BEHAVIOR``
Where:
- ``CA_FILE`` is the absolute path to the certificate authorities file
that should be used to encrypt LDAP traffic.
- ``CERT_BEHAVIOR`` specifies what client certificate checks to perform
on an incoming TLS session from the LDAP server (``demand``,
``allow``, or ``never``).

@@ -0,0 +1,83 @@
Example usage and Identity features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``openstack`` CLI is used to interact with the Identity service.
It is set up to expect commands in the general
form of ``openstack command argument``, followed by flag-like keyword
arguments to provide additional (often optional) information. For
example, the :command:`openstack user list` and
:command:`openstack project create` commands can be invoked as follows:
.. code-block:: bash
# Using token auth env variables
export OS_SERVICE_ENDPOINT=http://127.0.0.1:5000/v2.0/
export OS_SERVICE_TOKEN=secrete_token
openstack user list
openstack project create demo --domain default
# Using token auth flags
openstack --os-token secrete --os-endpoint http://127.0.0.1:5000/v2.0/ user list
openstack --os-token secrete --os-endpoint http://127.0.0.1:5000/v2.0/ project create demo
# Using user + password + project_name env variables
export OS_USERNAME=admin
export OS_PASSWORD=secrete
export OS_PROJECT_NAME=admin
openstack user list
openstack project create demo --domain default
# Using user + password + project-name flags
openstack --os-username admin --os-password secrete --os-project-name admin user list
openstack --os-username admin --os-password secrete --os-project-name admin project create demo
Logging
-------
You configure logging externally to the rest of Identity. The name of
the file specifying the logging configuration is set using the
``log_config`` option in the ``[DEFAULT]`` section of the
``/etc/keystone/keystone.conf`` file. To route logging through syslog,
set ``use_syslog=true`` in the ``[DEFAULT]`` section.
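For example, a ``[DEFAULT]`` section that points at an external logging
configuration file and also routes messages to syslog might look like this
(the path is illustrative):
.. code-block:: ini
[DEFAULT]
log_config = /etc/keystone/logging.conf
use_syslog = true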
A sample logging configuration file is available with the project in
``etc/logging.conf.sample``. Like other OpenStack projects, Identity
uses the Python logging module, which provides extensive configuration
options that let you define the output levels and formats.
User CRUD
---------
Identity provides a user CRUD (Create, Read, Update, and Delete) filter that
Administrators can add to the ``public_api`` pipeline. The user CRUD filter
enables users to use an HTTP PATCH request to change their own password. To enable
this extension you should define a ``user_crud_extension`` filter, insert
it after the ``*_body`` middleware and before the ``public_service``
application in the ``public_api`` WSGI pipeline in
``keystone-paste.ini``. For example:
.. code-block:: ini
[filter:user_crud_extension]
paste.filter_factory = keystone.contrib.user_crud:CrudExtension.factory
[pipeline:public_api]
pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service
Each user can then change their own password with an HTTP PATCH request.
.. code-block:: console
$ curl -X PATCH http://localhost:5000/v2.0/OS-KSCRUD/users/USERID -H "Content-type: application/json" \
-H "X_Auth_Token: AUTHTOKENID" -d '{"user": {"password": "ABCD", "original_password": "DCBA"}}'
When a user changes their own password in this way, all of the user's
current tokens are also invalidated.
.. note::
Only use a KVS back end for tokens when testing.

@@ -0,0 +1,31 @@
.. _identity_management:
====================
Administrator Guides
====================
OpenStack Identity, code-named keystone, is the default Identity
management system for OpenStack. After you install Identity, you
configure it through the ``/etc/keystone/keystone.conf``
configuration file and, possibly, a separate logging configuration
file. You initialize data into Identity by using the ``keystone``
command-line client.
.. toctree::
:maxdepth: 1
identity-concepts.rst
identity-certificates-for-pki.rst
identity-domain-specific-config.rst
identity-external-authentication.rst
identity-integrate-with-ldap.rst
identity-tokens.rst
identity-token-binding.rst
identity-fernet-token-faq.rst
identity-use-trusts.rst
identity-caching-layer.rst
identity-security-compliance.rst
identity-keystone-usage-and-features.rst
identity-auth-token-middleware.rst
identity-service-api-protection.rst
identity-troubleshoot.rst

@@ -0,0 +1,167 @@
.. _identity_security_compliance:
===============================
Security compliance and PCI-DSS
===============================
As of the Newton release, the Identity service contains additional security
compliance features, specifically to satisfy Payment Card Industry -
Data Security Standard (PCI-DSS) v3.1 requirements. See
`Security Hardening PCI-DSS`_ for more information on PCI-DSS.
Security compliance features are disabled by default and most of the features
only apply to the SQL backend for the identity driver. Other identity backends,
such as LDAP, should implement their own security controls.
Enable these features by changing the configuration settings under the
``[security_compliance]`` section in ``keystone.conf``.
Setting the account lockout threshold
-------------------------------------
The account lockout feature limits the number of incorrect password attempts.
If a user fails to authenticate after the maximum number of attempts, the
service disables the user. Re-enable the user by explicitly setting the
enable user attribute with the update user API call, either
`v2.0`_ or `v3`_.
You set the maximum number of failed authentication attempts by setting
the ``lockout_failure_attempts``:
.. code-block:: ini
[security_compliance]
lockout_failure_attempts = 6
You set how long a user is locked out by setting
the ``lockout_duration`` in seconds:
.. code-block:: ini
[security_compliance]
lockout_duration = 1800
If you do not set the ``lockout_duration``, users may be locked out
indefinitely until the user is explicitly enabled via the API.
Disabling inactive users
------------------------
PCI-DSS 8.1.4 requires that inactive user accounts be removed or disabled
within 90 days. You can achieve this by setting the
``disable_user_account_days_inactive``:
.. code-block:: ini
[security_compliance]
disable_user_account_days_inactive = 90
The above example means that users who have not authenticated (inactive) for
the past 90 days are automatically disabled. Users can be re-enabled by
explicitly setting the enable user attribute via the API.
Configuring password expiration
-------------------------------
Passwords can be configured to expire within a certain number of days by
setting the ``password_expires_days``:
.. code-block:: ini
[security_compliance]
password_expires_days = 90
Once set, any new password changes have an expiration date based on the
date/time of the password change plus the number of days defined here. Existing
passwords will not be impacted. If you want existing passwords to have an
expiration date, you would need to run a SQL script against the password table
in the database to update the expires_at column.
In addition, you can set it so that passwords never expire for some users by
adding their user ID to ``password_expires_ignore_user_ids`` list:
.. code-block:: ini
[security_compliance]
password_expires_ignore_user_ids = [3a54353c9dcc44f690975ea768512f6a]
In this example, the password for user ID ``3a54353c9dcc44f690975ea768512f6a``
would never expire.
Indicating password strength requirements
-----------------------------------------
You set password strength requirements, such as requiring numbers in passwords
or setting a minimum password length, by adding a regular expression to the
``password_regex``:
.. code-block:: ini
[security_compliance]
password_regex = ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$
The above example is a regular expression that requires a password to have
one letter, one digit, and a minimum length of seven characters.
If you do set the ``password_regex``, you should provide text that
describes your password strength requirements. You can do this by setting the
``password_regex_description``:
.. code-block:: ini
[security_compliance]
password_regex_description = Passwords must contain at least 1 letter, 1
digit, and be a minimum length of 7
characters.
The service returns that description to users to explain why their requested
password did not meet requirements.
.. note::
You must ensure the ``password_regex_description`` accurately and
completely describes the ``password_regex``. If the two options are out of
sync, the help text could inaccurately describe the password requirements
being applied to the password. This would lead to poor user experience.
Requiring a unique password history
-----------------------------------
The password history requirement controls the number of passwords for a user
that must be unique before an old password can be reused. You can enforce this
by setting the ``unique_last_password_count``:
.. code-block:: ini
[security_compliance]
unique_last_password_count = 5
The above example does not allow a user to create a new password that is the
same as any of their last four previous passwords.
Similarly, you can set the number of days that a password must be used before
the user can change it by setting the ``minimum_password_age``:
.. code-block:: ini
[security_compliance]
minimum_password_age = 1
In the above example, once a user changes their password, they would not be
able to change it again for one day. This prevents users from changing their
passwords immediately in order to wipe out their password history and reuse an
old password.
.. note::
When you set ``password_expires_days``, the value for the
``minimum_password_age`` should be less than the ``password_expires_days``.
Otherwise, users would not be able to change their passwords before they
expire.
.. _Security Hardening PCI-DSS: https://specs.openstack.org/openstack/keystone-specs/specs/keystone/newton/pci-dss.html
.. _v2.0: https://developer.openstack.org/api-ref/identity/v2-admin/index.html?expanded=update-user-admin-endpoint-detail#update-user-admin-endpoint
.. _v3: https://developer.openstack.org/api-ref/identity/v3/index.html#update-user

@@ -0,0 +1,128 @@
=============================================================
Identity API protection with role-based access control (RBAC)
=============================================================
Like most OpenStack projects, Identity supports the protection of its
APIs by defining policy rules based on an RBAC approach. Identity stores
a reference to a policy JSON file in the main Identity configuration
file, ``/etc/keystone/keystone.conf``. Typically this file is named
``policy.json``, and contains the rules for which roles have access to
certain actions in defined services.
Each Identity API v3 call has a line in the policy file that dictates
which level of governance of access applies.
.. code-block:: none
API_NAME: RULE_STATEMENT or MATCH_STATEMENT
Where:
``RULE_STATEMENT`` can contain another ``RULE_STATEMENT`` or a
``MATCH_STATEMENT``.
``MATCH_STATEMENT`` is a set of identifiers that must match between the
token provided by the caller of the API and the parameters or target
entities of the API call in question. For example:
.. code-block:: none
"identity:create_user": "role:admin and domain_id:%(user.domain_id)s"
Indicates that to create a user, you must have the admin role in your
token. The ``domain_id`` in your token must match the
``domain_id`` in the user object that you are trying
to create, which implies this must be a domain-scoped token.
In other words, you must have the admin role on the domain
in which you are creating the user, and the token that you use
must be scoped to that domain.
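A ``RULE_STATEMENT`` can also refer to another named rule with the ``rule:``
prefix. For example, the default policy file defines a reusable rule and then
references it from an API entry (shown here as a simplified sketch):
.. code-block:: none
"admin_required": "role:admin or is_admin:1",
"identity:list_users": "rule:admin_required",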
Each component of a match statement uses this format:
.. code-block:: none
ATTRIB_FROM_TOKEN:CONSTANT or ATTRIB_RELATED_TO_API_CALL
The Identity service expects these attributes:
Attributes from token:
- ``user_id``
- ``domain_id``
- ``project_id``
The ``project_id`` attribute requirement depends on the scope, and the
list of roles you have within that scope.
Attributes related to API call:
- ``user.domain_id``
- Any parameters passed into the API call
- Any filters specified in the query string
You reference attributes of objects passed with an object.attribute
syntax (such as, ``user.domain_id``). The target objects of an API are
also available using a target.object.attribute syntax. For instance:
.. code-block:: none
"identity:delete_user": "role:admin and domain_id:%(target.user.domain_id)s"
would ensure that Identity only deletes the user object in the same
domain as the provided token.
Every target object has an ``id`` and a ``name`` available as
``target.OBJECT.id`` and ``target.OBJECT.name``. Identity retrieves
other attributes from the database, and the attributes vary between
object types. The Identity service filters out some database fields,
such as user passwords.
List of object attributes:
.. code-block:: yaml
role:
target.role.id
target.role.name
user:
target.user.default_project_id
target.user.description
target.user.domain_id
target.user.enabled
target.user.id
target.user.name
group:
target.group.description
target.group.domain_id
target.group.id
target.group.name
domain:
target.domain.enabled
target.domain.id
target.domain.name
project:
target.project.description
target.project.domain_id
target.project.enabled
target.project.id
target.project.name
The default ``policy.json`` file supplied provides a somewhat
basic example of API protection, and does not assume any particular
use of domains. Refer to ``policy.v3cloudsample.json`` as an
example of multi-domain configuration installations where a cloud
provider wants to delegate administration of the contents of a domain
to a particular ``admin domain``. This example policy file also
shows the use of an ``admin_domain`` to allow a cloud provider to
enable administrators to have wider access across the APIs.
A clean installation could start with the standard policy file, to
allow creation of the ``admin_domain`` with the first users within
it. You could then obtain the ``domain_id`` of the admin domain,
paste the ID into a modified version of
``policy.v3cloudsample.json``, and then enable it as the main
policy file.

@@ -0,0 +1,64 @@
============================================
Configure Identity service for token binding
============================================
Token binding embeds information from an external authentication
mechanism, such as a Kerberos server or X.509 certificate, inside a
token. By using token binding, a client can enforce the use of a
specified external authentication mechanism with the token. This
additional security mechanism ensures that if a token is stolen, for
example, it is not usable without external authentication.
You configure the authentication types for a token binding in the
``/etc/keystone/keystone.conf`` file:
.. code-block:: ini
[token]
bind = kerberos
or
.. code-block:: ini
[token]
bind = x509
Currently ``kerberos`` and ``x509`` are supported.
To enforce checking of token binding, set the ``enforce_token_bind``
option to one of these modes:
- ``disabled``
Disables token bind checking.
- ``permissive``
Enables bind checking. If a token is bound to an unknown
authentication mechanism, the server ignores it. This is the default
mode.
- ``strict``
Enables bind checking. If a token is bound to an unknown
authentication mechanism, the server rejects it.
- ``required``
Enables bind checking. Requires use of at least one authentication
mechanism for tokens.
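For example, to require that every token carries bind information without
mandating a specific mechanism:
.. code-block:: ini
[token]
enforce_token_bind = required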
- ``kerberos``
Enables bind checking. Requires use of kerberos as the authentication
mechanism for tokens:
.. code-block:: ini
[token]
enforce_token_bind = kerberos
- ``x509``
Enables bind checking. Requires use of X.509 as the authentication
mechanism for tokens:
.. code-block:: ini
[token]
enforce_token_bind = x509

@@ -0,0 +1,108 @@
===============
Keystone tokens
===============
Tokens are used to authenticate and authorize your interactions with the
various OpenStack APIs. Tokens come in many flavors, representing various
authorization scopes and sources of identity. There are also several different
"token providers", each with their own user experience, performance, and
deployment characteristics.
Authorization scopes
--------------------
Tokens can express your authorization in different scopes. You likely have
different sets of roles, in different projects, and in different domains.
While tokens always express your identity, they may only ever express one set
of roles in one authorization scope at a time.
Each level of authorization scope is useful for certain types of operations in
certain OpenStack services, and the scopes are not interchangeable.
Unscoped tokens
~~~~~~~~~~~~~~~
An unscoped token contains no service catalog, no roles, and neither a project
scope nor a domain scope. Its primary use case is simply to prove your
identity to keystone at a later time (usually to generate scoped tokens),
without repeatedly presenting your original credentials.
The following conditions must be met to receive an unscoped token:
* You must not specify an authorization scope in your authentication request
(for example, on the command line with arguments such as
``--os-project-name`` or ``--os-domain-id``),
* Your identity must not have a "default project" associated with it on which
you also have role assignments, and thus authorization.
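For example, authenticating with only a user name and password, and no scope
arguments, should yield an unscoped token, assuming the account has no usable
default project. One hypothetical invocation is:
.. code-block:: console
$ openstack --os-username admin --os-user-domain-name Default --os-password secrete --os-auth-url http://controller:5000/v3 token issue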
Project-scoped tokens
~~~~~~~~~~~~~~~~~~~~~
Project-scoped tokens are the bread and butter of OpenStack. They express your
authorization to operate in a specific tenancy of the cloud and are useful to
authenticate yourself when working with most other services.
They contain a service catalog, a set of roles, and details of the project upon
which you have authorization.
Domain-scoped tokens
~~~~~~~~~~~~~~~~~~~~
Domain-scoped tokens have limited use cases in OpenStack. They express
your authorization to operate at the domain level, above that of the user and
projects contained therein (typically as a domain-level administrator).
Depending on Keystone's configuration, they are useful for working with a
single domain in Keystone.
They contain a limited service catalog (only those services which do not
explicitly require per-project endpoints), a set of roles, and details of the
domain upon which you have authorization.
They can also be used to work with domain-level concerns in other services,
such as to configure domain-wide quotas that apply to all users or projects in
a specific domain.
Token providers
---------------
The token type issued by keystone is configurable through the
``/etc/keystone/keystone.conf`` file. Currently, there are four supported
token types and they include ``UUID``, ``fernet``, ``PKI``, and ``PKIZ``.
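The provider is selected with the ``provider`` option in the ``[token]``
section of ``/etc/keystone/keystone.conf``. For example, a deployment that
chooses fernet tokens might set:
.. code-block:: ini
[token]
provider = fernet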
UUID tokens
~~~~~~~~~~~
UUID was the first token type supported and is currently the default token
provider. UUID tokens are 32 bytes in length and must be persisted in a back
end. Clients must pass their UUID token to the Identity service in order to
validate it.
Fernet tokens
~~~~~~~~~~~~~
The fernet token format was introduced in the OpenStack Kilo release. Unlike
the other token types mentioned in this document, fernet tokens do not need to
be persisted in a back end. ``AES256`` encryption is used to protect the
information stored in the token and integrity is verified with a ``SHA256
HMAC`` signature. Only the Identity service should have access to the keys used
to encrypt and decrypt fernet tokens. Like UUID tokens, fernet tokens must be
passed back to the Identity service in order to validate them. For more
information on the fernet token type, see the :doc:`identity-fernet-token-faq`.
PKI and PKIZ tokens
~~~~~~~~~~~~~~~~~~~
PKI tokens are signed documents that contain the authentication context, as
well as the service catalog. Depending on the size of the OpenStack deployment,
these tokens can be very long. The Identity service uses public/private key
pairs and certificates in order to create and validate PKI tokens.
The same concepts from PKI tokens apply to PKIZ tokens. The only difference
between the two is PKIZ tokens are compressed to help mitigate the size issues
of PKI. For more information on the certificate setup for PKI and PKIZ tokens,
see the :doc:`identity-certificates-for-pki`.
.. note::
PKI and PKIZ tokens are deprecated and not supported in Ocata.

@@ -0,0 +1,199 @@
=================================
Troubleshoot the Identity service
=================================
To troubleshoot the Identity service, review the logs in the
``/var/log/keystone/keystone.log`` file.
Use the ``/etc/keystone/logging.conf`` file to configure the
location of log files.
.. note::
The ``insecure_debug`` flag is unique to the Identity service.
If you enable ``insecure_debug``, error messages from the API change
to return security-sensitive information. For example, the error message
on failed authentication includes information on why your authentication
failed.
The logs show the components that have come in to the WSGI request, and
ideally show an error that explains why an authorization request failed.
If you do not see the request in the logs, run keystone with the
``--debug`` parameter. Pass the ``--debug`` parameter before the
command parameters.
Debug PKI middleware
~~~~~~~~~~~~~~~~~~~~
Problem
-------
If you receive an ``Invalid OpenStack Identity Credentials`` message when
you access an OpenStack service, it might be caused by
the changeover from UUID tokens to PKI tokens in the Grizzly release.
The PKI-based token validation scheme relies on certificates from
Identity that are fetched through HTTP and stored in a local directory.
The location for this directory is specified by the ``signing_dir``
configuration option.
Solution
--------
In your service's configuration file, look for a section like this:
.. code-block:: ini
[keystone_authtoken]
signing_dir = /var/cache/glance/api
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
The first thing to check is that the ``signing_dir`` does, in fact,
exist. If it does, check for certificate files:
.. code-block:: console
$ ls -la /var/cache/glance/api/
total 24
drwx------. 2 ayoung root 4096 Jul 22 10:58 .
drwxr-xr-x. 4 root root 4096 Nov 7 2012 ..
-rw-r-----. 1 ayoung ayoung 1424 Jul 22 10:58 cacert.pem
-rw-r-----. 1 ayoung ayoung 15 Jul 22 10:58 revoked.pem
-rw-r-----. 1 ayoung ayoung 4518 Jul 22 10:58 signing_cert.pem
This directory contains two certificates and the token revocation list.
If these files are not present, your service cannot fetch them from
Identity. To troubleshoot, try to talk to Identity to make sure it
correctly serves files, as follows:
.. code-block:: console
$ curl http://localhost:35357/v2.0/certificates/signing
This command fetches the signing certificate:
.. code-block:: yaml
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=US, ST=Unset, L=Unset, O=Unset, CN=www.example.com
Validity
Not Before: Jul 22 14:57:31 2013 GMT
Not After : Jul 20 14:57:31 2023 GMT
Subject: C=US, ST=Unset, O=Unset, CN=www.example.com
Note the expiration dates of the certificate:
.. code-block:: console
Not Before: Jul 22 14:57:31 2013 GMT
Not After : Jul 20 14:57:31 2023 GMT
The token revocation list is updated once a minute, but the certificates
are not. One possible problem is that the certificates are the wrong
files or garbage. You can remove these files and run another command
against your server; they are fetched on demand.
The Identity service log should show the access of the certificate files. You
might have to turn up your logging levels. Set ``debug = True`` in your
Identity configuration file and restart the Identity server.
.. code-block:: console
(keystone.common.wsgi): 2013-07-24 12:18:11,461 DEBUG wsgi __call__
arg_dict: {}
(access): 2013-07-24 12:18:11,462 INFO core __call__ 127.0.0.1 - - [24/Jul/2013:16:18:11 +0000]
"GET http://localhost:35357/v2.0/certificates/signing HTTP/1.0" 200 4518
If the files do not appear in your directory after this, it is likely
one of the following issues:
* Your service is configured incorrectly and cannot talk to Identity.
Check the ``auth_port`` and ``auth_host`` values and make sure that
you can talk to that service through cURL, as shown previously.
* Your signing directory is not writable. Use the ``chmod`` command to
change its permissions so that the service (POSIX) user can write to
it. Verify the change through ``su`` and ``touch`` commands.
* The SELinux policy is denying access to the directory.
SELinux troubles often occur when you use Fedora or RHEL-based packages and
you choose configuration options that do not match the standard policy.
Run the ``setenforce permissive`` command. If that makes a difference,
you should relabel the directory. If you are using a sub-directory of
the ``/var/cache/`` directory, run the following command:
.. code-block:: console
# restorecon /var/cache/
If you are not using a ``/var/cache`` sub-directory, you should. Modify
the ``signing_dir`` configuration option for your service and restart.
Run ``setenforce enforcing`` to set SELinux back to enforcing mode and
confirm that your changes solve the problem.
If your certificates are fetched on demand, the PKI validation is
working properly. Most likely, the token from Identity is not valid for
the operation you are attempting to perform, and your user needs a
different role for the operation.
Debug signing key file errors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
-------
If an error occurs when the signing key file opens, it is possible that
the person who ran the :command:`keystone-manage pki_setup` command to
generate certificates and keys did not use the correct user.
Solution
--------
When you run the :command:`keystone-manage pki_setup` command, Identity
generates a set of certificates and keys in ``/etc/keystone/ssl*``, which
is owned by ``root:root``. This can present a problem when you run the
Identity daemon under the keystone user account (nologin) when you try
to run PKI. Unless you run the :command:`chown` command to change the file
ownership to ``keystone:keystone``, or run the :command:`keystone-manage pki_setup`
command with the ``--keystone-user`` and
``--keystone-group`` parameters, you will get an error.
For example:
.. code-block:: console
2012-07-31 11:10:53 ERROR [keystone.common.cms] Error opening signing key file
/etc/keystone/ssl/private/signing_key.pem
140380567730016:error:0200100D:system library:fopen:Permission
denied:bss_file.c:398:fopen('/etc/keystone/ssl/private/signing_key.pem','r')
140380567730016:error:20074002:BIO routines:FILE_CTRL:system lib:bss_file.c:400:
unable to load signing key file
Flush expired tokens from the token database table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Problem
-------
As you generate tokens, the token database table on the Identity server
grows.
Solution
--------
To clear the token table, an administrative user must run the
:command:`keystone-manage token_flush` command to flush the tokens. When you
flush tokens, expired tokens are deleted and traceability is eliminated.
Use ``cron`` to schedule this command to run frequently based on your
workload. For large workloads, running it every minute is recommended.
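As an illustration, a crontab entry for the ``keystone`` user that flushes
expired tokens every hour might look like this (the schedule and log path are
only examples):
.. code-block:: console
# crontab -e -u keystone
0 * * * * keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1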

@@ -0,0 +1,56 @@
==========
Use trusts
==========
OpenStack Identity manages authentication and authorization. A trust is
an OpenStack Identity extension that enables delegation and, optionally,
impersonation through ``keystone``. A trust extension defines a
relationship between:
**Trustor**
The user delegating a limited set of their own rights to another user.
**Trustee**
The user that the trust is being delegated to, for a limited time.
The trust can eventually allow the trustee to impersonate the trustor.
For security reasons, some safeties are added. For example, if a trustor
loses a given role, any trusts the user issued with that role, and the
related tokens, are automatically revoked.
The delegation parameters are:
**User ID**
The user IDs for the trustor and trustee.
**Privileges**
The delegated privileges are a combination of a project ID and a
number of roles that must be a subset of the roles assigned to the
trustor.
If you omit all privileges, nothing is delegated. You cannot
delegate everything.
**Delegation depth**
Defines whether or not the delegation is recursive. If it is
recursive, defines the delegation chain length.
Specify one of the following values:
- ``0``. The delegate cannot delegate these permissions further.
- ``1``. The delegate can delegate the permissions to any set of
delegates but the latter cannot delegate further.
- ``inf``. The delegation is infinitely recursive.
**Endpoints**
A list of endpoints associated with the delegation.
This parameter further restricts the delegation to the specified
endpoints only. If you omit the endpoints, the delegation is
useless. A special value of ``all_endpoints`` allows the trust to be
used by all endpoints associated with the delegated project.
**Duration**
(Optional) Comprised of the start time and end time for the trust.
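As an illustration, a trust that delegates the ``member`` role on a project
from a trustor to a trustee can be created with the OpenStackClient (all
names below are placeholders):
.. code-block:: console
$ openstack trust create --project demo --role member trustor-user trustee-user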

@@ -85,6 +85,14 @@ the keystone service.
advanced-topics/index.rst
sample_files/index.rst
Administrator Guides
~~~~~~~~~~~~~~~~~~~~
.. toctree::
:maxdepth: 2
admin/identity-management.rst
API Documentation
~~~~~~~~~~~~~~~~~
An end user can find the specific API documentation here, `OpenStack's Identity API`_.