Merge "Add service developer documentation for scopes"

Zuul 2019-03-16 02:59:25 +00:00 committed by Gerrit Code Review
commit 515a95e02a
1 changed file with 303 additions and 10 deletions


@@ -152,7 +152,8 @@ Good examples of system-scoped resources include:
deployed in a cloud.
* Endpoints: Endpoints that tell users where to find services deployed in a
cloud.
* Hypervisors: Physical compute infrastructure that hosts instances where the
instances may, or may not, be owned by the same project.
Domain Scope
------------
@@ -164,17 +165,18 @@ that must belong to a domain. Users and groups are good examples of this. The
following is an example of how a domain-scoped token could be used against a
service.
Assume a domain exists called `Foo`, and it contains projects called `bar` and
`baz`. Let's also assume both projects contain instances running a workload. If
Alice is a domain administrator for `Foo`, she should be able to pass her
domain-scoped token to nova and ask for a list of instances. If nova supports
domain-scoped tokens, the response would contain all instances in projects
`bar` and `baz`.
Another example of using a domain-scoped token would be if Alice wanted to
create a new project in domain `Foo`. When Alice sends a request to create a
new project (`POST /v3/projects`), keystone should ensure the new project is
created within the `Foo` domain, since that's the authorization associated with
Alice's token.
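As a loose illustration of what Alice's token request might look like, the
following sketch uses keystoneauth1 to ask keystone for a domain-scoped token.
The endpoint, username, and password are hypothetical, and whether a given
service honors the resulting scope depends on that service.

.. code-block:: python

   # A minimal sketch of requesting a domain-scoped token with keystoneauth1.
   # The auth URL and credentials below are placeholders.
   from keystoneauth1 import session
   from keystoneauth1.identity import v3

   auth = v3.Password(
       auth_url='https://keystone.example.com/v3',
       username='alice',
       password='secret',
       user_domain_name='Foo',
       domain_name='Foo',  # request a token scoped to the `Foo` domain
   )
   sess = session.Session(auth=auth)

   # The token relayed by this session carries domain scope; a service that
   # understands domain scope can use it to limit responses to resources
   # owned by projects within `Foo`.
   token = sess.get_token()
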
.. WARNING::
@@ -209,6 +211,297 @@ keystone for a list of projects they're allowed to access. Then they can
exchange their unscoped token for a project-scoped token allowing them to
perform actions within a particular project.
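One way to perform that exchange is with keystoneauth1's token plugin, sketched
below under the assumption that the user already holds an unscoped token and
has discovered an accessible project (for example via ``GET
/v3/auth/projects``). The endpoint, token value, and project names are
hypothetical.

.. code-block:: python

   # A minimal sketch of rescoping an unscoped token to a project with
   # keystoneauth1. The auth URL, token, and project names are placeholders.
   from keystoneauth1 import session
   from keystoneauth1.identity import v3

   auth = v3.Token(
       auth_url='https://keystone.example.com/v3',
       token='<unscoped-token>',
       project_name='bar',
       project_domain_name='Foo',
   )
   sess = session.Session(auth=auth)
   project_scoped_token = sess.get_token()
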
Why are authorization scopes important?
=======================================
Flexibility for exposing your work
----------------------------------
OpenStack provides a rich set of APIs and functionality. We wrote some APIs
with the intent of managing the deployment hardware, otherwise referred to as
the deployment system. We wrote others to orchestrate resources in a project or
a domain. Some APIs even operate on multiple levels. Since we use tokens to
authorize a user's actions against a given service, they needed to handle
different scope targets. For example, when a user asks for a new instance, we
expect that instance to belong to a project; thus we expect a project relayed
through the token's scope. This idea is fundamental in providing isolation, or
tenancy, between projects in OpenStack.
Initially, keystone only supported the ability to generate project-scoped
tokens as a product of a user having a role assignment on a project.
Consequently, services had no other choice but to require project-scoped tokens
to protect almost all of their APIs, even if that wasn't an ideal option. Using
project-scoped tokens to protect APIs they weren't designed to protect required
operators to write custom policy checks to secure those APIs. An example
showcases this more clearly.
Let's assume an operator wanted to create a read-only role. Users with the
`reader` role would be able to list things owned by the project, like
instances, volumes, or snapshots. The operator also wants to have a read-only
role for fellow operators or auditors, allowing them to view hypervisor
information or endpoints and services. Reusing the existing `reader` role is
difficult because users with that role on a project shouldn't see data about
hypervisors, which would violate tenancy. Operators could create a new role
called `operator` or `system-reader`, but then those users would still need to
have that role assigned on a project to access deployment-level APIs. The
concept of getting project-scoped tokens to access deployment-level resources
makes no sense for abstractions like hypervisors that cannot belong to a single
project. Furthermore, this requires deployers to maintain all of this in policy
files. You can quickly see how only using project-scope limits our ability to
protect APIs without convoluted or expensive-to-maintain solutions.
Each scope offered by keystone helps operators and users avoid these problems
by giving you, the developer, multiple options for protecting APIs you write,
instead of the one-size-fits-all approach we outgrew. You no longer have to
hope an operator configures policy correctly so their users can consume the
feature you wrote. The more options you have for protecting an API, the easier
it is to provide default policies that expose more of your work to users
safely.
Less custom code
----------------
Another crucial benefit of authorization scopes offered by keystone is less
custom code. For example, if you were writing an API to manage a
deployment-level resource but only allowed to consume project-scoped tokens,
how would you distinguish an operator from an end user? Would you attempt to
standardize a role name? Would you look for a unique project in the token's
scope? Would these checks be configurable in policy or hardcoded in your
service?
Chances are, different services will come up with different, inconsistent
solutions for the same problem. These inconsistencies make it harder for
developers to context switch between services that process things differently.
Users also suffer from inconsistencies by having to maintain a mental mapping
of different behavior between services. Having different scopes at your
disposal, through keystone tokens, lets you build on a standard solution that
other projects also consume, reducing the likelihood of accidentally developing
inconsistencies between services. This commonality also gives us a similar set
of terms we can use when we communicate with each other and users, allowing us
to know what someone means by a `system-admin` and how that is different from a
`project-admin`.
Reusable default roles
----------------------
When OpenStack services originally started developing a policy enforcement
engine to protect APIs, the only real concrete role we assumed to be present in
the deployment was a role called `admin`. Because we assumed this, we were able
to write policies with `admin` as the default. Keystone also took steps to
ensure it had a role with that name during installation. While making this
assumption is beneficial for some APIs, having only one option is underwhelming
and leaves many common policy use cases for operators to implement through
policy overrides. For example, a typical ask from operators is to have a
read-only role that only allows users with that role on a target to view its
contents, restricting them from making writable changes. Another example is a
membership role that isn't the administrator. To put it clearly, a user with a
`member` role assignment on a project may create new storage volumes, but
they're unable to perform backups. Users with the `admin` role on a project can
access the backups functionality.
Keep in mind, the examples above are only meant to describe the need for other
roles besides `admin` in a deployment. Service developers should be able to
reuse these definitions for similar APIs and assume those roles exist. As a
result, keystone implemented support for ensuring the `admin`, `member`, and
`reader` roles are present during the installation process, specifically when
running ``keystone-manage bootstrap``. Additionally, keystone creates a
relationship among these roles that makes them easier for service developers to
use. During creation, keystone makes the `admin` role imply the `member` role,
and the `member` role imply the `reader` role.
The benefit may not be obvious, but what this means is that users with the
`admin` role on a target also have the `member` and `reader` roles generated
in their token. Similarly, users with the `member` role also have the `reader`
role relayed in their token, even though they don't have a direct role
assignment using the `reader` role. This subtle relationship allows developers
to use a short-hand notation for writing policies. The following assumes
``foobar`` is a project-level resource available over a service API and is
protected by policies using generic roles:
.. code-block:: yaml

   "service:foobar:get": "role:admin OR role:member OR role:reader"
   "service:foobar:list": "role:admin OR role:member OR role:reader"
   "service:foobar:create": "role:admin OR role:member"
   "service:foobar:update": "role:admin OR role:member"
   "service:foobar:delete": "role:admin"

The following policies are functionally equivalent to the policies above, but
rely on the implied relationship between the three roles, resulting in a
simplified check string expression:
.. code-block:: yaml

   "service:foobar:get": "role:reader"
   "service:foobar:list": "role:reader"
   "service:foobar:create": "role:member"
   "service:foobar:update": "role:member"
   "service:foobar:delete": "role:admin"

How do I incorporate authorization scopes into a service?
=========================================================
Now that you understand the advantages of a shared approach to policy
enforcement, the following section details the order of operations you can use
to implement it in your service.
Ruthless Testing
----------------
Policy enforcement implementations vary greatly across OpenStack services. Some
enforce authorization near the top of the API while others push the logic
deeper into the service. Differences and intricacies between services make
testing imperative to adopt a uniform, consistent approach. Positive and
negative protection testing helps us assert users with specific roles can, or
cannot, access APIs. A protection test is similar to an API or functional
test, but it is focused purely on the authorization outcome. In other words,
protection testing is sufficient when we can assert that a user is or isn't
allowed to do or see something. For example, a user with a role assignment on
project `foo` shouldn't be able to list volumes in project `bar`. A user with a
role on a project shouldn't be able to modify entries in the service catalog.
Users with a `reader` role on the system, a domain, or a project shouldn't be
able to make writable changes. You commonly see protection tests conclude with
an assertion checking for a successful response code or an HTTP 403 Forbidden.
If your service has minimal or non-existent protection coverage, you should
start by introducing tests that exercise the current default policies, whatever
those are. This step provides three significant benefits.
First, it puts us in the shoes of our users from an authorization perspective,
allowing us to see the surface of the API a user has access to with a given
assignment. This information helps audit the API to make sure the user has all
the authorization to do what they need_, but nothing more. We should note
inconsistencies here as feedback that we should fix, especially since operators
are probably attempting to fix these inconsistencies through customized policy
today.
Second, a collection of protection tests makes sure we don't have unwanted
security-related regressions. Imagine making a policy change that introduced a
regression and allowed a user to access an API and data they aren't supposed to
see. Conversely, imagine a patch that accidentally tightened restrictions on an
API that resulted in a broken workflow for users. Testing makes sure we catch
cases like this early and handle them accordingly.
Finally, protection tests help us use test-driven development to evolve policy
enforcement. We can make a change and assert the behavior using tests locally,
allowing us to be proactive and not reactive in our authoritative business
logic.
To get started, refer to the `oslo.policy documentation`_ that describes
techniques for writing useful protection tests. This document also describes
some historical context you might recognize in your service and how you should
deal with it. You can also look at protection test examples in other services,
like keystone_ or cinder_. Note that these examples test the three default
roles provided by keystone (reader, member, and admin) against the three
scopes keystone offers, allowing for nine different personas without operators
creating roles specific to their deployment. We recommend testing these
personas where applicable in your service:
* project reader
* project member
* project admin
* system reader
* system member
* system admin
* domain reader
* domain member
* domain admin
.. _need: https://en.wikipedia.org/wiki/Principle_of_least_privilege
.. _oslo.policy documentation: https://docs.openstack.org/oslo.policy/latest/user/usage.html#testing-default-policies
.. _keystone: https://git.openstack.org/cgit/openstack/keystone/tree/keystone/tests/unit/protection/v3?id=77e50e49c5af37780b8b4cfe8721ba28e8a58183
.. _cinder: https://review.openstack.org/#/c/602489/
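As a rough sketch of the idea, the test below asserts the authorization
outcome of a single default policy by calling oslo.policy's enforcer directly.
The policy name, check string, and credentials are hypothetical, and the real
protection tests linked above typically exercise the service's REST API with
tokens for each persona instead.

.. code-block:: python

   # A self-contained sketch of positive and negative protection-style tests.
   # The policy name and credentials are placeholders.
   import unittest

   from oslo_config import cfg
   from oslo_policy import policy


   class FoobarProtectionTests(unittest.TestCase):

       def setUp(self):
           super().setUp()
           conf = cfg.ConfigOpts()
           conf([])  # no config files; rely on registered defaults
           self.enforcer = policy.Enforcer(conf)
           self.enforcer.register_default(
               policy.RuleDefault('service:foobar:list', 'role:reader'))

       def test_project_reader_can_list_foobars(self):
           creds = {'roles': ['reader'], 'project_id': 'foo'}
           self.assertTrue(
               self.enforcer.enforce('service:foobar:list', {}, creds))

       def test_user_without_reader_cannot_list_foobars(self):
           creds = {'roles': ['support'], 'project_id': 'foo'}
           self.assertFalse(
               self.enforcer.enforce('service:foobar:list', {}, creds))
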
Auditing the API
----------------
After going through the API and adding protection tests, you should have a good
idea of how each API is or isn't exposed to end users with different role
assignments. You might also have a list of areas where policies could be
improved. For example, maybe you noticed an API in your service that consumes
project-scoped tokens to protect a system-level resource. If your service has a
bug tracker, you can use it to document these gaps. The keystone team went
through this exercise and used bugs_. Feel free to use these bug reports as a
template for describing gaps in policy enforcement. For example, if your
service has APIs for listing or getting resources, you could implement the
`reader` role for those APIs.
.. _bugs: http://tinyurl.com/y5kj6fn9
Setting scope types
-------------------
With testing in place and gaps documented, you can start refactoring. The first
step is to start using oslo.policy for scope checking, which reduces complexity
in your service by having a library do some lifting for you. For example, if
you have an API that requires a project-scoped token, you can set the scope of
the policy protecting that API accordingly. If an instance of ``RuleDefault``
has a scope type associated with it, oslo.policy checks that it matches the
scope of the token used to make the request. This behavior is configurable_,
allowing
operators to turn it on once all policies have a scope type and once operators
have audited their assignments and educated their users on how to get the scope
necessary to access an API. Once that happens, an operator can configure
oslo.policy to reject requests made with the wrong scope. Otherwise,
oslo.policy logs a warning for operators that describes the mismatched scope.
The oslo.policy library provides `documentation for setting scope`_. You can
also see `keystone examples`_ or `placement examples`_ of setting scope types
on policies.
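As a hedged example, the sketch below shows what associating a scope type with
a policy might look like; the policy name, description, and path are
hypothetical, and the keystone and placement reviews linked above contain real
examples.

.. code-block:: python

   # A sketch of setting a scope type on a policy so oslo.policy can compare
   # it against the scope of the token making the request. The policy name
   # and path are placeholders.
   from oslo_policy import policy

   list_hypervisors = policy.DocumentedRuleDefault(
       name='service:hypervisors:list',
       check_str='role:reader',
       scope_types=['system'],  # deployment-level resource
       description='List hypervisors.',
       operations=[{'path': '/v1/hypervisors', 'method': 'GET'}],
   )

Once policies carry scope types, operators can opt into strict enforcement by
setting ``enforce_scope = True`` in the ``[oslo_policy]`` section of the
service's configuration, which is the option the configurable_ link above
describes.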
If you have difficulty deciding which scope an API or resource requires, try
thinking about the intended user. Are they an operator managing the deployment?
Then you might choose `system`. Are they an end user meant to operate only
within a given project? Then `project` scope is likely what you need. Scopes
aren't mutually exclusive.
You may have APIs that require more than one scope. Keystone's user and project
APIs are good examples of resources that need different scopes. For example, a
system administrator should be able to list all users in the system, but domain
administrators should only be able to list users within their domain. If you
have an API that falls into this category, you may be required to implicitly
filter responses based on the scope type. If your service uses oslo.context and
keystonemiddleware, you can query a `RequestContext` object about the token's
scope. There are keystone patches_ that show how to filter responses according
to scope using oslo.context, in case you need inspiration.
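As a loose illustration of that kind of filtering, the sketch below narrows a
hypothetical user listing based on the scope attributes oslo.context exposes;
the ``list_users`` helper and its data layout are placeholders rather than
keystone's actual implementation.

.. code-block:: python

   # A sketch of implicit filtering driven by token scope, using attributes
   # available on an oslo.context RequestContext. The function name and data
   # model are placeholders.
   from oslo_context import context


   def list_users(ctx: context.RequestContext, all_users):
       if ctx.system_scope == 'all':
           # System-scoped requests see every user in the deployment.
           return all_users
       if ctx.domain_id:
           # Domain-scoped requests only see users owned by that domain.
           return [u for u in all_users if u['domain_id'] == ctx.domain_id]
       # Other scopes get nothing from this API; a real service would likely
       # raise a Forbidden error here instead.
       return []
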
If you still can't seem to find a solution, don't hesitate to send a note to
the `OpenStack Discuss mailing list`_ tagged with `[keystone]` or ask in
#openstack-keystone on IRC_.
.. _configurable: https://docs.openstack.org/oslo.policy/latest/configuration/index.html#oslo_policy.enforce_scope
.. _documentation for setting scope: https://docs.openstack.org/oslo.policy/latest/user/usage.html#setting-scope
.. _keystone examples: https://review.openstack.org/#/q/status:merged+project:openstack/keystone+branch:master+topic:add-scope-types
.. _placement examples: https://review.openstack.org/#/c/571201/
.. _patches: https://review.openstack.org/#/c/623319/
.. _OpenStack Discuss mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
.. _IRC: https://wiki.openstack.org/wiki/IRC
Rewriting check strings
-------------------------
With oslo.policy able to check scope, you can start refactoring check strings
wherever necessary, for example by adding support for default roles or removing
hard-coded ``is_admin: True`` checks. Remember that oslo.policy provides
deprecation tooling that makes upgrades easier for operators by combining old
defaults or overrides with the new defaults using a logical `OR`. We encourage
you to use the available
deprecation tooling when you change policy names or check strings. You can
refer to examples_ that show you how to build descriptive rule objects using
all the default roles from keystone and consuming scopes.
.. _examples: https://review.openstack.org/#/q/(status:open+OR+status:merged)+project:openstack/keystone+branch:master+topic:implement-default-roles
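A rough sketch of that tooling is below; the old and new check strings, the
policy name, and the deprecation metadata are hypothetical, and the linked
examples_ show the pattern applied to real keystone policies.

.. code-block:: python

   # A sketch of changing a default check string while deprecating the old
   # one. During the deprecation window, oslo.policy evaluates the old and
   # new defaults with a logical OR so existing overrides keep working.
   from oslo_policy import policy

   deprecated_get_foobar = policy.DeprecatedRule(
       name='service:foobar:get',
       check_str='rule:admin_or_owner',
   )

   get_foobar = policy.DocumentedRuleDefault(
       name='service:foobar:get',
       check_str='role:reader',
       description='Show a foobar.',
       operations=[{'path': '/v1/foobars/{foobar_id}', 'method': 'GET'}],
       deprecated_rule=deprecated_get_foobar,
       deprecated_reason='Showing a foobar now requires the reader role '
                         'instead of a hard-coded admin-or-owner rule.',
       deprecated_since='X.Y',
   )
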
Communication
-------------
Communicating early and often is never a bad thing, especially when a change is
going to impact operators. At this point, it's crucial to emphasize the changes
you've made to policy enforcement in your service. Release notes are an
excellent way to signal changes to operators. You can find examples in the
release notes keystone wrote when it implemented support for default roles.
Additionally, you might have
operators or users ask questions about the various scopes or what they mean.
Don't hesitate to refer them to keystone's `scope documentation
<https://docs.openstack.org/keystone/latest/admin/tokens-overview.html#authorization-scopes>`_.
Auth Token middleware
=====================