Corrected code samples that did not validate

Closes-Bug: #1285327
Change-Id: Ib3ff6e7f2ec5fd36207537aa1c6ddd294142b7b6
Author: Diane Fleming

parent 2708c69c33
commit 64f0eb972f
@@ -5,44 +5,47 @@
     xml:id="keystone-concepts">
     <?dbhtml stop-chunking?>
     <title>Identity Service Concepts</title>
-    <para> The Identity service performs the following functions: </para>
+    <para>The Identity service performs the following
+        functions:</para>
     <itemizedlist spacing="compact">
         <listitem>
-            <para>User management. Tracks users and their permissions. </para>
+            <para>User management. Tracks users and their
+                permissions.</para>
         </listitem>
         <listitem>
             <para>Service catalog. Provides a catalog of available
-                services with their API endpoints. </para>
+                services with their API endpoints.</para>
         </listitem>
     </itemizedlist>
-    <para> To understand the Identity Service, you must understand the
-        following concepts: </para>
+    <para>To understand the Identity Service, you must understand the
+        following concepts:</para>
     <variablelist>
         <varlistentry>
             <term>User</term>
             <listitem>
-                <para>Digital representation of a person, system, or service
-                    who uses OpenStack cloud services. Identity authentication
-                    services will validate that incoming request are being made
-                    by the user who claims to be making the call. Users have a
-                    login and may be assigned tokens to access resources. Users
-                    may be directly assigned to a particular tenant and behave
-                    as if they are contained in that tenant.
-                </para>
+                <para>Digital representation of a person, system, or
+                    service who uses OpenStack cloud services.
+                    Identity authentication services validate that
+                    incoming requests are made by the user who
+                    claims to be making the call. Users have a
+                    login and may be assigned tokens to access
+                    resources. Users may be directly assigned to a
+                    particular tenant and behave as if they are
+                    contained in that tenant.</para>
             </listitem>
         </varlistentry>
         <varlistentry>
             <term>Credentials</term>
             <listitem>
                 <para>Data that is known only by a user that proves
-                    who they are. In the Identity Service, examples
-                    are: </para>
+                    who they are. In the Identity Service, examples
+                    are:</para>
                 <itemizedlist>
                     <listitem>
-                        <para>Username and password </para>
+                        <para>Username and password</para>
                     </listitem>
                     <listitem>
-                        <para>Username and API key </para>
+                        <para>Username and API key</para>
                     </listitem>
                     <listitem>
                         <para>An authentication token provided by the
@@ -61,7 +64,8 @@
                     and password or a username and API key. In
                     response to these credentials, the Identity
                     Service issues the user an authentication token,
-                    which the user provides in subsequent requests. </para>
+                    which the user provides in subsequent
+                    requests.</para>
             </listitem>
         </varlistentry>
         <varlistentry>
@@ -71,14 +75,14 @@
                     resources. Each token has a scope which describes
                     which resources are accessible with it. A token
                     may be revoked at any time and is valid for a
-                    finite duration. </para>
+                    finite duration.</para>
                 <para>While the Identity Service supports token-based
                     authentication in this release, the intention is
                     for it to support additional protocols in the
                     future. The intent is for it to be an integration
                     service foremost, and not aspire to be a
                     full-fledged identity store and management
-                    solution. </para>
+                    solution.</para>
             </listitem>
         </varlistentry>
         <varlistentry>
@@ -87,7 +91,7 @@
                 <para>A container used to group or isolate resources
                     and/or identity objects. Depending on the service
                     operator, a tenant may map to a customer, account,
-                    organization, or project. </para>
+                    organization, or project.</para>
             </listitem>
         </varlistentry>
         <varlistentry>
@@ -96,7 +100,8 @@
                 <para>An OpenStack service, such as Compute (Nova),
                     Object Storage (Swift), or Image Service (Glance).
                     Provides one or more endpoints through which users
-                    can access resources and perform operations. </para>
+                    can access resources and perform
+                    operations.</para>
             </listitem>
         </varlistentry>
         <varlistentry>
@@ -107,7 +112,7 @@
                     an extension for templates, you can create an
                     endpoint template, which represents the templates
                     of all the consumable services that are available
-                    across the regions. </para>
+                    across the regions.</para>
             </listitem>
         </varlistentry>
         <varlistentry>
@@ -117,46 +122,46 @@
                     them to perform a specific set of operations. A
                     role includes a set of rights and privileges. A
                     user assuming that role inherits those rights and
-                    privileges. </para>
+                    privileges.</para>
                 <para>In the Identity Service, a token that is issued
                     to a user includes the list of roles that user can
                     assume. Services that are being called by that
                     user determine how they interpret the set of roles
                     a user has and which operations or resources each
-                    role grants access to. </para>
+                    role grants access to.</para>
             </listitem>
         </varlistentry>
     </variablelist>
-    <para>
-        <mediaobject>
-            <imageobject role="fo">
-                <imagedata
-                    fileref="figures/SCH_5002_V00_NUAC-Keystone.png"
-                    format="PNG" scale="50"/>
-            </imageobject>
-            <imageobject role="html">
-                <imagedata
-                    fileref="figures/SCH_5002_V00_NUAC-Keystone.png"
-                    format="PNG" scale="10"/>
-            </imageobject>
-        </mediaobject>
-    </para>
+    <mediaobject>
+        <imageobject role="fo">
+            <imagedata
+                fileref="figures/SCH_5002_V00_NUAC-Keystone.png"
+                format="PNG" scale="50"/>
+        </imageobject>
+        <imageobject role="html">
+            <imagedata
+                fileref="figures/SCH_5002_V00_NUAC-Keystone.png"
+                format="PNG" scale="10"/>
+        </imageobject>
+    </mediaobject>
     <section xml:id="keystone-user-management">
         <title>User management</title>
-        <para> The main components of Identity user management are: <itemizedlist>
-                <listitem>
-                    <para>Users</para>
-                </listitem>
-                <listitem>
-                    <para>Tenants</para>
-                </listitem>
-                <listitem>
-                    <para>Roles</para>
-                </listitem>
-            </itemizedlist></para>
+        <para>The main components of Identity user management
+            are:</para>
+        <itemizedlist>
+            <listitem>
+                <para>Users</para>
+            </listitem>
+            <listitem>
+                <para>Tenants</para>
+            </listitem>
+            <listitem>
+                <para>Roles</para>
+            </listitem>
+        </itemizedlist>
         <para>A <emphasis>user</emphasis> represents a human user, and
             has associated information such as username, password and
-            email. This example creates a user named "alice": </para>
+            email. This example creates a user named "alice":</para>
         <screen><prompt>$</prompt> <userinput>keystone user-create --name=alice --pass=mypassword123 --email=alice@example.com</userinput></screen>
         <para>A <emphasis>tenant</emphasis> can be a project, group,
             or organization. Whenever you make requests to OpenStack
@@ -164,7 +169,7 @@
             query the Compute service for a list of running instances,
             you will receive a list of all of the running instances in
             the tenant you specified in your query. This example
-            creates a tenant named "acme": </para>
+            creates a tenant named "acme":</para>
         <screen><prompt>$</prompt> <userinput>keystone tenant-create --name=acme</userinput></screen>
         <note>
             <para>Because the term <emphasis>project</emphasis> was
@@ -173,22 +178,22 @@
                 use <literal>--project_id</literal> instead of
                 <literal>--tenant-id</literal> or
                 <literal>--os-tenant-id</literal> to refer to a
-                tenant ID. </para>
+                tenant ID.</para>
         </note>
         <para>A <emphasis>role</emphasis> captures what operations a
             user is permitted to perform in a given tenant. This
-            example creates a role named "compute-user": </para>
+            example creates a role named "compute-user":</para>
         <screen><prompt>$</prompt> <userinput>keystone role-create --name=compute-user</userinput></screen>
         <note>
             <para>It is up to individual services such as the Compute
                 service and Image service to assign meaning to these
                 roles. As far as the Identity service is concerned, a
-                role is simply a name. </para>
+                role is simply a name.</para>
         </note>
         <para>The Identity service associates a user with a tenant and
             a role. To continue with our previous examples, we may
             wish to assign the "alice" user the "compute-user" role in
-            the "acme" tenant: </para>
+            the "acme" tenant:</para>
         <screen><prompt>$</prompt> <userinput>keystone user-list</userinput></screen>
         <screen><computeroutput>+--------+---------+-------------------+--------+
|   id   | enabled |       email       |  name  |
@@ -211,7 +216,7 @@
         <para>A user can be assigned different roles in different
             tenants: for example, Alice may also have the "admin" role
             in the "Cyberdyne" tenant. A user can also be assigned
-            multiple roles in the same tenant. </para>
+            multiple roles in the same tenant.</para>
         <para>The
             <filename>/etc/<replaceable>[SERVICE_CODENAME]</replaceable>/policy.json</filename>
             controls what users are allowed to do for a given service.
@@ -220,136 +225,47 @@
             <filename>/etc/glance/policy.json</filename> specifies
             the access policy for the Image service, and
             <filename>/etc/keystone/policy.json</filename>
-            specifies the access policy for the Identity service. </para>
+            specifies the access policy for the Identity
+            service.</para>
         <para>The default <filename>policy.json</filename> files in
             the Compute, Identity, and Image service recognize only
             the <literal>admin</literal> role: all operations that do
             not require the <literal>admin</literal> role will be
-            accessible by any user that has any role in a tenant. </para>
+            accessible by any user that has any role in a
+            tenant.</para>
         <para>If you wish to restrict users from performing operations
             in, say, the Compute service, you need to create a role in
             the Identity service and then modify
             <filename>/etc/nova/policy.json</filename> so that
-            this role is required for Compute operations. </para>
+            this role is required for Compute operations.</para>
         <para>For example, this line in
             <filename>/etc/nova/policy.json</filename> specifies
             that there are no restrictions on which users can create
             volumes: if the user has any role in a tenant, they will
-            be able to create volumes in that tenant. </para>
-        <programlisting language="json">"volume:create": [], </programlisting>
+            be able to create volumes in that tenant.</para>
+        <programlisting language="json">{
+    "volume:create": []
+}</programlisting>
         <para>If we wished to restrict creation of volumes to users
             who had the <literal>compute-user</literal> role in a
             particular tenant, we would add
-            <literal>"role:compute-user"</literal>, like so: </para>
-        <programlisting language="json">"volume:create": ["role:compute-user"],</programlisting>
-        <para>
-            If we wished to restrict all Compute service requests to require
-            this role, the resulting file would look like:
-        </para>
-        <programlisting language="json">{
-    "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
-    "default": [["rule:admin_or_owner"]],
-
-    "compute:create": ["role":"compute-user"],
-    "compute:create:attach_network": ["role":"compute-user"],
-    "compute:create:attach_volume": ["role":"compute-user"],
-    "compute:get_all": ["role":"compute-user"],
-
-    "admin_api": [["role:admin"]],
-    "compute_extension:accounts": [["rule:admin_api"]],
-    "compute_extension:admin_actions": [["rule:admin_api"]],
-    "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
-    "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
-    "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
-    "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
-    "compute_extension:admin_actions:lock": [["rule:admin_api"]],
-    "compute_extension:admin_actions:unlock": [["rule:admin_api"]],
-    "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
-    "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
-    "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
-    "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
-    "compute_extension:admin_actions:migrate": [["rule:admin_api"]],
-    "compute_extension:aggregates": [["rule:admin_api"]],
-    "compute_extension:certificates": ["role":"compute-user"],
-    "compute_extension:cloudpipe": [["rule:admin_api"]],
-    "compute_extension:console_output": ["role":"compute-user"],
-    "compute_extension:consoles": ["role":"compute-user"],
-    "compute_extension:createserverext": ["role":"compute-user"],
-    "compute_extension:deferred_delete": ["role":"compute-user"],
-    "compute_extension:disk_config": ["role":"compute-user"],
-    "compute_extension:evacuate": [["rule:admin_api"]],
-    "compute_extension:extended_server_attributes": [["rule:admin_api"]],
-    "compute_extension:extended_status": ["role":"compute-user"],
-    "compute_extension:flavorextradata": ["role":"compute-user"],
-    "compute_extension:flavorextraspecs": ["role":"compute-user"],
-    "compute_extension:flavormanage": [["rule:admin_api"]],
-    "compute_extension:floating_ip_dns": ["role":"compute-user"],
-    "compute_extension:floating_ip_pools": ["role":"compute-user"],
-    "compute_extension:floating_ips": ["role":"compute-user"],
-    "compute_extension:hosts": [["rule:admin_api"]],
-    "compute_extension:keypairs": ["role":"compute-user"],
-    "compute_extension:multinic": ["role":"compute-user"],
-    "compute_extension:networks": [["rule:admin_api"]],
-    "compute_extension:quotas": ["role":"compute-user"],
-    "compute_extension:rescue": ["role":"compute-user"],
-    "compute_extension:security_groups": ["role":"compute-user"],
-    "compute_extension:server_action_list": [["rule:admin_api"]],
-    "compute_extension:server_diagnostics": [["rule:admin_api"]],
-    "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]],
-    "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]],
-    "compute_extension:users": [["rule:admin_api"]],
-    "compute_extension:virtual_interfaces": ["role":"compute-user"],
-    "compute_extension:virtual_storage_arrays": ["role":"compute-user"],
-    "compute_extension:volumes": ["role":"compute-user"],
-    "compute_extension:volumetypes": ["role":"compute-user"],
-
-    "volume:create": ["role":"compute-user"],
-    "volume:get_all": ["role":"compute-user"],
-    "volume:get_volume_metadata": ["role":"compute-user"],
-    "volume:get_snapshot": ["role":"compute-user"],
-    "volume:get_all_snapshots": ["role":"compute-user"],
-
-    "network:get_all_networks": ["role":"compute-user"],
-    "network:get_network": ["role":"compute-user"],
-    "network:delete_network": ["role":"compute-user"],
-    "network:disassociate_network": ["role":"compute-user"],
-    "network:get_vifs_by_instance": ["role":"compute-user"],
-    "network:allocate_for_instance": ["role":"compute-user"],
-    "network:deallocate_for_instance": ["role":"compute-user"],
-    "network:validate_networks": ["role":"compute-user"],
-    "network:get_instance_uuids_by_ip_filter": ["role":"compute-user"],
-
-    "network:get_floating_ip": ["role":"compute-user"],
-    "network:get_floating_ip_pools": ["role":"compute-user"],
-    "network:get_floating_ip_by_address": ["role":"compute-user"],
-    "network:get_floating_ips_by_project": ["role":"compute-user"],
-    "network:get_floating_ips_by_fixed_address": ["role":"compute-user"],
-    "network:allocate_floating_ip": ["role":"compute-user"],
-    "network:deallocate_floating_ip": ["role":"compute-user"],
-    "network:associate_floating_ip": ["role":"compute-user"],
-    "network:disassociate_floating_ip": ["role":"compute-user"],
-
-    "network:get_fixed_ip": ["role":"compute-user"],
-    "network:add_fixed_ip_to_instance": ["role":"compute-user"],
-    "network:remove_fixed_ip_from_instance": ["role":"compute-user"],
-    "network:add_network_to_project": ["role":"compute-user"],
-    "network:get_instance_nw_info": ["role":"compute-user"],
-
-    "network:get_dns_domains": ["role":"compute-user"],
-    "network:add_dns_entry": ["role":"compute-user"],
-    "network:modify_dns_entry": ["role":"compute-user"],
-    "network:delete_dns_entry": ["role":"compute-user"],
-    "network:get_dns_entries_by_address": ["role":"compute-user"],
-    "network:get_dns_entries_by_name": ["role":"compute-user"],
-    "network:create_private_dns_domain": ["role":"compute-user"],
-    "network:create_public_dns_domain": ["role":"compute-user"],
-    "network:delete_dns_domain": ["role":"compute-user"]
-} </programlisting>
+            <literal>"role:compute-user"</literal>, like
+            so:</para>
+        <programlisting language="json">{
+    "volume:create": ["role:compute-user"]
+}</programlisting>
+        <para>If we wished to restrict all Compute service requests
+            to require this role, the resulting file would look like:</para>
+        <programlisting language="json"><xi:include href="samples/roles.json" parse="text"/></programlisting>
     </section>
     <section xml:id="keystone-service-mgmt">
         <title>Service management</title>
-        <para> The Identity Service provides the following service
-            management functions: </para>
+        <para>The Identity Service provides the following service
+            management functions:</para>
         <itemizedlist>
             <listitem>
                 <para>Services</para>
@@ -362,8 +278,8 @@
             corresponds to each service (such as a user named
             <emphasis>nova</emphasis> for the Compute service)
             and a special service tenant, which is called
-            <emphasis>service</emphasis>. </para>
+            <emphasis>service</emphasis>.</para>
         <para>The commands for creating services and endpoints are
-            described in a later section. </para>
+            described in a later section.</para>
     </section>
 </section>
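The policy semantics described in the hunk above — an empty rule list means "any user with any role in the tenant", while a `"role:<name>"` check requires that role — can be sketched in a few lines. This is a hedged illustration of the documented behavior only, not the actual nova/oslo policy engine; the `is_allowed` helper and the sample policy are made up for the example.

```python
import json

# Simplified reading of policy.json semantics (illustration only):
# an empty rule list places no restriction on the action; a list of
# "role:<name>" checks requires the caller to hold one of those roles.
POLICY = json.loads("""
{
    "volume:create": [],
    "compute:create": ["role:compute-user"]
}
""")

def is_allowed(policy, action, user_roles):
    rules = policy.get(action, [])
    if not rules:                      # empty list: no restriction
        return True
    for rule in rules:                 # evaluate "role:<name>" checks
        kind, _, value = rule.partition(":")
        if kind == "role" and value in user_roles:
            return True
    return False

print(is_allowed(POLICY, "volume:create", {"member"}))         # True
print(is_allowed(POLICY, "compute:create", {"member"}))        # False
print(is_allowed(POLICY, "compute:create", {"compute-user"}))  # True
```

Any role suffices for `volume:create`, but `compute:create` now demands the `compute-user` role, mirroring the restriction the text describes.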
@@ -0,0 +1,331 @@
+{
+    "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]],
+    "default": [["rule:admin_or_owner"]],
+    "compute:create": ["role:compute-user"],
+    "compute:create:attach_network": ["role:compute-user"],
+    "compute:create:attach_volume": ["role:compute-user"],
+    "compute:get_all": ["role:compute-user"],
+    "admin_api": [["role:admin"]],
+    "compute_extension:accounts": [["rule:admin_api"]],
+    "compute_extension:admin_actions": [["rule:admin_api"]],
+    "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]],
+    "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]],
+    "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]],
+    "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]],
+    "compute_extension:admin_actions:lock": [["rule:admin_api"]],
+    "compute_extension:admin_actions:unlock": [["rule:admin_api"]],
+    "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]],
+    "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]],
+    "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]],
+    "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]],
+    "compute_extension:admin_actions:migrate": [["rule:admin_api"]],
+    "compute_extension:aggregates": [["rule:admin_api"]],
+    "compute_extension:certificates": ["role:compute-user"],
+    "compute_extension:cloudpipe": [["rule:admin_api"]],
+    "compute_extension:console_output": ["role:compute-user"],
+    "compute_extension:consoles": ["role:compute-user"],
+    "compute_extension:createserverext": ["role:compute-user"],
+    "compute_extension:deferred_delete": ["role:compute-user"],
+    "compute_extension:disk_config": ["role:compute-user"],
+    "compute_extension:evacuate": [["rule:admin_api"]],
+    "compute_extension:extended_server_attributes": [["rule:admin_api"]],
+    "compute_extension:extended_status": ["role:compute-user"],
+    "compute_extension:flavorextradata": ["role:compute-user"],
+    "compute_extension:flavorextraspecs": ["role:compute-user"],
+    "compute_extension:flavormanage": [["rule:admin_api"]],
+    "compute_extension:floating_ip_dns": ["role:compute-user"],
+    "compute_extension:floating_ip_pools": ["role:compute-user"],
+    "compute_extension:floating_ips": ["role:compute-user"],
+    "compute_extension:hosts": [["rule:admin_api"]],
+    "compute_extension:keypairs": ["role:compute-user"],
+    "compute_extension:multinic": ["role:compute-user"],
+    "compute_extension:networks": [["rule:admin_api"]],
+    "compute_extension:quotas": ["role:compute-user"],
+    "compute_extension:rescue": ["role:compute-user"],
+    "compute_extension:security_groups": ["role:compute-user"],
+    "compute_extension:server_action_list": [["rule:admin_api"]],
+    "compute_extension:server_diagnostics": [["rule:admin_api"]],
+    "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]],
+    "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]],
+    "compute_extension:users": [["rule:admin_api"]],
+    "compute_extension:virtual_interfaces": ["role:compute-user"],
+    "compute_extension:virtual_storage_arrays": ["role:compute-user"],
+    "compute_extension:volumes": ["role:compute-user"],
+    "compute_extension:volumetypes": ["role:compute-user"],
+    "volume:create": ["role:compute-user"],
+    "volume:get_all": ["role:compute-user"],
+    "volume:get_volume_metadata": ["role:compute-user"],
+    "volume:get_snapshot": ["role:compute-user"],
+    "volume:get_all_snapshots": ["role:compute-user"],
+    "network:get_all_networks": ["role:compute-user"],
+    "network:get_network": ["role:compute-user"],
+    "network:delete_network": ["role:compute-user"],
+    "network:disassociate_network": ["role:compute-user"],
+    "network:get_vifs_by_instance": ["role:compute-user"],
+    "network:allocate_for_instance": ["role:compute-user"],
+    "network:deallocate_for_instance": ["role:compute-user"],
+    "network:validate_networks": ["role:compute-user"],
+    "network:get_instance_uuids_by_ip_filter": ["role:compute-user"],
+    "network:get_floating_ip": ["role:compute-user"],
+    "network:get_floating_ip_pools": ["role:compute-user"],
+    "network:get_floating_ip_by_address": ["role:compute-user"],
+    "network:get_floating_ips_by_project": ["role:compute-user"],
+    "network:get_floating_ips_by_fixed_address": ["role:compute-user"],
+    "network:allocate_floating_ip": ["role:compute-user"],
+    "network:deallocate_floating_ip": ["role:compute-user"],
+    "network:associate_floating_ip": ["role:compute-user"],
+    "network:disassociate_floating_ip": ["role:compute-user"],
+    "network:get_fixed_ip": ["role:compute-user"],
+    "network:add_fixed_ip_to_instance": ["role:compute-user"],
+    "network:remove_fixed_ip_from_instance": ["role:compute-user"],
+    "network:add_network_to_project": ["role:compute-user"],
+    "network:get_instance_nw_info": ["role:compute-user"],
+    "network:get_dns_domains": ["role:compute-user"],
+    "network:add_dns_entry": ["role:compute-user"],
+    "network:modify_dns_entry": ["role:compute-user"],
+    "network:delete_dns_entry": ["role:compute-user"],
+    "network:get_dns_entries_by_address": ["role:compute-user"],
+    "network:get_dns_entries_by_name": ["role:compute-user"],
+    "network:create_private_dns_domain": ["role:compute-user"],
+    "network:create_public_dns_domain": ["role:compute-user"],
+    "network:delete_dns_domain": ["role:compute-user"]
+}
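Since this commit exists because the inline samples did not validate, the quickest sanity check for a sample like the `samples/roles.json` file above is to run it through a JSON parser. The `validates` helper below is an illustrative sketch, not part of the OpenStack toolchain; it shows why the old `["role":"compute-user"]` form fails while the corrected `["role:compute-user"]` form parses.

```python
import json

# The old inline sample used "volume:create": ["role":"compute-user"],
# which is not valid JSON (a colon cannot separate two array elements);
# the corrected sample puts the whole check in one string.
invalid = '{"volume:create": ["role":"compute-user"]}'
valid = '{"volume:create": ["role:compute-user"]}'

def validates(text):
    """Return True if text parses as JSON, False otherwise."""
    try:
        json.loads(text)
        return True
    except ValueError:  # json.JSONDecodeError subclasses ValueError
        return False

print(validates(invalid))  # False
print(validates(valid))    # True
```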
@@ -13,35 +13,36 @@
         scheduler is configurable through a variety of options.</para>
     <para>Compute is configured with the following default scheduler
         options:</para>
-    <programlisting>
-scheduler_driver=nova.scheduler.multi.MultiScheduler
+    <programlisting language="ini">scheduler_driver=nova.scheduler.multi.MultiScheduler
volume_scheduler_driver=nova.scheduler.chance.ChanceScheduler
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler
scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
scheduler_weight_classes=nova.scheduler.weights.all_weighers
-ram_weight_multiplier=1.0
-</programlisting>
+ram_weight_multiplier=1.0</programlisting>
     <itemizedlist>
         <listitem>
-            <para> Compute. Configured to use the multi-scheduler,
-                which allows the admin to specify different
-                scheduling behavior for compute requests versus volume
-                requests. </para></listitem>
+            <para>Compute. Configured to use the multi-scheduler,
+                which allows the admin to specify different scheduling
+                behavior for compute requests versus volume
+                requests.</para>
+        </listitem>
         <listitem>
             <para>Volume scheduler. Configured as a chance scheduler,
                 which picks a host at random that has the
-                <command>cinder-volume</command> service running.
-            </para></listitem>
+                <command>cinder-volume</command> service
+                running.</para>
+        </listitem>
         <listitem>
             <para>Compute scheduler. Configured as a filter scheduler,
                 described in detail in the next section. In the
-                default configuration, this scheduler will only consider hosts
-                that are in the requested availability zone
-                (<literal>AvailabilityZoneFilter</literal>), that have
-                sufficient RAM available (<literal>RamFilter</literal>), and
-                that are actually capable of servicing the request
-                (<literal>ComputeFilter</literal>).</para>
+                default configuration, this scheduler will only
+                consider hosts that are in the requested availability
+                zone (<literal>AvailabilityZoneFilter</literal>), that
+                have sufficient RAM available
+                (<literal>RamFilter</literal>), and that are
+                actually capable of servicing the request
+                (<literal>ComputeFilter</literal>).</para>
         </listitem>
     </itemizedlist>
     <section xml:id="filter-scheduler">
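The chance scheduling described in the list above — pick a host at random from those running the needed service — can be sketched as follows. The host names and service map are made-up illustration data, and `chance_schedule` is a simplified stand-in for nova's actual ChanceScheduler, not its real implementation.

```python
import random

# Illustrative service map: which services each (hypothetical) host runs.
SERVICES = {
    "host1": {"nova-compute"},
    "host2": {"nova-compute", "cinder-volume"},
    "host3": {"cinder-volume"},
}

def chance_schedule(services, wanted="cinder-volume", rng=random):
    """Pick a random host from those running the wanted service."""
    candidates = [host for host, svcs in services.items() if wanted in svcs]
    if not candidates:
        raise RuntimeError("no host runs %s" % wanted)
    return rng.choice(candidates)

# Only host2 and host3 run cinder-volume, so the choice is between them.
print(chance_schedule(SERVICES))
```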
@@ -54,7 +55,7 @@ ram_weight_multiplier=1.0
             created. This Scheduler can only be used for scheduling
             compute requests, not volume requests, i.e. it can only be
             used with the <literal>compute_scheduler_driver</literal>
-            configuration option. </para>
+            configuration option.</para>
     </section>
     <section xml:id="scheduler-filters">
         <?dbhtml stop-chunking?>
@ -65,47 +66,41 @@ ram_weight_multiplier=1.0
resource. Filters are binary: either a host is accepted by
the filter, or it is rejected. Hosts that are accepted by
the filter are then processed by a different algorithm to
decide which hosts to use for that request, described in
the <link linkend="weights">Weights</link> section.</para>
<figure xml:id="filter-figure">
    <title>Filtering</title>
    <mediaobject>
        <imageobject>
            <imagedata
                fileref="figures/filteringWorkflow1.png"
                scale="80"/>
        </imageobject>
    </mediaobject>
</figure>

<para>The <literal>scheduler_available_filters</literal>
    configuration option in <filename>nova.conf</filename>
    provides the Compute service with the list of the filters
    that will be used by the scheduler. The default setting
    specifies all of the filters that are included with the
    Compute service:</para>
<programlisting language="ini">scheduler_available_filters=nova.scheduler.filters.all_filters</programlisting>
<para>This configuration option can be specified multiple
    times. For example, if you implemented your own custom
    filter in Python called
    <literal>myfilter.MyFilter</literal> and you wanted to
    use both the built-in filters and your custom filter, your
    <filename>nova.conf</filename> file would contain:</para>
<programlisting language="ini">scheduler_available_filters=nova.scheduler.filters.all_filters
scheduler_available_filters=myfilter.MyFilter</programlisting>
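A custom filter such as the <literal>myfilter.MyFilter</literal> named above is a Python class that exposes a <literal>host_passes()</literal> method. The standalone sketch below only mirrors that contract without importing nova; in a real deployment the class would subclass <literal>nova.scheduler.filters.BaseHostFilter</literal>, and the dictionary-style host state used here is a simplification of the scheduler's actual host-state object.

```python
# Illustrative sketch of a custom scheduler filter. A real filter
# subclasses nova.scheduler.filters.BaseHostFilter; this standalone
# class only mirrors the host_passes() contract.

class MyFilter(object):
    """Reject hosts that report fewer than 1024 MB of free RAM."""

    def host_passes(self, host_state, filter_properties):
        # host_state carries per-host resource data collected by the
        # scheduler; filter_properties carries the request spec/hints.
        return host_state.get('free_ram_mb', 0) >= 1024


# Minimal stand-in data for two compute hosts:
hosts = [{'name': 'node1', 'free_ram_mb': 512},
         {'name': 'node2', 'free_ram_mb': 2048}]
f = MyFilter()
accepted = [h['name'] for h in hosts if f.host_passes(h, {})]
print(accepted)  # only node2 has enough free RAM: ['node2']
```

Filtering is binary, as described above: each host either passes or is rejected, and only the accepted hosts move on to weighing.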
<para>The <literal>scheduler_default_filters</literal>
    configuration option in <filename>nova.conf</filename>
    defines the list of filters that will be applied by the
    <command>nova-scheduler</command> service. As
    mentioned above, the default filters are:</para>
<programlisting language="ini">scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter</programlisting>
<para>The available filters are described below.</para>

@ -120,15 +115,14 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter

<section xml:id="aggregate-multi-tenancy-isolation">
    <title>AggregateMultiTenancyIsolation</title>
    <para>Isolates tenants to specific <link
        linkend="host-aggregates">host aggregates</link>.
        If a host is in an aggregate that has the metadata key
        <literal>filter_tenant_id</literal>, it only
        creates instances from that tenant (or list of
        tenants). A host can be in different aggregates. If a
        host does not belong to an aggregate with the metadata
        key, it can create instances from all tenants.</para>
</section>

<section xml:id="allhostsfilter">

@ -139,28 +133,31 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter

<section xml:id="availabilityzonefilter">
    <title>AvailabilityZoneFilter</title>
    <para>Filters hosts by availability zone. This filter must
        be enabled for the scheduler to respect availability
        zones in requests.</para>
</section>

<section xml:id="computecapabilitiesfilter">
    <title>ComputeCapabilitiesFilter</title>
    <para>Matches properties defined in an instance type's
        extra specs against compute capabilities.</para>
    <para>If an extra specs key contains a colon ":", anything
        before the colon is treated as a namespace, and
        anything after the colon is treated as the key to be
        matched. If a namespace is present and is not
        'capabilities', this filter ignores the key.</para>
    <note>
        <para>Disable the ComputeCapabilitiesFilter when using
            a Bare Metal configuration, due to <link
            xlink:href="https://bugs.launchpad.net/nova/+bug/1129485"
            >bug 1129485</link>.</para>
    </note>
</section>

<section xml:id="computefilter">
    <title>ComputeFilter</title>
    <para>Filters hosts by flavor (also known as instance
        type) and image properties. The scheduler checks to
        ensure that a compute host has sufficient
        capabilities to run a virtual machine instance that
|
@ -170,24 +167,25 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
the filter checks for are:</para>
<itemizedlist>
    <listitem>
        <para><literal>architecture</literal>:
            Architecture describes the machine
            architecture required by the image. Examples
            are i686, x86_64, arm, and ppc64.</para>
    </listitem>
    <listitem>
        <para><literal>hypervisor_type</literal>:
            Hypervisor type describes the hypervisor
            required by the image. Examples are xen, kvm,
            qemu, xenapi, and powervm.</para>
    </listitem>
    <listitem>
        <para><literal>vm_mode</literal>: Virtual machine
            mode describes the hypervisor application
            binary interface (ABI) required by the image.
            Examples are 'xen' for Xen 3.0 paravirtual
            ABI, 'hvm' for native ABI, 'uml' for User Mode
            Linux paravirtual ABI, and 'exe' for container
            virt executable ABI.</para>
    </listitem>
</itemizedlist>
<para>In general, this filter should always be enabled.
|
@ -196,27 +194,23 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter

<section xml:id="corefilter">
    <title>CoreFilter</title>
    <para>Only schedule instances on hosts if there are
        sufficient CPU cores available. If this filter is not
        set, the scheduler may over-provision a host based on
        cores (that is, the virtual cores running on an
        instance may exceed the physical cores).</para>
    <para>This filter can be configured to allow a fixed
        amount of vCPU overcommitment by using the
        <literal>cpu_allocation_ratio</literal>
        configuration option in
        <filename>nova.conf</filename>. The default setting
        is:</para>
    <programlisting language="ini">cpu_allocation_ratio=16.0</programlisting>
    <para>With this setting, if there are 8 vCPUs on a node,
        the scheduler allows instances up to 128 vCPU to be
        run on that node.</para>
    <para>To disallow vCPU overcommitment set:</para>
    <programlisting language="ini">cpu_allocation_ratio=1.0</programlisting>
</section>

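The arithmetic behind the allocation-ratio options can be sketched as follows: a host passes the CoreFilter while the vCPUs already committed plus the new request stay within physical cores times <literal>cpu_allocation_ratio</literal>. The helper below is an illustration of that check, not the actual filter code.

```python
# Sketch of the CoreFilter capacity check: committed vCPUs plus the
# new request must stay within physical_cores * cpu_allocation_ratio.

def core_filter_passes(physical_cores, used_vcpus, requested_vcpus,
                       cpu_allocation_ratio=16.0):
    limit = physical_cores * cpu_allocation_ratio
    return used_vcpus + requested_vcpus <= limit

# With the default ratio of 16.0, an 8-core node can host up to
# 128 vCPUs in total, as described in the text:
print(core_filter_passes(8, 120, 8))   # 128 <= 128 -> True
print(core_filter_passes(8, 124, 8))   # 132 >  128 -> False
```

Setting the ratio to 1.0 reduces the limit to the physical core count, which is exactly the "disallow overcommitment" configuration above. The RamFilter and DiskFilter ratios work the same way on memory and disk.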
<section xml:id="differenthostfilter">

@ -228,57 +222,37 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
list of instance uuids as the value. This filter is
the opposite of the <literal>SameHostFilter</literal>.
Using the <command>nova</command> command-line tool,
use the <literal>--hint</literal> flag. For
example:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1</userinput></screen>
<para>With the API, use the
    <literal>os:scheduler_hints</literal> key. For
    example:</para>
<programlisting language="json"><xi:include href="samples/scheduler_hints3.json" parse="text"/></programlisting>
</section>
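The request body carrying the scheduler hint can be assembled programmatically before sending it to the Compute API's server-create call. The sketch below only builds and serializes the body (authentication and the HTTP request itself are omitted); the UUIDs are the sample values used in the text.

```python
import json

# Server-create request body with a different_host scheduler hint,
# matching the structure of the sample JSON shown in this section.
body = {
    "server": {
        "name": "server-1",
        "imageRef": "cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef": "1",
    },
    "os:scheduler_hints": {
        "different_host": [
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287",
        ],
    },
}
print(json.dumps(body, indent=4))
```

Note that <literal>os:scheduler_hints</literal> is a top-level key, a sibling of <literal>server</literal>, not nested inside it.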

<section xml:id="diskfilter">
    <title>DiskFilter</title>
    <para>Only schedule instances on hosts if there is
        sufficient disk space available for ephemeral
        storage.</para>
    <para>This filter can be configured to allow a fixed
        amount of disk overcommitment by using the
        <literal>disk_allocation_ratio</literal>
        configuration option in
        <filename>nova.conf</filename>. The default setting
        is:</para>
    <programlisting language="ini">disk_allocation_ratio=1.0</programlisting>
    <para>Adjusting this value to be greater than 1.0 allows
        scheduling instances while overcommitting disk
        resources on the node. This may be desirable if you
        use an image format that is sparse or copy-on-write
        such that each virtual instance does not require a 1:1
        allocation of virtual disk to physical storage.</para>
</section>

<section xml:id="groupantiaffinityfilter">
    <title>GroupAntiAffinityFilter</title>
    <para>The GroupAntiAffinityFilter ensures that each
        instance in a group is on a different host. To take
        advantage of this filter, the requester must pass a
        scheduler hint, using <literal>group</literal> as the
@ -300,29 +274,24 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
and virtual machine mode. For example, an instance might
require a host that runs an ARM-based processor and
QEMU as the hypervisor. An image can be decorated with
these properties:</para>
<screen><prompt>$</prompt> <userinput>glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu</userinput></screen>
</section>


<section xml:id="isolatedhostsfilter">
    <title>IsolatedHostsFilter</title>
    <para>Allows the admin to define a special (isolated) set
        of images and a special (isolated) set of hosts, such
        that the isolated images can only run on the isolated
        hosts, and the isolated hosts can only run isolated
        images.</para>
    <para>The admin must specify the isolated set of images
        and hosts in the <filename>nova.conf</filename> file
        using the <literal>isolated_hosts</literal> and
        <literal>isolated_images</literal> configuration
        options. For example:</para>
    <programlisting language="ini">isolated_hosts=server1,server2
isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09</programlisting>
</section>
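The decision rule described above reduces to "both sides must agree on isolation": an isolated image on a non-isolated host fails, and so does a non-isolated image on an isolated host. The sketch below illustrates that logic with the example values from <filename>nova.conf</filename>; it is a simplification, not the actual filter implementation.

```python
# Sketch of the IsolatedHostsFilter decision: isolated images may
# only land on isolated hosts, and isolated hosts may only run
# isolated images. Values mirror the nova.conf example above.

ISOLATED_HOSTS = {'server1', 'server2'}
ISOLATED_IMAGES = {'342b492c-128f-4a42-8d3a-c5088cf27d13',
                   'ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09'}

def isolated_hosts_filter_passes(host, image_id):
    host_isolated = host in ISOLATED_HOSTS
    image_isolated = image_id in ISOLATED_IMAGES
    # The host passes only when both sides agree on isolation.
    return host_isolated == image_isolated

# Isolated image on an isolated host: allowed.
print(isolated_hosts_filter_passes(
    'server1', '342b492c-128f-4a42-8d3a-c5088cf27d13'))  # True
# Isolated image on an ordinary host: rejected.
print(isolated_hosts_filter_passes(
    'server3', '342b492c-128f-4a42-8d3a-c5088cf27d13'))  # False
```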

<section xml:id="jsonfilter">

@ -380,18 +349,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
1 --hint query='[">=","$free_ram_mb",1024]' server1</userinput></screen>
With the API, use the
<literal>os:scheduler_hints</literal> key:
<programlisting language="json"><xi:include href="samples/scheduler_hints.json" parse="text"/></programlisting></para>
</section>
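The query hint above is a small prefix-notation expression: an operator followed by operands, where a term starting with <literal>$</literal> refers to a host attribute. The evaluator below is a minimal illustration of that grammar covering only the comparison operators; the real JsonFilter also supports compound operators such as and, or, and not.

```python
# Minimal evaluator for the JsonFilter query grammar used in the
# hint above. Only the comparison operators are sketched here.

OPS = {
    '>=': lambda a, b: a >= b,
    '<=': lambda a, b: a <= b,
    '>':  lambda a, b: a > b,
    '<':  lambda a, b: a < b,
    '=':  lambda a, b: a == b,
}

def evaluate(query, host):
    op, lhs, rhs = query

    def resolve(term):
        # "$free_ram_mb" looks up the host attribute free_ram_mb.
        if isinstance(term, str) and term.startswith('$'):
            return host[term[1:]]
        return term

    return OPS[op](resolve(lhs), resolve(rhs))

# The hint query [">=", "$free_ram_mb", 1024] against one host:
print(evaluate(['>=', '$free_ram_mb', 1024],
               {'free_ram_mb': 2048}))  # True
```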

<section xml:id="ramfilter">

@ -406,13 +364,11 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
<literal>ram_allocation_ratio</literal>
configuration option in
<filename>nova.conf</filename>. The default setting
is:</para>
<programlisting language="ini">ram_allocation_ratio=1.5</programlisting>
<para>With this setting, if there is 1GB of free RAM, the
    scheduler allows instances up to size 1.5GB to be
    run on that node.</para>
</section>

<section xml:id="retryfilter">

@ -443,21 +399,8 @@ ram_allocation_ratio=1.5
<screen><prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1</userinput></screen>
With the API, use the
<literal>os:scheduler_hints</literal> key:</para>
<programlisting language="json"><xi:include href="samples/scheduler_hints2.json" parse="text"/></programlisting>
</section>

<section xml:id="simplecidraffinityfilter">

@ -486,48 +429,31 @@ ram_allocation_ratio=1.5
command-line tool, use the <literal>--hint</literal>
flag. For example, to specify the IP subnet
<literal>192.168.1.1/24</literal>:</para>
<screen><prompt>$</prompt> <userinput>nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1</userinput></screen>
<para>With the API, use the
    <literal>os:scheduler_hints</literal> key:</para>
<programlisting language="json"><xi:include href="samples/scheduler_hints4.json" parse="text"/></programlisting>
</section>
</section>

<section xml:id="weights">
    <title>Weights</title>
    <?dbhtml stop-chunking?>
    <para>The filter scheduler uses the
        <literal>scheduler_weight_classes</literal>
        configuration parameter to calculate the weights of
        hosts. The value of this parameter defaults to
        <literal>nova.scheduler.weights.all_weighers</literal>,
        which selects the only weigher available, the
        RamWeigher. Hosts are then weighed and sorted with the
        largest weight winning.</para>
    <programlisting language="ini">scheduler_weight_classes=nova.scheduler.weights.all_weighers
ram_weight_multiplier=1.0</programlisting>
    <para>The default behavior is to spread instances across all
        hosts evenly. Set the
        <literal>ram_weight_multiplier</literal> configuration
        parameter to a negative number if you prefer stacking
        instead of spreading.</para>
</section>
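The spreading-versus-stacking behavior follows from the sign of the multiplier: the sketch below models a RAM weigher as weight = <literal>ram_weight_multiplier</literal> × free RAM, with hosts sorted so the largest weight wins. This is an illustration of the described behavior, not the actual weigher code.

```python
# Sketch of RAM-based weighing: weight each host by
# ram_weight_multiplier * free_ram_mb, largest weight first.

def weigh_hosts(hosts, ram_weight_multiplier=1.0):
    return sorted(hosts,
                  key=lambda h: ram_weight_multiplier * h['free_ram_mb'],
                  reverse=True)

hosts = [{'name': 'node1', 'free_ram_mb': 512},
         {'name': 'node2', 'free_ram_mb': 2048}]

# Positive multiplier spreads: the host with the most free RAM wins.
print(weigh_hosts(hosts)[0]['name'])        # node2
# Negative multiplier stacks: the fullest host wins instead.
print(weigh_hosts(hosts, -1.0)[0]['name'])  # node1
```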

<section xml:id="other-schedulers">

@ -549,8 +475,8 @@ ram_weight_multiplier=1.0
<literal>nova.scheduler.multi.MultiScheduler</literal>
holds multiple sub-schedulers, one for
<literal>nova-compute</literal> requests and one
for <literal>cinder-volume</literal> requests. It is
the default top-level scheduler as specified by the
<literal>scheduler_driver</literal> configuration
option.</para>
</section>


@ -0,0 +1,10 @@
{
    "server":{
        "name":"server-1",
        "imageRef":"cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef":"1"
    },
    "os:scheduler_hints":{
        "query":"[>=,$free_ram_mb,1024]"
    }
}

@ -0,0 +1,13 @@
{
    "server":{
        "name":"server-1",
        "imageRef":"cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef":"1"
    },
    "os:scheduler_hints":{
        "same_host":[
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287"
        ]
    }
}

@ -0,0 +1,13 @@
{
    "server":{
        "name":"server-1",
        "imageRef":"cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef":"1"
    },
    "os:scheduler_hints":{
        "different_host":[
            "a0cf03a5-d921-4877-bb5c-86d26cf818e1",
            "8c19174f-4220-44f0-824a-cd1eeef10287"
        ]
    }
}

@ -0,0 +1,11 @@
{
    "server":{
        "name":"server-1",
        "imageRef":"cedef40a-ed67-4d10-800e-17455edce175",
        "flavorRef":"1"
    },
    "os:scheduler_hints":{
        "build_near_host_ip":"192.168.1.1",
        "cidr":"24"
    }
}

@ -10,36 +10,35 @@
to the OpenStack Networking service must provide an
authentication token in the X-Auth-Token request header. This
token must be obtained by authenticating with the OpenStack
Identity endpoint.</para>
<para>For more information concerning authentication with
    OpenStack Identity, refer to the OpenStack Identity
    documentation. When OpenStack Identity is enabled, it is
    not mandatory to specify tenant_id for resources in create
    requests, as the tenant identifier is derived from the
    authentication token. Note that the default authorization
    settings only allow administrative users to create
    resources on behalf of a different tenant. OpenStack
    Networking uses information received from OpenStack
    Identity to authorize user requests. OpenStack Networking
    handles two kinds of authorization policies:</para>
<itemizedlist>
    <listitem>
        <para><emphasis role="bold">Operation-based</emphasis>:
            policies specify access criteria for specific
            operations, possibly with fine-grained control over
            specific attributes.</para>
    </listitem>
    <listitem>
        <para><emphasis role="bold">Resource-based</emphasis>:
            whether access to a specific resource is granted
            according to the permissions configured for the
            resource (currently available only for the network
            resource). The actual authorization policies
            enforced in OpenStack Networking might vary from
            deployment to deployment.</para>
    </listitem>
</itemizedlist>
<para>The policy engine reads entries from the <emphasis
    role="italic">policy.json</emphasis> file. The actual
    location of this file might vary from distribution to
|
@ -75,93 +74,101 @@
|
|||
for instance <code>extension:provider_network:set</code> will
|
||||
be triggered if the attributes defined by the Provider Network
|
||||
extensions are specified in an API request.</para>
|
||||
<para>An authorization policy can be composed by one or more rules. If more rules are specified,
|
||||
evaluation policy will be successful if any of the rules evaluates successfully; if an API
|
||||
operation matches multiple policies, then all the policies must evaluate successfully. Also,
|
||||
authorization rules are recursive. Once a rule is matched, the rule(s) can be resolved to
|
||||
another rule, until a terminal rule is reached.</para>
|
||||
<para>An authorization policy can be composed by one or more
|
||||
rules. If more rules are specified, evaluation policy will be
|
||||
successful if any of the rules evaluates successfully; if an
|
||||
API operation matches multiple policies, then all the policies
|
||||
must evaluate successfully. Also, authorization rules are
|
||||
recursive. Once a rule is matched, the rule(s) can be resolved
|
||||
to another rule, until a terminal rule is reached.</para>
|
||||
<para>The OpenStack Networking policy engine currently defines the
|
||||
following kinds of terminal rules:</para>
|
||||
<para><itemizedlist>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Role-based rules</emphasis>: evaluate successfully if
|
||||
the user submitting the request has the specified role. For instance
|
||||
<code>"role:admin"</code>is successful if the user submitting the request is
|
||||
an administrator.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Field-based rules: </emphasis>evaluate successfully if a
|
||||
field of the resource specified in the current request matches a specific value.
|
||||
For instance <code>"field:networks:shared=True"</code> is successful if the
|
||||
attribute <emphasis role="italic">shared</emphasis> of the <emphasis
|
||||
role="italic">network</emphasis> resource is set to true.</para>
|
||||
</listitem>
|
||||
<listitem>
|
||||
<para><emphasis role="bold">Generic rules:</emphasis> compare an attribute in the
|
||||
resource with an attribute extracted from the user's security credentials and
|
||||
evaluates successfully if the comparison is successful. For instance
|
||||
<code>"tenant_id:%(tenant_id)s"</code> is successful if the tenant
|
||||
identifier in the resource is equal to the tenant identifier of the user
|
||||
submitting the request.</para>
|
||||
</listitem>
|
||||
</itemizedlist> The following is an extract from the default policy.json file: </para>
|
||||
<para><code>{</code></para>
|
||||
<para><code> "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
|
||||
</code><emphasis role="bold">[1]</emphasis></para>
|
||||
<para><code> "admin_or_network_owner": [["role:admin"],
|
||||
["tenant_id:%(network_tenant_id)s"]],</code></para>
|
||||
<para><code> "admin_only": [["role:admin"]], "regular_user": [],</code></para>
|
||||
<para><code> "shared": [["field:networks:shared=True"]],</code></para>
|
||||
<para><code> "default": [["rule:admin_or_owner"]],</code><emphasis role="bold">[2]</emphasis></para>
|
||||
<para><code> "create_subnet": [["rule:admin_or_network_owner"]],</code></para>
|
||||
<para><code> "get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],</code></para>
|
||||
<para><code> "update_subnet": [["rule:admin_or_network_owner"]],</code></para>
|
||||
<para><code> "delete_subnet": [["rule:admin_or_network_owner"]],</code></para>
|
||||
<para><code> "create_network": [],</code></para>
|
||||
<para><code> "get_network": [["rule:admin_or_owner"], ["rule:shared"]],</code><emphasis role="bold">[3]</emphasis></para>
|
||||
<para><code> "create_network:shared": [["rule:admin_only"]], </code><emphasis role="bold">[4]</emphasis>
|
||||
</para>
|
||||
<para><code> "update_network": [["rule:admin_or_owner"]],</code></para>
|
||||
<para><code> "delete_network": [["rule:admin_or_owner"]],</code></para>
|
||||
<para><code> "create_port": [], </code></para>
|
||||
<para><code> "create_port:mac_address": [["rule:admin_or_network_owner"]],</code><emphasis role="bold">[5]</emphasis></para>
|
||||
<para><code> "create_port:fixed_ips": [["rule:admin_or_network_owner"]],</code></para>
|
||||
<para><code> "get_port": [["rule:admin_or_owner"]],</code></para>
|
||||
<para><code> "update_port": [["rule:admin_or_owner"]], </code></para>
|
||||
<para><code> "delete_port": [["rule:admin_or_owner"]]</code></para>
|
||||
<para><code>}</code></para>
|
||||
<para>[1] is a rule which evaluates successfully if the current user is an administrator or the
|
||||
owner of the resource specified in the request (tenant identifier is equal).</para>
|
||||
<para>[2] is the default policy which is always evaluated if an API operation does not match any
|
||||
of the policies in policy.json.</para>
|
||||
<para>[3] This policy will evaluate successfully if either <emphasis role="italic"
|
||||
>admin_or_owner</emphasis>, or <emphasis role="italic">shared</emphasis> evaluates
|
||||
successfully.</para>
|
||||
<para>[4] This policy will restrict the ability of manipulating the <emphasis role="italic"
|
||||
>shared</emphasis> attribute for a network to administrators only.</para>
|
||||
<para>[5] This policy will restrict the ability of manipulating the <emphasis role="italic"
|
||||
>mac_address</emphasis> attribute for a port only to administrators and the owner of the
|
||||
network where the port is attached.</para>
|
||||
<para>In some cases, some operations should be restricted to administrators only; therefore, as
|
||||
a further example, let us consider how this sample policy file should be modified in a
|
||||
scenario where tenants are allowed only to define networks and see their resources, and all
|
||||
the other operations can be performed only in an administrative context:</para>
|
||||
<para><code>{</code></para>
|
||||
<para><code> "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],</code></para>
|
||||
<para><code> "admin_only": [["role:admin"]], "regular_user": [],</code></para>
|
||||
<para><code> "default": [["rule:admin_only"]], </code></para>
|
||||
<para><code> "create_subnet": [["rule:admin_only"]],</code></para>
|
||||
<para><code> "get_subnet": [["rule:admin_or_owner"]],</code></para>
|
||||
<para><code> "update_subnet": [["rule:admin_only"]],</code></para>
|
||||
<para><code> "delete_subnet": [["rule:admin_only"]],</code></para>
|
||||
<para><code> "create_network": [],</code></para>
|
||||
<para><code> "get_network": [["rule:admin_or_owner"]],</code></para>
|
||||
<para><code> "create_network:shared": [["rule:admin_only"]],</code></para>
|
||||
<para><code> "update_network": [["rule:admin_or_owner"]],</code></para>
|
||||
<para><code> "delete_network": [["rule:admin_or_owner"]],</code></para>
|
||||
<para><code> "create_port": [["rule:admin_only"]], </code></para>
|
||||
<para><code> "get_port": [["rule:admin_or_owner"]],</code></para>
|
||||
<para><code> "update_port": [["rule:admin_only"]], </code></para>
|
||||
<para><code> "delete_port": [["rule:admin_only"]]</code></para>
|
||||
<para><code>}</code></para>
|
||||
<itemizedlist>
    <listitem>
        <para><emphasis role="bold">Role-based rules</emphasis>:
            evaluate successfully if the user submitting the
            request has the specified role. For instance,
            <code>"role:admin"</code> is successful if the user
            submitting the request is an administrator.</para>
    </listitem>
    <listitem>
        <para><emphasis role="bold">Field-based rules</emphasis>:
            evaluate successfully if a field of the
            resource specified in the current request matches a
            specific value. For instance,
            <code>"field:networks:shared=True"</code> is
            successful if the <emphasis role="italic"
            >shared</emphasis> attribute of the <emphasis
            role="italic">network</emphasis> resource is set to
            true.</para>
    </listitem>
    <listitem>
        <para><emphasis role="bold">Generic rules</emphasis>:
            compare an attribute in the resource with an attribute
            extracted from the user's security credentials and
            evaluate successfully if the comparison is
            successful. For instance,
            <code>"tenant_id:%(tenant_id)s"</code> is
            successful if the tenant identifier in the resource is
            equal to the tenant identifier of the user submitting
            the request.</para>
    </listitem>
</itemizedlist>
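These three rule types combine in policy.json as lists of AND-lists that are OR-ed together. The following is only an illustrative sketch of that evaluation model, not the actual Neutron policy engine; the function names are invented for this example, and only role-based and generic rules are handled:

```python
# Illustrative sketch of policy.json rule evaluation -- NOT the real
# Neutron policy engine. A rule is a list of AND-lists; the rule passes
# if any AND-list passes, and an empty rule matches every user.

def check_match(match, credentials, resource):
    kind, _, value = match.partition(":")
    if kind == "role":
        # role-based rule: the user must hold the named role
        return value in credentials.get("roles", [])
    # generic rule: compare a resource attribute against a value
    # interpolated from the user's credentials, e.g. "%(tenant_id)s"
    return str(resource.get(kind)) == value % credentials

def evaluate(rule, credentials, resource):
    if not rule:  # e.g. "create_network": [] -- anyone may call it
        return True
    return any(all(check_match(m, credentials, resource) for m in and_list)
               for and_list in rule)

admin_or_owner = [["role:admin"], ["tenant_id:%(tenant_id)s"]]
creds = {"roles": ["member"], "tenant_id": "abc123"}
print(evaluate(admin_or_owner, creds, {"tenant_id": "abc123"}))  # True: owner
print(evaluate(admin_or_owner, creds, {"tenant_id": "other"}))   # False
```

The sketch shows why `"admin_or_owner"` in the sample file below grants access to either an administrator or the tenant that owns the resource.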
<para>The following is an extract from the default policy.json
    file:</para>
<example xml:id="xml_example">
    <title>policy.json file</title>
    <programlistingco>
        <areaspec>
            <area xml:id="policy.json.rule1" units="linecolumn"
                coords="2 23"/>
            <area xml:id="policy.json.rule2" units="linecolumn"
                coords="31 16"/>
            <area xml:id="policy.json.rule3" units="linecolumn"
                coords="62 20"/>
            <area xml:id="policy.json.rule4" units="linecolumn"
                coords="70 30"/>
            <area xml:id="policy.json.rule5" units="linecolumn"
                coords="88 32"/>
        </areaspec>
        <programlisting language="json"><xi:include href="samples/policy.json" parse="text"/></programlisting>
    </programlistingco>
</example>
<calloutlist>
    <callout arearefs="policy.json.rule1">
        <para>A rule that evaluates successfully if the current
            user is an administrator or the owner of the resource
            specified in the request (that is, the tenant
            identifiers match).</para>
    </callout>
    <callout arearefs="policy.json.rule2">
        <para>The default policy, which is always evaluated if an
            API operation does not match any of the policies in
            policy.json.</para>
    </callout>
    <callout arearefs="policy.json.rule3">
        <para>This policy evaluates successfully if either
            <literal>admin_or_owner</literal> or
            <literal>shared</literal> evaluates
            successfully.</para>
    </callout>
    <callout arearefs="policy.json.rule4">
        <para>This policy restricts the ability to manipulate
            the <literal>shared</literal> attribute for a network
            to administrators only.</para>
    </callout>
    <callout arearefs="policy.json.rule5">
        <para>This policy restricts the ability to manipulate
            the <literal>mac_address</literal> attribute for a
            port to administrators and the owner of the
            network to which the port is attached.</para>
    </callout>
</calloutlist>
<para>In some cases, operations should be restricted to
    administrators only. As a further example, consider how
    this sample policy file could be modified for a scenario
    where tenants can only define networks and view their own
    resources, and all other operations require an
    administrative context:</para>
<programlisting language="json"><xi:include href="samples/policy2.json" parse="text"/></programlisting>
</chapter>
@ -0,0 +1,113 @@
{
    "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
    "admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
    "admin_only": [["role:admin"]],
    "regular_user": [],
    "shared": [["field:networks:shared=True"]],
    "default": [["rule:admin_or_owner"]],
    "create_subnet": [["rule:admin_or_network_owner"]],
    "get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
    "update_subnet": [["rule:admin_or_network_owner"]],
    "delete_subnet": [["rule:admin_or_network_owner"]],
    "create_network": [],
    "get_network": [["rule:admin_or_owner"], ["rule:shared"]],
    "create_network:shared": [["rule:admin_only"]],
    "update_network": [["rule:admin_or_owner"]],
    "delete_network": [["rule:admin_or_owner"]],
    "create_port": [],
    "create_port:mac_address": [["rule:admin_or_network_owner"]],
    "create_port:fixed_ips": [["rule:admin_or_network_owner"]],
    "get_port": [["rule:admin_or_owner"]],
    "update_port": [["rule:admin_or_owner"]],
    "delete_port": [["rule:admin_or_owner"]]
}
@ -0,0 +1,86 @@
{
    "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
    "admin_only": [["role:admin"]],
    "regular_user": [],
    "default": [["rule:admin_only"]],
    "create_subnet": [["rule:admin_only"]],
    "get_subnet": [["rule:admin_or_owner"]],
    "update_subnet": [["rule:admin_only"]],
    "delete_subnet": [["rule:admin_only"]],
    "create_network": [],
    "get_network": [["rule:admin_or_owner"]],
    "create_network:shared": [["rule:admin_only"]],
    "update_network": [["rule:admin_or_owner"]],
    "delete_network": [["rule:admin_or_owner"]],
    "create_port": [["rule:admin_only"]],
    "get_port": [["rule:admin_or_owner"]],
    "update_port": [["rule:admin_only"]],
    "delete_port": [["rule:admin_only"]]
}