diff --git a/doc/src/docbkx/common/keystone-concepts.xml b/doc/src/docbkx/common/keystone-concepts.xml index 132de50106..64fd143219 100644 --- a/doc/src/docbkx/common/keystone-concepts.xml +++ b/doc/src/docbkx/common/keystone-concepts.xml @@ -5,44 +5,47 @@ xml:id="keystone-concepts"> Identity Service Concepts - The Identity service performs the following functions: + The Identity service performs the following + functions: - User management. Tracks users and their permissions. + User management. Tracks users and their + permissions. Service catalog. Provides a catalog of available - services with their API endpoints. + services with their API endpoints. - To understand the Identity Service, you must understand the - following concepts: + To understand the Identity Service, you must understand the + following concepts: User - Digital representation of a person, system, or service - who uses OpenStack cloud services. Identity authentication - services will validate that incoming request are being made - by the user who claims to be making the call. Users have a - login and may be assigned tokens to access resources. Users - may be directly assigned to a particular tenant and behave - as if they are contained in that tenant. - + Digital representation of a person, system, or + service who uses OpenStack cloud services. + Identity authentication services will validate + that incoming requests are being made by the user + who claims to be making the call. Users have a + login and may be assigned tokens to access + resources. Users may be directly assigned to a + particular tenant and behave as if they are + contained in that tenant. Credentials Data that is known only by a user that proves - who they are. In the Identity Service, examples - are: + who they are. In the Identity Service, examples + are: - Username and password + Username and password - Username and API key + Username and API key An authentication token provided by the @@ -61,7 +64,8 @@ and password or a username and API key. In response to these credentials, the Identity Service issues the user an authentication token, - which the user provides in subsequent requests. + which the user provides in subsequent + requests. @@ -71,14 +75,14 @@ resources. Each token has a scope which describes which resources are accessible with it. A token may be revoked at anytime and is valid for a - finite duration. + finite duration. While the Identity Service supports token-based authentication in this release, the intention is for it to support additional protocols in the future. The intent is for it to be an integration service foremost, and not aspire to be a full-fledged identity store and management - solution. + solution. @@ -87,7 +91,7 @@ A container used to group or isolate resources and/or identity objects. Depending on the service operator, a tenant may map to a customer, account, - organization, or project. + organization, or project. @@ -96,7 +100,8 @@ An OpenStack service, such as Compute (Nova), Object Storage (Swift), or Image Service (Glance). Provides one or more endpoints through which users - can access resources and perform operations. + can access resources and perform + operations. @@ -107,7 +112,7 @@ an extension for templates, you can create an endpoint template, which represents the templates of all the consumable services that are available - across the regions. + across the regions. @@ -117,46 +122,46 @@ them to perform a specific set of operations. A role includes a set of rights and privileges.
A user assuming that role inherits those rights and - privileges. + privileges. In the Identity Service, a token that is issued to a user includes the list of roles that user can assume. Services that are being called by that user determine how they interpret the set of roles a user has and which operations or resources each - role grants access to. + role grants access to. - - - - - - - - - - + + + + + + + +
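To make the credential-and-token exchange described above concrete, here is a minimal sketch; the endpoint URL is a placeholder, and the user and tenant names anticipate the examples in the next section. A user trades a username and password for a token with the Identity v2.0 API:
$ curl -s -H "Content-Type: application/json" -d '{"auth": {"tenantName": "acme", "passwordCredentials": {"username": "alice", "password": "mypassword123"}}}' http://keystone.example.com:5000/v2.0/tokens
The token id returned under access.token.id is then sent in the X-Auth-Token header of subsequent requests.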
User management - The main components of Identity user management are: - - Users - - - Tenants - - - Roles - - + The main components of Identity user management + are: + + + Users + + + Tenants + + + Roles + + A user represents a human user, and has associated information such as username, password and - email. This example creates a user named "alice": + email. This example creates a user named "alice": $ keystone user-create --name=alice --pass=mypassword123 --email=alice@example.com A tenant can be a project, group, or organization. Whenever you make requests to OpenStack @@ -164,7 +169,7 @@ query the Compute service for a list of running instances, you will receive a list of all of the running instances in the tenant you specified in your query. This example - creates a tenant named "acme": + creates a tenant named "acme": $ keystone tenant-create --name=acme Because the term project was @@ -173,22 +178,22 @@ use --project_id instead of --tenant-id or --os-tenant-id to refer to a - tenant ID. + tenant ID. A role captures what operations a user is permitted to perform in a given tenant. This - example creates a role named "compute-user": + example creates a role named "compute-user": $ keystone role-create --name=compute-user It is up to individual services such as the Compute service and Image service to assign meaning to these roles. As far as the Identity service is concerned, a - role is simply a name. + role is simply a name. The Identity service associates a user with a tenant and a role. To continue with our previous examples, we may wish to assign the "alice" user the "compute-user" role in - the "acme" tenant: + the "acme" tenant: $ keystone user-list +--------+---------+-------------------+--------+ | id | enabled | email | name | @@ -211,7 +216,7 @@ A user can be assigned different roles in different tenants: for example, Alice may also have the "admin" role in the "Cyberdyne" tenant. A user can also be assigned - multiple roles in the same tenant. + multiple roles in the same tenant. The /etc/[SERVICE_CODENAME]/policy.json controls what users are allowed to do for a given service. @@ -220,136 +225,47 @@ /etc/glance/policy.json specifies the access policy for the Image service, and /etc/keystone/policy.json - specifies the access policy for the Identity service. + specifies the access policy for the Identity + service. The default policy.json files in the Compute, Identity, and Image service recognize only the admin role: all operations that do not require the admin role will be - accessible by any user that has any role in a tenant. + accessible by any user that has any role in a + tenant. If you wish to restrict users from performing operations in, say, the Compute service, you need to create a role in the Identity service and then modify /etc/nova/policy.json so that - this role is required for Compute operations. + this role is required for Compute operations. For example, this line in /etc/nova/policy.json specifies that there are no restrictions on which users can create volumes: if the user has any role in a tenant, they will - be able to create volumes in that tenant. - "volume:create": [], + be able to create volumes in that tenant. 
+ { + "volume:create":[ + + ] +} If we wished to restrict creation of volumes to users who had the compute-user role in a particular tenant, we would add - "role:compute-user", like so: - "volume:create": ["role:compute-user"], - - If we wished to restrict all Compute service requests to require - this role, the resulting file would look like: - + "role:compute-user", like + so: { - "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]], - "default": [["rule:admin_or_owner"]], - - "compute:create": ["role":"compute-user"], - "compute:create:attach_network": ["role":"compute-user"], - "compute:create:attach_volume": ["role":"compute-user"], - "compute:get_all": ["role":"compute-user"], - - "admin_api": [["role:admin"]], - "compute_extension:accounts": [["rule:admin_api"]], - "compute_extension:admin_actions": [["rule:admin_api"]], - "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:lock": [["rule:admin_api"]], - "compute_extension:admin_actions:unlock": [["rule:admin_api"]], - "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]], - "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]], - "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]], - "compute_extension:admin_actions:migrate": [["rule:admin_api"]], - "compute_extension:aggregates": [["rule:admin_api"]], - "compute_extension:certificates": ["role":"compute-user"], - "compute_extension:cloudpipe": [["rule:admin_api"]], - "compute_extension:console_output": ["role":"compute-user"], - "compute_extension:consoles": ["role":"compute-user"], - "compute_extension:createserverext": ["role":"compute-user"], - "compute_extension:deferred_delete": ["role":"compute-user"], - "compute_extension:disk_config": ["role":"compute-user"], - "compute_extension:evacuate": [["rule:admin_api"]], - "compute_extension:extended_server_attributes": [["rule:admin_api"]], - "compute_extension:extended_status": ["role":"compute-user"], - "compute_extension:flavorextradata": ["role":"compute-user"], - "compute_extension:flavorextraspecs": ["role":"compute-user"], - "compute_extension:flavormanage": [["rule:admin_api"]], - "compute_extension:floating_ip_dns": ["role":"compute-user"], - "compute_extension:floating_ip_pools": ["role":"compute-user"], - "compute_extension:floating_ips": ["role":"compute-user"], - "compute_extension:hosts": [["rule:admin_api"]], - "compute_extension:keypairs": ["role":"compute-user"], - "compute_extension:multinic": ["role":"compute-user"], - "compute_extension:networks": [["rule:admin_api"]], - "compute_extension:quotas": ["role":"compute-user"], - "compute_extension:rescue": ["role":"compute-user"], - "compute_extension:security_groups": ["role":"compute-user"], - "compute_extension:server_action_list": [["rule:admin_api"]], - "compute_extension:server_diagnostics": [["rule:admin_api"]], - "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]], - "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]], - "compute_extension:users": [["rule:admin_api"]], - "compute_extension:virtual_interfaces": ["role":"compute-user"], - "compute_extension:virtual_storage_arrays": ["role":"compute-user"], - 
"compute_extension:volumes": ["role":"compute-user"], - "compute_extension:volumetypes": ["role":"compute-user"], - - "volume:create": ["role":"compute-user"], - "volume:get_all": ["role":"compute-user"], - "volume:get_volume_metadata": ["role":"compute-user"], - "volume:get_snapshot": ["role":"compute-user"], - "volume:get_all_snapshots": ["role":"compute-user"], - - "network:get_all_networks": ["role":"compute-user"], - "network:get_network": ["role":"compute-user"], - "network:delete_network": ["role":"compute-user"], - "network:disassociate_network": ["role":"compute-user"], - "network:get_vifs_by_instance": ["role":"compute-user"], - "network:allocate_for_instance": ["role":"compute-user"], - "network:deallocate_for_instance": ["role":"compute-user"], - "network:validate_networks": ["role":"compute-user"], - "network:get_instance_uuids_by_ip_filter": ["role":"compute-user"], - - "network:get_floating_ip": ["role":"compute-user"], - "network:get_floating_ip_pools": ["role":"compute-user"], - "network:get_floating_ip_by_address": ["role":"compute-user"], - "network:get_floating_ips_by_project": ["role":"compute-user"], - "network:get_floating_ips_by_fixed_address": ["role":"compute-user"], - "network:allocate_floating_ip": ["role":"compute-user"], - "network:deallocate_floating_ip": ["role":"compute-user"], - "network:associate_floating_ip": ["role":"compute-user"], - "network:disassociate_floating_ip": ["role":"compute-user"], - - "network:get_fixed_ip": ["role":"compute-user"], - "network:add_fixed_ip_to_instance": ["role":"compute-user"], - "network:remove_fixed_ip_from_instance": ["role":"compute-user"], - "network:add_network_to_project": ["role":"compute-user"], - "network:get_instance_nw_info": ["role":"compute-user"], - - "network:get_dns_domains": ["role":"compute-user"], - "network:add_dns_entry": ["role":"compute-user"], - "network:modify_dns_entry": ["role":"compute-user"], - "network:delete_dns_entry": ["role":"compute-user"], - "network:get_dns_entries_by_address": ["role":"compute-user"], - "network:get_dns_entries_by_name": ["role":"compute-user"], - "network:create_private_dns_domain": ["role":"compute-user"], - "network:create_public_dns_domain": ["role":"compute-user"], - "network:delete_dns_domain": ["role":"compute-user"] -} + "volume:create":[ + "role:compute-user" + ] +} + If we wished to restrict all Compute service requests + to require this role, the resulting file would look like: +
Service management - The Identity Service provides the following service - management functions: + The Identity Service provides the following service + management functions: Services @@ -362,8 +278,8 @@ corresponds to each service (such as, a user named nova, for the Compute service) and a special service tenant, which is called - service. + service. The commands for creating services and endpoints are - described in a later section. + described in a later section.
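To complete the earlier user-management example, the "compute-user" role is granted to "alice" in the "acme" tenant with a command along these lines (a sketch: the IDs are placeholders taken from the listings above, and flag spellings vary across python-keystoneclient versions):
$ keystone user-role-add --user-id=<alice-id> --role-id=<compute-user-id> --tenant-id=<acme-id>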
diff --git a/doc/src/docbkx/common/samples/roles.json b/doc/src/docbkx/common/samples/roles.json new file mode 100644 index 0000000000..b66ae1d59a --- /dev/null +++ b/doc/src/docbkx/common/samples/roles.json @@ -0,0 +1,331 @@ +{ + "admin_or_owner":[ + [ + "role:admin" + ], + [ + "project_id:%(project_id)s" + ] + ], + "default":[ + [ + "rule:admin_or_owner" + ] + ], + "compute:create":[ + "role:compute-user" + ], + "compute:create:attach_network":[ + "role:compute-user" + ], + "compute:create:attach_volume":[ + "role:compute-user" + ], + "compute:get_all":[ + "role:compute-user" + ], + "admin_api":[ + [ + "role:admin" + ] + ], + "compute_extension:accounts":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:pause":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:unpause":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:suspend":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:resume":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:lock":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:unlock":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:resetNetwork":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:injectNetworkInfo":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:createBackup":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:admin_actions:migrateLive":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:admin_actions:migrate":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:aggregates":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:certificates":[ + "role:compute-user" + ], + "compute_extension:cloudpipe":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:console_output":[ + "role:compute-user" + ], + "compute_extension:consoles":[ + "role:compute-user" + ], + "compute_extension:createserverext":[ + "role:compute-user" + ], + "compute_extension:deferred_delete":[ + "role:compute-user" + ], + "compute_extension:disk_config":[ + "role:compute-user" + ], + "compute_extension:evacuate":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:extended_server_attributes":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:extended_status":[ + "role:compute-user" + ], + "compute_extension:flavorextradata":[ + "role:compute-user" + ], + "compute_extension:flavorextraspecs":[ + "role:compute-user" + ], + "compute_extension:flavormanage":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:floating_ip_dns":[ + "role:compute-user" + ], + "compute_extension:floating_ip_pools":[ + "role:compute-user" + ], + "compute_extension:floating_ips":[ + "role:compute-user" + ], + "compute_extension:hosts":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:keypairs":[ + "role:compute-user" + ], + "compute_extension:multinic":[ + "role:compute-user" + ], + "compute_extension:networks":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:quotas":[ + "role:compute-user" + ], + "compute_extension:rescue":[ + "role:compute-user" + ], + "compute_extension:security_groups":[ + "role:compute-user" + ], + "compute_extension:server_action_list":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:server_diagnostics":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:simple_tenant_usage:show":[ + [ + "rule:admin_or_owner" + ] + ], + "compute_extension:simple_tenant_usage:list":[ + [ + "rule:admin_api" + ] + ], 
+ "compute_extension:users":[ + [ + "rule:admin_api" + ] + ], + "compute_extension:virtual_interfaces":[ + "role:compute-user" + ], + "compute_extension:virtual_storage_arrays":[ + "role:compute-user" + ], + "compute_extension:volumes":[ + "role:compute-user" + ], + "compute_extension:volumetypes":[ + "role:compute-user" + ], + "volume:create":[ + "role:compute-user" + ], + "volume:get_all":[ + "role:compute-user" + ], + "volume:get_volume_metadata":[ + "role:compute-user" + ], + "volume:get_snapshot":[ + "role:compute-user" + ], + "volume:get_all_snapshots":[ + "role:compute-user" + ], + "network:get_all_networks":[ + "role:compute-user" + ], + "network:get_network":[ + "role:compute-user" + ], + "network:delete_network":[ + "role:compute-user" + ], + "network:disassociate_network":[ + "role:compute-user" + ], + "network:get_vifs_by_instance":[ + "role:compute-user" + ], + "network:allocate_for_instance":[ + "role:compute-user" + ], + "network:deallocate_for_instance":[ + "role:compute-user" + ], + "network:validate_networks":[ + "role:compute-user" + ], + "network:get_instance_uuids_by_ip_filter":[ + "role:compute-user" + ], + "network:get_floating_ip":[ + "role:compute-user" + ], + "network:get_floating_ip_pools":[ + "role:compute-user" + ], + "network:get_floating_ip_by_address":[ + "role:compute-user" + ], + "network:get_floating_ips_by_project":[ + "role:compute-user" + ], + "network:get_floating_ips_by_fixed_address":[ + "role:compute-user" + ], + "network:allocate_floating_ip":[ + "role:compute-user" + ], + "network:deallocate_floating_ip":[ + "role:compute-user" + ], + "network:associate_floating_ip":[ + "role:compute-user" + ], + "network:disassociate_floating_ip":[ + "role:compute-user" + ], + "network:get_fixed_ip":[ + "role:compute-user" + ], + "network:add_fixed_ip_to_instance":[ + "role:compute-user" + ], + "network:remove_fixed_ip_from_instance":[ + "role:compute-user" + ], + "network:add_network_to_project":[ + "role:compute-user" + ], + "network:get_instance_nw_info":[ + "role:compute-user" + ], + "network:get_dns_domains":[ + "role:compute-user" + ], + "network:add_dns_entry":[ + "role:compute-user" + ], + "network:modify_dns_entry":[ + "role:compute-user" + ], + "network:delete_dns_entry":[ + "role:compute-user" + ], + "network:get_dns_entries_by_address":[ + "role:compute-user" + ], + "network:get_dns_entries_by_name":[ + "role:compute-user" + ], + "network:create_private_dns_domain":[ + "role:compute-user" + ], + "network:create_public_dns_domain":[ + "role:compute-user" + ], + "network:delete_dns_domain":[ + "role:compute-user" + ] +} \ No newline at end of file diff --git a/doc/src/docbkx/openstack-compute-admin/computescheduler.xml b/doc/src/docbkx/openstack-compute-admin/computescheduler.xml index 4553f49858..5b21bdd1f5 100644 --- a/doc/src/docbkx/openstack-compute-admin/computescheduler.xml +++ b/doc/src/docbkx/openstack-compute-admin/computescheduler.xml @@ -13,35 +13,36 @@ scheduler is configurable through a variety of options. 
Compute is configured with the following default scheduler options: - -scheduler_driver=nova.scheduler.multi.MultiScheduler + scheduler_driver=nova.scheduler.multi.MultiScheduler volume_scheduler_driver=nova.scheduler.chance.ChanceScheduler compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler scheduler_available_filters=nova.scheduler.filters.all_filters scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter scheduler_weight_classes=nova.scheduler.weights.all_weighers -ram_weight_multiplier=1.0 - +ram_weight_multiplier=1.0 - Compute. Configured to use the multi-scheduler, - which allows the admin to specify different - scheduling behavior for compute requests versus volume - requests. + Compute. Configured to use the multi-scheduler, + which allows the admin to specify different scheduling + behavior for compute requests versus volume + requests. + Volume scheduler. Configured as a chance scheduler, which picks a host at random that has the - cinder-volume service running. - + cinder-volume service + running. + Compute scheduler. Configured as a filter scheduler, described in detail in the next section. In the - default configuration, this scheduler will only consider hosts - that are in the requested availability zone - (AvailabilityZoneFilter), that have - sufficient RAM available (RamFilter), and - that are actually capable of servicing the request - (ComputeFilter). + default configuration, this scheduler will only + consider hosts that are in the requested availability + zone (AvailabilityZoneFilter), that + have sufficient RAM available + (RamFilter), and that are + actually capable of servicing the request + (ComputeFilter).
@@ -54,7 +55,7 @@ ram_weight_multiplier=1.0 created. This Scheduler can only be used for scheduling compute requests, not volume requests, i.e. it can only be used with the compute_scheduler_driver - configuration option. + configuration option.
@@ -65,47 +66,41 @@ ram_weight_multiplier=1.0 resource. Filters are binary: either a host is accepted by the filter, or it is rejected. Hosts that are accepted by the filter are then processed by a different algorithm to - decide which hosts to use for that request, described in the - Weights section.
- Filtering - - - - - -
- + decide which hosts to use for that request, described in + the Weights section. +
+ Filtering + + + + + +
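As an illustrative Python sketch of the flow in the Filtering figure above (the names are invented for illustration; this is not the actual nova scheduler code):

def select_host(hosts, request, filters, weigh):
    # Filtering: keep only the hosts that every enabled filter accepts.
    candidates = [host for host in hosts
                  if all(f.host_passes(host, request) for f in filters)]
    # Weighting: the surviving host with the largest weight wins;
    # an empty candidate list means the request cannot be scheduled.
    return max(candidates, key=weigh)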
The scheduler_available_filters configuration option in nova.conf provides the Compute service with the list of the filters that will be used by the scheduler. The default setting specifies all of the filter that are included with the - Compute service: - -scheduler_available_filters=nova.scheduler.filters.all_filters - - This configuration option can be specified multiple times. - For example, if you implemented your own custom filter in - Python called myfilter.MyFilter and you - wanted to use both the built-in filters and your custom - filter, your nova.conf file would - contain: - -scheduler_available_filters=nova.scheduler.filters.all_filters -scheduler_available_filters=myfilter.MyFilter - + Compute service: + scheduler_available_filters=nova.scheduler.filters.all_filters + This configuration option can be specified multiple + times. For example, if you implemented your own custom + filter in Python called + myfilter.MyFilter and you wanted to + use both the built-in filters and your custom filter, your + nova.conf file would contain: + scheduler_available_filters=nova.scheduler.filters.all_filters +scheduler_available_filters=myfilter.MyFilter The scheduler_default_filters configuration option in nova.conf defines the list of filters that will be applied by the nova-scheduler service. As mentioned above, the default filters are: - -scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter - + scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter The available filters are described below. @@ -120,15 +115,14 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
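A minimal sketch of what the myfilter.MyFilter mentioned above might contain, assuming the BaseHostFilter interface that the built-in filters of this release use (treat the import path and method signature as assumptions):

# Hedged sketch of a custom scheduler filter.
from nova.scheduler import filters

class MyFilter(filters.BaseHostFilter):
    """Accept only hosts reporting at least 1 GB of free RAM."""

    def host_passes(self, host_state, filter_properties):
        # host_state is the per-host resource view the scheduler maintains.
        return host_state.free_ram_mb >= 1024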
AggregateMultiTenancyIsolation - Isolates tenants to specifichost aggregates. If a host is in an - aggregate that has the metadata key filter_tenant_id - it will only create instances from that tenant (or list of tenants). - A host can be - in different aggregates. If a host does not belong to an - aggregate with the metadata key, it can create instances from - all tenants. - + Isolates tenants to specific host aggregates. + If a host is in an aggregate that has the metadata key + filter_tenant_id, it will only + create instances from that tenant (or list of + tenants). A host can be in different aggregates. If a + host does not belong to an aggregate with the metadata + key, it can create instances from all tenants.
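A hedged sketch of putting this filter to work (the aggregate name, availability zone, host name, and tenant ID are placeholders): create an aggregate, add a host to it, and set the filter_tenant_id metadata key:
$ nova aggregate-create tenant1-hosts nova
$ nova aggregate-add-host 1 compute-01
$ nova aggregate-set-metadata 1 filter_tenant_id=<tenant-id>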
@@ -139,28 +133,31 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
AvailabilityZoneFilter - Filters hosts by availability zone. This filter - must be enabled for the scheduler to respect - availability zones in requests. + Filters hosts by availability zone. This filter must + be enabled for the scheduler to respect availability + zones in requests.
ComputeCapabilitiesFilter - Matches properties defined in an instance type's extra specs - against compute capabilities. - If an extra specs key contains a colon ":", anything before - the colon is treated as a namespace, and anything after the - colon is treated as the key to be matched. If a namespace is - present and is not 'capabilities', it is ignored by this - filter. - Disable the ComputeCapabilitiesFilter when using a - Bare Metal configuration, due to - bug 1129485 + Matches properties defined in an instance type's + extra specs against compute capabilities. + If an extra specs key contains a colon ":", anything + before the colon is treated as a namespace, and + anything after the colon is treated as the key to be + matched. If a namespace is present and is not + 'capabilities', it is ignored by this filter. + + Disable the ComputeCapabilitiesFilter when using + a Bare Metal configuration, due to bug 1129485 +
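As a hedged illustration of the namespace convention above (the flavor name and the chosen capability key are placeholders, not a recommendation), an extra spec in the capabilities namespace can be set on a flavor with:
$ nova flavor-key m1.small set capabilities:hypervisor_type=QEMU
A key in a different namespace, such as quota:cpu_shares, would be ignored by this filter.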
ComputeFilter - Filters hosts by flavor (also known as instance + Filters hosts by flavor (also known as instance type) and image properties. The scheduler will check to ensure that a compute host has sufficient capabilities to run a virtual machine instance that @@ -170,24 +167,25 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter the filter checks for are: - architecture: Architecture describes the - machine architecture required by the image. - Examples are i686, x86_64, arm, and - ppc64. + architecture: + Architecture describes the machine + architecture required by the image. Examples + are i686, x86_64, arm, and ppc64. - hypervisor_type: Hypervisor type describes - the hypervisor required by the image. Examples - are xen, kvm, qemu, xenapi, and - powervm. + hypervisor_type: + Hypervisor type describes the hypervisor + required by the image. Examples are xen, kvm, + qemu, xenapi, and powervm. - vm_mode: Virtual machine mode describes the - hypervisor application binary interface (ABI) - required by the image. Examples are 'xen' for - Xen 3.0 paravirtual ABI, 'hvm' for native ABI, - 'uml' for User Mode Linux paravirtual ABI, exe - for container virt executable ABI. + vm_mode: Virtual machine + mode describes the hypervisor application + binary interface (ABI) required by the image. + Examples are 'xen' for Xen 3.0 paravirtual + ABI, 'hvm' for native ABI, 'uml' for User Mode + Linux paravirtual ABI, exe for container virt + executable ABI. In general, this filter should always be enabled. @@ -196,27 +194,23 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter
CoreFilter - Only schedule instances on hosts if there are + Only schedule instances on hosts if there are sufficient CPU cores available. If this filter is not set, the scheduler may over provision a host based on cores (i.e., the virtual cores running on an instance - may exceed the physical cores). + may exceed the physical cores). This filter can be configured to allow a fixed amount of vCPU overcommitment by using the cpu_allocation_ratio Configuration option in nova.conf. The default setting is: - - cpu_allocation_ratio=16.0 - + cpu_allocation_ratio=16.0 With this setting, if there are 8 vCPUs on a node, the scheduler will allow instances up to 128 vCPU to be - run on that node. + run on that node. To disallow vCPU overcommitment set: - - cpu_allocation_ratio=1.0 - + cpu_allocation_ratio=1.0
@@ -228,57 +222,37 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter list of instance uuids as the value. This filter is the opposite of the SameHostFilter. Using the nova command-line tool, - use the --hint flag. For example: - -$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1 - - With the API, use the + use the --hint flag. For + example: + $ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1 + With the API, use the os:scheduler_hints key. For - example: - - { - 'server': { - 'name': 'server-1', - 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', - 'flavorRef': '1' - }, - 'os:scheduler_hints': { - 'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1', - '8c19174f-4220-44f0-824a-cd1eeef10287'], - } -} - - + example: +
-
- DiskFilter - Only schedule instances on hosts if there are - sufficient Disk available for ephemeral storage. - - This filter can be configured to allow a fixed - amount of disk overcommitment by using the - disk_allocation_ratio - Configuration option in - nova.conf. The default setting - is: - - disk_allocation_ratio=1.0 - - - - Adjusting this value to be greater than 1.0 will allow - scheduling instances while over committing disk resources on - the node. This may be desirable if you use an image format - that is sparse or copy on write such that each virtual - instance does not require a 1:1 allocation of virtual disk - to physical storage. - -
+ DiskFilter + Only schedule instances on hosts if there is + sufficient disk space available for ephemeral + storage. + This filter can be configured to allow a fixed + amount of disk overcommitment by using the + disk_allocation_ratio + configuration option in + nova.conf. The default setting + is: + disk_allocation_ratio=1.0 + Adjusting this value to be greater than 1.0 will + allow scheduling instances while overcommitting disk + resources on the node. This may be desirable if you + use an image format that is sparse or copy on write + such that each virtual instance does not require a 1:1 + allocation of virtual disk to physical storage. +
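As a worked illustration: with the overcommit setting below, a node reporting 100GB of free disk could accept instances requesting up to 200GB of ephemeral disk in total.
disk_allocation_ratio=2.0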
GroupAntiAffinityFilter - The GroupAntiAffinityFilter ensures that each + The GroupAntiAffinityFilter ensures that each instance in a group is on a different host. To take advantage of this filter, the requester must pass a scheduler hint, using group as the @@ -300,29 +274,24 @@ scheduler_default_filters=AvailabilityZoneFilter,RamFilter,ComputeFilter and virtual machine mode. E.g., an instance might require a host that runs an ARM-based processor and QEMU as the hypervisor. An image can be decorated with - these properties using - glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu - + these properties: + $ glance image-update img-uuid --property architecture=arm --property hypervisor_type=qemu
-
IsolatedHostsFilter - Allows the admin to define a special (isolated) set + Allows the admin to define a special (isolated) set of images and a special (isolated) set of hosts, such that the isolated images can only run on the isolated hosts, and the isolated hosts can only run isolated - images. - The admin must specify the isolated set of images + images. + The admin must specify the isolated set of images and hosts in the nova.conf file using the isolated_hosts and isolated_images configuration - options. For example: - -isolated_hosts=server1,server2 -isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09 - - + options. For example: + isolated_hosts=server1,server2 +isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09
@@ -380,18 +349,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1 1 --hint query='[">=","$free_ram_mb",1024]' server1 With the API, use the os:scheduler_hints key: - - { - 'server': { - 'name': 'server-1', - 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', - 'flavorRef': '1' - }, - 'os:scheduler_hints': { - 'query': '[">=","$free_ram_mb",1024]', - } -} - +
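Query clauses can also be combined; the following sketch assumes the and operator and the $free_disk_mb variable supported by JsonFilter queries of this form (the instance name is a placeholder):
$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint query='["and",[">=","$free_ram_mb",1024],[">=","$free_disk_mb",204800]]' server-2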
@@ -406,13 +364,11 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1 ram_allocation_ratio configuration option in nova.conf. The default setting - is: - -ram_allocation_ratio=1.5 - - With this setting, if there is 1GB of free RAM, the + is: + ram_allocation_ratio=1.5 + With this setting, if there is 1GB of free RAM, the scheduler will allow instances up to size 1.5GB to be - run on that instance. + run on that host.
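To disallow RAM overcommitment entirely, set the ratio to 1.0, by analogy with the CoreFilter setting above:
ram_allocation_ratio=1.0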
@@ -443,21 +399,8 @@ ram_allocation_ratio=1.5 $ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1 With the API, use the - os:scheduler_hints key: - - { - 'server': { - 'name': 'server-1', - 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', - 'flavorRef': '1' - }, - 'os:scheduler_hints': { - 'same_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1', - '8c19174f-4220-44f0-824a-cd1eeef10287'], - } -} - - + os:scheduler_hints key: +
@@ -486,48 +429,31 @@ ram_allocation_ratio=1.5 command-line tool, use the --hint flag. For example, to specify the IP subnet 192.168.1.1/24 - -$ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1 - + $ nova boot --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint build_near_host_ip=192.168.1.1 --hint cidr=/24 server-1 With the API, use the os:scheduler_hints key: - - { - 'server': { - 'name': 'server-1', - 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', - 'flavorRef': '1' - }, - 'os:scheduler_hints': { - 'build_near_host_ip': '192.168.1.1', - 'cidr': '24' - } -} - +
- Weights - - The filter scheduler uses the scheduler_weight_classes + Weights + The filter scheduler uses the + scheduler_weight_classes configuration parameter to calculate the weights of hosts. - The value of this parameter defaults to - nova.scheduler.weights.all_weighers, - which selects the only weigher available -- the RamWeigher. - Hosts are then weighed and sorted with the largest weight winning. - - -scheduler_weight_classes=nova.scheduler.weights.all_weighers -ram_weight_multiplier=1.0 - - - The default behavior is to spread instances across all hosts evenly. - Set the ram_weight_multiplier configuration - parameter to a negative number if you prefer stacking instead of spreading. - - + The value of this parameter defaults to + nova.scheduler.weights.all_weighers, + which selects the only weigher available -- the + RamWeigher. Hosts are then weighed and sorted with the + largest weight winning. + scheduler_weight_classes=nova.scheduler.weights.all_weighers +ram_weight_multiplier=1.0 + The default behavior is to spread instances across all + hosts evenly. Set the + ram_weight_multiplier configuration + parameter to a negative number if you prefer stacking + instead of spreading.
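For example, to pack instances onto the fewest hosts instead (a sketch of the stacking behavior described above):
scheduler_weight_classes=nova.scheduler.weights.all_weighers
ram_weight_multiplier=-1.0
With a negative multiplier, hosts with less free RAM weigh more, so new instances land on already-loaded hosts first.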
@@ -549,8 +475,8 @@ ram_weight_multiplier=1.0 nova.scheduler.multi.MultiScheduler holds multiple sub-schedulers, one for nova-compute requests and one - for cinder-volume requests. It is the - default top-level scheduler as specified by the + for cinder-volume requests. It is + the default top-level scheduler as specified by the scheduler_driver configuration option.
diff --git a/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints.json b/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints.json new file mode 100644 index 0000000000..b991f4350a --- /dev/null +++ b/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints.json @@ -0,0 +1,10 @@ +{ + "server":{ + "name":"server-1", + "imageRef":"cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef":"1" + }, + "os:scheduler_hints":{ + "query":"[\">=\",\"$free_ram_mb\",1024]" + } +} \ No newline at end of file diff --git a/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints2.json b/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints2.json new file mode 100644 index 0000000000..5c65981936 --- /dev/null +++ b/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints2.json @@ -0,0 +1,13 @@ +{ + "server":{ + "name":"server-1", + "imageRef":"cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef":"1" + }, + "os:scheduler_hints":{ + "same_host":[ + "a0cf03a5-d921-4877-bb5c-86d26cf818e1", + "8c19174f-4220-44f0-824a-cd1eeef10287" + ] + } +} \ No newline at end of file diff --git a/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints3.json b/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints3.json new file mode 100644 index 0000000000..f0506ad9dd --- /dev/null +++ b/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints3.json @@ -0,0 +1,13 @@ +{ + "server":{ + "name":"server-1", + "imageRef":"cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef":"1" + }, + "os:scheduler_hints":{ + "different_host":[ + "a0cf03a5-d921-4877-bb5c-86d26cf818e1", + "8c19174f-4220-44f0-824a-cd1eeef10287" + ] + } +} \ No newline at end of file diff --git a/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints4.json b/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints4.json new file mode 100644 index 0000000000..28359f5c0d --- /dev/null +++ b/doc/src/docbkx/openstack-compute-admin/samples/scheduler_hints4.json @@ -0,0 +1,11 @@ +{ + "server":{ + "name":"server-1", + "imageRef":"cedef40a-ed67-4d10-800e-17455edce175", + "flavorRef":"1" + }, + "os:scheduler_hints":{ + "build_near_host_ip":"192.168.1.1", + "cidr":"24" + } +} \ No newline at end of file diff --git a/doc/src/docbkx/openstack-network-connectivity-admin/ch_auth.xml b/doc/src/docbkx/openstack-network-connectivity-admin/ch_auth.xml index 67aed543fc..51938afeb7 100644 --- a/doc/src/docbkx/openstack-network-connectivity-admin/ch_auth.xml +++ b/doc/src/docbkx/openstack-network-connectivity-admin/ch_auth.xml @@ -10,36 +10,35 @@ to the OpenStack Networking service must provide an authentication token in X-Auth-Token request header. The aforementioned token should have been obtained by - authenticating with the OpenStack Identity endpoint. For more - information concerning authentication with OpenStack Identity, - please refer to the OpenStack Identity documentation. When - OpenStack Identity is enabled, it is not mandatory to specify - tenant_id for resources in create requests, as the tenant - identifier will be derived from the Authentication token. - Please note that the default authorization settings only allow - administrative users to create resources on behalf of a - different tenant. OpenStack Networking uses information - received from OpenStack Identity to authorize user requests.
- OpenStack Networking handles two kind of authorization - policies: - - Operation-based: policies specify - access criteria for specific operations, possibly - with fine-grained control over specific - attributes; - - - Resource-based: - whether access to specific resource might be - granted or not according to the permissions - configured for the resource (currently available - only for the network resource). The actual - authorization policies enforced in OpenStack - Networking might vary from deployment to - deployment. - - + authenticating with the OpenStack Identity endpoint. + For more information concerning authentication with + OpenStack Identity, please refer to the OpenStack Identity + documentation. When OpenStack Identity is enabled, it is not + mandatory to specify tenant_id for resources in create + requests, as the tenant identifier will be derived from the + Authentication token. Please note that the default + authorization settings only allow administrative users to + create resources on behalf of a different tenant. OpenStack + Networking uses information received from OpenStack Identity + to authorize user requests. OpenStack Networking handles two + kinds of authorization policies: + + + Operation-based: + policies specify access criteria for specific + operations, possibly with fine-grained control over + specific attributes; + + + Resource-based: + whether access to a specific resource might be granted + or not according to the permissions configured for the + resource (currently available only for the network + resource). The actual authorization policies enforced + in OpenStack Networking might vary from deployment to + deployment. + + The policy engine reads entries from the policy.json file. The actual location of this file might vary from distribution to @@ -75,93 +74,101 @@ for instance extension:provider_network:set will be triggered if the attributes defined by the Provider Network extensions are specified in an API request. - An authorization policy can be composed by one or more rules. If more rules are specified, - evaluation policy will be successful if any of the rules evaluates successfully; if an API - operation matches multiple policies, then all the policies must evaluate successfully. Also, - authorization rules are recursive. Once a rule is matched, the rule(s) can be resolved to - another rule, until a terminal rule is reached. + An authorization policy can be composed of one or more + rules. If multiple rules are specified, the policy evaluates + successfully if any of its rules evaluates successfully; if an + API operation matches multiple policies, then all the policies + must evaluate successfully. Also, authorization rules are + recursive. Once a rule is matched, the rule(s) can be resolved + to another rule, until a terminal rule is reached. The OpenStack Networking policy engine currently defines the following kinds of terminal rules: - - - Role-based rules: evaluate successfully if - the user submitting the request has the specified role. For instance - "role:admin"is successful if the user submitting the request is - an administrator. - - - Field-based rules: evaluate successfully if a - field of the resource specified in the current request matches a specific value. - For instance "field:networks:shared=True" is successful if the - attribute shared of the network resource is set to true.
- - - Generic rules: compare an attribute in the - resource with an attribute extracted from the user's security credentials and - evaluates successfully if the comparison is successful. For instance - "tenant_id:%(tenant_id)s" is successful if the tenant - identifier in the resource is equal to the tenant identifier of the user - submitting the request. - - The following is an extract from the default policy.json file: - { - "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], - [1] - "admin_or_network_owner": [["role:admin"], - ["tenant_id:%(network_tenant_id)s"]], - "admin_only": [["role:admin"]], "regular_user": [], - "shared": [["field:networks:shared=True"]], - "default": [["rule:admin_or_owner"]],[2] - "create_subnet": [["rule:admin_or_network_owner"]], - "get_subnet": [["rule:admin_or_owner"], ["rule:shared"]], - "update_subnet": [["rule:admin_or_network_owner"]], - "delete_subnet": [["rule:admin_or_network_owner"]], - "create_network": [], - "get_network": [["rule:admin_or_owner"], ["rule:shared"]],[3] - "create_network:shared": [["rule:admin_only"]], [4] - - "update_network": [["rule:admin_or_owner"]], - "delete_network": [["rule:admin_or_owner"]], - "create_port": [], - "create_port:mac_address": [["rule:admin_or_network_owner"]],[5] - "create_port:fixed_ips": [["rule:admin_or_network_owner"]], - "get_port": [["rule:admin_or_owner"]], - "update_port": [["rule:admin_or_owner"]], - "delete_port": [["rule:admin_or_owner"]] - } - [1] is a rule which evaluates successfully if the current user is an administrator or the - owner of the resource specified in the request (tenant identifier is equal). - [2] is the default policy which is always evaluated if an API operation does not match any - of the policies in policy.json. - [3] This policy will evaluate successfully if either admin_or_owner, or shared evaluates - successfully. - [4] This policy will restrict the ability of manipulating the shared attribute for a network to administrators only. - [5] This policy will restrict the ability of manipulating the mac_address attribute for a port only to administrators and the owner of the - network where the port is attached. - In some cases, some operations should be restricted to administrators only; therefore, as - a further example, let us consider how this sample policy file should be modified in a - scenario where tenants are allowed only to define networks and see their resources, and all - the other operations can be performed only in an administrative context: - { - "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]], - "admin_only": [["role:admin"]], "regular_user": [], - "default": [["rule:admin_only"]], - "create_subnet": [["rule:admin_only"]], - "get_subnet": [["rule:admin_or_owner"]], - "update_subnet": [["rule:admin_only"]], - "delete_subnet": [["rule:admin_only"]], - "create_network": [], - "get_network": [["rule:admin_or_owner"]], - "create_network:shared": [["rule:admin_only"]], - "update_network": [["rule:admin_or_owner"]], - "delete_network": [["rule:admin_or_owner"]], - "create_port": [["rule:admin_only"]], - "get_port": [["rule:admin_or_owner"]], - "update_port": [["rule:admin_only"]], - "delete_port": [["rule:admin_only"]] - } + + + Role-based rules: + evaluate successfully if the user submitting the + request has the specified role. For instance + "role:admin" is successful if the user + submitting the request is an administrator.
+ + + Field-based rules: + evaluate successfully if a field of the + resource specified in the current request matches a + specific value. For instance + "field:networks:shared=True" is + successful if the attribute shared of the network resource is set to + true. + + + Generic rules: + compare an attribute in the resource with an attribute + extracted from the user's security credentials and + evaluate successfully if the comparison is + successful. For instance + "tenant_id:%(tenant_id)s" is + successful if the tenant identifier in the resource is + equal to the tenant identifier of the user submitting + the request. + + + The following is an extract from the default policy.json + file: + + policy.json file + + + + + + + + + + + + + + A rule that evaluates successfully if the current + user is an administrator or the owner of the resource + specified in the request (tenant identifier is + equal). + + + The default policy that is always evaluated if an + API operation does not match any of the policies in + policy.json. + + + This policy evaluates successfully if either + admin_or_owner or + shared evaluates + successfully. + + + This policy restricts the ability to manipulate + the shared attribute for a network + to administrators only. + + + This policy restricts the ability to manipulate + the mac_address attribute for a + port only to administrators and the owner of the + network where the port is attached. + + + In some cases, some operations should be restricted to + administrators only; therefore, as a further example, let us + consider how this sample policy file should be modified in a + scenario where tenants are allowed only to define networks and + see their resources, and all the other operations can be + performed only in an administrative context: + diff --git a/doc/src/docbkx/openstack-network-connectivity-admin/samples/policy.json b/doc/src/docbkx/openstack-network-connectivity-admin/samples/policy.json new file mode 100644 index 0000000000..fcab0f534b --- /dev/null +++ b/doc/src/docbkx/openstack-network-connectivity-admin/samples/policy.json @@ -0,0 +1,113 @@ +{ + "admin_or_owner":[ + [ + "role:admin" + ], + [ + "tenant_id:%(tenant_id)s" + ] + ], + "admin_or_network_owner":[ + [ + "role:admin" + ], + [ + "tenant_id:%(network_tenant_id)s" + ] + ], + "admin_only":[ + [ + "role:admin" + ] + ], + "regular_user":[ + + ], + "shared":[ + [ + "field:networks:shared=True" + ] + ], + "default":[ + [ + "rule:admin_or_owner" + ] + ], + "create_subnet":[ + [ + "rule:admin_or_network_owner" + ] + ], + "get_subnet":[ + [ + "rule:admin_or_owner" + ], + [ + "rule:shared" + ] + ], + "update_subnet":[ + [ + "rule:admin_or_network_owner" + ] + ], + "delete_subnet":[ + [ + "rule:admin_or_network_owner" + ] + ], + "create_network":[ + + ], + "get_network":[ + [ + "rule:admin_or_owner" + ], + [ + "rule:shared" + ] + ], + "create_network:shared":[ + [ + "rule:admin_only" + ] + ], + "update_network":[ + [ + "rule:admin_or_owner" + ] + ], + "delete_network":[ + [ + "rule:admin_or_owner" + ] + ], + "create_port":[ + + ], + "create_port:mac_address":[ + [ + "rule:admin_or_network_owner" + ] + ], + "create_port:fixed_ips":[ + [ + "rule:admin_or_network_owner" + ] + ], + "get_port":[ + [ + "rule:admin_or_owner" + ] + ], + "update_port":[ + [ + "rule:admin_or_owner" + ] + ], + "delete_port":[ + [ + "rule:admin_or_owner" + ] + ] +} \ No newline at end of file diff --git a/doc/src/docbkx/openstack-network-connectivity-admin/samples/policy2.json
b/doc/src/docbkx/openstack-network-connectivity-admin/samples/policy2.json new file mode 100644 index 0000000000..eeca81825e --- /dev/null +++ b/doc/src/docbkx/openstack-network-connectivity-admin/samples/policy2.json @@ -0,0 +1,86 @@ +{ + "admin_or_owner":[ + [ + "role:admin" + ], + [ + "tenant_id:%(tenant_id)s" + ] + ], + "admin_only":[ + [ + "role:admin" + ] + ], + "regular_user":[ + + ], + "default":[ + [ + "rule:admin_only" + ] + ], + "create_subnet":[ + [ + "rule:admin_only" + ] + ], + "get_subnet":[ + [ + "rule:admin_or_owner" + ] + ], + "update_subnet":[ + [ + "rule:admin_only" + ] + ], + "delete_subnet":[ + [ + "rule:admin_only" + ] + ], + "create_network":[ + + ], + "get_network":[ + [ + "rule:admin_or_owner" + ] + ], + "create_network:shared":[ + [ + "rule:admin_only" + ] + ], + "update_network":[ + [ + "rule:admin_or_owner" + ] + ], + "delete_network":[ + [ + "rule:admin_or_owner" + ] + ], + "create_port":[ + [ + "rule:admin_only" + ] + ], + "get_port":[ + [ + "rule:admin_or_owner" + ] + ], + "update_port":[ + [ + "rule:admin_only" + ] + ], + "delete_port":[ + [ + "rule:admin_only" + ] + ] +} \ No newline at end of file
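As a hedged illustration of the effect of this restricted policy (the client invocation and the network ID are placeholders): with create_port mapped to admin_only, a port-create request made with non-admin credentials is rejected by the policy engine, while the same request made with admin credentials succeeds.
$ quantum port-create <network-id>   # fails the create_port policy check for a regular user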