initial Neutron OVS support scenarios

Change-Id: Ibac27dd6d1840f31ecb54c6b5e2b74b16f2c3b06
This commit is contained in:
Jiri Broulik 2016-11-21 20:23:47 +01:00 committed by Jakub Pavlik
parent e732abde3a
commit 74f61118e5
26 changed files with 3498 additions and 199 deletions

View File

@ -10,28 +10,6 @@ Starting in the Folsom release, Neutron is a core and supported part of the
OpenStack platform (for Essex, we were an "incubated" project, which means use
is suggested only for those who really know what they're doing with Neutron).
Usage notes
===========
For live migration to work, you have to set the migration parameter on the
bridge and switch nodes.
.. code-block:: yaml
neutron:
bridge:
enabled: true
migration: true
.. code-block:: yaml
neutron:
switch:
enabled: true
migration: true
Furthermore, you need to set the private and public keys for the 'neutron' user.
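For example, the key pair could also be distributed through the pillar; the
parameter names below are purely illustrative (this formula does not define a
fixed schema for them):
.. code-block:: yaml

    neutron:
      switch:
        # hypothetical structure for illustration only; adapt to however your
        # deployment distributes the neutron user's key pair
        user:
          public_key: ssh-rsa AAAA... neutron@compute
          private_key: |
            -----BEGIN RSA PRIVATE KEY-----
            ...
            -----END RSA PRIVATE KEY-----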
Sample pillars
==============
@ -42,20 +20,10 @@ Neutron Server on the controller node
neutron:
server:
enabled: true
version: havana
version: mitaka
bind:
address: 172.20.0.1
port: 9696
tunnel_type: vxlan
public_networks:
- name: public
subnets:
- name: public-subnet
gateway: 10.0.0.1
network: 10.0.0.0/24
pool_start: 10.0.5.20
pool_end: 10.0.5.200
dhcp: False
database:
engine: mysql
host: 127.0.0.1
@ -81,14 +49,460 @@ Neutron Server on the controller node
host: 127.0.0.1
port: 8775
password: pass
fwaas: false
Neutron Server with OpenContrail
Neutron VXLAN tenant networks with Network Nodes (with DVR for East-West and Network node for North-South)
========================================================================================================================
This use case describes a model utilising a VxLAN overlay with DVR. The DVR
routers will only be utilised for traffic that is routed within the cloud
infrastructure and that remains encapsulated. External traffic will be
routed via the network nodes.
The intention is that each tenant will require at least two (2) routers: one
utilised for East-West traffic within the cloud and one on the network node
for North-South traffic.
Neutron Server only
-------------------
.. code-block:: yaml
neutron:
server:
version: mitaka
plugin: ml2
bind:
address: 172.20.0.1
port: 9696
database:
engine: mysql
host: 127.0.0.1
port: 3306
name: neutron
user: neutron
password: pwd
identity:
engine: keystone
host: 127.0.0.1
port: 35357
user: neutron
password: pwd
tenant: service
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
global_physnet_mtu: 9000
l3_ha: False # whether routers are created as HA by default
dvr: True # set to False for the non-DVR use case
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
external_mtu: 9000
mechanism:
ovs:
driver: openvswitch
Network Node only
-----------------
.. code-block:: yaml
neutron:
gateway:
enabled: True
version: mitaka
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
local_ip: 192.168.20.20 # br-mesh ip address
dvr: True # set to False for the non-DVR use case
agent_mode: dvr_snat
metadata:
host: 127.0.0.1
password: pass
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
mechanism:
ovs:
driver: openvswitch
Compute Node
-------------
.. code-block:: yaml
neutron:
compute:
enabled: True
version: mitaka
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
local_ip: 192.168.20.20 # br-mesh ip address
dvr: True # set to False for the non-DVR use case
agent_mode: dvr
external_access: false # False on compute nodes doing DVR for East-West only; network nodes default to True
metadata:
host: 127.0.0.1
password: pass
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
mechanism:
ovs:
driver: openvswitch
Neutron VXLAN tenant networks with Network Nodes (non DVR)
==========================================================
This section describes a network solution that utilises VxLAN overlay
networks without DVR, with all routers being managed on the network nodes.
Neutron Server only
-------------------
.. code-block:: yaml
neutron:
server:
version: mitaka
plugin: ml2
bind:
address: 172.20.0.1
port: 9696
database:
engine: mysql
host: 127.0.0.1
port: 3306
name: neutron
user: neutron
password: pwd
identity:
engine: keystone
host: 127.0.0.1
port: 35357
user: neutron
password: pwd
tenant: service
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
global_physnet_mtu: 9000
l3_ha: True
dvr: False
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
external_mtu: 9000
mechanism:
ovs:
driver: openvswitch
Network Node only
-----------------
.. code-block:: yaml
neutron:
gateway:
enabled: True
version: mitaka
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
local_ip: 192.168.20.20 # br-mesh ip address
dvr: False
agent_mode: legacy
metadata:
host: 127.0.0.1
password: pass
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
mechanism:
ovs:
driver: openvswitch
Compute Node
-------------
.. code-block:: yaml
neutron:
compute:
enabled: True
version: mitaka
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
local_ip: 192.168.20.20 # br-mesh ip address
external_access: False
dvr: False
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
mechanism:
ovs:
driver: openvswitch
Neutron VXLAN tenant networks with Network Nodes (with DVR for East-West and North-South, DVR everywhere, Network node for SNAT)
============================================================================================================================================
This section describes a network solution that utilises VxLAN overlay
networks with DVR for both East-West and North-South traffic. The network
node is used only for SNAT.
Neutron Server only
-------------------
.. code-block:: yaml
neutron:
server:
version: mitaka
plugin: ml2
bind:
address: 172.20.0.1
port: 9696
database:
engine: mysql
host: 127.0.0.1
port: 3306
name: neutron
user: neutron
password: pwd
identity:
engine: keystone
host: 127.0.0.1
port: 35357
user: neutron
password: pwd
tenant: service
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
global_physnet_mtu: 9000
l3_ha: False
dvr: True
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
external_mtu: 9000
mechanism:
ovs:
driver: openvswitch
Network Node only
-----------------
.. code-block:: yaml
neutron:
gateway:
enabled: True
version: mitaka
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
local_ip: 192.168.20.20 # br-mesh ip address
dvr: True
agent_mode: dvr_snat
metadata:
host: 127.0.0.1
password: pass
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
mechanism:
ovs:
driver: openvswitch
Compute Node
-------------
.. code-block:: yaml
neutron:
compute:
enabled: True
version: mitaka
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
local_ip: 192.168.20.20 # br-mesh ip address
dvr: True
external_access: True
agent_mode: dvr
metadata:
host: 127.0.0.1
password: pass
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
mechanism:
ovs:
driver: openvswitch
Sample Linux network configuration for DVR
--------------------------------------------
.. code-block:: yaml
linux:
network:
bridge: openvswitch
interface:
eth1:
enabled: true
type: eth
mtu: 9000
proto: manual
eth2:
enabled: true
type: eth
mtu: 9000
proto: manual
eth3:
enabled: true
type: eth
mtu: 9000
proto: manual
br-int:
enabled: true
mtu: 9000
type: ovs_bridge
br-floating:
enabled: true
mtu: 9000
type: ovs_bridge
float-to-ex:
enabled: true
type: ovs_port
mtu: 65000
bridge: br-floating
br-mgmt:
enabled: true
type: bridge
mtu: 9000
address: ${_param:single_address}
netmask: 255.255.255.0
use_interfaces:
- eth1
br-mesh:
enabled: true
type: bridge
mtu: 9000
address: ${_param:tenant_address}
netmask: 255.255.255.0
use_interfaces:
- eth2
br-ex:
enabled: true
type: bridge
mtu: 9000
address: ${_param:external_address}
netmask: 255.255.255.0
use_interfaces:
- eth3
use_ovs_ports:
- float-to-ex
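Note that the br-mesh address above is the tunnel endpoint that the agents use
as local_ip; in the compute and gateway metadata added by this commit both
values come from the same parameter, for example:
.. code-block:: yaml

    neutron:
      compute:
        local_ip: ${_param:tenant_address} # same parameter as the br-mesh address above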
Neutron VLAN tenant networks with Network Nodes
===============================================
This section describes a network solution that utilises VLAN tenant provider
networks.
Neutron Server only
-------------------
.. code-block:: yaml
neutron:
server:
version: mitaka
plugin: ml2
...
global_physnet_mtu: 9000
l3_ha: False
dvr: True
backend:
engine: ml2
tenant_network_types: "flat,vlan" # Can be mixed flat,vlan,vxlan
tenant_vlan_range: "1000:2000"
external_vlan_range: "100:200" # Does not have to be defined.
external_mtu: 9000
mechanism:
ovs:
driver: openvswitch
Compute Node
-------------------
.. code-block:: yaml
neutron:
compute:
version: mitaka
plugin: ml2
...
dvr: True
agent_mode: dvr
external_access: False
backend:
engine: ml2
tenant_network_types: "flat,vlan" # Can be mixed flat,vlan,vxlan
mechanism:
ovs:
driver: openvswitch
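A network node pillar for the VLAN scenario is not shown above. Assuming it
follows the same gateway schema as the VXLAN examples, it would look roughly
like this, with only the tenant_network_types value differing:
.. code-block:: yaml

    neutron:
      gateway:
        enabled: True
        version: mitaka
        ...
        dvr: True
        agent_mode: dvr_snat
        metadata:
          host: 127.0.0.1
          password: pass
        backend:
          engine: ml2
          tenant_network_types: "flat,vlan" # must match the server setting
          mechanism:
            ovs:
              driver: openvswitch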
Neutron Server with OpenContrail
==================================
.. code-block:: yaml
neutron:
server:
plugin: contrail
backend:
engine: contrail
host: contrail_discovery_host
@ -99,6 +513,7 @@ Neutron Server with OpenContrail
token: token
Neutron Server with Midonet
===========================
.. code-block:: yaml
@ -111,72 +526,8 @@ Neutron Server with Midonet
user: admin
password: password
Neutron bridge on the network node
.. code-block:: yaml
neutron:
bridge:
enabled: true
version: havana
tunnel_type: vxlan
bind:
address: 172.20.0.2
database:
engine: mysql
host: 127.0.0.1
port: 3306
name: neutron
user: neutron
password: pwd
identity:
engine: keystone
host: 127.0.0.1
port: 35357
user: neutron
password: pwd
tenant: service
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
Neutron switch on the compute node with live migration turned on
.. code-block:: yaml
neutron:
switch:
enabled: true
version: havana
migration: True
tunnel_type: vxlan
bind:
address: 127.20.0.100
database:
engine: mysql
host: 127.0.0.1
port: 3306
name: neutron
user: neutron
password: pwd
identity:
engine: keystone
host: 127.0.0.1
port: 35357
user: neutron
password: pwd
tenant: service
message_queue:
engine: rabbitmq
host: 127.0.0.1
port: 5672
user: openstack
password: pwd
virtual_host: '/openstack'
Other
=====
Neutron Keystone region

View File

@ -1,29 +0,0 @@
applications:
- neutron
parameters:
neutron:
bridge:
enabled: true
version: icehouse
migration: true
mtu: 1500
bind:
address: ${linux:network:host:local:address}
metadata:
host: ${linux:network:host:vip:address}
port: 8775
password: metadataPass
identity:
engine: keystone
host: ${linux:network:host:vip:address}
port: 35357
user: neutron
password: ${_secret:keystone_neutron_password}
tenant: service
message_queue:
engine: rabbitmq
host: ${linux:network:host:vip:address}
port: 5672
user: openstack
password: ${_secret:rabbitmq_openstack_password}
virtual_host: '/openstack'

View File

@ -0,0 +1,28 @@
applications:
- neutron
parameters:
neutron:
compute:
enabled: true
version: ${_param:neutron_version}
message_queue:
engine: rabbitmq
host: ${_param:cluster_vip_address}
port: 5672
user: openstack
password: ${_param:rabbitmq_openstack_password}
virtual_host: '/openstack'
local_ip: ${_param:tenant_address}
dvr: false
external_access: false
metadata:
host: ${_param:cluster_vip_address}
password: ${_param:metadata_password}
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
mechanism:
ovs:
driver: openvswitch

View File

@ -6,9 +6,7 @@ parameters:
neutron:
server:
enabled: true
fwaas: false
dns_domain: novalocal
tunnel_type: vxlan
version: ${_param:neutron_version}
bind:
address: ${_param:cluster_local_address}

View File

@ -0,0 +1,26 @@
applications:
- neutron
parameters:
neutron:
gateway:
enabled: true
version: ${_param:neutron_version}
message_queue:
engine: rabbitmq
host: ${_param:cluster_vip_address}
port: 5672
user: openstack
password: ${_param:rabbitmq_openstack_password}
virtual_host: '/openstack'
local_ip: ${_param:tenant_address}
dvr: false
external_access: True
metadata:
host: ${_param:cluster_vip_address}
password: ${_param:metadata_password}
backend:
engine: ml2
tenant_network_types: "flat,vxlan"
mechanism:
ovs:
driver: openvswitch

View File

@ -1,32 +0,0 @@
applications:
- neutron
parameters:
neutron:
switch:
enabled: true
version: icehouse
mtu: 1500
tunnel_type: gre
bind:
address: ${linux:network:host:local:address}
database:
engine: mysql
host: ${linux:network:host:vip:address}
port: 3306
name: neutron
user: neutron
password: ${_secret:mysql_neutron_password}
identity:
engine: keystone
host: ${linux:network:host:vip:address}
port: 35357
user: neutron
password: ${_secret:keystone_neutron_password}
tenant: service
message_queue:
engine: rabbitmq
host: ${linux:network:host:vip:address}
port: 5672
user: openstack
password: ${_secret:rabbitmq_openstack_password}
virtual_host: '/openstack'

View File

@ -1,6 +0,0 @@
{% from "neutron/map.jinja" import bridge with context %}
{%- if bridge.enabled %}
{#TBD: prepared role for OpenVSwitch implementation on Network node side#}
{%- endif %}

View File

@ -1,6 +1,58 @@
{% from "neutron/map.jinja" import compute with context %}
{%- if compute.enabled %}
{#TBD: prepared role for OpenVSwitch implementation on Compute node side#}
neutron_compute_packages:
pkg.installed:
- names: {{ compute.pkgs }}
/etc/neutron/neutron.conf:
file.managed:
- source: salt://neutron/files/{{ compute.version }}/neutron-generic.conf.{{ grains.os_family }}
- template: jinja
- require:
- pkg: neutron_compute_packages
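{# On DVR-enabled compute nodes the L3 and metadata agents run locally as well,
   so their packages and configuration files are managed here too. #}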
{% if compute.dvr %}
neutron_dvr_packages:
pkg.installed:
- names:
- neutron-l3-agent
- neutron-metadata-agent
/etc/neutron/l3_agent.ini:
file.managed:
- source: salt://neutron/files/{{ compute.version }}/l3_agent.ini
- template: jinja
- watch_in:
- service: neutron_compute_services
- require:
- pkg: neutron_compute_packages
/etc/neutron/metadata_agent.ini:
file.managed:
- source: salt://neutron/files/{{ compute.version }}/metadata_agent.ini
- template: jinja
- watch_in:
- service: neutron_compute_services
- require:
- pkg: neutron_compute_packages
{% endif %}
/etc/neutron/plugins/ml2/openvswitch_agent.ini:
file.managed:
- source: salt://neutron/files/{{ compute.version }}/openvswitch_agent.ini
- template: jinja
- require:
- pkg: neutron_compute_packages
neutron_compute_services:
service.running:
- names: {{ compute.services }}
- enable: true
- watch:
- file: /etc/neutron/neutron.conf
- file: /etc/neutron/plugins/ml2/openvswitch_agent.ini
{%- endif %}

View File

@ -0,0 +1,184 @@
[DEFAULT]
#
# From neutron.base.agent
#
# Name of Open vSwitch bridge to use (string value)
#ovs_integration_bridge = br-int
# Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) so long as ovs_use_veth is set to
# True. (boolean value)
#ovs_use_veth = false
# MTU setting for device. This option will be removed in Newton. Please use the system-wide global_physnet_mtu setting which the agents will
# take into account when wiring VIFs. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#network_device_mtu = <None>
# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs commands will fail with ALARMCLOCK error. (integer value)
#ovs_vsctl_timeout = 10
#
# From neutron.dhcp.agent
#
# The DHCP agent will resync its state with Neutron to recover from any transient notification or RPC errors. The interval is number of
# seconds between attempts. (integer value)
#resync_interval = 5
resync_interval = 30
# The driver used to manage the DHCP server. (string value)
#dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# The DHCP server can assist with providing metadata support on isolated networks. Setting this value to True will cause the DHCP server to
# append specific host routes to the DHCP request. The metadata service will only be activated when the subnet does not contain any router
# port. The guest instance must be configured to request host routes via DHCP (Option 121). This option doesn't have any effect when
# force_metadata is set to True. (boolean value)
#enable_isolated_metadata = false
enable_isolated_metadata = True
# In some cases the Neutron router is not present to provide the metadata IP but the DHCP server can be used to provide this info. Setting
# this value will force the DHCP server to append specific host routes to the DHCP request. If this option is set, then the metadata service
# will be activated for all the networks. (boolean value)
#force_metadata = false
# Allows for serving metadata requests coming from a dedicated metadata access network whose CIDR is 169.254.169.254/16 (or larger prefix),
# and is connected to a Neutron router from which the VMs send metadata:1 request. In this case DHCP Option 121 will not be injected in VMs,
# as they will be able to reach 169.254.169.254 through a router. This option requires enable_isolated_metadata = True. (boolean value)
#enable_metadata_network = false
enable_metadata_network = False
# Number of threads to use during sync process. Should not exceed connection pool size configured on server. (integer value)
#num_sync_threads = 4
# Location to store DHCP server config files. (string value)
#dhcp_confs = $state_path/dhcp
# Domain to use for building the hostnames. This option is deprecated. It has been moved to neutron.conf as dns_domain. It will be removed
# in a future release. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#dhcp_domain = openstacklocal
# Override the default dnsmasq settings with this file. (string value)
#dnsmasq_config_file =
# Comma-separated list of the DNS servers which will be used as forwarders. (list value)
# Deprecated group/name - [DEFAULT]/dnsmasq_dns_server
#dnsmasq_dns_servers = <None>
# Base log dir for dnsmasq logging. The log contains DHCP and DNS log information and is useful for debugging issues with either DHCP or
# DNS. If this section is null, disable dnsmasq log. (string value)
#dnsmasq_base_log_dir = <None>
# Enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. Effectively
# removes the '--no-resolv' option from the dnsmasq process arguments. Adding custom DNS resolvers to the 'dnsmasq_dns_servers' option
# disables this feature. (boolean value)
#dnsmasq_local_resolv = false
# Limit number of leases to prevent a denial-of-service. (integer value)
#dnsmasq_lease_max = 16777216
# Use broadcast in DHCP replies. (boolean value)
#dhcp_broadcast_reply = false
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default INFO level. (boolean value)
#debug = false
debug = False
# If set to false, the logging level will be set to WARNING instead of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging
# configuration is set in the configuration file and other logging configuration options are ignored (for example,
# logging_context_format_string). (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified
# path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
state_path=/var/lib/neutron
[AGENT]
#
# From neutron.base.agent
#
# Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time.
# (floating point value)
#report_interval = 30
# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false

View File

@ -0,0 +1,230 @@
{%- if pillar.neutron.gateway is defined %}
{%- from "neutron/map.jinja" import gateway as neutron with context %}
{%- else %}
{%- from "neutron/map.jinja" import compute as neutron with context %}
{%- endif %}
[DEFAULT]
#
# From neutron.base.agent
#
# Name of Open vSwitch bridge to use (string value)
#ovs_integration_bridge = br-int
# Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) so long as ovs_use_veth is set to
# True. (boolean value)
#ovs_use_veth = false
# MTU setting for device. This option will be removed in Newton. Please use the system-wide global_physnet_mtu setting which the agents will
# take into account when wiring VIFs. (integer value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#network_device_mtu = <None>
# The driver used to manage the virtual interface. (string value)
#interface_driver = <None>
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
# Timeout in seconds for ovs-vsctl commands. If the timeout expires, ovs commands will fail with ALARMCLOCK error. (integer value)
#ovs_vsctl_timeout = 10
#
# From neutron.l3.agent
#
# The working mode for the agent. Allowed modes are: 'legacy' - this preserves the existing behavior where the L3 agent is deployed on a
# centralized networking node to provide L3 services like DNAT, and SNAT. Use this mode if you do not want to adopt DVR. 'dvr' - this mode
# enables DVR functionality and must be used for an L3 agent that runs on a compute host. 'dvr_snat' - this enables centralized SNAT support
# in conjunction with DVR. This mode must be used for an L3 agent running on a centralized node (or in single-host deployments, e.g.
# devstack) (string value)
# Allowed values: dvr, dvr_snat, legacy
#agent_mode = legacy
agent_mode = {{ neutron.agent_mode }}
# TCP Port used by Neutron metadata namespace proxy. (port value)
# Minimum value: 0
# Maximum value: 65535
#metadata_port = 9697
metadata_port = 8775
# Send this many gratuitous ARPs for HA setup, if less than or equal to 0, the feature is disabled (integer value)
#send_arp_for_ha = 3
# If non-empty, the l3 agent can only configure a router that has the matching router ID. (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#router_id =
# Indicates that this L3 agent should also handle routers that do not have an external network gateway configured. This option should be
# True only for a single agent in a Neutron deployment, and may be False for all agents if all routers must have an external network
# gateway. (boolean value)
#handle_internal_only_routers = true
# When external_network_bridge is set, each L3 agent can be associated with no more than one external network. This value should be set to
# the UUID of that external network. To allow L3 agent support multiple external networks, both the external_network_bridge and
# gateway_external_network_id must be left empty. (string value)
#gateway_external_network_id =
# With IPv6, the network used for the external gateway does not need to have an associated subnet, since the automatically assigned link-
# local address (LLA) can be used. However, an IPv6 gateway address is needed for use as the next-hop for the default route. If no IPv6
# gateway address is configured here, (and only then) the neutron router will be configured to get its default route from router
# advertisements (RAs) from the upstream router; in which case the upstream router must also be configured to send these RAs. The
# ipv6_gateway, when configured, should be the LLA of the interface on the upstream router. If a next-hop using a global unique address
# (GUA) is desired, it needs to be done via a subnet allocated to the network and not through this parameter. (string value)
#ipv6_gateway =
# Driver used for ipv6 prefix delegation. This needs to be an entry point defined in the neutron.agent.linux.pd_drivers namespace. See
# setup.cfg for entry points included with the neutron source. (string value)
#prefix_delegation_driver = dibbler
# Allow running metadata proxy. (boolean value)
#enable_metadata_proxy = true
# Iptables mangle mark used to mark metadata valid requests. This mark will be masked with 0xffff so that only the lower 16 bits will be
# used. (string value)
#metadata_access_mark = 0x1
# Iptables mangle mark used to mark ingress from external network. This mark will be masked with 0xffff so that only the lower 16 bits will
# be used. (string value)
#external_ingress_mark = 0x2
# Name of bridge used for external network traffic. This should be set to an empty value for the Linux Bridge. When this parameter is set,
# each L3 agent can be associated with no more than one external network. (string value)
#external_network_bridge = br-ex
external_network_bridge =
# Seconds between running periodic tasks (integer value)
#periodic_interval = 40
# Number of separate API worker processes for service. If not specified, the default is equal to the number of CPUs available for best
# performance. (integer value)
#api_workers = <None>
# Number of RPC worker processes for service (integer value)
#rpc_workers = 1
# Number of RPC worker processes dedicated to state reports queue (integer value)
#rpc_state_report_workers = 1
# Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) (integer
# value)
#periodic_fuzzy_delay = 5
# Location to store keepalived/conntrackd config files (string value)
#ha_confs_path = $state_path/ha_confs
# VRRP authentication type (string value)
# Allowed values: AH, PASS
#ha_vrrp_auth_type = PASS
# VRRP authentication password (string value)
#ha_vrrp_auth_password = <None>
# The advertisement interval in seconds (integer value)
#ha_vrrp_advert_int = 2
# Service to handle DHCPv6 Prefix delegation. (string value)
#pd_dhcp_driver = dibbler
# Location to store IPv6 RA config files (string value)
#ra_confs = $state_path/ra
# MinRtrAdvInterval setting for radvd.conf (integer value)
#min_rtr_adv_interval = 30
# MaxRtrAdvInterval setting for radvd.conf (integer value)
#max_rtr_adv_interval = 100
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default INFO level. (boolean value)
#debug = false
debug = False
# If set to false, the logging level will be set to WARNING instead of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging
# configuration is set in the configuration file and other logging configuration options are ignored (for example,
# logging_context_format_string). (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified
# path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[AGENT]
#
# From neutron.base.agent
#
# Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time.
# (floating point value)
#report_interval = 30
# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false

View File

@ -0,0 +1,158 @@
{%- if pillar.neutron.gateway is defined %}
{%- from "neutron/map.jinja" import gateway as neutron with context %}
{%- else %}
{%- from "neutron/map.jinja" import compute as neutron with context %}
{%- endif %}
[DEFAULT]
#
# From neutron.metadata.agent
#
# Location for Metadata Proxy UNIX domain socket. (string value)
#metadata_proxy_socket = $state_path/metadata_proxy
# User (uid or name) running metadata proxy after its initialization (if empty: agent effective user). (string value)
#metadata_proxy_user =
# Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group). (string value)
#metadata_proxy_group =
# Certificate Authority public key (CA cert) file for ssl (string value)
#auth_ca_cert = <None>
# IP address used by Nova metadata server. (string value)
#nova_metadata_ip = 127.0.0.1
nova_metadata_ip = {{ neutron.metadata.host }}
# TCP Port used by Nova metadata server. (port value)
# Minimum value: 0
# Maximum value: 65535
#nova_metadata_port = 8775
# When proxying metadata requests, Neutron signs the Instance-ID header with a shared secret to prevent spoofing. You may select any string
# for a secret, but it must match here and in the configuration used by the Nova Metadata Server. NOTE: Nova uses the same config key, but
# in [neutron] section. (string value)
metadata_proxy_shared_secret = {{ neutron.metadata.password }}
# Protocol to access nova metadata, http or https (string value)
# Allowed values: http, https
#nova_metadata_protocol = http
nova_metadata_protocol = http
# Allow to perform insecure SSL (https) requests to nova metadata (boolean value)
#nova_metadata_insecure = false
# Client certificate for nova metadata api server. (string value)
#nova_client_cert =
# Private key of client certificate. (string value)
#nova_client_priv_key =
# Metadata Proxy UNIX domain socket mode, 4 values allowed: 'deduce': deduce mode from metadata_proxy_user/group values, 'user': set
# metadata proxy socket mode to 0o644, to use when metadata_proxy_user is agent effective user or root, 'group': set metadata proxy socket
# mode to 0o664, to use when metadata_proxy_group is agent effective group or root, 'all': set metadata proxy socket mode to 0o666, to use
# otherwise. (string value)
# Allowed values: deduce, user, group, all
#metadata_proxy_socket_mode = deduce
# Number of separate worker processes for metadata server (defaults to half of the number of CPUs) (integer value)
#metadata_workers = 4
# Number of backlog requests to configure the metadata server socket with (integer value)
#metadata_backlog = 4096
# URL to connect to the cache back end. (string value)
#cache_url = memory://
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default INFO level. (boolean value)
#debug = false
debug = False
# If set to false, the logging level will be set to WARNING instead of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging
# configuration is set in the configuration file and other logging configuration options are ignored (for example,
# logging_context_format_string). (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified
# path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[AGENT]
#
# From neutron.metadata.agent
#
# Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time.
# (floating point value)
#report_interval = 30
# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false

View File

@ -0,0 +1,208 @@
{%- from "neutron/map.jinja" import server with context %}
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default INFO level. (boolean value)
#debug = false
# If set to false, the logging level will be set to WARNING instead of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging
# configuration is set in the configuration file and other logging configuration options are ignored (for example,
# logging_context_format_string). (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified
# path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[ml2]
#
# From neutron.ml2
#
# List of network type driver entrypoints to be loaded from the neutron.ml2.type_drivers namespace. (list value)
#type_drivers = local,flat,vlan,gre,vxlan,geneve
type_drivers = local,flat,vlan,gre,vxlan
# Ordered list of network_types to allocate as tenant networks. The default value 'local' is useful for single-box testing but provides no
# connectivity between hosts. (list value)
#tenant_network_types = local
tenant_network_types = {{ server.backend.tenant_network_types }}
# An ordered list of networking mechanism driver entrypoints to be loaded from the neutron.ml2.mechanism_drivers namespace. (list value)
#mechanism_drivers =openvswitch,l2population
mechanism_drivers ={%- for backend_name, mechanism in server.backend.get('mechanism', {}).iteritems() %}{{ mechanism.driver }},{%- endfor %}l2population
# An ordered list of extension driver entrypoints to be loaded from the neutron.ml2.extension_drivers namespace. For example:
# extension_drivers = port_security,qos (list value)
extension_drivers = port_security
#extension_drivers =
# Maximum size of an IP packet (MTU) that can traverse the underlying physical network infrastructure without fragmentation for
# overlay/tunnel networks. In most cases, use the same value as the global_physnet_mtu option. (integer value)
#path_mtu = 1500
path_mtu = {{ server.get('global_physnet_mtu', '1500') }}
# A list of mappings of physical networks to MTU values. The format of the mapping is <physnet>:<mtu val>. This mapping allows specifying a
# physical network MTU value that differs from the default global_physnet_mtu value. (list value)
#physical_network_mtus =
physical_network_mtus =physnet1:{{ server.backend.get('external_mtu', '1500') }}{%- if "vlan" in server.backend.tenant_network_types %},physnet2:{{ server.backend.get('external_mtu', '1500') }}{%- endif %}
# Default network type for external networks when no provider attributes are specified. By default it is None, which means that if provider
# attributes are not specified while creating external networks then they will have the same type as tenant networks. Allowed values for
# external_network_type config option depend on the network type values configured in type_drivers config option. (string value)
#external_network_type = <None>
[ml2_type_flat]
#
# From neutron.ml2
#
# List of physical_network names with which flat networks can be created. Use default '*' to allow flat networks with arbitrary
# physical_network names. Use an empty list to disable flat networks. (list value)
#flat_networks = *
flat_networks = *
[ml2_type_geneve]
#
# From neutron.ml2
#
# Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of Geneve VNI IDs that are available for tenant network allocation
# (list value)
#vni_ranges =
# Geneve encapsulation header size is dynamic, this value is used to calculate the maximum MTU for the driver. This is the sum of the sizes
# of the outer ETH + IP + UDP + GENEVE header sizes. The default size for this field is 50, which is the size of the Geneve header without
# any additional option headers. (integer value)
#max_header_size = 50
[ml2_type_gre]
#
# From neutron.ml2
#
# Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
# (list value)
#tunnel_id_ranges =
tunnel_id_ranges =2:65535
[ml2_type_vlan]
#
# From neutron.ml2
#
# List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks. (list value)
#network_vlan_ranges =
network_vlan_ranges ={%- if "vlan" in server.backend.tenant_network_types %}physnet1{%- if server.backend.external_vlan_range is defined %}:{{ server.backend.external_vlan_range }}{%- endif %},physnet2:{{ server.backend.tenant_vlan_range }}{%- endif %}
[ml2_type_vxlan]
#
# From neutron.ml2
#
# Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation
# (list value)
#vni_ranges =
vni_ranges =2:65535
# Multicast group for VXLAN. When configured, will enable sending all broadcast traffic to this multicast group. When left unconfigured,
# will disable multicast VXLAN mode. (string value)
#vxlan_group = <None>
vxlan_group = 224.0.0.1
[securitygroup]
#
# From neutron.ml2
#
# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the
# nova security group API. (boolean value)
#enable_security_group = true
enable_security_group = True
# Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node.
# (boolean value)
#enable_ipset = true

File diff suppressed because it is too large

View File

@ -25,21 +25,30 @@ bind_port = {{ server.bind.port }}
# extensions:/path/to/more/exts:/even/more/exts. The __path__ of
# neutron.extensions is appended to this, so if your extensions are in there
# you don't need to specify them here. (string value)
{% if server.backend.engine == "contrail" %}
# TEMPORARY - until neutron v2 contrail package would be supported
#api_extensions_path = extensions:/usr/lib/python2.7/dist-packages/neutron_plugin_contrail/extensions:/usr/lib/python2.7/dist-packages/neutron_lbaas/extensions
api_extensions_path = extensions:/usr/lib/python2.7/dist-packages/neutron_plugin_contrail/extensions
# The core plugin Neutron will use (string value)
core_plugin = neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.NeutronPluginContrailCoreV2
# TEMPORARY - until neutron v2 contrail package would be supported
#service_plugins = neutron_plugin_contrail.plugins.opencontrail.loadbalancer.v2.plugin.LoadBalancerPluginV2
{% elif server.backend.engine == "ml2" %}
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins =neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.metering.metering_plugin.MeteringPlugin
{% endif %}
# The type of authentication to use (string value)
#auth_strategy = keystone
auth_strategy = keystone
# The core plugin Neutron will use (string value)
core_plugin = neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.NeutronPluginContrailCoreV2
# The service plugins Neutron will use (list value)
# TEMPORARY - until neutron v2 contrail package would be supported
#service_plugins = neutron_plugin_contrail.plugins.opencontrail.loadbalancer.v2.plugin.LoadBalancerPluginV2
# The base MAC address Neutron will use for VIFs. The first 3 octets will
# remain unchanged. If the 4th octet is not 00, it will also be used. The
# others will be randomly generated. (string value)
@ -47,6 +56,7 @@ core_plugin = neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.Neutr
# How many times Neutron will retry MAC generation (integer value)
#mac_generation_retries = 16
mac_generation_retries = 32
# Allow the usage of the bulk API (boolean value)
#allow_bulk = true
@ -113,6 +123,7 @@ core_plugin = neutron_plugin_contrail.plugins.opencontrail.contrail_plugin.Neutr
# lease times. (integer value)
# Deprecated group/name - [DEFAULT]/dhcp_lease_time
#dhcp_lease_duration = 86400
dhcp_lease_duration = 600
# Domain to use for building the hostnames (string value)
#dns_domain = openstacklocal
@ -159,6 +170,7 @@ notify_nova_on_port_data_changes = True
# If True, advertise network MTU values if core plugin calculates them. MTU is
# advertised to running instances via DHCP and RA MTU options. (boolean value)
#advertise_mtu = true
advertise_mtu = True
# Neutron IPAM (IP address management) driver to use. If ipam_driver is not set
# (default behavior), no IPAM driver is used. In order to use the reference
@ -181,6 +193,7 @@ notify_nova_on_port_data_changes = True
# value. Defaults to 1500, the standard value for Ethernet. (integer value)
# Deprecated group/name - [ml2]/segment_mtu
#global_physnet_mtu = 1500
global_physnet_mtu = {{ server.get('global_physnet_mtu', '1500') }}
# Number of backlog requests to configure the socket with (integer value)
#backlog = 4096
@ -243,6 +256,7 @@ notify_nova_on_port_data_changes = True
# Seconds to regard the agent is down; should be at least twice
# report_interval, to be sure the agent is down for good. (integer value)
#agent_down_time = 75
agent_down_time = 30
# Representing the resource type whose load is being reported by the agent.
# This can be "networks", "subnets" or "ports". When specified (Default is
@ -283,6 +297,7 @@ notify_nova_on_port_data_changes = True
# a given tenant network, providing high availability for DHCP service.
# (integer value)
#dhcp_agents_per_network = 1
dhcp_agents_per_network = 2
# Enable services on an agent with admin_state_up False. If this option is
# False, when admin_state_up of an agent is turned False, services on it will
@ -302,9 +317,11 @@ notify_nova_on_port_data_changes = True
# System-wide flag to determine the type of router that tenants can create.
# Only admin can override. (boolean value)
#router_distributed = false
router_distributed = {{ server.get('dvr', 'False') }}
# Driver to use for scheduling router to a default L3 agent (string value)
#router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
# Allow auto scheduling of routers to L3 agent. (boolean value)
#router_auto_schedule = true
@ -315,6 +332,7 @@ notify_nova_on_port_data_changes = True
# Enable HA mode for virtual routers. (boolean value)
#l3_ha = false
l3_ha = {{ server.get('l3_ha', 'False') }}
# Maximum number of L3 agents which a HA router will be scheduled on. If it is
# set to 0 then the router will be scheduled on every agent. (integer value)
@ -568,6 +586,7 @@ rpc_backend = rabbit
# wait forever. (integer value)
#client_socket_timeout = 900
nova_url = http://{{ server.compute.host }}:8774/v2
[agent]
@ -593,6 +612,7 @@ root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
# agent_down_time, best if it is half or less than agent_down_time. (floating
# point value)
#report_interval = 30
report_interval = 10
# Log agent heartbeats (boolean value)
#log_agent_heartbeats = false
@ -702,8 +722,11 @@ root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
# Deprecated group/name - [DEFAULT]/sql_connection
# Deprecated group/name - [DATABASE]/sql_connection
# Deprecated group/name - [sql]/connection
{% if server.backend.engine == "ml2" %}
connection = {{ server.database.engine }}+pymysql://{{ server.database.user }}:{{ server.database.password }}@{{ server.database.host }}/{{ server.database.name }}
{% else %}
connection = sqlite:////var/lib/neutron/neutron.sqlite
{% endif %}
# The SQLAlchemy connection string to use to connect to the slave database.
# (string value)
#slave_connection = <None>
@ -719,6 +742,7 @@ connection = sqlite:////var/lib/neutron/neutron.sqlite
# Deprecated group/name - [DATABASE]/sql_idle_timeout
# Deprecated group/name - [sql]/idle_timeout
#idle_timeout = 3600
idle_timeout = 3600
# Minimum number of SQL connections to keep open in a pool. (integer value)
# Deprecated group/name - [DEFAULT]/sql_min_pool_size
@ -729,22 +753,26 @@ connection = sqlite:////var/lib/neutron/neutron.sqlite
# Deprecated group/name - [DEFAULT]/sql_max_pool_size
# Deprecated group/name - [DATABASE]/sql_max_pool_size
#max_pool_size = <None>
max_pool_size = 20
# Maximum number of database connection retries during startup. Set to -1 to
# specify an infinite retry count. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_retries
# Deprecated group/name - [DATABASE]/sql_max_retries
#max_retries = 10
max_retries = -1
# Interval between retries of opening a SQL connection. (integer value)
# Deprecated group/name - [DEFAULT]/sql_retry_interval
# Deprecated group/name - [DATABASE]/reconnect_interval
#retry_interval = 10
retry_interval = 2
# If set, use this value for max_overflow with SQLAlchemy. (integer value)
# Deprecated group/name - [DEFAULT]/sql_max_overflow
# Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
#max_overflow = 50
max_overflow = 20
# Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
# value)
@ -780,16 +808,18 @@ connection = sqlite:////var/lib/neutron/neutron.sqlite
[keystone_authtoken]
{% if server.backend.engine == "contrail" %}
admin_token = {{ server.backend.token }}
admin_user={{ server.backend.user }}
admin_password={{ server.backend.password }}
admin_tenant_name={{ server.backend.tenant }}
{%- endif %}
auth_region={{ server.identity.region }}
auth_protocol=http
revocation_cache_time = 10
auth_type = password
auth_host = {{ server.identity.host }}
auth_port = 35357
admin_token = {{ server.backend.token }}
admin_user={{ server.backend.user }}
admin_password={{ server.backend.password }}
admin_tenant_name={{ server.backend.tenant }}
user_domain_id = {{ server.identity.get('domain', 'default') }}
project_domain_id = {{ server.identity.get('domain', 'default') }}
project_name = {{ server.identity.tenant }}
@ -1368,10 +1398,12 @@ rabbit_max_retries = 0
# heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer
# value)
#heartbeat_timeout_threshold = 60
heartbeat_timeout_threshold = 0
# How often times during the heartbeat_timeout_threshold we check the
# heartbeat. (integer value)
#heartbeat_rate = 2
heartbeat_rate = 2
# Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value)
# Deprecated group/name - [DEFAULT]/fake_rabbit
@ -1529,7 +1561,9 @@ rabbit_max_retries = 0
# Default driver to use for quota checks (string value)
#quota_driver = neutron.db.quota.driver.DbQuotaDriver
{% if server.backend.engine == "contrail" %}
quota_driver = neutron_plugin_contrail.plugins.opencontrail.quota.driver.QuotaDriver
{% endif %}
# Keep in track in the database of current resource quota usage. Plugins which
# do not leverage the neutron database should set this flag to False (boolean
@ -1584,8 +1618,7 @@ quota_driver = neutron_plugin_contrail.plugins.opencontrail.quota.driver.QuotaDr
# cipher list format. (string value)
#ciphers = <None>
[service_providers]
service_provider = LOADBALANCER:Opencontrail:neutron_plugin_contrail.plugins.opencontrail.loadbalancer.driver.OpencontrailLoadbalancerDriver:default
{% if server.backend.engine == "contrail" %}
service_provider = LOADBALANCER:Opencontrail:neutron_plugin_contrail.plugins.opencontrail.loadbalancer.driver.OpencontrailLoadbalancerDriver:default
{% include "neutron/files/"+server.version+"/ContrailPlugin.ini" %}
{% endif %}


@ -0,0 +1,250 @@
{%- if pillar.neutron.gateway is defined %}
{%- from "neutron/map.jinja" import gateway as neutron with context %}
{%- else %}
{%- from "neutron/map.jinja" import compute as neutron with context %}
{%- endif %}
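{#- Shared ML2 OVS agent configuration: the block above renders this template
    with the gateway context when neutron.gateway is defined in the pillar,
    otherwise with the compute context. -#}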
[DEFAULT]
#
# From oslo.log
#
# If set to true, the logging level will be set to DEBUG instead of the default INFO level. (boolean value)
#debug = false
# If set to false, the logging level will be set to WARNING instead of the default INFO level. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#verbose = true
# The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging
# configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging
# configuration is set in the configuration file and other logging configuration options are ignored (for example,
# logging_context_format_string). (string value)
# Deprecated group/name - [DEFAULT]/log_config
#log_config_append = <None>
# Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set.
# (string value)
#log_date_format = %Y-%m-%d %H:%M:%S
# (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This
# option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logfile
#log_file = <None>
# (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. (string value)
# Deprecated group/name - [DEFAULT]/logdir
#log_dir = <None>
# Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified
# path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if
# log_config_append is set. (boolean value)
#watch_log_file = false
# Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if
# log_config_append is set. (boolean value)
#use_syslog = false
# Syslog facility to receive log lines. This option is ignored if log_config_append is set. (string value)
#syslog_log_facility = LOG_USER
# Log output to standard error. This option is ignored if log_config_append is set. (boolean value)
#use_stderr = true
# Format string to use for log messages with context. (string value)
#logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
# Format string to use for log messages when context is undefined. (string value)
#logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
# Additional data to append to log message when logging level for the message is DEBUG. (string value)
#logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
# Prefix each line of exception output with this format. (string value)
#logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
# Defines the format string for %(user_identity)s that is used in logging_context_format_string. (string value)
#logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
# List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. (list value)
#default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO
# Enables or disables publication of error events. (boolean value)
#publish_errors = false
# The format for an instance that is passed with the log message. (string value)
#instance_format = "[instance: %(uuid)s] "
# The format for an instance UUID that is passed with the log message. (string value)
#instance_uuid_format = "[instance: %(uuid)s] "
# Enables or disables fatal status of deprecations. (boolean value)
#fatal_deprecations = false
[agent]
#
# From neutron.ml2.ovs.agent
#
# The number of seconds the agent will wait between polling for local device changes. (integer value)
#polling_interval = 2
# Minimize polling by monitoring ovsdb for interface changes. (boolean value)
#minimize_polling = true
# The number of seconds to wait before respawning the ovsdb monitor after losing communication with it. (integer value)
#ovsdb_monitor_respawn_interval = 30
# Network types supported by the agent (gre and/or vxlan). (list value)
#tunnel_types =
tunnel_types =vxlan
# The UDP port to use for VXLAN tunnels. (port value)
# Minimum value: 0
# Maximum value: 65535
#vxlan_udp_port = 4789
vxlan_udp_port = 4789
# MTU size of veth interfaces (integer value)
#veth_mtu = 9000
{%- if "vxlan" in neutron.backend.tenant_network_types %}
# Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve tunnel scalability. (boolean value)
#l2_population = false
l2_population = True
# Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2 l2population driver. Allows the switch (when supporting an
# overlay) to respond to an ARP request locally without performing a costly ARP broadcast into the overlay. (boolean value)
#arp_responder = false
arp_responder = True
{%- endif %}
# Enable suppression of ARP responses that don't match an IP address that belongs to the port from which they originate. Note: This prevents
# the VMs attached to this agent from spoofing, it doesn't protect them from other devices which have the capability to spoof (e.g. bare
# metal or VMs attached to agents without this flag set to True). Spoofing rules will not be added to any ports that have port security
# disabled. For LinuxBridge, this requires ebtables. For OVS, it requires a version that supports matching ARP headers. This option will be
# removed in Newton so the only way to disable protection will be via the port security extension. (boolean value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#prevent_arp_spoofing = true
# Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying GRE/VXLAN tunnel. (boolean value)
#dont_fragment = true
# Make the l2 agent run in DVR mode. (boolean value)
#enable_distributed_routing = false
enable_distributed_routing = {{ neutron.get('dvr', 'False') }}
# Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If value is set to 0, rpc timeout won't be changed (integer
# value)
#quitting_rpc_timeout = 10
# Reset flow table on start. Setting this to True will cause brief traffic interruption. (boolean value)
#drop_flows_on_start = false
drop_flows_on_start = False
# Set or un-set the tunnel header checksum on outgoing IP packet carrying GRE/VXLAN tunnel. (boolean value)
#tunnel_csum = false
# Selects the Agent Type reported (string value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
#agent_type = Open vSwitch agent
[ovs]
#
# From neutron.ml2.ovs.agent
#
# Integration bridge to use. Do not change this parameter unless you have a good reason to. This is the name of the OVS integration bridge.
# There is one per hypervisor. The integration bridge acts as a virtual 'patch bay'. All VM VIFs are attached to this bridge and then
# 'patched' according to their network connectivity. (string value)
#integration_bridge = br-int
integration_bridge = br-int
# Tunnel bridge to use. (string value)
#tunnel_bridge = br-tun
tunnel_bridge = br-tun
# Peer patch port in integration bridge for tunnel bridge. (string value)
#int_peer_patch_port = patch-tun
# Peer patch port in tunnel bridge for integration bridge. (string value)
#tun_peer_patch_port = patch-int
# Local IP address of tunnel endpoint. Can be either an IPv4 or IPv6 address. (IP address value)
#local_ip = <None>
local_ip = {{ neutron.local_ip }}
# Comma-separated list of <physical_network>:<bridge> tuples mapping physical network names to the agent's node-specific Open vSwitch bridge
# names to be used for flat and VLAN networks. The length of bridge names should be no more than 11. Each bridge must exist, and should have
# a physical network interface configured as a port. All physical networks configured on the server should have mappings to appropriate
# bridges on each agent. Note: If you remove a bridge from this mapping, make sure to disconnect it from the integration bridge as it won't
# be managed by the agent anymore. Deprecated for ofagent. (list value)
#bridge_mappings =
{%- if "vlan" in neutron.backend.tenant_network_types %}
bridge_mappings ={%- if neutron.get('external_access', True) %}physnet1:br-floating,{%- endif %}physnet2:br-prv
{%- elif neutron.get('external_access', True) %}
bridge_mappings =physnet1:br-floating
{%- endif %}
# Use veths instead of patch ports to interconnect the integration bridge to physical networks. Support kernel without Open vSwitch patch
# port support so long as it is set to True. (boolean value)
#use_veth_interconnection = false
# OpenFlow interface to use. (string value)
# Allowed values: ovs-ofctl, native
#of_interface = ovs-ofctl
# OVS datapath to use. 'system' is the default value and corresponds to the kernel datapath. To enable the userspace datapath set this value
# to 'netdev'. (string value)
# Allowed values: system, netdev
#datapath_type = system
# OVS vhost-user socket directory. (string value)
#vhostuser_socket_dir = /var/run/openvswitch
# Address to listen on for OpenFlow connections. Used only for 'native' driver. (IP address value)
#of_listen_address = 127.0.0.1
# Port to listen on for OpenFlow connections. Used only for 'native' driver. (port value)
# Minimum value: 0
# Maximum value: 65535
#of_listen_port = 6633
# Timeout in seconds to wait for the local switch connecting the controller. Used only for 'native' driver. (integer value)
#of_connect_timeout = 30
# Timeout in seconds to wait for a single OpenFlow request. Used only for 'native' driver. (integer value)
#of_request_timeout = 10
# The interface for interacting with the OVSDB (string value)
# Allowed values: vsctl, native
#ovsdb_interface = vsctl
# The connection string for the native OVSDB backend. Requires the native ovsdb_interface to be enabled. (string value)
#ovsdb_connection = tcp:127.0.0.1:6640
[securitygroup]
#
# From neutron.ml2.ovs.agent
#
# Driver for security groups firewall in the L2 agent (string value)
#firewall_driver = <None>
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
# Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the
# nova security group API. (boolean value)
#enable_security_group = true
enable_security_group = True
# Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node.
# (boolean value)
#enable_ipset = true

neutron/gateway.sls

@ -0,0 +1,58 @@
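{#- Network node (gateway) state: installs the OVS, L3, DHCP and metadata agent
    packages, manages their configuration files and keeps the agent services
    running. -#}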
{% from "neutron/map.jinja" import gateway with context %}
{%- if gateway.enabled %}
neutron_gateway_packages:
pkg.installed:
- names: {{ gateway.pkgs }}
{%- if pillar.neutron.server is not defined %}
/etc/neutron/neutron.conf:
file.managed:
- source: salt://neutron/files/{{ gateway.version }}/neutron-generic.conf.{{ grains.os_family }}
- template: jinja
- require:
- pkg: neutron_gateway_packages
{%- endif %}
/etc/neutron/l3_agent.ini:
file.managed:
- source: salt://neutron/files/{{ gateway.version }}/l3_agent.ini
- template: jinja
- require:
- pkg: neutron_gateway_packages
/etc/neutron/dhcp_agent.ini:
file.managed:
- source: salt://neutron/files/{{ gateway.version }}/dhcp_agent.ini
- require:
- pkg: neutron_gateway_packages
/etc/neutron/metadata_agent.ini:
file.managed:
- source: salt://neutron/files/{{ gateway.version }}/metadata_agent.ini
- template: jinja
- require:
- pkg: neutron_gateway_packages
/etc/neutron/plugins/ml2/openvswitch_agent.ini:
file.managed:
- source: salt://neutron/files/{{ gateway.version }}/openvswitch_agent.ini
- template: jinja
- require:
- pkg: neutron_gateway_packages
neutron_gateway_services:
service.running:
- names: {{ gateway.services }}
- enable: true
- watch:
- file: /etc/neutron/neutron.conf
- file: /etc/neutron/l3_agent.ini
- file: /etc/neutron/metadata_agent.ini
- file: /etc/neutron/plugins/ml2/openvswitch_agent.ini
- file: /etc/neutron/dhcp_agent.ini
{%- endif %}


@ -3,8 +3,8 @@ include:
{% if pillar.neutron.server is defined %}
- neutron.server
{% endif %}
{% if pillar.neutron.bridge is defined %}
- neutron.bridge
{% if pillar.neutron.gateway is defined %}
- neutron.gateway
{% endif %}
{% if pillar.neutron.compute is defined %}
- neutron.compute


@ -1,29 +1,25 @@
{% set compute = salt['grains.filter_by']({
    'Debian': {
        'pkgs': ['neutron-plugin-openvswitch-agent', 'openvswitch-switch', 'openvswitch-datapath-dkms'],
        'services': ['openvswitch-switch', 'neutron-plugin-openvswitch-agent']
        'pkgs': ['neutron-openvswitch-agent', 'openvswitch-switch', 'openvswitch-datapath-dkms'],
        'services': ['neutron-openvswitch-agent']
    },
    'RedHat': {
        'pkgs': ['openstack-neutron-openvswitch', 'openvswitch', 'fuel-utils'],
        'services': ['openvswitch', 'neutron-openvswitch-agent']
        'pkgs': ['openstack-neutron-openvswitch', 'openvswitch'],
        'services': ['neutron-openvswitch-agent']
    },
}, merge=pillar.neutron.get('compute', {})) %}
{% set bridge = salt['grains.filter_by']({
{% set gateway = salt['grains.filter_by']({
    'Debian': {
        'pkgs': ['neutron-dhcp-agent', 'neutron-plugin-openvswitch-agent', 'neutron-l3-agent', 'openvswitch-common'],
        'precise_pkgs': ['openvswitch-datapath-lts-saucy-dkms'],
        'migration': False,
        'services': ['neutron-plugin-openvswitch-agent', 'neutron-metadata-agent', 'neutron-l3-agent', 'neutron-dhcp-agent']
        'pkgs': ['neutron-dhcp-agent', 'neutron-openvswitch-agent', 'neutron-l3-agent', 'openvswitch-common', 'neutron-metadata-agent'],
        'services': ['neutron-openvswitch-agent', 'neutron-metadata-agent', 'neutron-l3-agent', 'neutron-dhcp-agent']
    },
    'RedHat': {
        'pkgs': ['openstack-neutron-openvswitch'],
        'migration': False,
        'migration_pkgs': ['fuel-utils',],
        'services': ['neutron-openvswitch-agent', 'neutron-metadata-agent', 'neutron-l3-agent', 'neutron-dhcp-agent']
    },
}, merge=pillar.neutron.get('brigde', {})) %}
}, merge=pillar.neutron.get('gateway', {})) %}
{% set server = salt['grains.filter_by']({
    'Debian': {
@ -40,7 +36,7 @@
    },
}, merge=pillar.neutron.get('server', {})) %}
{%- if pillar.neutron.server.enabled %}
{%- if pillar.neutron.server is defined %}
{%- set tmp_server = pillar.neutron.server %}


@ -36,6 +36,32 @@ neutron_server_service:
{%- endif %}
{% if server.backend.engine == "ml2" %}
/etc/neutron/plugins/ml2/ml2_conf.ini:
  file.managed:
  - source: salt://neutron/files/{{ server.version }}/ml2_conf.ini
  - template: jinja
  - require:
    - pkg: neutron_server_packages

ml2_plugin_link:
  cmd.run:
  - names:
    - ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  - unless: test -e /etc/neutron/plugin.ini
  - require:
    - file: /etc/neutron/plugins/ml2/ml2_conf.ini

neutron_db_manage:
  cmd.run:
  - name: neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
  - require:
    - file: /etc/neutron/neutron.conf
    - file: /etc/neutron/plugins/ml2/ml2_conf.ini

{%- endif %}

/etc/neutron/neutron.conf:
  file.managed:
  - source: salt://neutron/files/{{ server.version }}/neutron-server.conf.{{ grains.os_family }}


@ -0,0 +1,24 @@
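# Compute node with the OVS agent in DVR mode and external access enabled
# (floating IP / north-south traffic handled directly on the compute node).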
neutron:
  compute:
    agent_mode: dvr
    backend:
      engine: ml2
      tenant_network_types: "flat,vxlan"
      mechanism:
        ovs:
          driver: openvswitch
    dvr: true
    enabled: true
    external_access: true
    local_ip: 10.1.0.105
    message_queue:
      engine: rabbitmq
      host: 172.16.10.254
      password: workshop
      port: 5672
      user: openstack
      virtual_host: /openstack
    metadata:
      host: 172.16.10.254
      password: password
    version: mitaka


@ -0,0 +1,24 @@
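# Compute node with the legacy (non-DVR) OVS agent; all routing is left to the
# network node.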
neutron:
  compute:
    agent_mode: legacy
    backend:
      engine: ml2
      tenant_network_types: "flat,vxlan"
      mechanism:
        ovs:
          driver: openvswitch
    dvr: false
    enabled: true
    external_access: false
    local_ip: 10.1.0.105
    message_queue:
      engine: rabbitmq
      host: 172.16.10.254
      password: workshop
      port: 5672
      user: openstack
      virtual_host: /openstack
    metadata:
      host: 172.16.10.254
      password: password
    version: mitaka


@ -0,0 +1,24 @@
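# Compute node in DVR mode for east-west traffic only; north-south traffic is
# routed via the network node (external_access disabled).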
neutron:
  compute:
    agent_mode: dvr
    backend:
      engine: ml2
      tenant_network_types: "flat,vxlan"
      mechanism:
        ovs:
          driver: openvswitch
    dvr: true
    enabled: true
    external_access: false
    local_ip: 10.1.0.105
    message_queue:
      engine: rabbitmq
      host: 172.16.10.254
      password: workshop
      port: 5672
      user: openstack
      virtual_host: /openstack
    metadata:
      host: 172.16.10.254
      password: password
    version: mitaka


@ -0,0 +1,47 @@
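# neutron-server with the ML2/OVS backend, creating distributed (DVR) routers
# by default and with L3 HA disabled.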
neutron:
  server:
    backend:
      engine: ml2
      external_mtu: 1500
      mechanism:
        ovs:
          driver: openvswitch
      tenant_network_types: flat,vxlan
    bind:
      address: 172.16.10.101
      port: 9696
    compute:
      host: 172.16.10.254
      password: workshop
      region: RegionOne
      tenant: service
      user: nova
    database:
      engine: mysql
      host: 172.16.10.254
      name: neutron
      password: workshop
      port: 3306
      user: neutron
    dns_domain: novalocal
    dvr: true
    enabled: true
    global_physnet_mtu: 1500
    identity:
      engine: keystone
      host: 172.16.10.254
      password: workshop
      port: 35357
      region: RegionOne
      tenant: service
      user: neutron
    l3_ha: false
    message_queue:
      engine: rabbitmq
      host: 172.16.10.254
      password: workshop
      port: 5672
      user: openstack
      virtual_host: /openstack
    plugin: ml2
    version: mitaka


@ -0,0 +1,47 @@
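# neutron-server with the ML2/OVS backend, creating centralized HA routers
# (l3_ha enabled) instead of DVR.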
neutron:
  server:
    backend:
      engine: ml2
      external_mtu: 1500
      mechanism:
        ovs:
          driver: openvswitch
      tenant_network_types: flat,vxlan
    bind:
      address: 172.16.10.101
      port: 9696
    compute:
      host: 172.16.10.254
      password: workshop
      region: RegionOne
      tenant: service
      user: nova
    database:
      engine: mysql
      host: 172.16.10.254
      name: neutron
      password: workshop
      port: 3306
      user: neutron
    dns_domain: novalocal
    dvr: false
    enabled: true
    global_physnet_mtu: 1500
    identity:
      engine: keystone
      host: 172.16.10.254
      password: workshop
      port: 35357
      region: RegionOne
      tenant: service
      user: neutron
    l3_ha: True
    message_queue:
      engine: rabbitmq
      host: 172.16.10.254
      password: workshop
      port: 5672
      user: openstack
      virtual_host: /openstack
    plugin: ml2
    version: mitaka


@ -0,0 +1,24 @@
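# Network node (gateway) in dvr_snat mode, providing SNAT and north-south
# routing for a DVR deployment.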
neutron:
  gateway:
    agent_mode: dvr_snat
    backend:
      engine: ml2
      tenant_network_types: "flat,vxlan"
      mechanism:
        ovs:
          driver: openvswitch
    dvr: true
    enabled: true
    external_access: True
    local_ip: 10.1.0.110
    message_queue:
      engine: rabbitmq
      host: 172.16.10.254
      password: workshop
      port: 5672
      user: openstack
      virtual_host: /openstack
    metadata:
      host: 172.16.10.254
      password: password
    version: mitaka


@ -0,0 +1,24 @@
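# Network node (gateway) running the agents in legacy (centralized router) mode.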
neutron:
  gateway:
    agent_mode: legacy
    backend:
      engine: ml2
      tenant_network_types: "flat,vxlan"
      mechanism:
        ovs:
          driver: openvswitch
    dvr: false
    enabled: true
    external_access: True
    local_ip: 10.1.0.110
    message_queue:
      engine: rabbitmq
      host: 172.16.10.254
      password: workshop
      port: 5672
      user: openstack
      virtual_host: /openstack
    metadata:
      host: 172.16.10.254
      password: password
    version: mitaka