Open for Rocky development

Mark any queens specs that have been implemented.

Move queens/{approved,backlog} -> rocky

Tidy misc formatting issues which for some reason managed
to get through gate checks.

Change-Id: Ib37f5cc45ae4f745e7beb7d39743e69d79e1323f
James Page 2018-03-21 09:56:25 +00:00
parent 6e07e55d8c
commit e353ac4e39
26 changed files with 207 additions and 152 deletions

View File

@ -14,6 +14,7 @@ on for the upcoming release. This is the output of those discussions:
:glob:
:maxdepth: 1
priorities/rocky-priorities
priorities/queens-priorities
priorities/pike-priorities
priorities/ocata-priorities
@ -28,6 +29,7 @@ Here you can find the specs, and spec template, for each release:
:glob:
:maxdepth: 1
specs/rocky/index
specs/queens/index
specs/pike/index
specs/ocata/index

View File

@ -1 +0,0 @@
../../../../specs/queens/approved

View File

@ -1 +0,0 @@
../../../../specs/queens/backlog/

View File

@ -0,0 +1 @@
../../../../specs/rocky/approved

View File

@ -0,0 +1 @@
../../../../specs/rocky/backlog

View File

@ -0,0 +1,34 @@
===========================
 Charm Rocky Specifications
===========================

Template:

.. toctree::
   :maxdepth: 1

   Specification Template (Rocky release) <template>

Rocky implemented specs:

.. toctree::
   :glob:
   :maxdepth: 1

   implemented/*

Rocky approved (but not implemented) specs:

.. toctree::
   :glob:
   :maxdepth: 1

   approved/*

Rocky backlog (carried over from previous cycle) specs:

.. toctree::
   :glob:
   :maxdepth: 1

   backlog/*

View File

@ -33,9 +33,9 @@ ways, a way of communicating changes between OpenStack and external routers is
needed.
Since networks in OpenStack are self-serviced, any addition or removal of
OpenStack network should not depend on network administrators actions. Network
limitations should be set up front. This is further facilitated by the use of
subnet pools in OpenStack.
an OpenStack network should not depend on network administrators' actions.
Network limitations should be set up front. This is further facilitated by
the use of subnet pools in OpenStack.
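As a minimal sketch (the scope, pool names and prefix below are assumed purely
for illustration and are not part of this spec), expressing such limits up
front with an address scope and a subnet pool could look like:

.. code-block:: bash

   # Hypothetical example: create a shared address scope and a subnet pool
   # that project subnets must allocate from, fixing the limits up front.
   openstack address scope create --share --ip-version 4 provider-scope
   openstack subnet pool create --address-scope provider-scope \
       --pool-prefix 10.40.0.0/16 --default-prefix-length 24 provider-pool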
Other implications include decoupling subnets from layer 2, allowing different
next-hops for floating IPs.
number of connections grows. In a cloud, where many VMs can establish connections
to various external peers, this can require significant memory and cpu
resources. Neutron gateway doesn't really scale without manual intervention. On
top of that, if only NAT is used, peers can't really know which instance
established connection, they only know connection came from the cloud. Same
problems apply to DVR.
established the connection; they only know the connection came from the cloud.
The same problems apply to DVR.
Static routes on physical gateways are the closest thing to a BGP solution. The
only drawback is operational; if an address scope in Neutron changes, these

View File

@ -12,9 +12,9 @@
http://sphinx-doc.org/rest.html To test out your formatting, see
http://www.tele3.cz/jbar/rest/rest.html
===============================
===========================================
Routed Provider Networks for Neutron Charms
===============================
===========================================
As L3-oriented fabrics get more widely deployed in DCs there is a need to
support multi-segment networks in Neutron charms. Routed provider networks
@ -32,7 +32,7 @@ define isolated broadcast domains and should have different subnets
used on them for proper addressing and routing to be implemented.
Provider Networks in Neutron used to have a single segment but starting with
`Newton <https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html>`__
`Newton <https://specs.openstack.org/openstack/neutron-specs/specs/newton/routed-networks.html>`__
there is support for multi-segment networks which are called routed provider
networks. From the data model perspective a subnet is associated with a
network segment and a given provider network becomes multi-segment as a
@ -76,7 +76,7 @@ Also, from the Charm and Juju perspective the following needs to be addressed:
* Appropriate placement of units to make sure necessary agents are deployed.
Deployment Scenarios
+++++++++++++++++++++++
--------------------
Deployment scenarios should include both setups with charm-neutron-gateway and
without it. Moreover, for the use-case where charm neutron-gateway is present
@ -95,9 +95,9 @@ Given that there are certain limitations in how routed provider networks are
implemented in Newton release this spec is targeted at Ocata release.
Segments Service Plugin
+++++++++++++++++++++++
-----------------------
`A related Ocata documentation section <https://docs.openstack.org/ocata/networking-guide/config-routed-networks.html#example-configuration>`__ contains 3 major configuration requirements:
`A related Ocata documentation section <https://docs.openstack.org/ocata/networking-guide/config-routed-networks.html#example-configuration>`__ contains 3 major configuration requirements:
* "segments" needs to be added to the service_plugins list (neutron-api);
* A compute placement API section needs to be added to neutron.conf;
This distinction is what allows unconditional addition of this service plugin
for the standard ML2 core plugin.
Multiple L2 Networks (Switch Fabrics)
+++++++++++++++++++++++
-------------------------------------
Each network segment includes `network type information <https://github.com/openstack/neutron/blob/3e34db0c19cdcc86cd5f1b72d6374c3eca0faa7e/neutron/db/models/segment.py#L30-L54>`__; this means that for the ML2 plugin's flat_networks and network_vlan_ranges
options there may be different values depending on a switch fabric. In this
@ -159,125 +159,127 @@ connected to will have identical configuration, we need to account for a
general case.
A multi-application approach allows us to avoid any charm modifications besides
charm-neutron-api and have the following content in bundle.yaml::
charm-neutron-api and have the following content in bundle.yaml:
variables:
data-port: &data-port br-data:bond1
vlan-ranges: &vlan-ranges provider-fab1:2:3 provider-fab2:4:5 provider-fab3:6:7
bridge-mappings-fab1: &bridge-mappings-fab1 provider-fab1:br-data
bridge-mappings-fab2: &bridge-mappings-fab2 provider-fab2:br-data
bridge-mappings-fab3: &bridge-mappings-fab3 provider-fab3:br-data
vlan-ranges-fab1: &vlan-ranges-fab1 provider-fab1:2:3
vlan-ranges-fab2: &vlan-ranges-fab2 provider-fab2:4:5
vlan-ranges-fab3: &vlan-ranges-fab3 provider-fab3:6:7
.. code-block:: yaml
# allocate machines such that there are enough
# machines in attached to each switch fabric
# fabrics do not necessarily correspond to
# availability zones
machines:
"0":
constraints: tags=compute,fab1
"1":
constraints: tags=compute,fab1
"2":
constraints: tags=compute,fab1
"3":
constraints: tags=compute,fab1
"4":
constraints: tags=compute,fab1
"5":
constraints: tags=compute,fab2
"6":
constraints: tags=compute,fab2
"7":
constraints: tags=compute,fab2
"8":
constraints: tags=compute,fab2
"9":
constraints: tags=compute,fab3
"10":
constraints: tags=compute,fab3
"11":
constraints: tags=compute,fab3
"12":
constraints: tags=compute,fab3
services:
nova-compute-kvm-fab1:
charm: cs:nova-compute
num_units: 5
bindings:
# ...
options:
# ...
default-availability-zone: az1
to:
- 0
- 1
- 2
- 3
- 4
nova-compute-kvm-fab2:
charm: cs:nova-compute
num_units: 5
bindings:
# ...
options:
# ...
default-availability-zone: az2
to:
- 5
- 6
- 7
- 8
nova-compute-kvm-fab3:
charm: cs:nova-compute
num_units: 5
bindings:
# ...
options:
# ...
default-availability-zone: az3
to:
- 9
- 10
- 11
- 12
neutron-openvswitch-fab1:
charm: cs:neutron-openvswitch
num_units: 0
bindings:
data: *overlay-space-fab1
options:
bridge-mappings: *bridge-mappings-fab1
vlan-ranges: *vlan-ranges-fab1
prevent-arp-spoofing: True
data-port: *data-port
enable-local-dhcp-and-metadata: True
neutron-openvswitch-fab2:
charm: cs:neutron-openvswitch
num_units: 0
bindings:
data: *overlay-space-fab2
options:
bridge-mappings: *bridge-mappings-fab2
vlan-ranges: *vlan-ranges-fab2
prevent-arp-spoofing: True
data-port: *data-port
enable-local-dhcp-and-metadata: True
neutron-openvswitch-fab3:
charm: cs:neutron-openvswitch
num_units: 0
bindings:
data: *overlay-space-fab3
options:
bridge-mappings: *bridge-mappings-fab3
vlan-ranges: *vlan-ranges-fab3
prevent-arp-spoofing: True
data-port: *data-port
enable-local-dhcp-and-metadata: True
# each of the apps needs to be related appropriately
variables:
data-port: &data-port br-data:bond1
vlan-ranges: &vlan-ranges provider-fab1:2:3 provider-fab2:4:5 provider-fab3:6:7
bridge-mappings-fab1: &bridge-mappings-fab1 provider-fab1:br-data
bridge-mappings-fab2: &bridge-mappings-fab2 provider-fab2:br-data
bridge-mappings-fab3: &bridge-mappings-fab3 provider-fab3:br-data
vlan-ranges-fab1: &vlan-ranges-fab1 provider-fab1:2:3
vlan-ranges-fab2: &vlan-ranges-fab2 provider-fab2:4:5
vlan-ranges-fab3: &vlan-ranges-fab3 provider-fab3:6:7
# allocate machines such that there are enough
# machines attached to each switch fabric
# fabrics do not necessarily correspond to
# availability zones
machines:
"0":
constraints: tags=compute,fab1
"1":
constraints: tags=compute,fab1
"2":
constraints: tags=compute,fab1
"3":
constraints: tags=compute,fab1
"4":
constraints: tags=compute,fab1
"5":
constraints: tags=compute,fab2
"6":
constraints: tags=compute,fab2
"7":
constraints: tags=compute,fab2
"8":
constraints: tags=compute,fab2
"9":
constraints: tags=compute,fab3
"10":
constraints: tags=compute,fab3
"11":
constraints: tags=compute,fab3
"12":
constraints: tags=compute,fab3
services:
nova-compute-kvm-fab1:
charm: cs:nova-compute
num_units: 5
bindings:
# ...
options:
# ...
default-availability-zone: az1
to:
- 0
- 1
- 2
- 3
- 4
nova-compute-kvm-fab2:
charm: cs:nova-compute
num_units: 5
bindings:
# ...
options:
# ...
default-availability-zone: az2
to:
- 5
- 6
- 7
- 8
nova-compute-kvm-fab3:
charm: cs:nova-compute
num_units: 5
bindings:
# ...
options:
# ...
default-availability-zone: az3
to:
- 9
- 10
- 11
- 12
neutron-openvswitch-fab1:
charm: cs:neutron-openvswitch
num_units: 0
bindings:
data: \*overlay-space-fab1
options:
bridge-mappings: \*bridge-mappings-fab1
vlan-ranges: \*vlan-ranges-fab1
prevent-arp-spoofing: True
data-port: \*data-port
enable-local-dhcp-and-metadata: True
neutron-openvswitch-fab2:
charm: cs:neutron-openvswitch
num_units: 0
bindings:
data: \*overlay-space-fab2
options:
bridge-mappings: \*bridge-mappings-fab2
vlan-ranges: \*vlan-ranges-fab2
prevent-arp-spoofing: True
data-port: \*data-port
enable-local-dhcp-and-metadata: True
neutron-openvswitch-fab3:
charm: cs:neutron-openvswitch
num_units: 0
bindings:
data: \*overlay-space-fab3
options:
bridge-mappings: \*bridge-mappings-fab3
vlan-ranges: \*vlan-ranges-fab3
prevent-arp-spoofing: True
data-port: \*data-port
enable-local-dhcp-and-metadata: True
# each of the apps needs to be related appropriately
# ...
The above bundle part is for a setup without charm-neutron-gateway, although
it can be added easily using the same approach. Given that there are no
@ -287,7 +289,7 @@ is the infrastructure routing for overlay networks on different fabrics so
that VXLAN or other tunnels can be created between endpoints.
Documented Requirements and Limitations
+++++++++++++++++++++++
---------------------------------------
The Ocata Routed Provider Networks `guide mentions scheduler limitations, <https://docs.openstack.org/ocata/networking-guide/config-routed-networks.html#limitations>`__
however, they are made in reference to Newton and are likely outdated. Looking
@ -310,6 +312,11 @@ This might be considered a serious limitation, however:
An `RFE <https://pad.lv/1667329>`__ exists to address this limitation based on
`subnet service types <https://specs.openstack.org/openstack/neutron-specs/specs/newton/subnet-service-types.html>`__ and `dynamic routing <https://docs.openstack.org/newton/networking-guide/config-bgp-dynamic-routing.html>`__.
Alternatives
------------
N/A
Implementation
==============
@ -327,7 +334,7 @@ to this spec.
.. code-block:: bash
git-review -t 1743743-fe-routed-provider-networks
git-review -t 1743743-fe-routed-provider-networks
Work Items
----------

View File

@ -16,50 +16,62 @@
OpenStack with OVN
===============================
Openstack can be deployed with a number of SDN solutions (e.g. ODL). OVN provides
virtual-networking for Open vSwitch (OVS). OVN has a lot of desirable features and
is designed to be integrated into Openstack, among others.
OpenStack can be deployed with a number of SDN solutions (e.g. ODL). OVN
provides virtual networking for Open vSwitch (OVS). OVN has a lot of desirable
features and is designed to integrate with OpenStack, among other platforms.
Since there is already a networking-ovn project under openstack, it is the obvious
next step to implement a Juju charm that provides this service.
Since there is already a networking-ovn project under OpenStack, the obvious
next step is to implement a Juju charm that provides this service.
Problem Description
===================
Currently, Juju charms support deploying OpenStack either with its default
SDN solution (Neutron) or with others such as ODL. This project
will expand the deployment scenarios under Juju for openstack by including OVN in the
list of available SDN solutions.
will expand the deployment scenarios under Juju for OpenStack by including OVN
in the list of available SDN solutions.
This will also benefit OPNFV's JOID installer in providing another scenario in its deployment.
This will also benefit OPNFV's JOID installer in providing another scenario in
its deployment.
Proposed Change
===============
Charms implementing neutron-api, ovn-controller and neutron-ovn will need to be implemented.
These will be written using the new reactive framework of Juju.
Charms implementing neutron-api integration, ovn-controller and neutron-ovn
will need to be implemented. These will be written using Juju's new reactive
charm framework.
Charm : neutron-ovn
-------------------
This charm will be deployed alongside nova-compute deployments. This will be a subordinate
charm to nova-compute, that installs and runs openvswitch and the ovn-controller.
This charm will be deployed alongside nova-compute deployments. It will be a
subordinate charm to nova-compute that installs and runs Open vSwitch and
the ovn-controller.
Charm : ovn-controller
----------------------
This charm will deploy ovn itself. It will start the OVN services (ovsdb-server, ovn-northd).
Since there can only be a single instance of ovsdb-server and ovn-northd in a deployment,
we can also implement passive HA, but this can be included in further revisions of this charm.
This charm will deploy OVN itself and start the OVN services (ovsdb-server,
ovn-northd). Since there can only be a single instance of ovsdb-server and
ovn-northd in a deployment, passive HA could also be implemented, but this
can be left to later revisions of this charm.
Charm : neutron-api-ovn
-----------------------
This charm will provide the api only integration of neutron to OVN. This charm will need to be
subordinate to the existing neutron-api charm. The main task of this charm is to setup the
"neutron.conf" and "ml2_ini.conf" config files with the right parameters for OVN.
The principal charm, neutron-api, handles the install and restart for neutron-server.
This charm will provide the API-only integration of Neutron with OVN. It will
need to be subordinate to the existing neutron-api charm. The main task of
this charm is to set up the "neutron.conf" and "ml2_ini.conf" config files
with the right parameters for OVN. The principal charm, neutron-api, handles
the installation and restart of neutron-server.
For more information, refer to: https://docs.openstack.org/networking-ovn/latest/install/manual.html
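To make the proposed topology concrete, a minimal deployment sketch follows;
the ovn-controller, neutron-ovn and neutron-api-ovn charm names are this
spec's proposals rather than published charms, so the commands and relation
endpoints are assumptions for illustration only.

.. code-block:: bash

   # Hypothetical deployment of the proposed charms; names and relations are
   # assumptions based on this spec, not existing artefacts.
   juju deploy cs:neutron-api
   juju deploy cs:nova-compute
   juju deploy cs:ovn-controller     # runs ovsdb-server and ovn-northd
   juju deploy cs:neutron-ovn        # subordinate: installs OVS + ovn-controller agent
   juju deploy cs:neutron-api-ovn    # subordinate: renders neutron.conf / ML2 config

   # Subordinates attach to their principals; both integrate with ovn-controller.
   juju add-relation nova-compute neutron-ovn
   juju add-relation neutron-api neutron-api-ovn
   juju add-relation neutron-ovn ovn-controller
   juju add-relation neutron-api-ovn ovn-controller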
Alternatives
------------
N/A
Implementation
==============

specs/rocky/redirects Normal file
View File