Doc-cleanup: remove redundant documentation

Complete documentation, specs and devrefs, is to be maintained at Kuryr,
the common kuryr lib.

Change-Id: I27a41817ef0def4dd0e9e9929e6019d18f566ac2
Closes-bug: #1599362
vikaschoudhary16 2016-07-06 10:28:37 +05:30
parent 9c94d3ccc5
commit fb5e6d9b82
20 changed files with 9 additions and 2950 deletions


@ -193,3 +193,10 @@ i.e 10.0.0.0/16, but with different pool name, neutron_pool2:
bar
397badb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d786
External Resources
------------------
The latest and most in-depth documentation is available at:
<https://github.com/openstack/kuryr/tree/master/doc/source>

Binary file not shown.



@ -38,7 +38,7 @@ source_suffix = '.rst'
master_doc = 'index'
# General information about the project.
project = u'kuryr'
project = u'kuryr-libnetwork'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.


@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst


@ -1,65 +0,0 @@
===================
Goals And Use Cases
===================
Kuryr provides networking to Docker containers by leveraging the Neutron APIs
and services. It also provides containerized images for common Neutron plugins
Kuryr implements a `libnetwork remote driver`_ and maps its calls to OpenStack
`Neutron`_. It works as a translator between libnetwork's
`Container Network Model`_ (CNM) and `Neutron's networking model`_
and provides container-host or container-vm (nested VM) binding.
Using Kuryr, any Neutron plugin can be used as a libnetwork remote driver.
Neutron APIs are vendor agnostic, so all Neutron plugins can provide the
networking backend for Docker with a common lightweight plugging snippet,
similar to the one they have in Nova.
Kuryr takes care of binding the container namespace to the networking
infrastructure by providing a generic layer for `VIF binding`_ that depends on
the port type, for example a Linux bridge port, an Open vSwitch port, a Midonet
port and so on.
Kuryr should be the gateway between the container networking APIs and use cases
and the Neutron APIs and services, and should bridge the gaps between the two
in both domains. It will map the missing parts in Neutron and drive changes to
adjust it.
Kuryr should address `Magnum`_ project use cases in terms of containers
networking and serve as a unified interface for Magnum or any other OpenStack
project that needs to leverage containers networking through Neutron API.
In that regard, Kuryr aims at leveraging Neutron plugins that support VM
nested container's use cases and enhancing Neutron APIs to support these cases
(for example `OVN`_). An etherpad regarding `Magnum Kuryr Integration`_
describes the various use cases Kuryr needs to support.
Kuryr should provide containerized Neutron plugins for easy deployment and must
be compatible with OpenStack `Kolla`_ project and its deployment tools. The
containerized plugins have the common Kuryr binding layer which binds the
container to the network infrastructure.
Kuryr should leverage Neutron sub-projects and services (in particular LBaaS,
FWaaS, VPNaaS) to support advanced container networking use cases
and to be consumed by container orchestration management systems (for example
Kubernetes, or even OpenStack Magnum).
Kuryr also supports pre-allocation of networks, ports and subnets, and binding
them to Docker networks/endpoints upon creation, depending on specific labels
that are passed during Docker creation. There is a patch being merged in Docker
to support providing user labels upon network creation; see the
`User labels in docker patch`_.
References
----------
.. _libnetwork remote driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md
.. _Neutron: https://wiki.openstack.org/wiki/Neutron
.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model
.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification
.. _VIF binding: https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism
.. _Magnum: https://wiki.openstack.org/wiki/Magnum
.. _OVN: https://launchpad.net/networking-ovn
.. _Kolla: https://wiki.openstack.org/wiki/Kolla
.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api
.. _User labels in docker patch: https://github.com/docker/libnetwork/pull/222/files#diff-2b9501381623bc063b38733c35a1d254
.. _Magnum Kuryr Integration: https://etherpad.openstack.org/p/magnum-kuryr


@ -1,48 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels in Neutron devref:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Design and Developer Docs
==========================
Kuryr's goal is to bring container networking to the Neutron core API
and advanced networking services.
This section contains detailed designs, project integration plans and low-level
use cases for the various parts inside Kuryr.
Programming HowTos and Tutorials
--------------------------------
.. toctree::
:maxdepth: 4
goals_and_use_cases
libnetwork_remote_driver_design
kuryr_mitaka_milestone
k8s_api_watcher_design
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

File diff suppressed because it is too large


@ -1,118 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
=====================================
Kuryr - Milestone for Mitaka
=====================================
https://launchpad.net/kuryr
Kuryr Roles and Responsibilities - First Milestone for Mitaka release
-----------------------------------------------------------------------
This chapter includes the various use cases that Kuryr aims to solve;
some were briefly described in the introduction chapter.
This list of items will need to be prioritized.
1) Deploy Kuryr as a libnetwork remote driver (map between libnetwork
API and Neutron API)
2) Configuration
https://etherpad.openstack.org/p/kuryr-configuration
Includes authentication to Neutron and Docker (Keystone integration)
3) VIF Binding
https://etherpad.openstack.org/p/Kuryr_vif_binding_unbinding
https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism
4) Containerized neutron plugins + Kuryr common layer (Kolla)
5) Nested VM - agent less mode (or with Kuryr shim layer)
6) Magnum Kuryr Integration
https://blueprints.launchpad.net/kuryr/+spec/containers-in-instances
Create Kuryr heat resources for Magnum to consume
7) Missing APIs in Neutron to support the Docker networking model
Port-Mapping:
Docker port-mapping will be implemented in services and not networks
(libnetwork).
There is a relationship between the two.
Here are some details:
https://github.com/docker/docker/blob/master/experimental/networking.md
https://github.com/docker/docker/blob/master/api/server/server_experimental_unix.go#L13-L16
Here is an example of publishing a service on a particular network and attaching
a container to the service:
docker service publish db1.prod cid=$(docker run -itd -p 8000:8000 ubuntu)
docker service attach $cid db1.prod
Kuryr will need to interact with the services object of the docker
api to support port-mapping.
We are planning to propose a port forwarding spec in Mitaka that
introduces the API and reference implementation of port forwarding
in Neutron to enable this feature.
Neutron relevant specs:
VLAN trunk ports
( https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms)
(Used for nested VM's defining trunk port and sub-ports)
DNS resolution according to port name
(https://review.openstack.org/#/c/90150/)
(Needed for feature compatibility with Docker services publishing)
8) Mapping between Neutron identifiers and Docker identifiers
A new spec in Neutron is being proposed that we can
leverage for this use case: `Adding tags to resources`_ .
Tags are similar in concept to Docker labels.
9) Testing (CI)
There should be a testing infrastructure running both unit and functional tests with full
setup of docker + kuryr + neutron.
10) Packaging and devstack plugin for Kuryr
Kuryr Future Scope
------------------
1) Kuryr is planned to support other networking backend models defined by Kubernetes
(and not just libnetwork).
2) In addition to Docker, services are a key component of Kubernetes.
In Kubernetes, a user creates a pod and optionally creates/attaches a service to it:
https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md
Services could be implemented with LBaaS APIs
An example project that does this for Kubernetes and Neutron LBaaS:
https://github.com/kubernetes/kubernetes/blob/release-1.0/pkg/cloudprovider/openstack/openstack.go
References
==========
.. _libnetwork remote driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md
.. _Neutron: https://wiki.openstack.org/wiki/Neutron
.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model
.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification
.. _Magnum: https://wiki.openstack.org/wiki/Magnum
.. _OVN: https://launchpad.net/networking-ovn
.. _Kolla: https://wiki.openstack.org/wiki/Kolla
.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api
.. _plugin discovery mechanism: https://github.com/docker/docker/blob/master/docs/extend/plugin_api.md#plugin-discovery
.. _Neutron client: http://docs.openstack.org/developer/python-neutronclient/
.. _libkv: https://github.com/docker/libkv
.. _VIF binding: https://blueprints.launchpad.net/kuryr/+spec/vif-binding-and-unbinding-mechanism
.. _Adding tags to resources: https://review.openstack.org/#/c/216021/
.. _User labels in docker patch: https://github.com/docker/libnetwork/pull/222/files#diff-2b9501381623bc063b38733c35a1d254


@ -1,419 +0,0 @@
=======================================
Libnetwork Remote Network Driver Design
=======================================
What is Kuryr
-------------
Kuryr implements a `libnetwork remote network driver`_ and maps its calls to OpenStack
`Neutron`_. It works as a translator between libnetwork's
`Container Network Model`_ (CNM) and `Neutron's networking model`_. Kuryr also acts as
a `libnetwork IPAM driver`_.
Goal
~~~~
Through Kuryr, any Neutron plugin can be used as a libnetwork backend with no
additional effort. Neutron APIs are vendor agnostic and thus all Neutron
plugins can provide the networking backend for Docker with a small plugging
snippet similar to the one they have in Nova.
Kuryr also takes care of binding one end of a veth pair to a network interface
on the host, e.g., a Linux bridge, an Open vSwitch datapath and so on.
Kuryr Workflow - Host Networking
--------------------------------
Kuryr resides on each host that runs Docker containers and serves the `APIs`_
required for the libnetwork remote network driver. Kuryr plans to use the new
`Adding tags to resources`_ Neutron feature to map between Neutron resource
IDs and Docker IDs (UUIDs).
1. libnetwork discovers Kuryr via `plugin discovery mechanism`_ *before the
first request is made*
- During this process libnetwork makes an HTTP POST call on
``/Plugin.Activate`` and examines the driver type, which defaults to
``"NetworkDriver"`` and ``"IpamDriver"``
- libnetwork also calls the following two API endpoints
1. ``/NetworkDriver.GetCapabilities`` to obtain the capability of Kuryr
which defaults to ``"local"``
2. ``/IpamDriver.GetDefaultAddressSpaces`` to get the default address
spaces used for the IPAM
2. libnetwork registers Kuryr as a remote driver
3. A user makes requests against libnetwork with the network driver specifier for Kuryr
- i.e., ``--driver=kuryr`` or ``-d kuryr`` **and** ``--ipam-driver=kuryr``
for the Docker CLI
4. libnetwork makes API calls against Kuryr
5. Kuryr receives the requests and calls Neutron APIs with `Neutron client`_
6. Kuryr receives the responses from Neutron and composes the responses for
libnetwork
7. Kuryr returns the responses to libnetwork
8. libnetwork stores the returned information to its key/value datastore
backend
- the key/value datastore is abstracted by `libkv`_
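A minimal sketch of the discovery endpoints from step 1 above, assuming a small
Flask-based server (the handler names, the ``local`` scope and the port are
illustrative, not Kuryr's actual code)::

    from flask import Flask, jsonify

    app = Flask(__name__)


    @app.route('/Plugin.Activate', methods=['POST'])
    def plugin_activate():
        # Advertise both driver types so libnetwork registers Kuryr as a
        # network driver and as an IPAM driver.
        return jsonify({'Implements': ['NetworkDriver', 'IpamDriver']})


    @app.route('/NetworkDriver.GetCapabilities', methods=['POST'])
    def get_capabilities():
        return jsonify({'Scope': 'local'})


    @app.route('/IpamDriver.GetDefaultAddressSpaces', methods=['POST'])
    def get_default_address_spaces():
        return jsonify({'LocalDefaultAddressSpace': 'local_scope',
                        'GlobalDefaultAddressSpace': 'global_scope'})


    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=23750)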
Libnetwork User Workflow (with Kuryr as remote network driver) - Host Networking
---------------------------------------------------------------------------------
1. A user creates a network ``foo`` with the subnet information
::
$ sudo docker network create --driver=kuryr --ipam-driver=kuryr \
--subnet 10.0.0.0/16 --gateway 10.0.0.1 --ip-range 10.0.0.0/24 foo
286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364
This makes a HTTP POST call on ``/IpamDriver.RequestPool`` with the following
JSON data.
::
{
"AddressSpace": "global_scope",
"Pool": "10.0.0.0/16",
"SubPool": "10.0.0.0/24",
"Options": null
"V6": false
}
The value of ``SubPool`` comes from the value specified in the ``--ip-range``
option in the command above, and the value of ``AddressSpace`` will be
``global_scope`` or ``local_scope`` depending on the value of the
``capability_scope`` configuration option. Kuryr creates a subnetpool and then
returns the following response.
::
{
"PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376",
"Pool": "10.0.0.0/16",
"Data": {}
}
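A hedged sketch of the subnetpool creation behind this response, using
python-neutronclient (the credentials and the pool name are illustrative)::

    from neutronclient.v2_0 import client

    neutron = client.Client(username='admin', password='secret',
                            tenant_name='admin',
                            auth_url='http://127.0.0.1:5000/v2.0')

    # Create a subnetpool for the requested pool; the prefix mirrors the
    # "Pool" value in the RequestPool example above.
    subnetpool = neutron.create_subnetpool({
        'subnetpool': {
            'name': 'kuryr_pool-10.0.0.0/16',
            'prefixes': ['10.0.0.0/16'],
        }
    })['subnetpool']

    pool_id = subnetpool['id']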
If the ``--gateway`` was specified like the command above, another HTTP POST
call against ``/IpamDriver.RequestAddress`` follows with the JSON data below.
::
{
"Address": "10.0.0.1",
"PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376",
"Options": null,
}
As the IPAM driver Kuryr allocates a requested IP address and returns the
following response.
::
{
"Address": "10.0.0.1/16",
"Data": {}
}
Finally an HTTP POST call is made on ``/NetworkDriver.CreateNetwork`` with the
following JSON data.
::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
"IPv4Data": [{
"Pool": "10.0.0.0/16",
"Gateway": "10.0.0.1/16",
"AddressSpace": ""
}],
"IPv6Data": [],
"Options": {"com.docker.network.generic": {}}
}
The Kuryr remote network driver will then generate a Neutron API request to
create an underlying Neutron network and a subnet with the pool CIDR. When the
Neutron subnet and network have been created, the Kuryr remote network driver
will generate an empty success response to the Docker daemon. Kuryr tags the
Neutron network with the NetworkID from Docker.
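A hedged sketch of the Neutron calls described above, reusing the ``neutron``
client from the earlier sketch (the network name is illustrative and the
tagging call is omitted)::

    # Create the underlying Neutron network for the Docker network.
    network = neutron.create_network(
        {'network': {'name': 'kuryr-net-286eddb51ebc', 'admin_state_up': True}}
    )['network']

    # Create the subnet with the pool CIDR and the requested gateway.
    subnet = neutron.create_subnet({
        'subnet': {
            'network_id': network['id'],
            'ip_version': 4,
            'cidr': '10.0.0.0/16',
            'gateway_ip': '10.0.0.1',
        }
    })['subnet']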
2. A user launches a container against network ``foo``
::
$ sudo docker run --net=foo -itd --name=container1 busybox
78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703
This makes a HTTP POST call on ``/IpamDriver.RequestAddress`` with the
following JSON data.
::
{
"Address": "",
"PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376",
"Options": null,
}
The IPAM driver Kuryr sends a port creation request to Neutron and returns the following response with the Neutron-provided IP address.
::
{
"Address": "10.0.0.2/16",
"Data": {}
}
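A hedged sketch of the port creation performed for this ``RequestAddress``
call (the port name is illustrative; ``network`` and ``subnet`` refer to the
resources created in the previous sketches)::

    port = neutron.create_port({
        'port': {
            'network_id': network['id'],
            'admin_state_up': True,
            'name': 'kuryr-unbound-port',
            'fixed_ips': [{'subnet_id': subnet['id']}],
        }
    })['port']

    # Neutron picks a free address from the subnet, e.g. 10.0.0.2.
    ip_address = port['fixed_ips'][0]['ip_address']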
Then another HTTP POST call on ``/NetworkDriver.CreateEndpoint`` with the
following JSON data is made.
::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
"Interface": {
"AddressIPv6": "",
"MacAddress": "",
"Address": "10.0.0.2/16"
},
"Options": {
"com.docker.network.endpoint.exposedports": [],
"com.docker.network.portmap": []
},
"EndpointID": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd"
}
The Kuryr remote network driver then generates a Neutron API request to
fetch the port whose fields match the interface in the request. Kuryr
then updates this port's name, tagging it with the endpoint ID.
The following steps are taken:
1) On the endpoint creation Kuryr examines if there's a Port with CIDR
that corresponds to Address or AddressIPv6 requested.
2) If there's a Port, Kuryr tries to reuse it without creating a new
Port. Otherwise it creates a new one with the given address.
3) Kuryr tags the Neutron port with EndpointID.
When the Neutron port has been updated, the Kuryr remote driver will
generate a response to the Docker daemon in the following form:
(https://github.com/docker/libnetwork/blob/master/docs/remote.md#create-endpoint)
::
{
"Interface": {"MacAddress": "08:22:e0:a8:7d:db"}
}
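A hedged sketch of the reuse-or-create logic in the three steps above (the
helper name, the filter and the renaming convention are illustrative)::

    def get_or_create_endpoint_port(neutron, network, subnet, ip, endpoint_id):
        # 1) Look for an existing port that already holds the requested address.
        ports = neutron.list_ports(
            network_id=network['id'],
            fixed_ips=['subnet_id=' + subnet['id'], 'ip_address=' + ip]
        )['ports']
        if ports:
            # 2) Reuse the existing port instead of creating a new one.
            port = ports[0]
        else:
            port = neutron.create_port({
                'port': {
                    'network_id': network['id'],
                    'admin_state_up': True,
                    'fixed_ips': [{'subnet_id': subnet['id'],
                                   'ip_address': ip}],
                }
            })['port']
        # 3) Tag the port with the endpoint ID by renaming it.
        return neutron.update_port(
            port['id'], {'port': {'name': endpoint_id}})['port']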
On receiving a success response, libnetwork makes an HTTP POST call on ``/NetworkDriver.Join`` with
the following JSON data.
::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
"SandboxKey": "/var/run/docker/netns/052b9aa6e9cd",
"Options": null,
"EndpointID": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd"
}
Kuryr connects the container to the corresponding neutron network by doing
the following steps:
1) Generate a veth pair.
2) Connect one end of the veth pair to the container (which is running in a
namespace that was created by Docker).
3) Perform a neutron-port-type-dependent VIF-binding to the corresponding
Neutron port using the VIF binding layer and depending on the specific
port type.
After the VIF-binding is completed, the Kuryr remote network driver
generates a response to the Docker daemon as specified in the libnetwork
documentation for a join request.
(https://github.com/docker/libnetwork/blob/master/docs/remote.md#join)
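A hedged illustration of the three Join steps above, using the standard
``ip(8)`` and ``ovs-vsctl(8)`` tools (the interface names and the ``br-int``
bridge assume an Open vSwitch port type, and the sandbox is assumed to be
registered as a named network namespace)::

    import subprocess


    def bind_port(netns, port_id, ip_cidr, gateway, mac):
        host_if = 'tap' + port_id[:11]      # host side of the veth pair
        cont_if = 't_c' + port_id[:11]      # container side of the veth pair
        commands = [
            # 1) Generate a veth pair.
            ['ip', 'link', 'add', host_if, 'type', 'veth',
             'peer', 'name', cont_if],
            # 2) Move one end into the container's network namespace and
            #    configure its MAC address, IP address and default route.
            ['ip', 'link', 'set', cont_if, 'netns', netns],
            ['ip', 'netns', 'exec', netns,
             'ip', 'link', 'set', cont_if, 'address', mac],
            ['ip', 'netns', 'exec', netns,
             'ip', 'addr', 'add', ip_cidr, 'dev', cont_if],
            ['ip', 'netns', 'exec', netns,
             'ip', 'link', 'set', cont_if, 'up'],
            ['ip', 'netns', 'exec', netns,
             'ip', 'route', 'add', 'default', 'via', gateway],
            # 3) Port-type-dependent binding: for an Open vSwitch port, plug
            #    the host end into br-int with the Neutron port UUID as the
            #    iface-id so the Neutron agent can wire it up.
            ['ip', 'link', 'set', host_if, 'up'],
            ['ovs-vsctl', 'add-port', 'br-int', host_if, '--',
             'set', 'interface', host_if,
             'external-ids:iface-id=' + port_id],
        ]
        for cmd in commands:
            subprocess.check_call(cmd)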
3. A user requests information about the network
::
$ sudo docker network inspect foo
{
"Name": "foo",
"Id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
"Scope": "local",
"Driver": "kuryr",
"IPAM": {
"Driver": "default",
"Config": [{
"Subnet": "10.0.0.0/16",
"IPRange": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}]
},
"Containers": {
"78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703": {
"endpoint": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd",
"mac_address": "02:42:c0:a8:7b:cb",
"ipv4_address": "10.0.0.2/16",
"ipv6_address": ""
}
}
}
4. A user connects one more container to the network
::
$ sudo docker network connect foo container2
d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646
$ sudo docker network inspect foo
{
"Name": "foo",
"Id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
"Scope": "local",
"Driver": "kuryr",
"IPAM": {
"Driver": "default",
"Config": [{
"Subnet": "10.0.0.0/16",
"IPRange": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}]
},
"Containers": {
"78c0458ba00f836f609113dd369b5769527f55bb62b5680d03aa1329eb416703": {
"endpoint": "edb23d36d77336d780fe25cdb5cf0411e5edd91b0777982b4b28ad125e28a4dd",
"mac_address": "02:42:c0:a8:7b:cb",
"ipv4_address": "10.0.0.2/16",
"ipv6_address": ""
},
"d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646": {
"endpoint": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc",
"mac_address": "02:42:c0:a8:7b:cc",
"ipv4_address": "10.0.0.3/16",
"ipv6_address": ""
}
}
}
5. A user disconnects a container from the network
::
$ CID=d7fcc280916a8b771d2375688b700b036519d92ba2989622627e641bdde6e646
$ sudo docker network disconnect foo $CID
This makes a HTTP POST call on ``/NetworkDriver.Leave`` with the following
JSON data.
::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
"EndpointID": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc"
}
Kuryr remote network driver will remove the VIF binding between the
container and the Neutron port, and generate an empty response to the
Docker daemon.
Then libnetwork makes a HTTP POST call on ``/NetworkDriver.DeleteEndpoint`` with the
following JSON data.
::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
"EndpointID": "a55976bafaad19f2d455c4516fd3450d3c52d9996a98beb4696dc435a63417fc"
}
The Kuryr remote network driver generates a Neutron API request to delete the
associated Neutron port. In case the relevant port's subnet is empty, Kuryr
also deletes the subnet object using the Neutron API and generates an empty
response to the Docker daemon: {}
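A hedged sketch of that cleanup (the emptiness check simply looks for other
ports still carrying an address on the subnet)::

    def delete_endpoint_port(neutron, port, subnet_id):
        neutron.delete_port(port['id'])
        # Delete the subnet as well if no other port still uses it.
        remaining = neutron.list_ports(
            fixed_ips=['subnet_id=' + subnet_id])['ports']
        if not remaining:
            neutron.delete_subnet(subnet_id)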
Finally libnetwork makes a HTTP POST call on ``/IpamDriver.ReleaseAddress``
with the following JSON data.
::
{
"Address": "10.0.0.3",
"PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376"
}
The Kuryr remote IPAM driver generates a Neutron API request to delete the associated Neutron port.
As the IPAM driver, Kuryr deallocates the IP address and returns the following response.
::
{}
6. A user deletes the network
::
$ sudo docker network rm foo
This makes a HTTP POST call against ``/NetworkDriver.DeleteNetwork`` with the
following JSON data.
::
{
"NetworkID": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364"
}
The Kuryr remote network driver generates a Neutron API request to delete the
corresponding Neutron network and subnets. When the Neutron network and subnets
have been deleted, the Kuryr remote network driver generates an empty response
to the Docker daemon: {}
Then another HTTP POST call on ``/IpamDriver.ReleasePool`` with the
following JSON data is made.
::
{
"PoolID": "941f790073c3a2c70099ea527ee3a6205e037e84749f2c6e8a5287d9c62fd376"
}
Kuryr deletes the corresponding subnetpool and returns the following response.
::
{}
Mapping between the CNM and the Neutron's Networking Model
----------------------------------------------------------
Kuryr communicates with Neutron via `Neutron client`_ and bridges between
libnetwork and Neutron by translating their networking models. The following
table depicts the current mapping between libnetwork and Neutron models:
===================== ======================
libnetwork Neutron
===================== ======================
Network Network
Sandbox Subnet, Port and netns
Endpoint Port
===================== ======================
libnetwork's Sandbox and Endpoint can be mapped to Neutron's Subnet and Port;
however, the Sandbox is not directly visible to users, and the Endpoint is the
only visible and editable resource entity attachable to containers from the
users' perspective. The Sandbox manages the information exposed by the Endpoint
behind the scenes automatically.
Notes on implementing the libnetwork remote driver API in Kuryr
---------------------------------------------------------------
1. DiscoverNew Notification:
Neutron does not use the information related to discovery of new resources such
as new nodes and therefore the implementation of this API method does nothing.
2. DiscoverDelete Notification:
Neutron does not use the information related to discovery of resources such as
nodes being deleted and therefore the implementation of this API method does
nothing.
.. _libnetwork remote network driver: https://github.com/docker/libnetwork/blob/master/docs/remote.md
.. _libnetwork IPAM driver: https://github.com/docker/libnetwork/blob/master/docs/ipam.md
.. _Neutron: https://wiki.openstack.org/wiki/Neutron
.. _Container Network Model: https://github.com/docker/libnetwork/blob/master/docs/design.md#the-container-network-model
.. _Neutron's networking model: https://wiki.openstack.org/wiki/Neutron/APIv2-specification
.. _Neutron client: http://docs.openstack.org/developer/python-neutronclient/
.. _plugin discovery mechanism: https://github.com/docker/docker/blob/master/docs/extend/plugin_api.md#plugin-discovery
.. _Adding tags to resources: https://review.openstack.org/#/c/216021/
.. _APIs: https://github.com/docker/libnetwork/blob/master/docs/design.md#api
.. _libkv: https://github.com/docker/libkv
.. _IPAM blueprint: https://blueprints.launchpad.net/kuryr/+spec/ipam
.. _Neutron's API reference: http://developer.openstack.org/api-ref-networking-v2.html#createSubnet


@ -3,7 +3,7 @@
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to kuryr's documentation!
Welcome to kuryr-libnetwork's documentation!
========================================================
Contents:
@ -12,26 +12,6 @@ Contents:
:maxdepth: 2
readme
installation
usage
contributing
releasenotes
Design and Developer Docs
==========================
.. toctree::
:maxdepth: 1
devref/index
Kuryr Specs
===========
.. toctree::
:maxdepth: 2
specs/index
Indices and tables
==================


@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install kuryr
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv kuryr
$ pip install kuryr


@ -1,5 +0,0 @@
===============
Release Notes
===============
.. release-notes::


@ -1,176 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
======================================
Reuse of the existing Neutron networks
======================================
https://blueprints.launchpad.net/kuryr/+spec/existing-neutron-network
The current Kuryr implementation assumes the Neutron networks, subnetpools,
subnets and ports are created by Kuryr and their lifecycles are completely
controlled by Kuryr. However, in the case where users need to mix the VM
instances and/or the bare metal nodes with containers, the capability of
reusing existing Neutron networks for implementing Kuryr networks becomes
valuable.
Problem Description
-------------------
The main use case being addressed in this spec is described below:
* Use of existing Neutron network and subnet resources created independent of
Kuryr
With the addition of Tags to neutron resources
`Add tags to neutron resources spec`_
the association between container networks and Neutron networks is
implemented by associating tag(s) to Neutron networks. In particular,
the container network ID is stored in such tags. The maximum size for tags is
currently 64 bytes. Therefore, we use two tags for each network to store the
corresponding Docker ID.
Proposed Change
---------------
This specification proposes to use the ``Options`` that can be specified by
user during the creation of Docker networks. We propose to use either the
Neutron network uuid or name to identify the Neutron network to use. If the
Neutron network uuid or name is specified but such a network does not exist or
multiple such networks exist in cases where a network name is specified, the
create operation fails. Otherwise, the existing network will be used.
Similarly, if a subnet is not associated with the existing network it will be
created by Kuryr. Otherwise, the existing subnet will be used.
The specified Neutron network is tagged with a well known string such that it
can be verified whether it already existed at the time of the creation of the
Docker network or not.
.. NOTE(banix): If a Neutron network is specified but it is already
associated with an existing Kuryr network we may refuse the request
unless there are use cases which allow the use of a Neutron network
for realizing more than one Docker network.
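A hedged sketch of the lookup rule described above (the function and error
messages are illustrative)::

    def lookup_existing_network(neutron, uuid=None, name=None):
        if uuid:
            networks = neutron.list_networks(id=uuid)['networks']
        else:
            networks = neutron.list_networks(name=name)['networks']
        if not networks:
            raise RuntimeError('The specified Neutron network does not exist')
        if len(networks) > 1:
            raise RuntimeError('Multiple Neutron networks match the name %s'
                               % name)
        return networks[0]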
.. _workflow:
Proposed Workflow
~~~~~~~~~~~~~~~~~
1. A user creates a Docker network and binds it to an existing Neutron network
by specifying its uuid:
::
$ sudo docker network create --driver=kuryr --ipam-driver=kuryr \
--subnet 10.0.0.0/16 --gateway 10.0.0.1 --ip-range 10.0.0.0/24 \
-o neutron.net.uuid=25495f6a-8eae-43ff-ad7b-77ba57ed0a04 \
foo
286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364
$ sudo docker network create --driver=kuryr --ipam-driver=kuryr \
--subnet 10.0.0.0/16 --gateway 10.0.0.1 --ip-range 10.0.0.0/24 \
-o neutron.net.name=my_network_name \
foo
286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364
This creates a Docker network with the given name, ``foo`` in this case, by
using the Neutron network with the specified uuid or name.
If subnet information is specified by ``--subnet``, ``--gateway``, and
``--ip-range`` as shown in the command above, the corresponding subnetpools
and subnets are created or the existing resources are appropriately reused
based on the provided information such as CIDR. For instance, if the network
with the given UUID in the command exists and that network has the subnet
whose CIDR is the same as what is given by ``--subnet`` and possibly
``--ip-range``, Kuryr doesn't create a subnet and just leaves the existing
subnets as they are. Kuryr composes the response from the information of
the created or reused subnet.
It is expected that when Kuryr driver is used, the Kuryr IPAM driver is also
used.
If the gateway IP address of the reused Neutron subnet doesn't match the one
given by ``--gateway``, Kuryr nevertheless returns the IP address set in the
Neutron subnet, and the command is going to fail because of Docker's
validation against the response.
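A hedged sketch of the subnet reuse rule described above::

    def get_or_reuse_subnet(neutron, network_id, cidr, gateway_ip=None):
        existing = neutron.list_subnets(network_id=network_id,
                                        cidr=cidr)['subnets']
        if existing:
            # Leave the pre-existing subnet untouched and reuse it.
            return existing[0]
        body = {'subnet': {'network_id': network_id,
                           'ip_version': 4,
                           'cidr': cidr}}
        if gateway_ip:
            body['subnet']['gateway_ip'] = gateway_ip
        return neutron.create_subnet(body)['subnet']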
2. A user inspects the created Docker network
::
$ sudo docker network inspect foo
{
"Name": "foo",
"Id": "286eddb51ebca09339cb17aaec05e48ffe60659ced6f3fc41b020b0eb506d364",
"Scope": "global",
"Driver": "kuryr",
"IPAM": {
"Driver": "kuryr",
"Config": [{
"Subnet": "10.0.0.0/16",
"IPRange": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}]
},
"Containers": {}
"Options": {
"com.docker.network.generic": {
"neutron.net.uuid": "25495f6a-8eae-43ff-ad7b-77ba57ed0a04"
}
}
}
A user can see that the Neutron ``uuid`` given in the command is stored in
Docker's storage by inspecting the network.
3. A user launches a container and attaches it to the network
::
$ CID=$(sudo docker run --net=foo -itd busybox)
This process is identical to the existing logic described in `Kuryr devref`_.
libnetwork calls ``/IpamDriver.RequestAddress``,
``/NetworkDriver.CreateEndpoint`` and then ``/NetworkDriver.Join``. The
appropriate available IP address shall be returned by Neutron through Kuryr
and a port with the IP address is created under the subnet on the network.
4. A user terminates the container
::
$ sudo docker kill ${CID}
This process is identical to the existing logic described in `Kuryr devref`_
as well. libnetwork calls ``/IpamDriver.ReleaseAddress``,
``/NetworkDriver.Leave`` and then ``/NetworkDriver.DeleteEndpoint``.
5. A user deletes the network
::
$ sudo docker network rm foo
When an existing Neutron network is used to create a Docker network, it is
tagged such that during the delete operation the Neutron network does not
get deleted. Currently, if an existing Neutron network is used, the subnets
associated with it (whether pre-existing or newly created) are preserved as
well. In the future, we may consider tagging subnets themselves or the
networks (with subnet information) to decide whether a subnet is to be
deleted or not.
Challenges
----------
None
References
----------
* `Add tags to neutron resources spec`_
.. _Add tags to neutron resources spec: http://docs.openstack.org/developer/neutron/devref/tag.html
.. _Kuryr devref: http://docs.openstack.org/developer/kuryr/devref/index.html


@ -1,49 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Kuryr Specs
===========
This section contains detailed specification documents for
different features inside Kuryr.
.. toctree::
:maxdepth: 1
existing-neutron-network
Spec Template
--------------
.. toctree::
:maxdepth: 3
skeleton
template
newton/index
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@ -1,43 +0,0 @@
..
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
Convention for heading levels:
======= Heading 0 (reserved for the title in a document)
------- Heading 1
~~~~~~~ Heading 2
+++++++ Heading 3
''''''' Heading 4
(Avoid deeper levels because they do not render well.)
Mitaka Specifications
=====================
This section contains detailed specification documents for
different features inside Kuryr.
Spec
----
.. toctree::
:maxdepth: 1
nested_containers
kuryr_k8s_integration
Indices and tables
------------------
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`


@ -1,303 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================
Kuryr Kubernetes Integration
============================
https://blueprints.launchpad.net/kuryr/+spec/kuryr-k8s-integration
This spec proposes how to integrate Kubernetes Bare Metal cluster with Neutron
being used as network provider.
Kubernetes is a platform for automating deployment, scaling and operations of
application containers across clusters of hosts. There are already a number of
implementations of the Kubernetes network model, such as Flannel, Weave, Linux
Bridge, Open vSwitch and Calico, as well as other vendor implementations.
Neutron already serves as a common way to support various networking providers
via a common API. Therefore, using Neutron to provide Kubernetes networking
will enable different backend support in a common way.
This approach provides a clear benefit for operators, who will have a variety
of networking choices that are already supported via Neutron.
Problem Description
===================
Application developers usually are not networking engineers. They should be
able to express the application intent. Currently, there is no integration
between kubernetes and Neutron. Kuryr should bridge the gap between kubernetes
and neutron by using the application intent to infer the connectivity and
isolation requirements necessary to provision the networking entities in a
consistent way.
Kubernetes Overview
-------------------
Kubernetes API abstractions:
**Namespace**
Serves as a logical grouping that partitions resources. Names of resources need to
be unique within a namespace, but not across namespaces.
**Pod**
Contains a group of tightly coupled containers that share a single network
namespace. A Pod models an application-specific "logical host" in a
containerized environment. It may contain one or more containers which are
relatively tightly coupled. Each pod gets its own IP, which is also the IP of
the contained containers.
**Deployment/Replication Controller**
Ensures the requested number of pods are running at any time.
**Service**
Is an abstraction which defines a logical set of pods and a policy by which
to access them. The set of service endpoints, usually the pods that implement
a given service, is defined by the label selector. The default service type
(ClusterIP) is used to provide consistent access to the application inside the
Kubernetes cluster. A service receives a service portal (VIP and port).
Service IPs are only available inside the cluster.
A service can abstract access not only to pods. For example, it can be for an
external database cluster, a service in another namespace, etc. In such a case
the service does not have a selector and the endpoints are defined as part of
the service. The service can be headless (clusterIP=None). For such services,
a cluster IP is not allocated. DNS should return multiple addresses for the
service name, which point directly to the pods backing the service.
To receive traffic from the outside, a service should be assigned an external
IP address.
For more details on service, please refer to [1]_.
Kubernetes provides two options for service discovery: environment variables
and DNS. Environment variables are added for each active service when a pod is
run on the node. DNS is a Kubernetes cluster add-on that provides a DNS server;
more details on this below.
Kubernetes has two more powerful tools, labels and annotations. Both can be
attached to the API objects. Labels are arbitrary key/value pairs. Labels
do not provide uniqueness. Labels are queryable and used to organize and to
select subsets of objects.
Annotations are string keys and values that can be used by external tooling to
store arbitrary metadata.
More detailed information on k8s API can be found in [2]_
Network Requirements
^^^^^^^^^^^^^^^^^^^^
k8s imposes some fundamental requirements on the networking implementation:
* All containers can communicate without NAT.
* All nodes can communicate with containers without NAT.
* The IP that a container sees itself as is the same IP that others see.
The Kubernetes model is for each pod to have an IP in a flat shared namespace
that allows full communication with physical computers and containers across
the network. This approach makes it easier than the native Docker model to
port applications from VMs to containers. More on the Kubernetes network model
can be found in [3]_.
Use Cases
---------
The kubernetes networking should address requirements of several stakeholders:
* Application developer, the one that runs its application on the k8s cluster
* Cluster administrator, the one that runs the k8s cluster
* Network infrastructure administrator, the one that provides the physical
network
Use Case 1:
^^^^^^^^^^^
Support current kubernetes network requirements that address application
connectivity needs. This will enable default kubernetes behavior to allow all
traffic from all sources inside or outside the cluster to all pods within the
cluster. This use case does not add multi-tenancy support.
Use Case 2:
^^^^^^^^^^^
Application isolation policy support.
This use case is about application isolation policy support as it is defined
by kubernetes community, based on spec [4]_. Network isolation policy will
impose limitations on the connectivity from an optional set of traffic sources
to an optional set of destination TCP/UDP ports.
Regardless of network policy, pods should be accessible by the host on which
they are running to allow local health checks. This use case does not address
multi-tenancy.
More enhanced use cases can be added in the future to allow adding extra
functionality that is supported by Neutron.
Proposed Change
===============
Model Mapping
-------------
In order to support Kubernetes networking via Neutron, we should define how
the k8s model maps onto the Neutron model.
With regard to the first use case, to support the default Kubernetes networking
mode, the mapping can be done in the following way:
+-----------------+-------------------+---------------------------------------+
| **k8s entity** | **neutron entity**| **notes** |
+=================+===================+=======================================+
|namespace | network | |
+-----------------+-------------------+---------------------------------------+
|cluster subnet | subnet pool | subnet pool for subnets to allocate |
| | | Pod IPs. Current k8s deployment on |
| | | GCE uses subnet per node to leverage |
| | | advanced routing. This allocation |
| | | scheme should be supported as well |
+-----------------+-------------------+---------------------------------------+
|service cluster | subnet | VIP subnet, service VIP will be |
|ip range | | allocated from |
+-----------------+-------------------+---------------------------------------+
|external subnet | floating ip pool | To allow external access to services,|
| | external network | each service should be assigned with |
| | router | external (floating IP) router is |
| | | required to enable north-south traffic|
+-----------------+-------------------+---------------------------------------+
|pod | port | A port gets its IP address from the |
| | | cluster subnet pool |
+-----------------+-------------------+---------------------------------------+
|service | load balancer | each endpoint (pod) is a member in the|
| | | load balancer pool. VIP is allocated |
| | | from the service cluster ip range. |
+-----------------+-------------------+---------------------------------------+
k8s Service Implementation
^^^^^^^^^^^^^^^^^^^^^^^^^^
The Kubernetes default **ClusterIP** service type is used to expose a service
inside the cluster. If users decide to expose services to external traffic,
they will assign an ExternalIP to the services they choose to expose.
Kube-proxy should be an optional part of the deployment, since it may not work
with some Neutron backend solutions, e.g. MidoNet or Contrail. A Kubernetes
service will be mapped to a Neutron load balancer, with the ClusterIP as the
load balancer VIP and the endpoints (pods) as members of the load balancer.
Once an ExternalIP is assigned, a floating IP will be created on the external
network and associated with the VIP.
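A hedged sketch of that mapping, assuming the LBaaS v2 methods of
python-neutronclient (the helper name, algorithm and subnet choices are
illustrative)::

    def create_service_lb(neutron, service_name, cluster_ip, vip_subnet_id,
                          port, protocol, members):
        # The ClusterIP becomes the load balancer VIP.
        lb = neutron.create_loadbalancer({'loadbalancer': {
            'name': service_name,
            'vip_subnet_id': vip_subnet_id,
            'vip_address': cluster_ip,
        }})['loadbalancer']

        listener = neutron.create_listener({'listener': {
            'loadbalancer_id': lb['id'],
            'protocol': protocol,
            'protocol_port': port,
        }})['listener']

        pool = neutron.create_lbaas_pool({'pool': {
            'listener_id': listener['id'],
            'protocol': protocol,
            'lb_algorithm': 'ROUND_ROBIN',
        }})['pool']

        # Each endpoint (pod) becomes a member of the pool.
        for pod_ip, pod_port, pod_subnet_id in members:
            neutron.create_lbaas_member(pool['id'], {'member': {
                'address': pod_ip,
                'protocol_port': pod_port,
                'subnet_id': pod_subnet_id,
            }})
        return lb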
Isolation Policy
^^^^^^^^^^^^^^^^
In order to support the second use case, the application isolation policy
mode, the requested policy should be translated into a security group that
reflects the requested ACLs as the group rules. This security group will be
associated with the pods that the policy is applied to. A Kubernetes namespace
can be used as the isolation scope of the contained pods. For an isolated
namespace, all incoming connections to pods in that namespace from any source
inside or outside of the Kubernetes cluster will be denied unless allowed by a
policy.
For a non-isolated namespace, all incoming connections to pods in that
namespace will be allowed.
The exact translation details are provided in the [5]_.
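A hedged sketch of the policy-to-security-group translation described above
(the policy structure is simplified and illustrative)::

    def policy_to_security_group(neutron, namespace, allowed):
        # "allowed" is a simplified list of (cidr, protocol, port) tuples
        # describing the permitted ingress traffic.
        sg = neutron.create_security_group({'security_group': {
            'name': 'kuryr-isolation-' + namespace,
            'description': 'Isolation policy for namespace ' + namespace,
        }})['security_group']

        for cidr, protocol, port in allowed:
            neutron.create_security_group_rule({'security_group_rule': {
                'security_group_id': sg['id'],
                'direction': 'ingress',
                'protocol': protocol,
                'port_range_min': port,
                'port_range_max': port,
                'remote_ip_prefix': cidr,
            }})
        return sg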
As an alternative, and this goes beyond Neutron, it seems that a more native
way might be to use a policy (intent) based API to request the isolation
policy. Group Based Policy can be considered, but this will be left for a later
phase.
Service Discovery
-----------------
Service discovery should be supported via environment variables.
Kubernetes also offers a DNS cluster add-on to support application service name
resolution. It uses SkyDNS with a helper container, kube2sky, to bridge between
Kubernetes and SkyDNS, and etcd to maintain the services registry.
Kubernetes Service DNS names can be resolved using standard methods inside the
pods (i.e. gethostbyname). The DNS server runs as a Kubernetes service with an
assigned static IP from the service cluster IP range. Both the DNS server IP
and domain are configured and passed to the kubelet service on each worker
node, which passes them to containers. The SkyDNS service is deployed in the
kube-system namespace.
This integration should enable SkyDNS support, and it may also add support
for external DNS servers. Since the SkyDNS service will be deployed like any
other k8s service, this should just work.
Other alternatives for DNS, such as integration with OpenStack Designate for
local DNS resolution by port name will be considered for later phases.
Integration Decomposition
-------------------------
The user interacts with the system via the kubectl cli or directly via REST API
calls. Those calls define Kubernetes resources such as RC, Pods and services.
The scheduler sees the requests for Pods and assigns them to specific worker
nodes.
On the worker nodes, kubelet daemons see the pods that are being scheduled for
the node and take care of creating the Pods, i.e. deploying the infrastructure
and application containers and ensuring the required connectivity.
There are two conceptual parts that kuryr needs to support:
API Watcher
^^^^^^^^^^^
To watch the Kubernetes API server for changes in the services and pods (and
later policies) collections.
Upon changes, it should map services/pods into the Neutron constructs,
ensuring connectivity. It should use the Neutron client to invoke the Neutron
API to maintain networks, ports, load balancers, router interfaces and security
groups. The API Watcher will add the allocated port details to the Pod object
to make them available to the kubelet process and eventually to the Kuryr CNI
driver.
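A hedged sketch of such a watch loop, assuming the ``requests`` library and an
unauthenticated API server endpoint (the handler is a stand-in for the mapping
logic described above)::

    import json

    import requests

    K8S_API = 'http://127.0.0.1:8080/api/v1'


    def watch(resource, handler):
        # The Kubernetes API streams one JSON event per line when watch=true.
        resp = requests.get('%s/%s?watch=true' % (K8S_API, resource),
                            stream=True)
        for line in resp.iter_lines():
            if not line:
                continue
            event = json.loads(line.decode('utf-8'))
            handler(event['type'], event['object'])


    def handle_pod_event(event_type, pod):
        # Here the watcher would translate the pod into Neutron resources
        # (ports, security groups, ...) and annotate the pod with the result.
        print(event_type, pod['metadata']['name'])


    watch('pods', handle_pod_event)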
CNI Driver
^^^^^^^^^^
To enable a CNI plugin on each worker node to set up, tear down and provide the
status of the Pod, more accurately of the infrastructure container. Kuryr will
provide a CNI driver that implements [6]_. In order to be able to configure and
report an IP configuration, the Kuryr CNI driver must be able to access IPAM to
get the IP details for the Pod. The IP, port UUID, GW and port type details
should be available to the driver via **CNI_ARGS** in addition to the standard
content::
CNI_ARGS=K8S_POD_NAMESPACE=default;\
K8S_POD_NAME=nginx-app-722l8;\
K8S_POD_INFRA_CONTAINER_ID=8ceb00926acf251b34d70065a6158370953ab909b0745f5f4647ee6b9ec5c250\
PORT_UUID=a28c7404-7495-4557-b7fc-3e293508dbc6,\
IPV4=10.0.0.15/16,\
GW=10.0.0.1,\
PORT_TYPE=midonet
For more details on kuryr CNI Driver, see [7]_.
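A hedged sketch of how a CNI driver could read the extended **CNI_ARGS** shown
above (the key names follow the example; error handling is omitted)::

    import os


    def parse_cni_args():
        # CNI_ARGS is a list of KEY=VALUE pairs separated by ';' or ',', e.g.
        # "K8S_POD_NAME=nginx-app-722l8;PORT_UUID=...,IPV4=10.0.0.15/16".
        raw = os.environ.get('CNI_ARGS', '')
        args = {}
        for item in raw.replace(',', ';').split(';'):
            if '=' in item:
                key, value = item.split('=', 1)
                args[key.strip()] = value.strip()
        return args


    args = parse_cni_args()
    port_uuid = args.get('PORT_UUID')
    ip_cidr = args.get('IPV4')
    gateway = args.get('GW')
    port_type = args.get('PORT_TYPE')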
The kube-proxy service, which runs on each worker node and implements services
in the native implementation, is not required since services are implemented
via the Neutron load balancer.
Community Impact
----------------
This spec invites the community to collaborate on a unified solution to support
Kubernetes networking by using Neutron as a backend via Kuryr.
Implementation
==============
Assignee(s)
-----------
TBD
Work Items
----------
TBD
References
==========
.. [1] http://kubernetes.io/v1.1/docs/user-guide/services.html
.. [2] http://kubernetes.io/docs/api/
.. [3] http://kubernetes.io/docs/admin/networking/#kubernetes-model
.. [4] https://docs.google.com/document/d/1qAm-_oSap-f1d6a-xRTj6xaH1sYQBfK36VyjB5XOZug
.. [5] https://review.openstack.org/#/c/290172/
.. [6] https://github.com/appc/cni/blob/master/SPEC.md
.. [7] https://blueprints.launchpad.net/kuryr/+spec/kuryr-cni-plugin


@ -1,527 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
============================================================================
Networking for Nested Containers in OpenStack / Magnum - Neutron Integration
=============================================================================
Launchpad blueprint:
https://blueprints.launchpad.net/kuryr/+spec/containers-in-instances
This blueprint proposes how to integrate Magnum with Neutron based
networking and how the problem of networking for nested containers
can be solved.
Problem Description
===================
Magnum (containers-as-a-service for OpenStack) provisions containers
inside Nova instances and those instances use standard Neutron
networking. These containers are referred to as nested containers.
Currently, there is no integration between Magnum resources and
Neutron and the nested containers are served networking outside
of that provided by OpenStack (Neutron) today.
Definitions
-----------
COE
Container Orchestration Engine
Bay
A Magnum resource that includes at least one host to run containers on,
and a COE to manage containers created on hosts within the bay.
Baymodel
An object that stores template information about the bay which is
used to create new bays consistently.
Pod
Is the smallest deployable unit that can be created, scheduled, and
managed within Kubernetes.
deviceowner (in Neutron ports)
device_owner is an attribute which is used internally by Neutron.
It identifies the service which manages the port. For example
router interface, router gateway will have their respective
device owners entries. Similarly, Neutron ports attached to Nova
instances have device_owner as compute.
Requirements
------------
Following are the requirements of Magnum around networking:
1. Provide networking capabilities to containers running in Nova
instances.
2. Magnum uses Heat to orchestrate multi-tenant application container
environments. Heat uses user-data scripts underneath. Therefore,
Kuryr must have the ability to be deployed/orchestrated using Heat
via the scripts.
3. Current Magnum container networking implementations, such as Flannel,
provide network connectivity to containers that reside across
multiple Nova instances. Kuryr must provide multi-instance container
networking capabilities. The existing networking capabilities like
Flannel that Magnum uses will remain, and Kuryr is to be introduced
in parallel. The decision on the default is for later, and the default may
vary based on the type of Magnum Bay. Magnum currently supports three
types of Bays: Swarm, Kubernetes, and Mesos. They are
referred to as COEs (Container Orchestration Engine).
4. Kuryr must provide a simple user experience like "batteries included
but replaceable" philosophy. Magnum must have the ability to deploy
Kuryr without any user intervention, but allow more advanced users
to modify Kuryr's default settings as needed.
5. If something needs to be installed in the Nova VMs used by Magnum,
it needs to be installed in the VMs in a secure manner.
6. Communication between Kuryr and other services must be secure. For example,
if there is a Kuryr agent running inside the Nova instances, the
communication between Kuryr components (Kuryr, Kuryr Agent),
Neutron-Kuryr, Magnum-Kuryr should all be secure.
7. Magnum Bays (Swarm, Kubernetes, etc..) must work the same or
better than they do with existing network providers such as Flannel.
8. Kuryr must scale just as well, if not better, than existing container
networking providers.
Use cases
----------
* Any container within a nova instance (VM, baremetal, container)
may communicate with any other nova instance (VM, baremetal, container),
or container therein, regardless if the containers are on the same nova
instance, same host, or different hosts within the same Magnum bay.
Such containers shall be able to communicate with any OpenStack cloud
resource in the same Neutron network as the Magnum bay nodes, including
(but not limited to) Load Balancers, Databases, and other Nova instances.
* Any container should be able to have access to any Neutron resource and
its capabilities. Neutron resources include DHCP, routers, floating IPs, etc.
Proposed Change
===============
The proposal is to leverage the concept of VLAN aware VMs/Trunk Ports [2],
that would be able to discriminate the traffic coming from VM by using
VLAN tags. The trunk port would get attached to a VM and be capable of
receiving both untagged and tagged traffic. Each VLAN would be represented
by a sub port (Neutron ports). A subport must have a network attached.
Each subport will have an additional parameter of VID. VID can be of
different types and VLAN is one of the options.
Each VM running containers for Magnum would need to have a Kuryr container
agent [3]. The Kuryr container agent would be like a CNI/CNM plugin, capable of
assigning IPs to the container interfaces and tagging them with VLAN IDs.
The Magnum baymodel resource can be passed information about the
network type, and Kuryr will serve Neutron networking. Based on the baymodel,
Magnum can provision the necessary services inside the Nova instance using Heat
templates and the scripts Heat uses. The Kuryr container agent would be
responsible for providing networking to the nested containers by tagging
each container interface with a VLAN ID. Kuryr container agent [3] would be
agnostic of COE type and will have different modes based on the COE.
First implementation would support Swarm and the corresponding container
network model via libnetwork.
There are two mechanisms in which nested containers will be served networking
via Kuryr:
1. When user interacts with Magnum APIs to provision containers.
2. Magnum allows end-users to access native COE APIs. It means end-users
can alternatively create containers using docker CLI etc. If the
end-users interact with the native APIs, they should be able to get
the same functionality that is available via Magnum interfaces/orchestration.
COEs use underlying container runtime tools, so this option is also applicable
to non-COE APIs as well.
For the case where the user interacts with the Magnum APIs, Magnum would need
to integrate a 'network' option in the container API to choose Neutron networks
for containers. This option will be applicable for baymodels
running Kuryr-type networking. For each container launched, Magnum would
pick up a network and talk to the COE to provision the container(s). The Kuryr
agent would be running inside the Nova instance as a driver/plugin to the COE
networking model. Based on the network UUID/name, the Kuryr agent will create a
subport on the parent trunk port that the Nova instance is attached to; Kuryr
will allocate a VLAN ID, and subport creation will be invoked in Neutron, which
will allocate the IP address. Based on the information returned, the Kuryr
agent will assign the IP to the container/pod and assign a VLAN, which would
match the VLAN in the subport metadata. Once the sub-port is provisioned, it
will have an IP address and a VLAN ID allocated by Neutron and Kuryr
respectively.
For the case where the native COE APIs are used, the user would be required to
specify information about the Kuryr driver and Neutron networks when launching
containers. The Kuryr agent will take care of providing networking to the
containers in exactly the same fashion as it would when Magnum talks to the
COEs.
Now, all the traffic coming from the containers inside the VMs would be
tagged and backend implementation of how those containers communicate
will follow a generic onboarding mechanism. Neutron supports several plugins
and each plugin uses some backend technology. The plugins would be
responsible for implementing VLAN aware VMs Neutron extension and onboard
the container based on tenant UUID, trunk port ID, VLAN ID, network UUID
and sub-port UUID. Subports will have deviceowner=kuryr. At this
point, a plugin can onboard the container using unique classification per
tenant to the relevant Neutron network and nested container would be
onboarded onto Neutron networks and will be capable of passing packets.
The plugins/onboarding engines would be responsible for tagging the packets
with the correct VLAN ID on their way back to the containers.
Integration Components
-----------------------
Kuryr:
Kuryr and Kuryr Agent will be responsible for providing the networking
inside the Nova instances. Kuryr is the main service/utility running
on the controller node and capabilities like segmentation ID allocation
will be performed there. Kuryr agent will be like a CNI/CNM plugin,
capable of allocating IPs and VLANs to container interfaces. Kuryr
agent will be a helper running inside the Nova instances that can
communicate with Neutron endpoint and Kuryr server. This will require
availability of credentials inside the Bay that Kuryr can use to
communicate. There is a security impact of storing credentials and
it is discussed in the Security Impact section of this document.
More details on the Kuryr Agent can be found here [3].
Neutron:
vlan-aware-vms and notion of trunk port, sub-ports from Neutron will be
used in this design. Neutron will be responsible for all the backend
networking that Kuryr will expose via its mechanisms.
Magnum:
Magnum will be responsible for launching containers on specified/pre-provisioned
networks, using Heat to provision Kuryr components inside Nova instances and passing
along network information to the COEs, which can invoke their networking part.
Heat:
Heat templates use user-data scripts to launch tools for containers that Magnum
relies on. The scripts will be updated to handle Kuryr. We should not expect
to run scripts each time a container is started. More details can be
found here [4].
Example of model::
+--------------------------+      +--------------------------+
|  +------+    +------+    |      |  +------+    +------+    |
|  |  c1  |    |  c2  |    |      |  |  c3  |    |  c4  |    |
|  +------+    +------+    |      |  +------+    +------+    |
|           VM1            |      |           VM2            |
+------------+-------------+      +------------+-------------+
             |                                 |
       Trunk Port1                       Trunk Port2
      /      |      \                   /      |      \
  +----+  +----+  +----+            +----+  +----+  +----+
  | S1 |  | S2 |  | S3 |            | S4 |  | S5 |  | S6 |
  +----+  +----+  +----+            +----+  +----+  +----+
    |        |       |                |        |       |
    N1       N3      N1               N2       N4      N2

(N1, N2, N3 and N4 are all attached to a Router)
C1-4 = Magnum containers
N1-4 = Neutron Networks and Subnets
S1,S3,S4,S6 = Subports
S2,S5 = Trunk ports (untagged traffic)
In the example above, Magnum launches four containers (c1, c2, c3, c4)
spread across two Nova instances. There are four Neutron
networks (N1, N2, N3, N4) in the deployment and all of them are
connected to a router. Both Nova instances (VM1 and VM2) have one
NIC each and a corresponding trunk port. Each trunk port has three
ports associated with it: S1, S2, S3 for VM1 and S4, S5, S6 for VM2.
Untagged traffic goes to S2 and S5, and tagged traffic to S1, S3, S4
and S6. On the tagged sub-ports, the tags will be stripped and packets
will be sent to the respective Neutron networks.
On the way back, the reverse is applied: each sub-port to VLAN mapping
is checked using something like the following table, and packets are
tagged accordingly:
+------+----------------------+---------------+
| Port | Tagged(VID)/untagged | Packets go to |
+======+======================+===============+
| S1   | 100                  | N1            |
+------+----------------------+---------------+
| S2   | untagged             | N3            |
+------+----------------------+---------------+
| S3   | 200                  | N1            |
+------+----------------------+---------------+
| S4   | 100                  | N2            |
+------+----------------------+---------------+
| S5   | untagged             | N4            |
+------+----------------------+---------------+
| S6   | 300                  | N2            |
+------+----------------------+---------------+
One thing to note here is that S1.vlan == S4.vlan is a valid scenario,
since the two sub-ports belong to different trunk ports. It is also possible
that some implementations do not use VLAN IDs; the VID can be something
other than a VLAN ID. The fields in the sub-port can be treated as key-value
pairs, and corresponding support can be added to the Kuryr agent if there is
a need. A minimal sketch of such a per-trunk lookup is shown below.
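The following toy example (illustrative only; names such as ``trunk1`` are
placeholders) shows why keying the mapping by trunk port as well as VID keeps
S1 and S4 distinct even though they use the same VID::

    # Illustrative per-trunk (VID -> network) lookup. Using the trunk port
    # as part of the key is what makes S1.vlan == S4.vlan legal, because
    # the two sub-ports sit behind different trunk ports.
    SUBPORT_MAP = {
        # (trunk_port, vid): neutron_network
        ('trunk1', 100):        'N1',   # S1
        ('trunk1', 'untagged'): 'N3',   # S2
        ('trunk1', 200):        'N1',   # S3
        ('trunk2', 100):        'N2',   # S4
        ('trunk2', 'untagged'): 'N4',   # S5
        ('trunk2', 300):        'N2',   # S6
    }

    def network_for(trunk_port, vid='untagged'):
        """Return the Neutron network a frame should be delivered to."""
        return SUBPORT_MAP[(trunk_port, vid)]

    assert network_for('trunk1', 100) != network_for('trunk2', 100)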
Example of commands:
::
magnum baymodel-create --name <name> \
--image-id <image> \
--keypair-id <kp> \
--external-network-id <net-id> \
--dns-nameserver <dns> \
--flavor-id <flavor-id> \
--docker-volume-size <vol-size> \
--coe <coe-type> \
--network-driver kuryr
::
neutron port-create --name S1 N1 \
--device-owner kuryr
::
neutron port-create --name S2 N3
::
# trunk-create may refer to 0, 1 or more subport(s).
$ neutron trunk-create --port-id PORT \
[--subport PORT[,SEGMENTATION-TYPE,SEGMENTATION-ID]] \
[--subport ...]
Note: All ports referred to must exist.
::
# trunk-subport-add adds 1 or more subport(s)
$ neutron trunk-subport-add TRUNK \
PORT[,SEGMENTATION-TYPE,SEGMENTATION-ID] \
[PORT,...]
::
magnum container-create --name <name> \
--image <image> \
--bay <bay> \
--command <command> \
--memory <memory> \
--network network_id
Magnum changes
--------------
Magnum will launch containers on Neutron networks.
Magnum will provision the Kuryr Agent inside the Nova instances via Heat templates.
Alternatives
------------
None
Data Model Impact (Magnum)
--------------------------
This document adds the network_id attribute to the container database
table. A migration script will be provided to support the attribute
being added. ::
+-------------+------+---------------------------+
| Attribute   | Type | Description               |
+=============+======+===========================+
| network_id  | uuid | UUID of a Neutron network |
+-------------+------+---------------------------+
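A migration of roughly this shape could be used, assuming Magnum's usual
alembic setup; the revision identifiers and the exact table name are
assumptions, and only the network_id column comes from this spec::

    # Hypothetical alembic migration sketch for the new column.
    from alembic import op
    import sqlalchemy as sa

    revision = 'aaaa00000000'       # placeholder revision id
    down_revision = 'bbbb11111111'  # placeholder parent revision

    def upgrade():
        # Neutron network UUIDs are 36-character strings.
        op.add_column('container',
                      sa.Column('network_id', sa.String(length=36),
                                nullable=True))

    def downgrade():
        op.drop_column('container', 'network_id')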
REST API Impact (Magnum)
-------------------------
This document adds the network_id attribute to the Container
API class. ::
+-------------+------+---------------------------+
| Attribute   | Type | Description               |
+=============+======+===========================+
| network_id  | uuid | UUID of a Neutron network |
+-------------+------+---------------------------+
Security Impact
---------------
The Kuryr agent running inside Nova instances will communicate with OpenStack APIs.
For this to happen, credentials will have to be stored inside the Nova instances
hosting the Bays. This arrangement poses a security threat: the credentials might be
compromised, and a malicious container could find ways to gain access to the
credentials or to the Kuryr agent.
To mitigate the impact, there are multiple options:
1. Run the Kuryr agent in two modes: primary and secondary. Only the primary mode has
access to the credentials; it talks to Neutron and fetches information about available
resources such as IPs and VLANs. The secondary mode has no knowledge of the credentials
and performs operations based on the information it receives as input (IP, VLAN, etc.).
The primary mode can be tied to the Kubernetes or Mesos master nodes. In this option,
containers run on nodes other than the ones that talk to OpenStack APIs.
2. Containerize the Kuryr Agent to offer isolation from other containers.
3. Instead of storing credentials in plain-text files, package them in some binary
form and make them part of the container running the Kuryr agent.
4. Have an admin-provisioned Nova instance that carries the credentials
and has connectivity to the tenant Bays. The credentials are accessible only to the Kuryr
agent, via a specific port that is allowed through security group rules and a secret key.
In this option, operations like VM snapshots in tenant domains will not lead to stolen credentials.
5. Introduce a Keystone authentication mechanism for the Kuryr agent (a minimal sketch
under assumed names follows this list). In case of a compromise, this option will limit
the damage to the scope of the permissions/roles the Kuryr agent has.
6. Use HTTPS for communication with OpenStack APIs.
7. Introduce a mechanism/tool to detect if a host is compromised and take action to stop any further
damage.
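For option 5, a hedged sketch of what agent-side authentication could look
like with keystoneauth1, assuming a dedicated, narrowly scoped user is
provisioned for the agent (the endpoint, user and project names below are
placeholders)::

    # Hypothetical sketch: authenticate the Kuryr agent with its own
    # limited Keystone credentials instead of embedding admin credentials.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='https://keystone.example.com:5000/v3',  # placeholder
        username='kuryr-agent',        # dedicated, low-privilege user
        password='<agent-secret>',
        project_name='tenant-project',
        user_domain_id='default',
        project_domain_id='default')

    # Verified HTTPS session (option 6); a compromise is limited to the
    # roles granted to this user (option 5).
    sess = session.Session(auth=auth, verify=True)
    token = sess.get_token()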
Notifications Impact
--------------------
None
Other End User Impact
---------------------
None
Performance Impact
------------------
For containers inside the same VM to communicate with each other,
the packets have to leave the VM and come back in, which adds overhead
compared to traffic switched entirely inside the instance.
IPv6 Impact
-----------
None
Other Deployer Impact
---------------------
None
Developer Impact
----------------
The extended attributes in the Magnum container API will need to be used.
Introduction of the Kuryr agent.
Changes to the testing framework are required.
Community Impact
----------------
The changes bring a significant improvement to the container
networking approach by using Neutron as a backend via Kuryr.
Implementation
==============
Assignee(s)
-----------
Fawad Khaliq (fawadkhaliq)
Work Items
----------
Magnum:
* Extend the Magnum API to support the new network attribute.
* Extend the Client API to support the new network attribute.
* Extend the container objects to support the new network
attribute. Provide a database migration script for
adding the attribute.
* Extend unit and functional tests to support the new network
attribute in Magnum.
Heat:
* Update Heat templates to support the Magnum container
port information.
Kuryr:
* Kuryr container agent.
* Kuryr VLAN/VID allocation engine (a rough sketch follows this list).
* Extend unit test cases in Kuryr for the agent and VLAN/VID allocation
engine.
* Other tempest tests.
* Other scenario tests.
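As a rough, non-authoritative sketch of what the VLAN/VID allocation engine
work item could amount to (the VID range and data structures are assumptions,
and persistence and concurrency handling are deliberately omitted)::

    # Illustrative per-trunk VID allocator: hands out the lowest free VID
    # in a configurable range and never reuses a VID still in use on the
    # same trunk.
    class VidAllocator(object):

        def __init__(self, vid_min=100, vid_max=4094):
            self._vids = range(vid_min, vid_max + 1)
            self._in_use = {}  # trunk_id -> set of allocated VIDs

        def allocate(self, trunk_id):
            used = self._in_use.setdefault(trunk_id, set())
            for vid in self._vids:
                if vid not in used:
                    used.add(vid)
                    return vid
            raise RuntimeError('No free VID left on trunk %s' % trunk_id)

        def release(self, trunk_id, vid):
            self._in_use.get(trunk_id, set()).discard(vid)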
Dependencies
============
VLAN aware VMs [2] implementation in Neutron
Testing
=======
Tempest and functional tests will be created.
Documentation Impact
====================
Documentation will have to be updated to cover the
Magnum container API changes and the use of the Kuryr
network driver.
User Documentation
------------------
Magnum and Kuryr user guides will be updated.
Developer Documentation
-----------------------
The Magnum and Kuryr developer quickstart documents will be
updated to include the nested container use case and the
corresponding details.
References
==========
[1] https://review.openstack.org/#/c/204686/7
[2] http://specs.openstack.org/openstack/neutron-specs/specs/mitaka/vlan-aware-vms.html
[3] https://blueprints.launchpad.net/kuryr/+spec/kuryr-agent
[4] https://blueprints.launchpad.net/kuryr/+spec/kuryr-magnum-heat-deployment
[5] http://docs.openstack.org/developer/magnum/
@ -1,23 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
==========================================
Title of your RFE
==========================================
Problem Description
===================
Proposed Change
===============
References
==========
@ -1,64 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
http://creativecommons.org/licenses/by/3.0/legalcode
====================================
Example Spec - The title of your RFE
====================================
Include the URL of your launchpad RFE:
https://bugs.launchpad.net/kuryr/+bug/example-id
Introduction paragraph -- why are we doing this feature? A single paragraph of
prose that **deployers, and developers, and operators** can understand.
Do you even need to file a spec? Most features can be done by filing an RFE bug
and moving on with life. In most cases, filing an RFE and documenting your
design is sufficient. If the feature seems very large or contentious, then
you may want to consider filing a spec.
Problem Description
===================
A detailed description of the problem:
* For a new feature this should be use cases. Ensure you are clear about the
actors in each use case: End User vs Deployer
* For a major reworking of something existing it would describe the
problems in that feature that are being addressed.
Note that the RFE filed for this feature will have a description already. This
section is not meant to simply duplicate that; you can simply refer to that
description if it is sufficient, and use this space to capture changes to
the description based on bug comments or feedback on the spec.
Proposed Change
===============
How do you propose to solve this problem?
This section is optional, and provides an area to discuss your high-level
design at the same time as use cases, if desired. Note that by high-level,
we mean the "view from orbit" rough cut at how things will happen.
This section should 'scope' the effort from a feature standpoint: what is the
'kuryr end-to-end system' going to look like after this change? What Kuryr
areas do you intend to touch and how do you intend to work on them? The list
below is not meant to be a template to fill in, but rather a jumpstart on the
sorts of areas to consider in your proposed change description.
You do not need to detail API or data model changes.
References
==========
Please add any useful references here. You are not required to have any
reference. Moreover, this specification should still make sense when your
references are unavailable.
@ -1,7 +0,0 @@
========
Usage
========
To use kuryr in a project::

    import kuryr