Fix docs & specs errors.

- Fix errors when running tox -e docs (except one spec, cinder-integration.rst,
  which will be fixed when patch set [1] is merged)
- Update URLs.

[1] https://review.openstack.org/#/c/468658/

Change-Id: I05865708ef356f17a388eace234ac004dd1a364f
Kien Nguyen 2017-07-14 17:07:14 +07:00
parent 5e22975429
commit 76057450b5
11 changed files with 141 additions and 125 deletions

View File

@ -1,7 +1,7 @@
Zun Style Commandments
======================
Read the OpenStack Style Commandments https://docs.openstack.org/developer/hacking/
Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
Zun Specific Commandments
-------------------------

View File

@ -24,8 +24,8 @@ Note that this is a hard requirement.
* Documentation: https://docs.openstack.org/zun/latest/
* Source: https://git.openstack.org/cgit/openstack/zun
* Bugs: https://bugs.launchpad.net/zun
* Blueprints:** https://blueprints.launchpad.net/zun
* REST Client:** https://git.openstack.org/cgit/openstack/python-zunclient
* Blueprints: https://blueprints.launchpad.net/zun
* REST Client: https://git.openstack.org/cgit/openstack/python-zunclient
Features
--------

View File

@ -16,7 +16,7 @@ Versioned Objects
=================
Zun uses the `oslo.versionedobjects library
<https://docs.openstack.org/developer/oslo.versionedobjects/index.html>`_ to
<https://docs.openstack.org/oslo.versionedobjects/latest/>`_ to
construct an object model that can be communicated via RPC. These objects have
a version history and functionality to convert from one version to a previous
version. This allows for 2 different levels of the code to still pass objects
@ -28,7 +28,7 @@ Object Version Testing
In order to ensure object versioning consistency is maintained,
oslo.versionedobjects has a fixture to aid in testing object versioning.
`oslo.versionedobjects.fixture.ObjectVersionChecker
<https://docs.openstack.org/developer/oslo.versionedobjects/api/fixture.html#oslo_versionedobjects.fixture.ObjectVersionChecker>`_
<https://docs.openstack.org/oslo.versionedobjects/latest/reference/fixture.html#objectversionchecker>`_
generates a fingerprint for each object, which is a combination of the current
version number of the object, along with a hash of the RPC-critical parts of
the object (fields and remotable methods).
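As a hedged illustration (names such as ``ZunObjectRegistry`` and the
expected-fingerprint dict below are assumptions, not necessarily Zun's exact
test layout), a fingerprint test built on this fixture typically looks like::

    from oslo_versionedobjects import fixture

    from zun.objects import base
    from zun.tests import base as test_base

    # Hypothetical expected fingerprints; regenerate whenever an object's
    # RPC-critical parts change and its version is bumped.
    expected_fingerprints = {
        'Container': '1.0-abcdef1234567890',
    }


    class TestObjectVersions(test_base.TestCase):

        def test_object_fingerprints(self):
            checker = fixture.ObjectVersionChecker(
                base.ZunObjectRegistry.obj_classes())
            expected, actual = checker.test_hashes(expected_fingerprints)
            self.assertEqual(expected, actual,
                             'Object fingerprints changed; bump the object '
                             'version and update the expected hash.')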

View File

@ -18,7 +18,7 @@
This is a demo of Zun integrating with OSProfiler. `Zun
<https://wiki.openstack.org/wiki/Zun>`_ is an OpenStack container
management service, while `OSProfiler
<https://docs.openstack.org/developer/osprofiler/>`_ provides
<https://docs.openstack.org/osprofiler/latest/>`_ provides
a tiny but powerful library that is used by most OpenStack projects and
their python clients.
@ -30,7 +30,7 @@ option without using ceilometer. Here just use Redis as an example, user
can choose mongodb, elasticsearch, and `etc
<https://git.openstack.org/cgit/openstack/osprofiler/tree/osprofiler/drivers>`_.
Install Redis as the `centralized collector
<https://docs.openstack.org/developer/osprofiler/collectors.html>`_
<https://docs.openstack.org/osprofiler/latest/user/collectors.html>`_
Redis in a container is easy to launch; `choose the Redis Docker image
<https://hub.docker.com/_/redis/>`_ and run::

View File

@ -49,8 +49,8 @@ container which will support the network namespace for the capsule.
* Scheduler: Containers inside a capsule are scheduled as a unit, thus all
containers inside a capsule are co-located. All containers inside a capsule
will be launched on one compute host.
* Network: Containers inside a capsule share the same network namespace,
so they share IP address(es) and can find each other via localhost by using
* Network: Containers inside a capsule share the same network namespace, so
they share IP address(es) and can find each other via localhost by using
different remapped network ports. Capsule IP address(es) will re-use the
sandbox IP. Communication between containers in different capsules will use
the capsules' IP and port.
@ -58,19 +58,19 @@ capsules IP and port.
Starting: Capsule is created, but one or more containers inside the capsule are
still being created.
Running: Capsule is created, and all the containers are running.
Finished: All containers inside the capsule have successfully executed and
exited.
Finished: All containers inside the capsule have successfully executed
and exited.
Failed: Capsule creation failed.
* Restart Policy: Capsule will have a restart policy just like container. The
restart policy relies on container restart policy to execute.
* Restart Policy: Capsule will have a restart policy just like container.
The restart policy relies on container restart policy to execute.
* Health checker:
In the first step of realization, each container inside the capsule will send
its status to the capsule when its status changes.
* Upgrade and rollback:
Upgrade: Support capsule update (different from zun update). That means the
container image will be updated: launch the new capsule from the new image, then
destroy the old capsule. The capsule IP address will change. For Volume, need
to clarify it after Cinder integration.
destroy the old capsule. The capsule IP address will change. For Volume,
need to clarify it after Cinder integration.
Rollback: When the update fails, roll back to its original status.
* CPU and memory resources: Given host resource allocation, cpu and memory
support will be implemented.
@ -86,50 +86,52 @@ Implementation:
in a capsule should be scheduled to and spawned on the same host. Server
side will keep the information in DB.
3. Add functions about yaml file parser in the CLI side. After parsing the
yaml, send the REST to API server side, scheduler will decide which host
to run the capsule.
yaml, send the REST to API server side, scheduler will decide which host to
run the capsule.
4. Introduce new REST API for capsule. The capsule creation workflow is:
CLI Parsing capsule information from yaml file -->
API server do the CRUD operation, call scheduler to launch the capsule, from
Cinder to get volume, from Kuryr to get network support-->
Compute host launch the capsule, attach the volume-->
API server do the CRUD operation, call scheduler to launch the capsule,
from Cinder to get volume, from Kuryr to get network support -->
Compute host launch the capsule, attach the volume -->
Send the status to API server, update the DB.
5. Capsule creation will finally depend on the backend container driver. Now
choose Docker driver first.
5. Capsule creation will finally depend on the backend container driver.
Now choose Docker driver first.
6. Define a yaml file structure for capsule. The yaml file will be compatible
with Kubernetes pod yaml file, at the same time Zun will define the
available properties, metadata and template of the yaml file. In the first
step, only essential properties will be defined.
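A minimal sketch of steps 3 and 4 above, assuming PyYAML for parsing and a
hypothetical ``/capsules`` endpoint (the path and payload keys are
illustrative, not the final REST API)::

    import requests
    import yaml


    def create_capsule_from_yaml(yaml_path, zun_endpoint, token):
        # Parse the (Kubernetes-pod-compatible) capsule definition.
        with open(yaml_path) as f:
            capsule_spec = yaml.safe_load(f)

        # POST the parsed template; the API server persists it in the DB and
        # asks the scheduler to pick one compute host for the whole capsule.
        resp = requests.post('%s/capsules' % zun_endpoint,
                             headers={'X-Auth-Token': token},
                             json={'template': capsule_spec})
        resp.raise_for_status()
        return resp.json()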
The diagram below offers an overview of the architecture of ``capsule``:
The diagram below offers an overview of the architecture of ``capsule``.
::
+-----------------------------------------------------------+
| +-----------+ |
| | | |
| | Sandbox | |
| | | |
| +-----------+ |
| |
| |
| +-------------+ +-------------+ +-------------+ |
| | | | | | | |
| | Container | | Container | | Container | |
| | | | | | | |
| +-------------+ +-------------+ +-------------+ |
| |
| |
| +----------+ +----------+ |
| | | | | |
| | Volume | | Volume | |
| | | | | |
| +----------+ +----------+ |
| |
+-----------------------------------------------------------+
+-----------------------------------------------------------+
| +-----------+ |
| | | |
| | Sandbox | |
| | | |
| +-----------+ |
| |
| |
| +-------------+ +-------------+ +-------------+ |
| | | | | | | |
| | Container | | Container | | Container | |
| | | | | | | |
| +-------------+ +-------------+ +-------------+ |
| |
| |
| +----------+ +----------+ |
| | | | | |
| | Volume | | Volume | |
| | | | | |
| +----------+ +----------+ |
| |
+-----------------------------------------------------------+
Yaml format for ``capsule``:
Sample capsule:
.. code-block:: yaml
apiVersion: beta
@ -228,11 +230,10 @@ Volumes fields:
* driver(string): volume drivers
* driverOptions(string): options for volume driver
* size(string): volume size
* volumeType(string): volume type that Cinder needs; by default it is taken
  from the Cinder config
* volumeType(string): volume type that Cinder needs; by default it is taken from
  the Cinder config
* image(string): cinder needed to boot from image
Alternatives
------------
1. Abstract all the information from yaml file and implement the capsule CRUD
@ -286,30 +287,29 @@ REST API impact
* Capsule API: Capsules are considered to support multiple operations for
  container composition.
* Container API: Many container APIs will be extended to capsules. This
section will define the API usage range.
section will define the API usage range.
Capsule API:
list <List all the capsule, add parameters about list capsules
with the same labels>
create <-f yaml file><-f directory>
describe <display the details state of one or more resource>
delete <capsule name>
<-l name=label-name>
<all>
run <--capsule ... container-image>
If "--capsule .." is set, the container will be created
inside the capsule.
Otherwise, it will be created as normal.
::
Container API:
* show/list allow all containers
* create/delete allow bare container only
(disallow in-capsule containers)
* attach/cp/logs/top allow all containers
* start/stop/restart/kill/pause/unpause allow bare container only (disallow
in-capsule containers)
* update for container in the capsule, need <--capsule>
params. Bare container doesn't need.
Capsule API:
list <List all the capsule, add parameters about list capsules with the same labels>
create <-f yaml file><-f directory>
describe <display the details state of one or more resource>
delete
<capsule name>
<-l name=label-name>
<all>
run <--capsule ... container-image>
If "--capsule .." is set, the container will be created inside the capsule.
Otherwise, it will be created as normal.
Container API:
* show/list allow all containers
* create/delete allow bare container only (disallow in-capsule containers)
* attach/cp/logs/top allow all containers
* start/stop/restart/kill/pause/unpause allow bare container only (disallow in-capsule containers)
* update for container in the capsule, need <--capsule> params.
Bare container doesn't need.
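For illustration only, the capsule operations listed above could be exercised
against a hypothetical ``/capsules`` resource as follows (endpoint paths and
query parameters are assumptions)::

    import requests


    def list_capsules(endpoint, token, label=None):
        # e.g. label='name=web' to list capsules with the same label
        params = {'labels': label} if label else {}
        resp = requests.get('%s/capsules' % endpoint,
                            headers={'X-Auth-Token': token}, params=params)
        resp.raise_for_status()
        return resp.json()


    def delete_capsule(endpoint, token, capsule_ident):
        resp = requests.delete('%s/capsules/%s' % (endpoint, capsule_ident),
                               headers={'X-Auth-Token': token})
        resp.raise_for_status()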
Security impact
---------------
@ -395,5 +395,7 @@ A set of documentation for this new feature will be required.
References
==========
[1] https://kubernetes.io/
[2] https://docs.docker.com/compose/
[3] https://etherpad.openstack.org/p/zun-container-composition

View File

@ -22,16 +22,21 @@ taking a snapshot of a container.
Proposed change
===============
1. Introduce a new CLI command to enable a user to take a snapshot of a running
container instance.
zun commit <container-name> <image-name>
# zun help commit
usage: zun commit <container-name> <image-name>
Create a new image by taking a snapshot of a running container.
Positional arguments:
<container-name> Name or ID of container.
<image-name> Name of snapshot.
container instance::
$ zun commit <container-name> <image-name>
$ zun help commit
usage: zun commit <container-name> <image-name>
Create a new image by taking a snapshot of a running container.
Positional arguments:
<container-name> Name or ID of container.
<image-name> Name of snapshot.
2. Extend docker driver to enable “docker commit” command to create a
new image.
3. The new image should be accessible from other hosts. There are two
options to support this:
a) upload the image to glance
@ -66,23 +71,27 @@ also see the new image in the image back end that OpenStack Image service
manages.
Preconditions:
1. The container must exist.
2. User can only create a new image from the container when its status is
Running, Stopped and Paused.
Running, Stopped, and Paused.
3. The connection to the Image service is valid.
::
POST /containers/<ID>/commit: commit a container
Example commit
{
"image-name" : "foo-image"
}
POST /containers/<ID>/commit: commit a container
Example commit
{
"image-name" : "foo-image"
}
Response:
If successful, this method does not return content in the response body.
Normal response codes: 202
Error response codes: badRequest(400), unauthorized(401), forbidden(403),
itemNotFound(404)
- Normal response codes: 202
- Error response codes: BadRequest(400), Unauthorized(401), Forbidden(403),
ItemNotFound(404)
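As a hedged sketch of how the docker driver could implement the commit step
from point 2 above (using the Docker SDK for Python; the function name and tag
handling are assumptions)::

    import docker


    def commit_container(container_ident, image_name, tag='latest'):
        client = docker.from_env()
        container = client.containers.get(container_ident)
        # Equivalent of "docker commit <container> <image-name>:<tag>".
        image = container.commit(repository=image_name, tag=tag)
        return image.id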
Security impact
===============

View File

@ -61,9 +61,9 @@ cpusets requested for dedicated usage.
6. If this feature is being used with the zun scheduler, then the scheduler
needs to be aware of the host capabilities to choose the right host.
For example:
For example::
zun run -i -t --name test --cpu 4 --cpu-policy dedicated
$ zun run -i -t --name test --cpu 4 --cpu-policy dedicated
We would try to support scheduling using both of these policies on the same
host.
@ -71,20 +71,21 @@ host.
How does it work internally?
Once the user specifies the number of cpus, we would try to select a numa node
that has at least the requested number of unpinned cpusets available to satisfy the
request.
that has at least the requested number of unpinned cpusets available to satisfy
the request.
Once the scheduler has determined the cpusets and their corresponding numa
node, a driver method should be called for the actual provisioning of the
request on the compute node. Corresponding updates would be made to the
inventory table.
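A minimal sketch of that selection logic, using a hypothetical per-NUMA-node
inventory (not Zun's actual schema)::

    def pick_numa_node(numa_nodes, requested_cpus):
        """Return (node_id, cpuset) for a node with enough unpinned CPUs.

        numa_nodes: mapping of node id -> set of currently unpinned CPU ids.
        """
        for node_id, free_cpus in sorted(numa_nodes.items()):
            if len(free_cpus) >= requested_cpus:
                # Pin the first N free CPUs of this node; the driver is then
                # called to apply the cpuset and the inventory table updated.
                return node_id, sorted(free_cpus)[:requested_cpus]
        raise RuntimeError('No NUMA node can satisfy the dedicated cpu request')

    # Example: "--cpu 4 --cpu-policy dedicated" against two nodes.
    print(pick_numa_node({0: {0, 1, 2, 3, 8, 9}, 1: {4, 5}}, 4))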
In case of the docker driver - this can be achieved by a docker run equivalent:
In case of the docker driver - this can be achieved by a docker run
equivalent::
docker run -d --cpuset-cpus="1,3" --cpuset-mems="1,3" ubuntu
$ docker run -d --cpuset-cpus="1,3" --cpuset-mems="1,3" ubuntu
The cpuset-mems would allow the memory access for the cpusets to stay
localized.
The cpuset-mems would allow the memory access for the cpusets to
stay localized.
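For reference, the same request expressed with the Docker SDK for Python (a
hedged sketch; the image and cpuset values are only examples)::

    import docker

    client = docker.from_env()
    container = client.containers.run(
        'ubuntu',
        detach=True,
        cpuset_cpus='1,3',   # pin the container to CPUs 1 and 3
        cpuset_mems='1,3')   # keep memory allocations on the same NUMA nodes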
If the container is in paused/stopped state, the DB will still continue to
block the pinset information for the container instead of releasing it.

View File

@ -52,9 +52,9 @@ Proposed change
The typical workflow will be as follows:
1. Users call Zun APIs to create a container network by passing a name/uuid of
a neutron network.
a neutron network::
$ zun network-create --neutron-net private --name foo
$ zun network-create --neutron-net private --name foo
2. After receiving this request, Zun will make several API calls to Neutron
to retrieve the necessary information about the specified network
@ -63,19 +63,19 @@ The typical workflow will be as following:
should only be one or two. If the number of subnets is two, they must be
an ipv4 subnet and an ipv6 subnet, respectively. Zun will retrieve the
cidr/gateway/subnetpool of each subnet and pass this information to
Docker to create a Docker network. The API call will be similar to:
Docker to create a Docker network. The API call will be similar to::
$ docker network create -d kuryr --ipam-driver=kuryr \
--subnet <ipv4_cidr> \
--gateway <ipv4_gateway> \
-ipv6 --subnet <ipv6_cidr> \
--gateway <ipv6_gateway> \
-o neutron.net.uuid=<network_uuid> \
-o neutron.pool.uuid=<ipv4_pool_uuid> \
--ipam-opt neutron.pool.uuid=<ipv4_pool_uuid> \
-o neutron.pool.v6.uuid=<ipv6_pool_uuid> \
--ipam-opt neutron.pool.v6.uuid=<ipv6_pool_uuid> \
foo
$ docker network create -d kuryr --ipam-driver=kuryr \
--subnet <ipv4_cidr> \
--gateway <ipv4_gateway> \
-ipv6 --subnet <ipv6_cidr> \
--gateway <ipv6_gateway> \
-o neutron.net.uuid=<network_uuid> \
-o neutron.pool.uuid=<ipv4_pool_uuid> \
--ipam-opt neutron.pool.uuid=<ipv4_pool_uuid> \
-o neutron.pool.v6.uuid=<ipv6_pool_uuid> \
--ipam-opt neutron.pool.v6.uuid=<ipv6_pool_uuid> \
foo
NOTE: In this step, docker engine will check the list of registered network
plugins and find the API endpoint of Kuryr, then make a call to Kuryr to create
@ -84,23 +84,24 @@ This example assumed that the Neutron resources were pre-created by cloud
administrators (which should be the case in most clouds). If this is
not true, users need to manually create the resources.
3. Users call Zun APIs to create a container from the container network 'foo'.
3. Users call Zun APIs to create a container from the container network 'foo'::
$ zun run --net=foo nginx
$ zun run --net=foo nginx
4. Under the hood, Zun will perform several steps to configure the networking.
First, call neutron API to create a port from the specified neutron network.
First, call neutron API to create a port from the specified neutron
network::
$ neutron port-create private
$ neutron port-create private
5. Then, Zun will retrieve information of the created neutron port and retrieve
its IP address(es). A port could have one or two IP addresses: an ipv4
address and/or an ipv6 address. Then, call Docker APIs to create the
container by using the IP address(es) of the neutron port. This is
equivalent to:
equivalent to::
$ docker run --net=foo kubernetes/pause --ip <ipv4_address> \
--ip6 <ipv6_address>
$ docker run --net=foo kubernetes/pause --ip <ipv4_address> \
--ip6 <ipv6_address>
NOTE: In this step, docker engine will make a call to Kuryr to set up the
networking of the container. After receiving the request from Docker, Kuryr
@ -109,15 +110,15 @@ This might include something like: create a veth pair, connect one end of the
veth pair to the container, connect the other end of the veth pair to a
neutron-created bridge, etc.
6. Users call Zun API to list/show the created network(s).
6. Users call Zun API to list/show the created network(s)::
$ zun network-list
$ zun network-show foo
$ zun network-list
$ zun network-show foo
7. Upon completion, users call Zun API to remove the container and network.
7. Upon completion, users call Zun API to remove the container and network::
$ zun delete <container_id>
$ zun network-delete foo
$ zun delete <container_id>
$ zun network-delete foo
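A hedged sketch of the under-the-hood calls in steps 4 and 5 above, combining
python-neutronclient with the Docker SDK for Python (function and variable
names are illustrative, not Zun's actual code)::

    import docker
    from neutronclient.v2_0 import client as neutron_client


    def boot_sandbox_on_network(session, neutron_net_id, docker_net='foo'):
        neutron = neutron_client.Client(session=session)

        # Step 4: create a neutron port on the chosen network.
        port = neutron.create_port(
            {'port': {'network_id': neutron_net_id}})['port']
        ipv4 = port['fixed_ips'][0]['ip_address']

        # Step 5: create the sandbox container, attach it to the Kuryr-backed
        # docker network with the port's IPv4 address, then start it.
        client = docker.from_env()
        container = client.containers.create('kubernetes/pause')
        client.networks.get(docker_net).connect(container, ipv4_address=ipv4)
        container.start()
        return container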
Alternatives
@ -215,6 +216,9 @@ A set of documentation for this new feature will be required.
References
==========
[1] https://git.openstack.org/cgit/openstack/kuryr-libnetwork
[2] https://blueprints.launchpad.net/kuryr/+spec/existing-neutron-network
[3] https://blueprints.launchpad.net/kuryr-libnetwork/+spec/existing-subnetpool
[4] https://git.openstack.org/cgit/openstack/zun/tree/specs/container-sandbox.rst

View File

@ -15,7 +15,7 @@
# It's based on oslo.i18n usage in OpenStack Keystone project and
# recommendations from
# https://docs.openstack.org/developer/oslo.i18n/usage.html
# https://docs.openstack.org/oslo.i18n/latest/user/usage.html
import oslo_i18n
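# For reference, the pattern that guide describes looks like the following
# short sketch (the translation domain simply follows the project name):
#
#     _translators = oslo_i18n.TranslatorFactory(domain='zun')
#     _ = _translators.primary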

View File

@ -48,7 +48,7 @@ def init(policy_file=None, rules=None,
"""
global _ENFORCER
if not _ENFORCER:
# https://docs.openstack.org/developer/oslo.policy/usage.html
# https://docs.openstack.org/oslo.policy/latest/user/usage.html
_ENFORCER = policy.Enforcer(CONF,
policy_file=policy_file,
rules=rules,

View File

@ -13,7 +13,7 @@
# It's based on oslo.i18n usage in OpenStack Keystone project and
# recommendations from
# https://docs.openstack.org/developer/oslo.i18n/usage.html
# https://docs.openstack.org/oslo.i18n/latest/user/usage.html
"""Utilities and helper functions."""
import eventlet