Fix docs & specs errors.

- Fix errors when running tox -e docs (except one spec, cinder-integration.rst,
  which will be fixed once patch set [1] is merged)
- Update URLs.

[1] https://review.openstack.org/#/c/468658/

Change-Id: I05865708ef356f17a388eace234ac004dd1a364f
Kien Nguyen 2017-07-14 17:07:14 +07:00
parent 5e22975429
commit 76057450b5
11 changed files with 141 additions and 125 deletions


@@ -1,7 +1,7 @@
 Zun Style Commandments
 ======================
-Read the OpenStack Style Commandments https://docs.openstack.org/developer/hacking/
+Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
 Zun Specific Commandments
 -------------------------


@@ -24,8 +24,8 @@ Note that this is a hard requirement.
 * Documentation: https://docs.openstack.org/zun/latest/
 * Source: https://git.openstack.org/cgit/openstack/zun
 * Bugs: https://bugs.launchpad.net/zun
-* Blueprints:** https://blueprints.launchpad.net/zun
-* REST Client:** https://git.openstack.org/cgit/openstack/python-zunclient
+* Blueprints: https://blueprints.launchpad.net/zun
+* REST Client: https://git.openstack.org/cgit/openstack/python-zunclient
 Features
 --------


@@ -16,7 +16,7 @@ Versioned Objects
 =================
 Zun uses the `oslo.versionedobjects library
-<https://docs.openstack.org/developer/oslo.versionedobjects/index.html>`_ to
+<https://docs.openstack.org/oslo.versionedobjects/latest/>`_ to
 construct an object model that can be communicated via RPC. These objects have
 a version history and functionality to convert from one version to a previous
 version. This allows for 2 different levels of the code to still pass objects
@@ -28,7 +28,7 @@ Object Version Testing
 In order to ensure object versioning consistency is maintained,
 oslo.versionedobjects has a fixture to aid in testing object versioning.
 `oslo.versionedobjects.fixture.ObjectVersionChecker
-<https://docs.openstack.org/developer/oslo.versionedobjects/api/fixture.html#oslo_versionedobjects.fixture.ObjectVersionChecker>`_
+<https://docs.openstack.org/oslo.versionedobjects/latest/reference/fixture.html#objectversionchecker>`_
 generates fingerprints of each object, which is a combination of the current
 version number of the object, along with a hash of the RPC-critical parts of
 the object (fields and remotable methods).
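
(Side note to the hunk above: for readers unfamiliar with the fixture, a
minimal object-hash test usually looks like the sketch below. The registry
name and the fingerprint value are assumptions modelled on the common
oslo.versionedobjects pattern, not copied from the Zun tree.)

::

    # Hedged sketch of an object-version test built on ObjectVersionChecker.
    from oslo_versionedobjects import fixture

    from zun.objects import base             # assumed module path
    from zun.tests import base as test_base  # assumed module path

    # Expected "<version>-<hash>" fingerprints, normally kept in the test module.
    expected_object_data = {
        'Container': '1.0-123456789abcdef',  # placeholder fingerprint
    }


    class TestObjectVersions(test_base.TestCase):

        def test_versions(self):
            checker = fixture.ObjectVersionChecker(
                base.ZunObjectRegistry.obj_classes())
            expected, actual = checker.test_hashes(expected_object_data)
            self.assertEqual(expected, actual,
                             'Object fingerprints changed; bump the object '
                             'version and update the expected hashes.')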


@@ -18,7 +18,7 @@
 This is the demo for Zun integrating with osprofiler. `Zun
 <https://wiki.openstack.org/wiki/Zun>`_ is a OpenStack container
 management services, while `OSProfiler
-<https://docs.openstack.org/developer/osprofiler/>`_ provides
+<https://docs.openstack.org/osprofiler/latest/>`_ provides
 a tiny but powerful library that is used by most OpenStack projects and
 their python clients.
@@ -30,7 +30,7 @@ option without using ceilometer. Here just use Redis as an example, user
 can choose mongodb, elasticsearch, and `etc
 <https://git.openstack.org/cgit/openstack/osprofiler/tree/osprofiler/drivers>`_.
 Install Redis as the `centralized collector
-<https://docs.openstack.org/developer/osprofiler/collectors.html>`_
+<https://docs.openstack.org/osprofiler/latest/user/collectors.html>`_
 Redis in container is easy to launch, `choose Redis Docker
 <https://hub.docker.com/_/redis/>`_ and run::
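
(Side note to the hunk above: once Redis is running as the collector, a
service typically enables tracing roughly as sketched below. The hmac key,
connection string and function name are illustrative assumptions, not taken
from the Zun code.)

::

    # Hedged sketch of wiring osprofiler to the Redis collector named above.
    from oslo_config import cfg
    import osprofiler.initializer
    from osprofiler import profiler

    # Expects the [profiler] options (enabled, hmac_keys, connection_string),
    # e.g. connection_string = redis://127.0.0.1:6379
    CONF = cfg.CONF


    def start_profiling():
        # Reads the [profiler] options from the loaded configuration and
        # sets up the notifier that ships spans to the collector.
        osprofiler.initializer.init_from_conf(conf=CONF,
                                              context={},
                                              project='zun',
                                              service='zun-api',
                                              host='localhost')


    @profiler.trace('image-pull', info={'driver': 'docker'})
    def pull_image(image):
        """Any function decorated this way shows up as a span in the trace."""
        pass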


@@ -49,8 +49,8 @@ container which will support the network namespace for the capsule.
 * Scheduler: Containers inside a capsule are scheduled as a unit, thus all
 containers inside a capsule is co-located. All containers inside a capsule
 will be launched in one compute host.
-* Network: Containers inside a capsule share the same network namespace,
-so they share IP address(es) and can find each other via localhost by using
+* Network: Containers inside a capsule share the same network namespace, so
+they share IP address(es) and can find each other via localhost by using
 different remapping network port. Capsule IP address(es) will re-use the
 sandbox IP. Containers communication between different capsules will use
 capsules IP and port.
@@ -58,19 +58,19 @@ capsules IP and port.
 Starting: Capsule is created, but one or more container inside the capsule is
 being created.
 Running: Capsule is created, and all the containers are running.
-Finished: All containers inside the capsule have successfully executed and
-exited.
+Finished: All containers inside the capsule have successfully executed
+and exited.
 Failed: Capsule creation is failed
-* Restart Policy: Capsule will have a restart policy just like container. The
-restart policy relies on container restart policy to execute.
+* Restart Policy: Capsule will have a restart policy just like container.
+The restart policy relies on container restart policy to execute.
 * Health checker:
 In the first step of realization, container inside the capsule will send its
 status to capsule when its status changed.
 * Upgrade and rollback:
 Upgrade: Support capsule update(different from zun update). That means the
 container image will update, launch the new capsule from new image, then
-destroy the old capsule. The capsule IP address will change. For Volume, need
-to clarify it after Cinder integration.
+destroy the old capsule. The capsule IP address will change. For Volume,
+need to clarify it after Cinder integration.
 Rollback: When update failed, rollback to it origin status.
 * CPU and memory resources: Given that host resource allocation, cpu and memory
 support will be implemented.
@@ -86,24 +86,25 @@ Implementation:
 in a capsule should be scheduled to and spawned on the same host. Server
 side will keep the information in DB.
 3. Add functions about yaml file parser in the CLI side. After parsing the
-yaml, send the REST to API server side, scheduler will decide which host
-to run the capsule.
+yaml, send the REST to API server side, scheduler will decide which host to
+run the capsule.
 4. Introduce new REST API for capsule. The capsule creation workflow is:
 CLI Parsing capsule information from yaml file -->
-API server do the CRUD operation, call scheduler to launch the capsule, from
-Cinder to get volume, from Kuryr to get network support-->
+API server do the CRUD operation, call scheduler to launch the capsule,
+from Cinder to get volume, from Kuryr to get network support -->
 Compute host launch the capsule, attach the volume -->
 Send the status to API server, update the DB.
-5. Capsule creation will finally depend on the backend container driver. Now
-choose Docker driver first.
+5. Capsule creation will finally depend on the backend container driver.
+Now choose Docker driver first.
 6. Define a yaml file structure for capsule. The yaml file will be compatible
 with Kubernetes pod yaml file, at the same time Zun will define the
 available properties, metadata and template of the yaml file. In the first
 step, only essential properties will be defined.
-The diagram below offers an overview of the architecture of ``capsule``:
+The diagram below offers an overview of the architecture of ``capsule``.
+::
 +-----------------------------------------------------------+
 | +-----------+ |
 | | | |
@@ -130,6 +131,7 @@ The diagram below offers an overview of the architecture of ``capsule``:
 Yaml format for ``capsule``:
 Sample capsule:
 .. code-block:: yaml
 apiVersion: beta
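
(Side note to the hunks above: step 3 of the implementation list describes a
CLI-side yaml parser. A rough sketch is given below; only ``apiVersion`` is
visible in this diff, so the other keys (spec, containers) are assumptions
based on the Kubernetes pod format the spec says it will stay compatible
with.)

::

    # Hedged sketch of the CLI-side capsule yaml parsing described in step 3.
    import yaml


    def load_capsule_template(path):
        """Load a capsule yaml file and return the parsed template."""
        with open(path) as f:
            template = yaml.safe_load(f)

        if template.get('apiVersion') != 'beta':
            raise ValueError('Unsupported apiVersion: %s'
                             % template.get('apiVersion'))

        containers = template.get('spec', {}).get('containers', [])  # assumed keys
        if not containers:
            raise ValueError('A capsule needs at least one container')
        return template

    # The parsed dict would then be POSTed to the Zun API server, which asks
    # the scheduler for a host, as the workflow in step 4 describes.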
@@ -228,11 +230,10 @@ Volumes fields:
 * driver(string): volume drivers
 * driverOptions(string): options for volume driver
 * size(string): volume size
-* volumeType(string): volume type that cinder need. by default is from
-cinder config
+* volumeType(string): volume type that cinder need. by default is from cinder
+config
 * image(string): cinder needed to boot from image
 Alternatives
 ------------
 1. Abstract all the information from yaml file and implement the capsule CRUD
@@ -288,28 +289,27 @@ REST API impact
 * Container API: Many container API will be extended to capsule. Here in this
 section will define the API usage range.
+::
 Capsule API:
-list <List all the capsule, add parameters about list capsules
-with the same labels>
+list <List all the capsule, add parameters about list capsules with the same labels>
 create <-f yaml file><-f directory>
 describe <display the details state of one or more resource>
-delete <capsule name>
+delete
+<capsule name>
 <-l name=label-name>
 <all>
 run <--capsule ... container-image>
-If "--capsule .." is set, the container will be created
-inside the capsule.
+If "--capsule .." is set, the container will be created inside the capsule.
 Otherwise, it will be created as normal.
 Container API:
 * show/list allow all containers
-* create/delete allow bare container only
-(disallow in-capsule containers)
+* create/delete allow bare container only (disallow in-capsule containers)
 * attach/cp/logs/top allow all containers
-* start/stop/restart/kill/pause/unpause allow bare container only (disallow
-in-capsule containers)
-* update for container in the capsule, need <--capsule>
-params. Bare container doesn't need.
+* start/stop/restart/kill/pause/unpause allow bare container only (disallow in-capsule containers)
+* update for container in the capsule, need <--capsule> params.
+Bare container doesn't need.
 Security impact
 ---------------
@@ -395,5 +395,7 @@ A set of documentation for this new feature will be required.
 References
 ==========
 [1] https://kubernetes.io/
 [2] https://docs.docker.com/compose/
 [3] https://etherpad.openstack.org/p/zun-container-composition


@@ -22,16 +22,21 @@ taking a snapshot of a container.
 Proposed change
 ===============
 1. Introduce a new CLI command to enable a user to take a snapshot of a running
-container instance.
-zun commit <container-name> <image-name>
-# zun help commit
+container instance::
+$ zun commit <container-name> <image-name>
+$ zun help commit
 usage: zun commit <container-name> <image-name>
 Create a new image by taking a snapshot of a running container.
 Positional arguments:
 <container-name> Name or ID of container.
 <image-name> Name of snapshot.
 2. Extend docker driver to enable “docker commit” command to create a
 new image.
 3. The new image should be accessable from other hosts. There are two
 options to support this:
 a) upload the image to glance
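
(Side note to the hunk above: option a) uploads the committed image to
Glance. A rough client-side sketch with python-glanceclient is shown below;
the disk/container format values are assumptions, the real driver may choose
different metadata.)

::

    # Hedged sketch of option a): push a committed container image to Glance.
    from glanceclient import client as glance_client


    def upload_snapshot_to_glance(session, image_name, image_data):
        glance = glance_client.Client('2', session=session)
        image = glance.images.create(name=image_name,
                                     disk_format='raw',
                                     container_format='docker')
        # image_data is a file-like object holding the exported image.
        glance.images.upload(image.id, image_data)
        return image.id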
@@ -66,11 +71,15 @@ also see the new image in the image back end that OpenStack Image service
 manages.
 Preconditions:
 1. The container must exist.
 2. User can only create a new image from the container when its status is
-Running, Stopped and Paused.
+Running, Stopped, and Paused.
 3. The connection to the Image service is valid.
+::
 POST /containers/<ID>/commit: commit a container
 Example commit
@@ -80,9 +89,9 @@ Example commit
 Response:
 If successful, this method does not return content in the response body.
-Normal response codes: 202
-Error response codes: badRequest(400), unauthorized(401), forbidden(403),
-itemNotFound(404)
+- Normal response codes: 202
+- Error response codes: BadRequest(400), Unauthorized(401), Forbidden(403),
+ItemNotFound(404)
 Security impact
 ===============
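
(Side note to the hunk above: a client-side sketch of the commit call is
given below. The request body fields are hypothetical; the hunk only shows
the URL path and the response codes.)

::

    # Hedged sketch of calling the commit endpoint described above.
    import requests


    def commit_container(zun_endpoint, token, container_id, image_name):
        resp = requests.post(
            '%s/containers/%s/commit' % (zun_endpoint, container_id),
            headers={'X-Auth-Token': token},
            json={'repository': image_name, 'tag': 'latest'})  # assumed body
        # Per the spec: 202 on success with no body; 400/401/403/404 on error.
        resp.raise_for_status()
        return resp.status_code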


@@ -61,9 +61,9 @@ cpusets requested for dedicated usage.
 6. If this feature is being used with the zun scheduler, then the scheduler
 needs to be aware of the host capabilities to choose the right host.
-For example:
-zun run -i -t --name test --cpu 4 --cpu-policy dedicated
+For example::
+$ zun run -i -t --name test --cpu 4 --cpu-policy dedicated
 We would try to support scheduling using both of these policies on the same
 host.
@@ -71,20 +71,21 @@ host.
 How it works internally?
 Once the user specifies the number of cpus, we would try to select a numa node
-that has the same or more number of cpusets unpinned that can satisfy the
-request.
+that has the same or more number of cpusets unpinned that can satisfy
+the request.
 Once the cpusets are determined by the scheduler and it's corresponding numa
 node, a driver method should be called for the actual provisoning of the
 request on the compute node. Corresponding updates would be made to the
 inventory table.
-In case of the docker driver - this can be achieved by a docker run equivalent:
-docker run -d ubuntu --cpusets-cpu="1,3" --cpuset-mems="1,3"
-The cpuset-mems would allow the memory access for the cpusets to stay
-localized.
+In case of the docker driver - this can be achieved by a docker run
+equivalent::
+$ docker run -d ubuntu --cpusets-cpu="1,3" --cpuset-mems="1,3"
+The cpuset-mems would allow the memory access for the cpusets to
+stay localized.
 If the container is in paused/stopped state, the DB will still continue to
 block the pinset information for the container instead of releasing it.
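
(Side note to the hunk above: the NUMA-node selection it describes can be
sketched as below. The host-inventory shape, a mapping of node id to the set
of free cpu ids, is an assumption made for clarity.)

::

    # Hedged sketch of picking a NUMA node with enough unpinned cpusets.
    def pick_numa_node(free_cpusets_by_node, requested_cpus):
        """Return (node_id, cpus_to_pin) or None if no node can satisfy it."""
        for node_id, free_cpus in sorted(free_cpusets_by_node.items()):
            if len(free_cpus) >= requested_cpus:
                return node_id, sorted(free_cpus)[:requested_cpus]
        return None

    # Example: with {0: {0, 2, 4, 6}, 1: {1, 3}} and a request for 4 dedicated
    # cpus, node 0 is chosen and cpus 0,2,4,6 are pinned; the driver then
    # passes them to docker via --cpuset-cpus/--cpuset-mems as shown above.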


@@ -52,7 +52,7 @@ Proposed change
 The typical workflow will be as following:
 1. Users call Zun APIs to create a container network by passing a name/uuid of
-a neutron network.
+a neutron network::
 $ zun network-create --neutron-net private --name foo
@@ -63,7 +63,7 @@ The typical workflow will be as following:
 should only be one or two. If the number of subnets is two, they must be
 a ipv4 subnet and a ipv6 subnet respectively. Zun will retrieve the
 cidr/gateway/subnetpool of each subnet and pass these information to
-Docker to create a Docker network. The API call will be similar to:
+Docker to create a Docker network. The API call will be similar to::
 $ docker network create -d kuryr --ipam-driver=kuryr \
 --subnet <ipv4_cidr> \
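
(Side note to the hunk above: the same "docker network create -d kuryr" call
can also be made through the Docker SDK for Python, roughly as below. The
cidr, gateway and the neutron option key are placeholders/assumptions for
whatever Zun retrieves from the Neutron subnet.)

::

    # Hedged sketch of creating the kuryr-backed Docker network from Python.
    import docker
    from docker.types import IPAMConfig, IPAMPool

    client = docker.from_env()
    ipam = IPAMConfig(
        driver='kuryr',
        pool_configs=[IPAMPool(subnet='10.0.0.0/24', gateway='10.0.0.1')])
    network = client.networks.create(
        name='foo',
        driver='kuryr',
        ipam=ipam,
        options={'neutron.net.uuid': '<neutron-network-uuid>'})  # assumed key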
@@ -84,12 +84,13 @@ This example assumed that the Neutron resources were pre-created by cloud
 administrator (which should be the case at most of the clouds). If this is
 not true, users need to manually create the resources.
-3. Users call Zun APIs to create a container from the container network 'foo'.
+3. Users call Zun APIs to create a container from the container network 'foo'::
 $ zun run --net=foo nginx
 4. Under the hood, Zun will perform several steps to configure the networking.
-First, call neutron API to create a port from the specified neutron network.
+First, call neutron API to create a port from the specified neutron
+network::
 $ neutron port-create private
@@ -97,7 +98,7 @@ not true, users need to manually create the resources.
 its IP address(es). A port could have one or two IP addresses: a ipv4
 address and/or a ipv6 address. Then, call Docker APIs to create the
 container by using the IP address(es) of the neutron port. This is
-equivalent to:
+equivalent to::
 $ docker run --net=foo kubernetes/pause --ip <ipv4_address> \
 --ip6 <ipv6_address>
@@ -109,12 +110,12 @@ This might include something like: create a veth pair, connect one end of the
 veth pair to the container, connect the other end of the veth pair a
 neutron-created bridge, etc.
-6. Users calls Zun API to list/show the created network(s).
+6. Users calls Zun API to list/show the created network(s)::
 $ zun network-list
 $ zun network-show foo
-7. Upon completion, users calls Zun API to remove the container and network.
+7. Upon completion, users calls Zun API to remove the container and network::
 $ zun delete <container_id>
 $ zun network-delete foo
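
(Side note to the hunks above: step 4 creates a Neutron port and reads back
its fixed IP(s). A rough python-neutronclient sketch is given below; the
network id is a placeholder.)

::

    # Hedged sketch of step 4: create a port and collect its IP address(es).
    from neutronclient.v2_0 import client as neutron_client


    def create_port_and_get_ips(session, network_id):
        neutron = neutron_client.Client(session=session)
        port = neutron.create_port({'port': {'network_id': network_id}})['port']
        # A port may carry an IPv4 and/or an IPv6 address.
        return [ip['ip_address'] for ip in port['fixed_ips']]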
@@ -215,6 +216,9 @@ A set of documentation for this new feature will be required.
 References
 ==========
 [1] https://git.openstack.org/cgit/openstack/kuryr-libnetwork
 [2] https://blueprints.launchpad.net/kuryr/+spec/existing-neutron-network
 [3] https://blueprints.launchpad.net/kuryr-libnetwork/+spec/existing-subnetpool
 [4] https://git.openstack.org/cgit/openstack/zun/tree/specs/container-sandbox.rst


@@ -15,7 +15,7 @@
 # It's based on oslo.i18n usage in OpenStack Keystone project and
 # recommendations from
-# https://docs.openstack.org/developer/oslo.i18n/usage.html
+# https://docs.openstack.org/oslo.i18n/latest/user/usage.html
 import oslo_i18n
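
(Side note to the hunk above: the module this comment lives in follows the
standard oslo.i18n setup linked by the URL. A minimal sketch of that pattern
is shown below; it mirrors the usual layout rather than quoting
zun/common/i18n.py.)

::

    # Hedged sketch of the standard oslo.i18n integration pattern.
    import oslo_i18n

    DOMAIN = 'zun'

    _translators = oslo_i18n.TranslatorFactory(domain=DOMAIN)

    # The primary translation function, conventionally imported as "_".
    _ = _translators.primary


    def translate(value, user_locale):
        return oslo_i18n.translate(value, user_locale)


    def get_available_languages():
        return oslo_i18n.get_available_languages(DOMAIN)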


@@ -48,7 +48,7 @@ def init(policy_file=None, rules=None,
     """
     global _ENFORCER
     if not _ENFORCER:
-        # https://docs.openstack.org/developer/oslo.policy/usage.html
+        # https://docs.openstack.org/oslo.policy/latest/user/usage.html
         _ENFORCER = policy.Enforcer(CONF,
                                     policy_file=policy_file,
                                     rules=rules,
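
(Side note to the hunk above: the enforcer built here is typically consumed
through a small helper, roughly as sketched below. The rule name in the
example and the context helpers are illustrative, not quoted from Zun.)

::

    # Hedged sketch of the usual oslo.policy enforcement pattern.
    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    _ENFORCER = policy.Enforcer(CONF)


    def enforce(context, rule, target=None, do_raise=True):
        """Check an action against the loaded policy rules."""
        target = target or {'project_id': context.project_id,
                            'user_id': context.user_id}
        credentials = context.to_policy_values()  # oslo.context helper
        # With do_raise=True, oslo.policy raises PolicyNotAuthorized on failure.
        return _ENFORCER.enforce(rule, target, credentials, do_raise=do_raise)

    # Example call site (the rule name is an assumption):
    #   enforce(ctx, 'container:create')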


@@ -13,7 +13,7 @@
 # It's based on oslo.i18n usage in OpenStack Keystone project and
 # recommendations from
-# https://docs.openstack.org/developer/oslo.i18n/usage.html
+# https://docs.openstack.org/oslo.i18n/latest/user/usage.html
 """Utilities and helper functions."""
 import eventlet