Add warning-is-error in setup.cfg

This patch adds the ``warning-is-error`` flag to setup.cfg so that the
documentation build treats Sphinx warnings as errors, and fixes the
failures that surface once this flag is enabled.

Change-Id: I3bfedc31361584526d6f528b74b0be3993f1ecba
Partial-Bug: #1703442
Madhuri Kumari 2017-07-11 11:26:51 +05:30
parent 6e48d31a72
commit 4b489da4f7
9 changed files with 101 additions and 89 deletions


@@ -22,6 +22,7 @@ sys.path.insert(0, os.path.abspath('../..'))
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
 extensions = [
     'sphinx.ext.autodoc',
+    'sphinx.ext.graphviz',
     'openstackdocstheme',
 ]


@@ -31,7 +31,7 @@ etc/apache2/zun.conf
 The ``etc/apache2/zun.conf`` file contains example settings that
 work with a copy of zun installed via devstack.
-.. literalinclude:: ../../../etc/apache2/zun.conf
+.. literalinclude:: ../../../etc/apache2/zun.conf.template
 1. On deb-based systems copy or symlink the file to
 ``/etc/apache2/sites-available``. For rpm-based systems the file will go in


@@ -1,4 +1,4 @@
-.. _dev-quickstart:
+.. _quickstart:
 =====================
 Developer Quick-Start


@@ -1,7 +0,0 @@
-=====
-Usage
-=====
-To use zun in a project::
-    import zun


@@ -26,6 +26,7 @@ packages =
 source-dir = doc/source
 build-dir = doc/build
 all_files = 1
+warning-is-error = 1
 [upload_sphinx]
 upload-dir = doc/build/html
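With ``warning-is-error = 1`` in the ``[build_sphinx]`` section, any Sphinx
warning now fails the documentation build. A quick local check could look like
the sketch below (assuming the usual pbr ``build_sphinx`` command; the
project's tox targets are not shown):

.. code-block:: python

   # Minimal sketch: rebuild the docs so any Sphinx warning aborts the run.
   import subprocess
   import sys

   try:
       subprocess.check_call([sys.executable, 'setup.py', 'build_sphinx'])
   except subprocess.CalledProcessError:
       sys.exit('Docs build failed: warnings are now treated as errors.')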


@@ -26,52 +26,54 @@ Problem description
 ===================
 Currently, running or deploying one container to do the operation is not a
 very effective way in microservices, while multiple different containers run
-as an integration has widely used in different scenarios, such as pod in Kubernetes.
-The pod has the independent network, storage, while the compose has an easy way to defining
-and running multi-container Docker applications. They are becoming the basic unit for
-container application scenarios.
+as an integration has been widely used in different scenarios, such as a pod
+in Kubernetes. A pod has its own network and storage, while a compose offers
+an easy way of defining and running multi-container Docker applications. They
+are becoming the basic unit for container application scenarios.
 Nowadays Zun doesn't support creating and running multiple containers as an
 integration. So we will introduce a new object, ``capsule``, to realize this
 function. ``capsule`` is the basic unit for Zun to support services externally.
-The ``capsule`` will be designed based on some similar concepts such as pod and compose.
-For example, ``capsule`` can be specified in a yaml file that might be similar to the format
-of k8s pod manifest. However, the specification of ``capsule`` will be exclusive to Zun. The
-details will be showed in the following section.
+The ``capsule`` will be designed based on some similar concepts such as pod and
+compose. For example, ``capsule`` can be specified in a yaml file that might be
+similar to the format of a k8s pod manifest. However, the specification of
+``capsule`` will be exclusive to Zun. The details will be shown in the
+following section.
 Proposed change
 ===============
 A ``capsule`` has the following properties:
 * Structure: It can contain one or multiple containers, and has a sandbox
-container which will support the network namespace for the capsule.
+  container which will support the network namespace for the capsule.
 * Scheduler: Containers inside a capsule are scheduled as a unit, thus all
-containers inside a capsule is co-located. All containers inside a capsule
-will be launched in one compute host.
-* Network: Containers inside a capsule share the same network namespace, so they
-share IP address(es) and can find each other via localhost by using different
-remapping network port. Capsule IP address(es) will re-use the sandbox IP.
-Containers communication between different capsules will use capsules IP and
-port.
+  containers inside a capsule are co-located. All containers inside a capsule
+  will be launched on one compute host.
+* Network: Containers inside a capsule share the same network namespace,
+  so they share IP address(es) and can find each other via localhost by using
+  different remapped network ports. Capsule IP address(es) will re-use the
+  sandbox IP. Communication between containers in different capsules will use
+  the capsule IPs and ports.
 * LifeCycle: A capsule has different statuses:
-Starting: Capsule is created, but one or more container inside the capsule is
-being created.
-Running: Capsule is created, and all the containers are running.
-Finished: All containers inside the capsule have successfully executed and exited.
-Failed: Capsule creation is failed
-* Restart Policy: Capsule will have a restart policy just like container. The restart
-policy relies on container restart policy to execute.
+  Starting: Capsule is created, but one or more containers inside the capsule
+  are still being created.
+  Running: Capsule is created, and all the containers are running.
+  Finished: All containers inside the capsule have successfully executed and
+  exited.
+  Failed: Capsule creation failed.
+* Restart Policy: Capsule will have a restart policy just like a container.
+  The restart policy relies on the container restart policy to execute.
 * Health checker:
-In the first step of realization, container inside the capsule will send its
-status to capsule when its status changed.
+  In the first step of the implementation, a container inside the capsule will
+  send its status to the capsule when its status changes.
 * Upgrade and rollback:
-Upgrade: Support capsule update(different from zun update). That means the
-container image will update, launch the new capsule from new image, then destroy
-the old capsule. The capsule IP address will change. For Volume, need to clarify
-it after Cinder integration.
-Rollback: When update failed, rollback to it origin status.
+  Upgrade: Support capsule update (different from zun update). That means the
+  container image will be updated: launch a new capsule from the new image,
+  then destroy the old capsule. The capsule IP address will change. Volume
+  handling needs to be clarified after Cinder integration.
+  Rollback: When an update fails, roll back to the original status.
 * CPU and memory resources: Given the host resource allocation, cpu and memory
-support will be implemented.
+  support will be implemented.
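As a rough illustration of the lifecycle states listed above, a minimal sketch
follows; the helper function and the container status names are hypothetical
and only mirror the wording of this spec, not Zun's actual code.

.. code-block:: python

   # Hypothetical sketch of the capsule statuses described above; Zun's real
   # objects, field names and container states may differ.
   CAPSULE_STATUSES = ('Starting', 'Running', 'Finished', 'Failed')


   def capsule_status(container_statuses, create_failed=False):
       """Derive a capsule status from the statuses of its containers."""
       if create_failed:
           return 'Failed'
       if container_statuses and all(s == 'Running' for s in container_statuses):
           return 'Running'
       if container_statuses and all(s == 'Exited' for s in container_statuses):
           return 'Finished'
       # One or more containers are still being created.
       return 'Starting'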
 Implementation:
@@ -81,23 +83,23 @@ Implementation:
 and cgroups.
 2. Support the CRUD operations against the capsule object, capsule should be a
 basic unit for scheduling and spawning. To be more specific, all containers
-in a capsule should be scheduled to and spawned on the same host. Server side
-will keep the information in DB.
-3. Add functions about yaml file parser in the CLI side. After parsing the yaml,
-send the REST to API server side, scheduler will decide which host to run
-the capsule.
+   in a capsule should be scheduled to and spawned on the same host. The
+   server side will keep the information in the DB.
+3. Add yaml file parsing functions on the CLI side. After parsing the yaml,
+   send the REST request to the API server side; the scheduler will decide
+   which host runs the capsule.
 4. Introduce a new REST API for capsule. The capsule creation workflow is:
 CLI: parse capsule information from the yaml file -->
-API server do the CRUD operation, call scheduler to launch the capsule, from Cinder
-to get volume, from Kuryr to get network support-->
+API server does the CRUD operation, calls the scheduler to launch the capsule,
+gets the volume from Cinder and the network support from Kuryr -->
 Compute host launches the capsule, attaches the volume -->
 Send the status to the API server, update the DB.
-5. Capsule creation will finally depend on the backend container driver. Now choose
-Docker driver first.
-6. Define a yaml file structure for capsule. The yaml file will be compatible with
-Kubernetes pod yaml file, at the same time Zun will define the available properties,
-metadata and template of the yaml file. In the first step, only essential properties
-will be defined.
+5. Capsule creation will finally depend on the backend container driver. For
+   now, the Docker driver is chosen.
+6. Define a yaml file structure for capsule. The yaml file will be compatible
+   with the Kubernetes pod yaml file; at the same time Zun will define the
+   available properties, metadata and template of the yaml file. In the first
+   step, only essential properties will be defined.
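A minimal client-side sketch of steps 3 and 4 above, assuming a hypothetical
``/capsules`` endpoint and a ``template`` payload key; the real request format
is whatever the new REST API ends up defining:

.. code-block:: python

   # Sketch only: endpoint path, payload key and auth header are assumptions.
   import requests
   import yaml


   def create_capsule(zun_endpoint, token, yaml_path):
       """Parse a capsule yaml file and send it to the proposed capsules API."""
       with open(yaml_path) as f:
           template = yaml.safe_load(f)
       resp = requests.post('%s/capsules' % zun_endpoint,
                            json={'template': template},
                            headers={'X-Auth-Token': token})
       resp.raise_for_status()
       return resp.json()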
 The diagram below offers an overview of the architecture of ``capsule``:
@@ -129,6 +131,7 @@ Yaml format for ``capsule``:
 Sample capsule:
 .. code-block:: yaml
+
 apiVersion: beta
 kind: capsule
 metadata:
@@ -163,7 +166,7 @@ Sample capsule:
 cpu: 1
 memory: 2GB
 volumes:
-- name: volume1
+  - name: volume1
 drivers: cinder
 driverOptions: options
 size: 5GB
@@ -183,14 +186,16 @@ ObjectMeta fields:
 * labels(dict, name: string): labels for the capsule
 CapsuleSpec fields:
-* containers(Containers array): containers info array, one capsule have multiple containers
+* containers(Containers array): containers info array; one capsule can have
+  multiple containers
 * volumes(Volumes array): volume information
 Containers fields:
 * name(string): name of the container
 * image(string): container image for the container
 * imagePullPolicy(string): [Always | Never | IfNotPresent]
-* imageDriver(string): glance or dockerRegistory, by default is according to zun configuration
+* imageDriver(string): glance or dockerRegistory; by default it follows the
+  zun configuration
 * command(string): container command when starting
 * args(string): container args for the command
 * workDir(string): workDir for the container
@@ -223,20 +228,22 @@ Volumes fields:
 * driver(string): volume driver
 * driverOptions(string): options for the volume driver
 * size(string): volume size
-* volumeType(string): volume type that cinder need. by default is from cinder config
+* volumeType(string): volume type that Cinder needs; by default it is taken
+  from the Cinder config
 * image(string): the image Cinder needs to boot from
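For readability, the field lists above can be pictured as simple data holders.
The sketch below is hypothetical (names mirror this spec, types are
simplified) and is not Zun's actual object model:

.. code-block:: python

   # Hypothetical data holders mirroring the field lists above.
   from dataclasses import dataclass, field
   from typing import List


   @dataclass
   class Container:
       name: str
       image: str
       imagePullPolicy: str = 'IfNotPresent'  # Always | Never | IfNotPresent
       imageDriver: str = ''                  # e.g. glance or a docker registry
       command: str = ''
       args: str = ''
       workDir: str = ''


   @dataclass
   class Volume:
       name: str
       driver: str = 'cinder'
       driverOptions: str = ''
       size: str = ''
       volumeType: str = ''
       image: str = ''


   @dataclass
   class CapsuleSpec:
       containers: List[Container] = field(default_factory=list)
       volumes: List[Volume] = field(default_factory=list)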
 Alternatives
 ------------
-1. Abstract all the information from yaml file and implement the capsule CRUD in
-client side.
+1. Abstract all the information from the yaml file and implement the capsule
+   CRUD on the client side.
 2. Implement the CRUD on the server side.
 Data model impact
 -----------------
-* Add a field to container to store the id of the capsule which include the container
+* Add a field to the container to store the id of the capsule which includes
+  the container
 * Create a 'capsule' table. Each entry in this table is a record of a capsule.
 .. code-block:: python
@@ -277,29 +284,32 @@ REST API impact
 ---------------
 * Add a new API endpoint /capsule to the REST API interface.
 * Capsule API: Capsule will support multiple operations as a container
-composition.
+  composition.
 * Container API: Many container APIs will be extended to capsules. This
 section will define the API usage range.
 Capsule API:
-list <List all the capsule, add parameters about list capsules with the same labels>
+list <List all the capsules; add parameters to list capsules
+with the same labels>
 create <-f yaml file><-f directory>
 describe <display the detailed state of one or more resources>
-delete
-<capsule name>
+delete <capsule name>
 <-l name=label-name>
 <all>
 run <--capsule ... container-image>
-If "--capsule .." is set, the container will be created inside the capsule.
+If "--capsule .." is set, the container will be created
+inside the capsule.
 Otherwise, it will be created as normal.
 Container API:
 * show/list allow all containers
-* create/delete allow bare container only (disallow in-capsule containers)
+* create/delete allow bare containers only
+  (disallow in-capsule containers)
 * attach/cp/logs/top allow all containers
-* start/stop/restart/kill/pause/unpause allow bare container only (disallow in-capsule containers)
-* update for container in the capsule, need <--capsule> params.
-Bare container doesn't need.
+* start/stop/restart/kill/pause/unpause allow bare containers only (disallow
+  in-capsule containers)
+* update for a container in the capsule needs the <--capsule>
+  param; a bare container doesn't.
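For instance, the label-filtered ``list`` operation above could map to a query
like the following sketch (the ``labels`` query parameter and the endpoint
path are assumptions, not a settled API):

.. code-block:: python

   # Sketch only: parameter name and endpoint path are assumptions.
   import requests


   def list_capsules(zun_endpoint, token, label=None):
       params = {'labels': label} if label else {}
       resp = requests.get('%s/capsules' % zun_endpoint,
                           params=params,
                           headers={'X-Auth-Token': token})
       resp.raise_for_status()
       return resp.json()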
 Security impact
 ---------------


@@ -26,10 +26,10 @@ Proposed change
 zun commit <container-name> <image-name>
 # zun help commit
 usage: zun commit <container-name> <image-name>
-Create a new image by taking a snapshot of a running container.
+  Create a new image by taking a snapshot of a running container.
 Positional arguments:
-<container-name> Name or ID of container.
-<image-name> Name of snapshot.
+  <container-name>  Name or ID of container.
+  <image-name>      Name of snapshot.
 2. Extend docker driver to enable “docker commit” command to create a
 new image.
 3. The new image should be accessible from other hosts. There are two
@@ -59,27 +59,30 @@ Creates an image from a container.
 Specify the image name in the request body.
-After making this request, a user typically must keep polling the status of the created image
-from glance to determine whether the request succeeded.
-If the operation succeeds, the created image has a status of active. User can also see the new
-image in the image back end that OpenStack Image service manages.
+After making this request, a user typically must keep polling the status of the
+created image from glance to determine whether the request succeeded.
+If the operation succeeds, the created image has a status of active. The user
+can also see the new image in the image back end that the OpenStack Image
+service manages.
 Preconditions:
 1. The container must exist.
-2. User can only create a new image from the container when its status is Running, Stopped,
-and Paused.
+2. A user can only create a new image from the container when its status is
+   Running, Stopped or Paused.
 3. The connection to the Image service is valid.
 POST /containers/<ID>/commit: commit a container
 Example commit
 {
-"image-name" : "foo-image"
+    "image-name" : "foo-image"
 }
 Response:
-If successful, this method does not return content in the response body. Normal response codes: 202
-Error response codes: badRequest(400), unauthorized(401), forbidden(403), itemNotFound(404)
+If successful, this method does not return content in the response body.
+Normal response codes: 202
+Error response codes: badRequest(400), unauthorized(401), forbidden(403),
+itemNotFound(404)
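A client-side sketch of the flow above: commit the container, then poll the
image status in Glance until it becomes active. The name-based lookup and the
fixed polling interval are illustrative assumptions:

.. code-block:: python

   # Sketch only: name-based image lookup and fixed polling are assumptions.
   import time

   import requests


   def commit_and_wait(zun_url, glance_url, token, container_id, image_name):
       headers = {'X-Auth-Token': token}
       resp = requests.post('%s/containers/%s/commit' % (zun_url, container_id),
                            json={'image-name': image_name}, headers=headers)
       resp.raise_for_status()  # a 202 is expected on success
       while True:
           images = requests.get('%s/v2/images' % glance_url,
                                 params={'name': image_name},
                                 headers=headers).json().get('images', [])
           if images and images[0]['status'] == 'active':
               return images[0]
           time.sleep(2)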
 Security impact
 ===============


@@ -71,20 +71,23 @@ host.
 How it works internally?
 Once the user specifies the number of cpus, we would try to select a numa node
-that has the same or more number of cpusets unpinned that can satisfy the request.
+that has the same or a larger number of unpinned cpusets that can satisfy the
+request.
-Once the cpusets are determined by the scheduler and it's corresponding numa node,
-a driver method should be called for the actual provisoning of the request on the
-compute node. Corresponding updates would be made to the inventory table.
+Once the cpusets and the corresponding numa node are determined by the
+scheduler, a driver method should be called for the actual provisioning of the
+request on the compute node. Corresponding updates would be made to the
+inventory table.
 In case of the docker driver, this can be achieved by a docker run equivalent:
 docker run -d --cpuset-cpus="1,3" --cpuset-mems="1,3" ubuntu
-The cpuset-mems would allow the memory access for the cpusets to stay localized.
+The cpuset-mems option would allow the memory access for the cpusets to stay
+localized.
-If the container is in paused/stopped state, the DB will still continue to block
-the pinset information for the container instead of releasing it.
+If the container is in a paused/stopped state, the DB will still continue to
+block the pinset information for the container instead of releasing it.
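With the Docker SDK for Python, the ``docker run`` equivalent above could look
roughly like the sketch below (illustration only; the actual Zun driver code
path is different, and the long-running command is just a placeholder):

.. code-block:: python

   # Sketch: mirror the docker run example above via the Docker SDK.
   import docker

   client = docker.from_env()
   # Pin the container to CPUs 1 and 3 and keep its memory allocations on the
   # matching NUMA nodes.
   container = client.containers.run('ubuntu',
                                     command='sleep 3600',
                                     detach=True,
                                     cpuset_cpus='1,3',
                                     cpuset_mems='1,3')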
 Design Principles


@@ -99,7 +99,8 @@ not true, users need to manually create the resources.
 container by using the IP address(es) of the neutron port. This is
 equivalent to:
-$ docker run --net=foo kubernetes/pause --ip <ipv4_address> --ip6 <ipv6_address>
+$ docker run --net=foo --ip <ipv4_address> --ip6 <ipv6_address> \
+    kubernetes/pause
 NOTE: In this step, docker engine will make a call to Kuryr to set up the
 networking of the container. After receiving the request from Docker, Kuryr