diff --git a/HACKING.rst b/HACKING.rst
index f5558235c..40fc67d5d 100644
--- a/HACKING.rst
+++ b/HACKING.rst
@@ -1,7 +1,7 @@
 Zun Style Commandments
 ======================
 
-Read the OpenStack Style Commandments https://docs.openstack.org/developer/hacking/
+Read the OpenStack Style Commandments https://docs.openstack.org/hacking/latest/
 
 Zun Specific Commandments
 -------------------------
diff --git a/README.rst b/README.rst
index 80ef7c70d..563441ff0 100644
--- a/README.rst
+++ b/README.rst
@@ -24,8 +24,8 @@ Note that this is a hard requirement.
 * Documentation: https://docs.openstack.org/zun/latest/
 * Source: https://git.openstack.org/cgit/openstack/zun
 * Bugs: https://bugs.launchpad.net/zun
-* Blueprints:** https://blueprints.launchpad.net/zun
-* REST Client:** https://git.openstack.org/cgit/openstack/python-zunclient
+* Blueprints: https://blueprints.launchpad.net/zun
+* REST Client: https://git.openstack.org/cgit/openstack/python-zunclient
 
 Features
 --------
diff --git a/doc/source/contributor/objects.rst b/doc/source/contributor/objects.rst
index eb8908854..54abb101b 100644
--- a/doc/source/contributor/objects.rst
+++ b/doc/source/contributor/objects.rst
@@ -16,7 +16,7 @@ Versioned Objects
 =================
 
 Zun uses the `oslo.versionedobjects library
-`_ to
+`_ to
 construct an object model that can be communicated via RPC. These objects have
 a version history and functionality to convert from one version to a previous
 version. This allows for 2 different levels of the code to still pass objects
@@ -28,7 +28,7 @@ Object Version Testing
 In order to ensure object versioning consistency is maintained,
 oslo.versionedobjects has a fixture to aid in testing object versioning.
 `oslo.versionedobjects.fixture.ObjectVersionChecker
-`_
+`_
 generates fingerprints of each object, which is a combination of the current
 version number of the object, along with a hash of the RPC-critical parts of
 the object (fields and remotable methods).
diff --git a/doc/source/osprofiler.rst b/doc/source/osprofiler.rst
index dc8d70e63..74eb8954a 100644
--- a/doc/source/osprofiler.rst
+++ b/doc/source/osprofiler.rst
@@ -18,7 +18,7 @@ This is the demo for Zun integrating with osprofiler.
 
 `Zun `_ is a OpenStack container
 management services, while `OSProfiler
-`_ provides
+`_ provides
 a tiny but powerful library that is used by most OpenStack projects and
 their python clients.
@@ -30,7 +30,7 @@ option without using ceilometer. Here just use Redis as an
 example, user can choose mongodb, elasticsearch, and `etc
 `_.
 
 Install Redis as the `centralized collector
-`_
+`_
 
 Redis in container is easy to launch, `choose Redis Docker `_ and run::
diff --git a/specs/container-composition.rst b/specs/container-composition.rst
index fe5045372..9d5836044 100644
--- a/specs/container-composition.rst
+++ b/specs/container-composition.rst
@@ -49,8 +49,8 @@ container which will support the network namespace for the capsule.
 * Scheduler: Containers inside a capsule are scheduled as a unit, thus all
 containers inside a capsule is co-located. All containers inside a capsule
 will be launched in one compute host.
-* Network: Containers inside a capsule share the same network namespace,
-so they share IP address(es) and can find each other via localhost by using
+* Network: Containers inside a capsule share the same network namespace, so
+they share IP address(es) and can find each other via localhost by using
 different remapping network port. Capsule IP address(es) will re-use the
 sandbox IP. Containers communication between different capsules will use
 capsules IP and port.
@@ -58,19 +58,19 @@ capsules IP and port.
 Starting: Capsule is created, but one or more container inside the capsule is
 being created.
 Running: Capsule is created, and all the containers are running.
-Finished: All containers inside the capsule have successfully executed and
-exited.
+Finished: All containers inside the capsule have successfully executed
+and exited.
 Failed: Capsule creation is failed
 
-* Restart Policy: Capsule will have a restart policy just like container. The
-restart policy relies on container restart policy to execute.
+* Restart Policy: Capsule will have a restart policy just like container.
+The restart policy relies on container restart policy to execute.
 * Health checker: In the first step of realization, container inside the
 capsule will send its status to capsule when its status changed.
 * Upgrade and rollback:
 Upgrade: Support capsule update(different from zun update). That means the
 container image will update, launch the new capsule from new image, then
-destroy the old capsule. The capsule IP address will change. For Volume, need
-to clarify it after Cinder integration.
+destroy the old capsule. The capsule IP address will change. For Volume,
+need to clarify it after Cinder integration.
 Rollback: When update failed, rollback to it origin status.
 * CPU and memory resources: Given that host resource allocation, cpu and
 memory support will be implemented.
@@ -86,50 +86,52 @@ Implementation:
    in a capsule should be scheduled to and spawned on the same host. Server
    side will keep the information in DB.
 3. Add functions about yaml file parser in the CLI side. After parsing the
-   yaml, send the REST to API server side, scheduler will decide which host
-   to run the capsule.
+   yaml, send the REST to API server side, scheduler will decide which host to
+   run the capsule.
 4. Introduce new REST API for capsule. The capsule creation workflow is:
    CLI Parsing capsule information from yaml file -->
-   API server do the CRUD operation, call scheduler to launch the capsule, from
-   Cinder to get volume, from Kuryr to get network support-->
-   Compute host launch the capsule, attach the volume-->
+   API server do the CRUD operation, call scheduler to launch the capsule,
+   from Cinder to get volume, from Kuryr to get network support -->
+   Compute host launch the capsule, attach the volume -->
    Send the status to API server, update the DB.
-5. Capsule creation will finally depend on the backend container driver. Now
-   choose Docker driver first.
+5. Capsule creation will finally depend on the backend container driver.
+   Now choose Docker driver first.
 6. Define a yaml file structure for capsule. The yaml file will be compatible
    with Kubernetes pod yaml file, at the same time Zun will define the
    available properties, metadata and template of the yaml file. In the first
    step, only essential properties will be defined.
 
-The diagram below offers an overview of the architecture of ``capsule``:
+The diagram below offers an overview of the architecture of ``capsule``.
 
 ::
- +-----------------------------------------------------------+
- |  +-----------+                                            |
- |  |           |                                            |
- |  |  Sandbox  |                                            |
- |  |           |                                            |
- |  +-----------+                                            |
- |                                                           |
- |                                                           |
- |  +-------------+  +-------------+  +-------------+        |
- |  |             |  |             |  |             |        |
- |  |  Container  |  |  Container  |  |  Container  |        |
- |  |             |  |             |  |             |        |
- |  +-------------+  +-------------+  +-------------+        |
- |                                                           |
- |                                                           |
- |  +----------+  +----------+                               |
- |  |          |  |          |                               |
- |  |  Volume  |  |  Volume  |                               |
- |  |          |  |          |                               |
- |  +----------+  +----------+                               |
- |                                                           |
- +-----------------------------------------------------------+
+
+  +-----------------------------------------------------------+
+  |  +-----------+                                            |
+  |  |           |                                            |
+  |  |  Sandbox  |                                            |
+  |  |           |                                            |
+  |  +-----------+                                            |
+  |                                                           |
+  |                                                           |
+  |  +-------------+  +-------------+  +-------------+        |
+  |  |             |  |             |  |             |        |
+  |  |  Container  |  |  Container  |  |  Container  |        |
+  |  |             |  |             |  |             |        |
+  |  +-------------+  +-------------+  +-------------+        |
+  |                                                           |
+  |                                                           |
+  |  +----------+  +----------+                               |
+  |  |          |  |          |                               |
+  |  |  Volume  |  |  Volume  |                               |
+  |  |          |  |          |                               |
+  |  +----------+  +----------+                               |
+  |                                                           |
+  +-----------------------------------------------------------+
 
 Yaml format for ``capsule``:
 
 Sample capsule:
+
 .. code-block:: yaml
 
     apiVersion: beta
@@ -228,11 +230,10 @@ Volumes fields:
 * driver(string): volume drivers
 * driverOptions(string): options for volume driver
 * size(string): volume size
-* volumeType(string): volume type that cinder need. by default is from
-cinder config
+* volumeType(string): volume type that cinder need. by default is from cinder
+config
 * image(string): cinder needed to boot from image
 
-
 Alternatives
 ------------
 1. Abstract all the information from yaml file and implement the capsule CRUD
@@ -286,30 +287,29 @@ REST API impact
 
 * Capsule API: Capsule consider to support multiple operations as container
   composition.
 * Container API: Many container API will be extended to capsule. Here in this
-section will define the API usage range.
+  section will define the API usage range.
 
-Capsule API:
-list
-create <-f yaml file><-f directory>
-describe
-delete
-     <-l name=label-name>
-     <–all>
-run <--capsule ... container-image>
-    If "--capsule .." is set, the container will be created
-    inside the capsule.
-    Otherwise, it will be created as normal.
+::
 
-Container API:
-* show/list allow all containers
-* create/delete allow bare container only
-  (disallow in-capsule containers)
-* attach/cp/logs/top allow all containers
-* start/stop/restart/kill/pause/unpause allow bare container only (disallow
-  in-capsule containers)
-* update for container in the capsule, need <--capsule>
-  params. Bare container doesn't need.
+  Capsule API:
+  list
+  create <-f yaml file><-f directory>
+  describe
+  delete
+
+  <-l name=label-name>
+  <–all>
+  run <--capsule ... container-image>
+      If "--capsule .." is set, the container will be created inside the capsule.
+      Otherwise, it will be created as normal.
+
+  Container API:
+  * show/list allow all containers
+  * create/delete allow bare container only (disallow in-capsule containers)
+  * attach/cp/logs/top allow all containers
+  * start/stop/restart/kill/pause/unpause allow bare container only (disallow in-capsule containers)
+  * update for container in the capsule, need <--capsule> params.
+    Bare container doesn't need.
 
 Security impact
 ---------------
@@ -395,5 +395,7 @@ A set of documentation for this new feature will be required.
 References
 ==========
 [1] https://kubernetes.io/
+
 [2] https://docs.docker.com/compose/
+
 [3] https://etherpad.openstack.org/p/zun-container-composition
diff --git a/specs/container-snapshot.rst b/specs/container-snapshot.rst
index 7298d10ad..b6617d601 100644
--- a/specs/container-snapshot.rst
+++ b/specs/container-snapshot.rst
@@ -22,16 +22,21 @@ taking a snapshot of a container.
 Proposed change
 ===============
 1. Introduce a new CLI command to enable a user to take a snapshot of a running
-   container instance.
-   zun commit
-   # zun help commit
-   usage: zun commit
-   Create a new image by taking a snapshot of a running container.
-   Positional arguments:
-   Name or ID of container.
-   Name of snapshot.
+   container instance::
+
+     $ zun commit
+
+     $ zun help commit
+
+     usage: zun commit
+     Create a new image by taking a snapshot of a running container.
+     Positional arguments:
+     Name or ID of container.
+     Name of snapshot.
+
 2. Extend docker driver to enable “docker commit” command to create a new
    image.
+
 3. The new image should be accessable from other hosts. There are two options
    to support this:
    a) upload the image to glance
@@ -66,23 +71,27 @@ also see the new image in the image back end that OpenStack Image service
 manages.
 
 Preconditions:
+
 1. The container must exist.
+
 2. User can only create a new image from the container when its status is
-Running, Stopped and Paused.
+   Running, Stopped, and Paused.
+
 3. The connection to the Image service is valid.
 
+::
 
-POST /containers//commit: commit a container
-Example commit
-{
-"image-name" : "foo-image"
-}
+   POST /containers//commit: commit a container
+   Example commit
+   {
+   "image-name" : "foo-image"
+   }
 
 Response:
 If successful, this method does not return content in the response body.
-Normal response codes: 202
-Error response codes: badRequest(400), unauthorized(401), forbidden(403),
-itemNotFound(404)
+- Normal response codes: 202
+- Error response codes: BadRequest(400), Unauthorized(401), Forbidden(403),
+ItemNotFound(404)
 
 Security impact
 ===============
diff --git a/specs/cpuset-container.rst b/specs/cpuset-container.rst
index 3abb759ab..6b7f0b1d6 100644
--- a/specs/cpuset-container.rst
+++ b/specs/cpuset-container.rst
@@ -61,9 +61,9 @@ cpusets requested for dedicated usage.
 6. If this feature is being used with the zun scheduler, then the scheduler
    needs to be aware of the host capabilities to choose the right host.
 
-For example:
+For example::
 
-zun run -i -t --name test --cpu 4 --cpu-policy dedicated
+    $ zun run -i -t --name test --cpu 4 --cpu-policy dedicated
 
 We would try to support scheduling using both of these policies on the same
 host.
@@ -71,20 +71,21 @@ host.
 How it works internally?
 
 Once the user specifies the number of cpus, we would try to select a numa node
-that has the same or more number of cpusets unpinned that can satisfy the
-request.
+that has the same or more number of cpusets unpinned that can satisfy
+the request.
 
 Once the cpusets are determined by the scheduler and it's corresponding numa
 node, a driver method should be called for the actual provisoning of the
 request on the compute node. Corresponding updates would be made to the
 inventory table.
 
-In case of the docker driver - this can be achieved by a docker run equivalent:
+In case of the docker driver - this can be achieved by a docker run
+equivalent::
 
-docker run -d ubuntu --cpusets-cpu="1,3" --cpuset-mems="1,3"
+    $ docker run -d ubuntu --cpusets-cpu="1,3" --cpuset-mems="1,3"
 
-The cpuset-mems would allow the memory access for the cpusets to stay
-localized.
+The cpuset-mems would allow the memory access for the cpusets to
+stay localized.
 
 If the container is in paused/stopped state, the DB will still continue to
 block the pinset information for the container instead of releasing it.
diff --git a/specs/kuryr-integration.rst b/specs/kuryr-integration.rst
index 45d08d625..fe7ce8c01 100644
--- a/specs/kuryr-integration.rst
+++ b/specs/kuryr-integration.rst
@@ -52,9 +52,9 @@ Proposed change
 The typical workflow will be as following:
 
 1. Users call Zun APIs to create a container network by passing a name/uuid of
-   a neutron network.
+   a neutron network::
 
-   $ zun network-create --neutron-net private --name foo
+      $ zun network-create --neutron-net private --name foo
 
 2. After receiving this request, Zun will make several API calls to Neutron
    to retrieve the necessary information about the specified network
@@ -63,19 +63,19 @@ The typical workflow will be as following:
    should only be one or two. If the number of subnets is two, they must be
    a ipv4 subnet and a ipv6 subnet respectively. Zun will retrieve the
    cidr/gateway/subnetpool of each subnet and pass these information to
-   Docker to create a Docker network. The API call will be similar to:
+   Docker to create a Docker network. The API call will be similar to::
 
-   $ docker network create -d kuryr --ipam-driver=kuryr \
-       --subnet \
-       --gateway \
-       -ipv6 --subnet \
-       --gateway \
-       -o neutron.net.uuid= \
-       -o neutron.pool.uuid= \
-       --ipam-opt neutron.pool.uuid= \
-       -o neutron.pool.v6.uuid= \
-       --ipam-opt neutron.pool.v6.uuid= \
-       foo
+      $ docker network create -d kuryr --ipam-driver=kuryr \
+          --subnet \
+          --gateway \
+          -ipv6 --subnet \
+          --gateway \
+          -o neutron.net.uuid= \
+          -o neutron.pool.uuid= \
+          --ipam-opt neutron.pool.uuid= \
+          -o neutron.pool.v6.uuid= \
+          --ipam-opt neutron.pool.v6.uuid= \
+          foo
 
 NOTE: In this step, docker engine will check the list of registered network
 plugin and find the API endpoint of Kuryr, then make a call to Kuryr to create
@@ -84,23 +84,24 @@ This example assumed that the Neutron resources were pre-created by cloud
 administrator (which should be the case at most of the clouds). If this is
 not true, users need to manually create the resources.
 
-3. Users call Zun APIs to create a container from the container network 'foo'.
+3. Users call Zun APIs to create a container from the container network 'foo'::
 
-   $ zun run --net=foo nginx
+      $ zun run --net=foo nginx
 
 4. Under the hood, Zun will perform several steps to configure the networking.
-   First, call neutron API to create a port from the specified neutron network.
+   First, call neutron API to create a port from the specified neutron
+   network::
 
-   $ neutron port-create private
+      $ neutron port-create private
 
 5. Then, Zun will retrieve information of the created neutron port and
    retrieve its IP address(es). A port could have one or two IP addresses:
    a ipv4 address and/or a ipv6 address. Then, call Docker APIs to create
    the container by using the IP address(es) of the neutron port. This is
-   equivalent to:
+   equivalent to::
 
-   $ docker run --net=foo kubernetes/pause --ip \
-       --ip6
+      $ docker run --net=foo kubernetes/pause --ip \
+          --ip6
 
 NOTE: In this step, docker engine will make a call to Kuryr to setup
 the networking of the container. After receiving the request from Docker, Kuryr
@@ -109,15 +110,15 @@ This might include something like: create a veth pair, connect one end of
 the veth pair to the container, connect the other end of the veth pair a
 neutron-created bridge, etc.
 
-6. Users calls Zun API to list/show the created network(s).
+6. Users calls Zun API to list/show the created network(s)::
 
-   $ zun network-list
-   $ zun network-show foo
+      $ zun network-list
+      $ zun network-show foo
 
-7. Upon completion, users calls Zun API to remove the container and network.
+7. Upon completion, users calls Zun API to remove the container and network::
 
-   $ zun delete
-   $ zun network-delete foo
+      $ zun delete
+      $ zun network-delete foo
 
 Alternatives
 ------------
@@ -215,6 +216,9 @@ A set of documentation for this new feature will be required.
 References
 ==========
 [1] https://git.openstack.org/cgit/openstack/kuryr-libnetwork
+
 [2] https://blueprints.launchpad.net/kuryr/+spec/existing-neutron-network
+
 [3] https://blueprints.launchpad.net/kuryr-libnetwork/+spec/existing-subnetpool
+
 [4] https://git.openstack.org/cgit/openstack/zun/tree/specs/container-sandbox.rst
diff --git a/zun/common/i18n.py b/zun/common/i18n.py
index c0b7b73b9..2d62885f9 100644
--- a/zun/common/i18n.py
+++ b/zun/common/i18n.py
@@ -15,7 +15,7 @@
 
 # It's based on oslo.i18n usage in OpenStack Keystone project and
 # recommendations from
-# https://docs.openstack.org/developer/oslo.i18n/usage.html
+# https://docs.openstack.org/oslo.i18n/latest/user/usage.html
 
 import oslo_i18n
 
diff --git a/zun/common/policy.py b/zun/common/policy.py
index 1dae5d317..4a0c805a0 100644
--- a/zun/common/policy.py
+++ b/zun/common/policy.py
@@ -48,7 +48,7 @@ def init(policy_file=None, rules=None,
     """
     global _ENFORCER
     if not _ENFORCER:
-        # https://docs.openstack.org/developer/oslo.policy/usage.html
+        # https://docs.openstack.org/oslo.policy/latest/user/usage.html
         _ENFORCER = policy.Enforcer(CONF,
                                     policy_file=policy_file,
                                     rules=rules,
diff --git a/zun/common/utils.py b/zun/common/utils.py
index a795b6a48..9ecef7712 100644
--- a/zun/common/utils.py
+++ b/zun/common/utils.py
@@ -13,7 +13,7 @@
 
 # It's based on oslo.i18n usage in OpenStack Keystone project and
 # recommendations from
-# https://docs.openstack.org/developer/oslo.i18n/usage.html
+# https://docs.openstack.org/oslo.i18n/latest/user/usage.html
 
 """Utilities and helper functions."""
 import eventlet