Use docutils AST for smarter checks

While enabling spec tests for 9.0, we ran into several unavoidable
problems with the existing checks. For example, it was impossible to
have code blocks with lines longer than 80 characters.

This commit switches those checks to the docutils AST, so we can skip
the line-length check for code blocks and long links.

Closes-Bug: #1569929
Change-Id: Ia501754922b3272acd1a865513d5dffa17981331
Igor Kalnitsky 2016-04-14 13:09:18 +03:00
parent 8a7041babd
commit 6ec0b000dd
31 changed files with 384 additions and 159 deletions
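
In short, the new check parses each spec into a docutils doctree and walks it
with a NodeVisitor, so code blocks and long inline links can be exempted from
the 79-character limit. A minimal, self-contained sketch of the idea (not the
exact test code from this commit; the spec path is hypothetical):

.. code:: python

    from __future__ import print_function

    import io
    import sys

    import docutils.core
    import docutils.nodes


    class LineLengthVisitor(docutils.nodes.NodeVisitor):
        """Warn about lines over 79 chars, ignoring code blocks and links."""

        def visit_literal_block(self, node):
            # Code blocks may legitimately exceed 79 characters.
            raise docutils.nodes.SkipChildren()

        def visit_paragraph(self, node):
            for line in node.rawsource.splitlines():
                if len(line) < 80:
                    continue
                # A long line is fine if it is caused by a long URL/reference.
                if any(child.tagname in ('reference', 'literal') and
                       len(child.rawsource) >= 80
                       for child in node.traverse(include_self=False)):
                    continue
                print('line longer than 79 chars: %r' % line, file=sys.stderr)

        def unknown_visit(self, node):
            pass


    with io.open('specs/example.rst', encoding='utf-8') as f:  # hypothetical
        doctree = docutils.core.publish_doctree(f.read())
    doctree.walk(LineLengthVisitor(doctree))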

View File

@ -133,4 +133,5 @@ Upgrade guide must be updated with new command line for unpacking of tarball.
References
==========
Discussion in openstack-dev: https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg32837.html
Discussion in openstack-dev:
https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg32837.html

View File

@ -35,7 +35,7 @@ contradicts with the main Fuel idea - keep as more business logic in Nailgun as
we can. Also, the only 'Cinder' role makes it hard to deploy VMDK and
LVM/Ceph backends in one environment simultaneously.
I want to cover the following use case within blueprint [5]
I want to cover the following use case within blueprint [5]_.
Be able to deploy Cinder VMDK backend simultaneously with any other Cinder
backends.
@ -153,7 +153,7 @@ Developer impact
Part of fuel-library, which deploys cinder-node will be reverted to state
before support of vmdk was enabled. New role deployment will be realized as an
independent task for granular deployment according to [4].
independent task for granular deployment according to [4]_.
Implementation
==============
@ -192,9 +192,9 @@ No strict dependencies.
Possible dependencies are:
* Granular deployment feature [1].
* VMware: Dual hypervisor support (vCenter and KVM in one environment) [2].
* VMware UI Settings Tab for FuelWeb [3].
* Granular deployment feature [1]_.
* VMware: Dual hypervisor support (vCenter and KVM in one environment) [2]_.
* VMware UI Settings Tab for FuelWeb [3]_.
Testing
@ -205,7 +205,7 @@ this tests depend on ostf tests, which know nothing about availability zones.
Therefore OSTF tests can't test how cinder works in multiple availability zones
environment. And surely tests, which based on OSTF, are also useless.
This problem will be fixed in blueprint [3]. When it happens, system tests
This problem will be fixed in blueprint [3]_. When it happens, system tests
should be changed for using with availability zones.
Before it the QA team may perform manual testing of declared features.
@ -221,13 +221,13 @@ There are several changes in Users' Guide:
References
==========
[1] Granular deployment feature
(https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks)
[2] VMware: Dual hypervisor support (vCenter and KVM in one environment)
(https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor)
[3] VMware UI Settings Tab for FuelWeb
(https://blueprints.launchpad.net/fuel/+spec/vmware-ui-setting)
[4] Modify Fuel Library to become more modular
(https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization)
[5] VMware: Add a separate role for Cinder with VMDK backend
(https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role)
.. [1] Granular deployment feature
(https://blueprints.launchpad.net/fuel/+spec/granular-deployment-based-on-tasks)
.. [2] VMware: Dual hypervisor support (vCenter and KVM in one environment)
(https://blueprints.launchpad.net/fuel/+spec/vmware-dual-hypervisor)
.. [3] VMware UI Settings Tab for FuelWeb
(https://blueprints.launchpad.net/fuel/+spec/vmware-ui-setting)
.. [4] Modify Fuel Library to become more modular
(https://blueprints.launchpad.net/fuel/+spec/fuel-library-modularization)
.. [5] VMware: Add a separate role for Cinder with VMDK backend
(https://blueprints.launchpad.net/fuel/+spec/cinder-vmdk-role)

View File

@ -207,7 +207,10 @@ mirrored into corresponding documentation.
References
==========
* initial discussion: https://www.mail-archive.com/fuel-dev%40lists.launchpad.net/msg01515.html;
* initial blueprint: https://blueprints.launchpad.net/fuel/+spec/upload-astute-yaml-only;
* related blueprint: https://blueprints.launchpad.net/fuel/+spec/blank-role-node.
* initial discussion:
https://www.mail-archive.com/fuel-dev%40lists.launchpad.net/msg01515.html;
* initial blueprint:
https://blueprints.launchpad.net/fuel/+spec/upload-astute-yaml-only;
* related blueprint:
https://blueprints.launchpad.net/fuel/+spec/blank-role-node.

View File

@ -43,7 +43,8 @@ intended result is the following breakdown of disk space:
* 1GB for Docker metadata*
* Greater of 8gb or 30% of remaining disk space for Docker main data*
(* Docker changes tracked in https://blueprints.launchpad.net/fuel/+spec/dedicated-docker-volume-on-master)
(* Docker changes tracked in
https://blueprints.launchpad.net/fuel/+spec/dedicated-docker-volume-on-master)
This change will not be made available to existing installations and will
not be applied during Fuel Master upgrade.
@ -131,7 +132,8 @@ Work Items
Dependencies
============
* Related blueprint https://blueprints.launchpad.net/fuel/+spec/dedicated-docker-volume-on-master
* Related blueprint
https://blueprints.launchpad.net/fuel/+spec/dedicated-docker-volume-on-master
Testing
=======

View File

@ -40,7 +40,7 @@ It should acts as other type tasks actions:
* the fact of the operation and its result should be logged in Astute
log, e.g.:
.. code-block:: none
.. code:: text
Run hook
---
@ -56,12 +56,12 @@ It should acts as other type tasks actions:
type: reboot
diagnostic_name: <plugin name>
.. code-block:: none
.. code-block:: text
Reboot command failed for nodes ["<node id>"]. Check debug output
for details
.. code-block:: none
.. code-block:: text
Time detection (<timeout> sec) for node reboot has expired
@ -277,4 +277,4 @@ References
* https://blueprints.launchpad.net/fuel/+spec/reboot-action-for-plugin
* Astute part: https://review.openstack.org/#/c/148355/
* Nailgun part: https://review.openstack.org/#/c/149297/
* Fuel plugin builder part: https://review.openstack.org/#/c/150316/
* Fuel plugin builder part: https://review.openstack.org/#/c/150316/

View File

@ -277,7 +277,9 @@ Work Items
connection on the storage interface parent / probbed interface
(OVS bridge/LB does not support RDMA).
4. Example code can be found in `Mellanox fuel-web fork <https://github.com/Mellanox/fuel-web/commit/3386c6cc787d2d0ae48a386023b8b5c1998c0eeb>`_ (serializers and UI code are not relevant in this link).
4. Example code can be found in
`Mellanox fuel-web fork <https://github.com/Mellanox/fuel-web/commit/3386c6cc787d2d0ae48a386023b8b5c1998c0eeb>`_
(serializers and UI code are not relevant in this link).
Dependencies
@ -313,7 +315,8 @@ Documentation Impact
#. Instructions for "Network drivers identification" will be added to the
User guide.
#. Instructions for "How to install Mirantis Openstack with Infiniband Network"
will be added to the Mellanox community, similarly to `this post <https://community.mellanox.com/docs/DOC-2036>`_
will be added to the Mellanox community, similarly to
`this post <https://community.mellanox.com/docs/DOC-2036>`_
that has been made to the 5.1 based Fuel IB POC.

View File

@ -175,5 +175,6 @@ References
* InternJS library - https://theintern.github.io
* ChaiJS assertion library - http://chaijs.com
* Leadfoot library for consistency with Selenium WebDriver API - https://theintern.github.io/leadfoot
* Leadfoot library for consistency with Selenium WebDriver API -
https://theintern.github.io/leadfoot
* Spec for UI unit-tests - https://review.openstack.org/#/c/195666

View File

@ -295,6 +295,7 @@ Commit changes for Ironic module sync and adapt should have DocImpact tag.
References
==========
1. Blueprint https://blueprints.launchpad.net/fuel/+spec/upgrade-openstack-puppet-modules
1. Blueprint
https://blueprints.launchpad.net/fuel/+spec/upgrade-openstack-puppet-modules
2. Trello board https://trello.com/b/epRiNHz6/mos-puppets
3. Etherpad https://etherpad.openstack.org/p/fuel_puppet_modules_upgrade

View File

@ -319,7 +319,8 @@ In order to verify the quality of new features, automatic system tests will be
expanded by the cases listed below:
1. Environment is deployed using slaves from non-default nodegroup as
controller nodes. See https://blueprints.launchpad.net/fuel/+spec/test-custom-nodegroup-controllers
controller nodes. See
https://blueprints.launchpad.net/fuel/+spec/test-custom-nodegroup-controllers
2. New nodegroup is added to operational environment.
See https://blueprints.launchpad.net/fuel/+spec/test-nodegroup-add
@ -328,16 +329,19 @@ See https://blueprints.launchpad.net/fuel/+spec/test-nodegroup-add
See https://blueprints.launchpad.net/fuel/+spec/test-custom-default-gw
4. Deploy environment with few nodegroups and shared network parameters between
them. See https://blueprints.launchpad.net/fuel/+spec/test-nodegroups-share-networks
them. See
https://blueprints.launchpad.net/fuel/+spec/test-nodegroups-share-networks
5. Default IP range is changed for admin/pxe network.
See https://bugs.launchpad.net/fuel/+bug/1513154
6. Slave nodes are bootstrapped and successfully deployed using non-eth0
interface for admin/pxe network. See https://bugs.launchpad.net/fuel/+bug/1513159
interface for admin/pxe network.
See https://bugs.launchpad.net/fuel/+bug/1513159
Also there is a need to align existing tests for multiple cluster networks with
new features. See https://blueprints.launchpad.net/fuel/+spec/align-nodegroups-tests
new features. See
https://blueprints.launchpad.net/fuel/+spec/align-nodegroups-tests
Acceptance criteria
===================

View File

@ -17,10 +17,10 @@ Support daemon resource control by means of cgroups kernel feature.
Problem description
--------------------
General OS doesn't activate any protection by default against taking all hardware's memory
or CPU. So there is a necessity to allocate resources between competing processes,
e.g. at the peak time CPU computing resources should be distributed by the
specified rules.
General OS doesn't activate any protection by default against taking all
hardware's memory or CPU. So there is a necessity to allocate resources
between competing processes, e.g. at the peak time CPU computing resources
should be distributed by the specified rules.
----------------
@ -41,8 +41,8 @@ Service set what is supposed to be moved under cgroups control:
- ceph services
User will be able to move all services described above under cgroups resources
control(we specified only openstack related set of services in provided list, but,
user is able to move any service what he want under the cgroup control).
control(we specified only openstack related set of services in provided list,
but, user is able to move any service what he want under the cgroup control).
User should prepare special configuration JSON string for each service
what supposed to be moved under the cgroups control(cgroups utils will be
installed even if no cgroup's settings are specified).
@ -96,13 +96,13 @@ None
Data model
----------
New hidden section `cgroups` should be added into openstack.yaml file under 'general' group
to make cgroups settings configurable after the cluster is deployed. User will be able to
download/upload cluster's settings file to override default cgroups settings(add new services
and settings).
New hidden section `cgroups` should be added into openstack.yaml file under
'general' group to make cgroups settings configurable after the cluster is
deployed. User will be able to download/upload cluster's settings file to
override default cgroups settings(add new services and settings).
Example of a new structure what's supposed to be added into openstack.yaml file by
( the nesting level - ['editable']['additional_components']):
Example of a new structure what's supposed to be added into openstack.yaml
file by (the nesting level - ['editable']['additional_components']):
.. code-block:: yaml
@ -143,8 +143,8 @@ Example of services what should be added under cgroups control:
value: '{"memory":{"memory.soft_limit_in_bytes":"%total, min, max"}}'
...
Cgroups limits per service will be described in json format into 'text' fields. Format will be
explicitly described in feature's documentation.
Cgroups limits per service will be described in json format into 'text' fields.
Format will be explicitly described in feature's documentation.
REST API
@ -179,10 +179,11 @@ None
Fuel Library
============
Cloud operator should add services that are supposed to be moved under cgroups control into
cluster's settings file via CLI(into cgroups section), data from corresponding section
will be included into node's astute yaml file automatically during the serialization
process.
Cloud operator should add services that are supposed to be moved under cgroups
control into cluster's settings file via CLI(into cgroups section), data from
corresponding section will be included into node's astute yaml file
automatically during the serialization process.
A new cgroups puppet module should be implemented which will be used by
main task to configure given limits for services on the cluster nodes.
Module should be able to get input data from hiera structure
@ -243,9 +244,10 @@ End user impact
---------------
User will be able to configure cgroups for set of services using:
* API - PUT api call - http://FUEL_IP:8000/api/v1/clusters/CLUSTER_ID/attributes
* CLI - download, introduce `cgroups` section and upload cluster's settings via
`fuel --env CLUSTER_ID settings -d/-u` command
* API - PUT api call -
http://FUEL_IP:8000/api/v1/clusters/CLUSTER_ID/attributes
* CLI - download, introduce `cgroups` section and upload cluster's
settings via `fuel --env CLUSTER_ID settings -d/-u` command
------------------

View File

@ -36,12 +36,12 @@ them:
.. code-block:: yaml
- name: hypervisor:vmware
compatible:
- name: hypervisor:libvirt:*
requires:
- name: network:neutron:NSX
- name: network:neutron:DVS
- name: hypervisor:vmware
compatible:
- name: hypervisor:libvirt:*
requires:
- name: network:neutron:NSX
- name: network:neutron:DVS
In this case both NSX and DVS are required for vmware, but vCenter needs
only one of them.

View File

@ -219,8 +219,9 @@ Graph external relation is cascade deleted when external model is removed or
graph is removed.
Every graph is related only to one external model when parent model is
removed, this graph is removed automatically. It is not possible to create graph shared
between different models due artificial limitation that could be removed in future.
removed, this graph is removed automatically. It is not possible to create
graph shared between different models due artificial limitation that could be
removed in future.
REST API
--------
@ -311,31 +312,31 @@ Operations with graph via different models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* Get all graphs for release
`GET /releases/<release_id>/deployment_graphs/`
``GET /releases/<release_id>/deployment_graphs/``
* Operate specific type for Release
`GET/POST/PUT/PATCH/DELETE /releases/<release_id>/deployment_graphs/<graph_type>/`
``GET/POST/PUT/PATCH/DELETE /releases/<release_id>/deployment_graphs/<graph_type>/``
* Get deployment tasks for the Release
Existing `GET /releases/<release_id>/deployment_tasks/`
Existing ``GET /releases/<release_id>/deployment_tasks/``
Should be extended with `graph_type` parameter for the consistency with
cluster `/deployment_tasks` handler (see below)
cluster ``/deployment_tasks`` handler (see below)
* Get all graphs for Cluster
`GET /clusters/<cluster_id>/deployment_graphs/`
``GET /clusters/<cluster_id>/deployment_graphs/``
* Get merged tasks for the environment
Existing `GET /clusters/<cluster_id>/deployment_tasks/`
Should be extended with `graph_type` parameter
Existing ``GET /clusters/<cluster_id>/deployment_tasks/``
Should be extended with ``graph_type`` parameter
* Operate specific type related to Cluster
`GET/POST/PUT/PATCH/DELETE /clusters/<cluster_id>/deployment_graphs/<graph_type>/`
``GET/POST/PUT/PATCH/DELETE /clusters/<cluster_id>/deployment_graphs/<graph_type>/``
* Get all graphs for Plugin
`GET /plugins/<cluster_id>/deployment_graphs/`
``GET /plugins/<cluster_id>/deployment_graphs/``
* Operate specific type related to plugin
`GET/POST/PUT/PATCH/DELETE /plugins/<plugin_id>/deployment_graphs/<graph_type>/`
``GET/POST/PUT/PATCH/DELETE /plugins/<plugin_id>/deployment_graphs/<graph_type>/``
Run custom graph
@ -356,6 +357,12 @@ Other API changes
* Existing `GET /clusters/<cluster_id>/deploy_tasks/graph.gv`
Should be extended with `graph_type` parameter.
Orchestration
=============
None
RPC Protocol
------------

View File

@ -10,6 +10,7 @@ Provide api to download serialized graph
API for downloading serialized graph, that is used for task-based deployment,
can be usefull in next scenarios:
- Manual pre-deployment verification
- Consumption of fuel composition layer in 3rd party applications
@ -19,9 +20,10 @@ This specification is concerned with latter usage scenario.
Problem description
--------------------
In solar we want to regenerate fuel resource composition, and take into account -
role allocation, conditions based on fuel settings, and other misc logic that
are used to build deployment composition. And all of those actions are executed during graph compilation procedure.
In solar we want to regenerate fuel resource composition, and take into
account - role allocation, conditions based on fuel settings, and other misc
logic that are used to build deployment composition. And all of those actions
are executed during graph compilation procedure.
Instead of fetching deployment graph we could fetch other configuration
options exposed by fuel API, like role allocation and settings. And write
@ -33,11 +35,23 @@ component and nailgun.
Proposed changes
----------------
Web UI
======
None
Nailgun
=======
New handler that will expose already existing logic.
Data model
----------
None
REST API
--------
@ -52,16 +66,39 @@ On request it will use task_based_deployment.TaskSerializer.serialize method
with all provided by user parameters.
Additional validations provided by handler:
- If node is not present in cluster request will be invalidated with 400 Bad Request
- If node is not present in cluster request will be invalidated with
400 Bad Request
- Cluster or node is not found in database - 404 Not Found
- If task based deployment is not allowed - 400 Bad Request
Orchestration
=============
None
RPC Protocol
------------
None
Fuel Client
===========
Exposing handler data with fuel client is out of scope for this
specification.
Plugins
=======
None
Fuel Library
============
None
------------
Alternatives
------------
@ -81,6 +118,12 @@ Security impact
No impact
--------------------
Notifications impact
--------------------
No impact
---------------
End user impact
---------------
@ -163,3 +206,10 @@ exposed by TasksSerializer, including:
- choose cluster
- select subset of nodes in cluster
- select list of tasks that will be included in tasks serialization
----------
References
----------
None

View File

@ -132,6 +132,10 @@ Orchestration
None
RPC Protocol
------------
None
Fuel Client
===========
@ -162,8 +166,8 @@ provide parsed output from commands
`dmsetup info -c --nameprefixes --noheadings --rows -o name,uuid,blkdevname,blkdevs_used`
`udevadm info --query=property --export --name=#{device_name}`
as for discovered block devices. It should be enough to determingite the multipath
configuration on server side.
as for discovered block devices. It should be enough to determingite the
multipath configuration on server side.
New version of fuel-nailgun-agent report will look this:
@ -247,10 +251,10 @@ All others impact can be related only with FC HBA multipath system itself.
Deployment impact
-----------------
We propose to add possibility to attach disk via multipath and FC HBA for nodes.
Disks will be available on fuel ui, and normally processed like physical disks.
This feature don't have any impact on previous installations, only extend
disks support.
We propose to add possibility to attach disk via multipath and FC HBA for
nodes. Disks will be available on fuel ui, and normally processed like
physical disks. This feature don't have any impact on previous installations,
only extend disks support.
----------------
Developer impact
@ -305,7 +309,8 @@ Work Items
- extend fuel-ui to show multipath disks
- add packages related to multipath support into default ubuntu-bootstrap image
- add fuel-nailgun-agent support for correct multipath disk discovery
- add to nailgun support for correct serialization of disks delivered by multipath
- add to nailgun support for correct serialization of disks delivered by
multipath
Dependencies

View File

@ -251,7 +251,8 @@ Work Items
* Prepare Aodh packages
* Implement fuel modular manifests to deploy the Aodh services
* Implement migration script for migrating alarms from Ceilometer to Aodh storage.
* Implement migration script for migrating alarms from Ceilometer to Aodh
storage.
Dependencies
============
@ -283,4 +284,4 @@ References
.. [1] https://blueprints.launchpad.net/ceilometer/+spec/split-ceilometer-alarming
.. [2] http://docs.openstack.org/developer/aodh/webapi/v2.html#alarms-api
.. [3] https://wiki.openstack.org/wiki/Telemetry#Aodh
.. [4] https://github.com/openstack/aodh
.. [4] https://github.com/openstack/aodh

View File

@ -69,6 +69,11 @@ Data model
None
REST API
--------
None
Orchestration
=============

View File

@ -143,8 +143,8 @@ that are implemented in terms of Dockerfile. That makes it difficult
If you try to implement docker-free mode compatible with docker one,
you are likely to go through several test fix iterations
(including update the ISO on the test environment). Besides if you build
this new ISO with a patch and this patch will pass tests, other tests are likely
to become broken.
this new ISO with a patch and this patch will pass tests, other tests are
likely to become broken.
It also could be important from the deprecation perspective. Having this
separate module we have two working schemes at the same time. We just need

View File

@ -80,6 +80,11 @@ Nailgun
New node status 'stopped' is going to be introduced. Also, Nailgun rpc
receiver is going to be altered to support 'stopped' task status.
Data model
----------
None
REST API
--------
@ -149,6 +154,13 @@ Performance impact
None
-----------------
Deployment impact
-----------------
None
----------------
Developer impact
----------------

View File

@ -35,7 +35,7 @@ of network configuration objects.
Nailgun
=======
Data Model
Data model
----------
Database calls will all be moved into the appropriate object classes.
@ -125,11 +125,12 @@ None
Developer impact
----------------
In NetworkManager developers must use object methods instead of direct database queries.
In NetworkManager developers must use object methods instead of direct
database queries.
--------------------------------
Infrastructure/operations impact
--------------------------------
---------------------
Infrastructure impact
---------------------
None

View File

@ -81,6 +81,11 @@ Data model
None
REST API
--------
None
Orchestration
=============

View File

@ -191,11 +191,13 @@ Primary assignee:
Work Items
==========
* Remove CentOS bootstrap image selection from `fuel-menu <https://github.com/openstack/fuel-menu>`_
* Switch to Ubuntu bootstrap in `fuel-library <https://github.com/openstack/fuel-library>`_
* Remove CentOS bootstrap image selection from
`fuel-menu <https://github.com/openstack/fuel-menu>`_
* Switch to Ubuntu bootstrap in
`fuel-library <https://github.com/openstack/fuel-library>`_
* Remove fuel-bootstrap-image [2]_
* Remove related code from `fuel-qa <https://github.com/openstack/fuel-qa>`_ and
`fuel-devops <https://github.com/openstack/fuel-devops>`_
* Remove related code from `fuel-qa <https://github.com/openstack/fuel-qa>`_
and `fuel-devops <https://github.com/openstack/fuel-devops>`_
Dependencies
@ -208,8 +210,9 @@ None
Testing, QA
------------
Related changes should be made in `fuel-devops <https://github.com/openstack/fuel-devops>`_
and `fuel-qa <https://github.com/openstack/fuel-qa>`_ since `bootstrap.rsa`
Related changes should be made in
`fuel-devops <https://github.com/openstack/fuel-devops>`_ and
`fuel-qa <https://github.com/openstack/fuel-qa>`_ since `bootstrap.rsa`
key file will no longer exist

View File

@ -91,9 +91,9 @@ There will be GET and PUT(PATCH) requests for both single object and
collection and POST requests for collection only.
Only `ip_addr`, `vip_namespace` and `is_user_defined` fields can be changed via
PUT requests. It should be possible to pass full output of GET request to the input of
PUT request (as for other handlers). Check for read-only fields should be done
in API validator.
PUT requests. It should be possible to pass full output of GET request to the
input of PUT request (as for other handlers). Check for read-only fields should
be done in API validator.
Post requests will allow to create
(allocate) VIPs in data base with user defined IP. `ip_addr`, `vip_namespace`,

View File

@ -196,6 +196,12 @@ Fuel Library
None
------------
Alternatives
------------
None
--------------
Upgrade impact
--------------

View File

@ -39,8 +39,8 @@ improved libvirt driver [2]_.
Also, Huge Pages configuration can be applied per NUMA node, for more
description about support NUMA node take a look [3]_. Operator will have an
ability to specify configuration for whole compute node. Distribution of Huge Pages
on NUMA nodes will be processed by Nailgun.
ability to specify configuration for whole compute node. Distribution of
Huge Pages on NUMA nodes will be processed by Nailgun.
Enabling of Huge Pages requires:
@ -79,8 +79,8 @@ system possesses.
Data model
----------
`numa_topology` section of node.metadata will contain information about available
Huge Pages and RAM per NUMA node [3]_:
`numa_topology` section of node.metadata will contain information about
available Huge Pages and RAM per NUMA node [3]_:
Huge Pages User's configuration will be stored in node.attributes as:
@ -235,9 +235,9 @@ Developer impact
None
--------------------------------
Infrastructure/operations impact
--------------------------------
---------------------
Infrastructure impact
---------------------
None
@ -312,8 +312,8 @@ Testing, QA
Acceptance criteria
===================
* User is provided with interface (Web UI/CLI/API) to enable and set Huge Pages in Fuel
per compute node or compute NUMA node
* User is provided with interface (Web UI/CLI/API) to enable and set
Huge Pages in Fuel per compute node or compute NUMA node
* New test cases are executed succesfully
----------

View File

@ -50,7 +50,8 @@ On agent side (OVS):
Web UI
======
In Neutron Advanced Configuration section a checkbox will be added to enable QoS.
In Neutron Advanced Configuration section a checkbox will be added to enable
QoS.
Nailgun
=======

View File

@ -71,7 +71,8 @@ The following validation is needed in UI only:
or plugin. Fuel will just configure appropriate `pci_passthrough_whitelist`
option in nova.conf for such interface and physical networks
The proposed change to Node Interfaces configuration screen will look like this:
The proposed change to Node Interfaces configuration screen will look like
this:
.. image:: ../../images/9.0/support-sriov/sriov-ui.png
:scale: 75 %

View File

@ -108,15 +108,15 @@ For example, we faced the following issues while switching to CentOS-7.2:
issue but it is not. This issue is common for many distributions, there
are bugs in CentOS, RedHat, Ubuntu, Novell, some of them are several years
old and some of them even open. There is a workaround to solve this issue -
disable TSO offloading [2], and it looks suitable solution for master node.
disable TSO offloading [2]_, and it looks suitable solution for master node.
Another solution is to use virtio drivers, but it requires a bit more work
and significantly more testing.
* libxml2 regression [3] that prevents postgresql to be built.
* libxml2 regression [3]_ that prevents postgresql to be built.
* upstream docker images were updated with a delay that caused several
builds to fail because of transition from systemd-container-\* packages
to actual systemd [1].
to actual systemd [1]_.
--------------
@ -195,7 +195,7 @@ February:
* Critical - 1
So, 24 bug for just 2 months. For those who interested in details there is
an etherpad [0] with links to every bug I've counted here.
an etherpad [0]_ with links to every bug I've counted here.
--------------------
@ -340,7 +340,7 @@ Fuel ISO uses CentOS-7.2 when deploying master node.
References
----------
[0] https://etherpad.openstack.org/p/r.a7fe0b575d891ed81206765fa5be6630
[1] http://seven.centos.org/2015/12/fixing-centos-7-systemd-conflicts-with-docker/
[2] https://bugs.launchpad.net/mos/+bug/1534638
[3] https://review.openstack.org/#/c/285306/
.. [0] https://etherpad.openstack.org/p/r.a7fe0b575d891ed81206765fa5be6630
.. [1] http://seven.centos.org/2015/12/fixing-centos-7-systemd-conflicts-with-docker/
.. [2] https://bugs.launchpad.net/mos/+bug/1534638
.. [3] https://review.openstack.org/#/c/285306/

View File

@ -334,9 +334,8 @@ It should be enough to have simple unit and integration tests in Nailgun
to verify sanity of the feature as the main deployment scenarios output
will remain intact.
-------------------
Acceptance criteria
-------------------
===================
User should be able to specify a YAQL expression in any task field except for
id (or it subfields) and get this YAQL expression evaluated correctly with

View File

@ -271,6 +271,11 @@ Work Items
* Implement search by priority
Dependencies
============
None
------------
Testing, QA
------------

View File

@ -10,49 +10,156 @@
# License for the specific language governing permissions and limitations
# under the License.
import glob
import re
from __future__ import print_function
import docutils.core
import glob
import io
import os
import re
import sys
import docutils.parsers.rst
import docutils.nodes
import testtools
def _rst2ast(source, name):
parser = docutils.parsers.rst.Parser()
document = docutils.utils.new_document(name)
# unfortunately, those settings are mandatory to pass though
# we don't care about their values
document.settings.tab_width = 4
document.settings.pep_references = 1
document.settings.rfc_references = 1
document.settings.trim_footnote_reference_space = 0
document.settings.syntax_highlight = 0
try:
parser.parse(source, document)
except Exception as exc:
# we're interested in printing filename of reStructuredText document
# that's failed to be parsed
print(name, exc, file=sys.stderr)
raise
return document
class _RstSectionWrapper(object):
def __init__(self, node):
self._node = node
@property
def title(self):
# there could be only one title subnode
titles = filter(lambda n: n.tagname == 'title', self._node.children)
return titles[0].astext()
@property
def subsections(self):
sections = filter(
lambda n: n.tagname == 'section', self._node.children)
# wrapping subsections into this class would simplify further
# working flow
return [self.__class__(node) for node in sections]
def get_subsection(self, title):
for section in self.subsections:
if section.title == title:
return section
return None
class _CheckLinesWrapping(docutils.nodes.NodeVisitor):
"""docutils' NodeVisitor for checking lines wrapping.
Check that lines are wrapped into 79 characters. Exceptions are:
* references;
* code blocks;
* footnodes;
Usage example:
document.walk(_CheckLinesWrapping(document))
"""
def visit_title(self, node):
for line in node.rawsource.splitlines():
if len(line) >= 80:
self._fail(node)
def visit_footnote(self, node):
raise docutils.nodes.SkipChildren()
def visit_paragraph(self, node):
ok = True
for line in node.rawsource.splitlines():
if len(line) >= 80:
ok = False
# breaking style guide, let's check for exceptions
for child in node.traverse(include_self=False):
# references and code blocks are ok to be >= 80
if child.tagname in ('reference', 'literal'):
if len(child.rawsource) >= 80:
ok = True
break
break
if ok:
raise docutils.nodes.SkipChildren()
self._fail(node)
def unknown_visit(self, node):
pass
def _get_line_no(self, node):
line_no = node.line
if line_no is None and node.parent:
return self._get_line_no(node.parent)
return line_no
def _fail(self, node):
line_no = self._get_line_no(node)
raise ValueError(
"%s:%d: Line limited to a maximum of 79 characters." %
(node.source, line_no))
class BaseDocTest(testtools.TestCase):
def build_structure(self, spec):
section = {}
name = ''
root = os.path.join(os.path.abspath(os.path.dirname(__file__)), '..')
for node in spec:
if node.tagname == 'title':
name = node.rawsource
elif node.tagname == 'section':
subsection, subsection_name = self.build_structure(node)
section[subsection_name] = subsection
def check_structure(self, filename, root):
def do_check(filename, node, expected_node):
expected_titles = expected_node.keys()
real_titles = [section.title for section in node.subsections]
return section, name
for t in expected_titles:
self.assertIn(t, real_titles, filename)
def verify_structure(self, fname, struct,
expected_struct, supersection=None):
expected_titles = expected_struct.keys()
real_titles = struct.keys()
expected_sub = expected_node[t]
sub = node.get_subsection(t)
for t in expected_titles:
self.assertIn(t, real_titles)
if expected_sub is not None:
do_check(filename, sub, expected_sub)
substruct = expected_struct[t]
if substruct is not None:
self.verify_structure(fname, struct[t], substruct, t)
# Fuel Specs have only one top-level section, with the document
# content. So we can pick it up and pass it down as document
# root.
node = _RstSectionWrapper(root).subsections[0]
do_check(filename, node, self.expected_structure)
def check_lines_wrapping(self, tpl, raw):
for i, line in enumerate(raw.split('\n')):
if 'http://' in line or 'https://' in line:
continue
self.assertTrue(
len(line.decode("utf-8")) < 80,
msg="%s:%d: Line limited to a maximum of 79 characters." %
(tpl, i+1))
def check_lines_wrapping(self, filename, root):
root.walk(_CheckLinesWrapping(root))
def check_no_cr(self, tpl, raw):
matches = re.findall('\r', raw)
@ -73,11 +180,11 @@ class BaseDocTest(testtools.TestCase):
for filename in files:
self.assertTrue(filename.endswith('.rst'),
'Specification files must use .rst extensions.')
with open(filename) as f:
with io.open(filename, encoding='utf-8') as f:
data = f.read()
spec = docutils.core.publish_doctree(data)
document, name = self.build_structure(spec)
self.verify_structure(filename, document, self.expected_structure)
self.check_lines_wrapping(filename, data)
ast = _rst2ast(data, filename)
self.check_structure(filename, ast)
self.check_lines_wrapping(filename, ast)
self.check_no_cr(filename, data)
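
For illustration, the section-structure check rewritten above follows the same
doctree-based approach; a rough standalone sketch, assuming the single
top-level section layout of Fuel specs (the spec path is hypothetical and the
expected titles are only a subset of the real template):

.. code:: python

    import io

    import docutils.core


    def section_titles(node):
        """Titles of the direct subsections of a section/document node."""
        # A section node's first child is always its title.
        return [child.children[0].astext()
                for child in node.children if child.tagname == 'section']


    expected = ['Problem description', 'Proposed changes', 'References']

    with io.open('specs/example.rst', encoding='utf-8') as f:  # hypothetical
        # Keep the top-level section in the tree instead of promoting its
        # title to the document title.
        doctree = docutils.core.publish_doctree(
            f.read(), settings_overrides={'doctitle_xform': False})

    # Fuel specs have a single top-level section; its subsections are the
    # chapters we compare against the expected structure.
    top = [n for n in doctree.children if n.tagname == 'section'][0]
    missing = set(expected) - set(section_titles(top))
    assert not missing, 'missing sections: %s' % sorted(missing)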

View File

@ -49,4 +49,4 @@ class TestTitles(base.BaseDocTest):
}
files = ['specs/template.rst']
versions = ('8.0',)
versions = ('8.0', '9.0', )