Retire Packaging Deb project repos

This commit is part of a series to retire the Packaging Deb
project. Step 2 is to remove all content from the project
repos, replacing it with a README noting where to find
ongoing work and how to recover the repo if needed at some
future point (as in
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: Id5c758d0c81166d0f591d95d335d5bfefd77fc5e
Tony Breeds 2017-09-12 15:45:33 -06:00
parent c3bf1aa507
commit 6cd93616d3
93 changed files with 14 additions and 6312 deletions

.gitignore

@@ -1,41 +0,0 @@
.DS_Store
.venv
# Packages
*.py[cod]
*.egg
/.eggs
*.egg-info
dist
build
eggs
sdist
# Unit test / coverage reports
.cache
.coverage
.tox
.testrepository
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.bak
# Autohelp
autogenerate_config_docs/venv
autogenerate_config_docs/sources
autogenerate_config_docs/*-conf-changes-*.xml
# sitemap
sitemap/sitemap_docs.openstack.org.xml
# Files created by releasenotes build
releasenotes/build
# repo pulled down by doc-tools-update-cli-reference
openstack-manuals

.gitreview

@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/openstack-doc-tools.git

.mailmap

@@ -1,4 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>
<guoyingc@cn.ibm.com> <daisy.ycguo@gmail.com>

.testr.conf

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

CONTRIBUTING.rst

@@ -1,32 +0,0 @@
Our community welcomes all people interested in open source cloud computing,
and encourages you to join the `OpenStack Foundation <http://www.openstack.org/join>`_.
The best way to get involved with the community is to talk with others online
or at a meetup and offer contributions through our processes, the `OpenStack
wiki <http://wiki.openstack.org>`_, blogs, or on IRC at ``#openstack``
on ``irc.freenode.net``.
We welcome all types of contributions, from blueprint designs to documentation
to testing to deployment scripts.
If you would like to contribute to the development,
you must follow the steps on this page:
http://docs.openstack.org/infra/manual/developers.html
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://docs.openstack.org/infra/manual/developers.html#development-workflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/openstack-manuals
.. note::

   To be able to run ``tox -e py27`` successfully locally, add
   ``jinja2`` and ``markupsafe`` to your local ``test-requirements.txt``
   file so the two get installed in your local virtual environment.

HACKING.rst

@@ -1,25 +0,0 @@
openstack-doc-tools style commandments
======================================
- Step 1: Read the OpenStack Style Commandments
http://docs.openstack.org/developer/hacking/
- Step 2: Read on
Running tests
-------------
So far there are some tests included with the package.
The openstack-indexpage tool is used while building the OpenStack
documentation repositories, so test building of those repositories
with any changes made here.
Testing can be done simply with a local install of
openstack-doc-tools: check out the repositories and
run ``tox`` inside each.
The repositories using openstack-doc-tools include:
* api-site
* openstack-manuals
* security-doc

LICENSE

@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

MANIFEST.in

@@ -1,12 +0,0 @@
include README.rst
include AUTHORS
include LICENSE
include RELEASE_NOTES.rst
recursive-include bin *
recursive-include doc *
recursive-include tools *
recursive-include sitemap *
recursive-include os_doc_tools *
recursive-exclude * .gitignore
exclude .gitreview

README

@@ -0,0 +1,14 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.
For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
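The recovery step described above (``git checkout HEAD^1``) can be exercised end to end on a throwaway repository; everything below (directory names, file contents, commit messages) is illustrative only, not taken from the retired repo:

```shell
set -e
# Build a toy repo with two commits, then recover the earlier state the
# same way this README suggests for a retired project repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name demo
echo "original project content" > README
git add README
git commit -qm "real content"
echo "This project is no longer maintained." > README
git commit -qam "retire repo"
# HEAD^1 is the parent of the retirement commit:
git checkout -q 'HEAD^1'
cat README    # prints the pre-retirement content
```

From the resulting detached HEAD you can inspect the old tree or branch from it with ``git switch -c <name>``.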

README.rst

@@ -1,67 +0,0 @@
========================
Team and repository tags
========================
.. image:: http://governance.openstack.org/badges/openstack-doc-tools.svg
:target: http://governance.openstack.org/reference/tags/index.html
.. Change things from this point on
OpenStack Doc Tools
~~~~~~~~~~~~~~~~~~~
This repository contains tools used by the OpenStack Documentation
project.
For more details, see the `OpenStack Documentation Contributor Guide
<http://docs.openstack.org/contributor-guide/>`_.
* License: Apache License, Version 2.0
* Source: https://git.openstack.org/cgit/openstack/openstack-doc-tools
* Bugs: https://bugs.launchpad.net/openstack-doc-tools
Prerequisites
-------------
You need Python 2.7 installed to use these tools.
This package has a few external dependencies, including lxml. If you
do not have lxml installed, you can either install python-lxml or have
it installed automatically and built from source. To build lxml from
source, you need a C compiler and the xml and xslt development
packages installed.
To install python-lxml, execute the following based on your
distribution.

On Fedora, RHEL 7, and CentOS 7::

   $ yum install python-lxml

On openSUSE::

   $ zypper in python-lxml

On Ubuntu::

   $ apt-get install python-lxml

For building from source, install the dependencies of lxml.

On Fedora, RHEL 7, and CentOS 7::

   $ yum install python-devel libxml2-devel libxslt-devel

On openSUSE::

   $ zypper in libxslt-devel

On Ubuntu::

   $ apt-get install libxml2-dev libxslt-dev
Regenerating config option tables
---------------------------------
See :ref:`autogenerate_config_docs`.

RELEASE_NOTES.rst

@@ -1,374 +0,0 @@
=============
Release notes
=============
Current release notes
~~~~~~~~~~~~~~~~~~~~~
Note that this file is now obsolete; we use reno for release note
management and publish the notes at
https://docs.openstack.org/releasenotes/openstack-doc-tools/.
Add notes to the releasenotes/notes directory following the
documentation in
https://docs.openstack.org/developer/reno/usage.html#creating-new-release-notes.
Older release notes
~~~~~~~~~~~~~~~~~~~
0.32.0
------
* Removed a virtual build and test environment based on Vagrant.
* Update ``autohelp-wrapper`` to support RST output.
0.31.0
------
* Add ``doc-tools-build-rst`` to build RST guides.
* Update sitemap to silence warnings.
* Enhance ``openstack-auto-commands`` for new commands and releases.
* Update ``autohelp.py`` for Liberty release.
0.30.1
------
* ``openstack-auto-commands``: Fix option parsing (bug#1488505)
* ``doc-tools-check-languages``: Fix RST Debian Install Guide.
0.30.0
------
* ``openstack-doc-test``: Always build the index page in checkbuild.
* ``openstack-auto-commands``: Add support for murano.
* Remove ``dn2osdbk`` and the ``hotref`` sphinx extension.
* ``autohelp.py``: Can now find options for a project in multiple python
packages.
* ``doc-tools-check-languages``: Handle RST Install Guide and FirstApp.
0.29.1
------
* ``doc-tools-check-languages``: Fix building of translated RST guides.
0.29.0
------
* ``doc-tools-check-languages``: Handle common-rst directory, update
for User Guides and firstapp.
* ``autohelp.py``: Support generation of RST tables, fixes for
extensions.
0.28
----
* ``openstack-doc-test``: Sort entries in index.html file.
* ``diff_branches.py``: Add options containing DEPRECATED in their help
string to the deprecation list.
* ``doc-tools-check-languages``: Fix bugs in RST handling that broke
handling of user-guide and user-guide-admin.
0.27
----
* ``openstack-doc-test``: Do not build Debian Install Guide by
default, build it only if the parameter ``--enable-debian-install``
is passed. Fix index.html file and remove
www files that came in by accident.
0.26
----
* Fix ``doc-tools-check-languages`` handling of RST guides and
publishing to translated draft guides.
* Improve ``openstack-auto-commands``: bash-completion support for
python-glanceclient, new commands for python-swiftclient, new command
for trove-manage, automatically identify deprecated subcommands,
move client definitions into a YAML resource file, support of the
complete subcommand, support for new clients (barbican, designate,
magnetodb, manila, mistral, tuskar).
0.25
----
* Enhance ``doc-tools-check-languages`` to handle translation of RST
guides and publishing of draft guides to /draft/.
* ``autohelp.py``: lookup configuration options in more oslo libraries.
* ``autohelp.py``: add a hook for neutron plugins
* ``autohelp-wrapper``: improve reliability by building a virtual env per
project, rather than a common virtual env.
* ``autohelp-wrapper``: define the custom dependencies for each project in
their own requirements files.
0.24
----
* Added ``doc-tools-update-cli-reference``, a wrapper script to update
CLI references in the ``openstack-manuals`` repository.
* Handle guides that are published without a content/ sub directory.
* Various fixes for auto generating commands and options.
* Handle translation of RST guides.
0.23
----
* ``openstack-doc-test``: Don't build all books if only RST files are
changed.
0.22
----
* ``openstack-doc-test``: New niceness check to avoid specific unicode
characters; new option --ignore-book to not build a book.
0.21.1
------
* ``jsoncheck``: have formatted JSON files end with a newline (lp:bug 1403159)
0.21
----
* ``openstack-doc-test``: New option ``--url-exception`` to ignore
URLs in link check. Use jsoncheck in tests for better tests and
output.
* ``openstack-auto-commands``: Update list of supported commands to
include ironic, sahara
* ``openstack-dn2osdbk``: Various fixes.
0.20
----
* ``openstack-doc-test``: Check for a ``\n`` in the last line of a file.
* ``openstack-dn2osdbk``: Properly handle internal references.
0.19
----
* ``openstack-doc-test``: Optimize translation imports, improve output
messages.
* ``autohelp.py``: Improve sanitizer, better support for i18n in
projects, allow setting of title name for tables.
* ``autohelp-wrapper``: Smarter handling of the manuals repo and environment
setup, add support for the ``create`` subcommand.
* ``autohelp-wrapper``: Add support for offline/fast operation.
* ``autohelp-wrapper``: Add a module blacklisting mechanism.
* ``diff_branches.py``: Updated output format.
* Provide a ``hotref`` extension for sphinx, to automate the creation of
references to the HOT resources documentation.
* ``openstack-auto-commands``: Handle python-openstackclient, handle
python-glanceclient and python-cinderclient v2 commands.
0.18.1
------
* Fix ``doc-tools-check-languages`` to handle all repositories and
setups.
0.18
----
* ``openstack-doc-test``: Don't always build the HOT guide, add new
option --check-links to check for valid URLs.
* ``openstack-dn2osdbk``: Allow single files as source.
* Imported and improved ``doc-tools-check-languages`` (recently known
as ``tools/test-languages.sh`` in the documentation repositories).
* Added a virtual build and test environment based on Vagrant.
0.17
----
* Added support for ``*-manage`` CLI doc generation.
* ``openstack-dn2osdbk``: Converts Docutils Native XML to docbook.
* ``openstack-doc-test``: Handle the upcoming HOT guide.
* ``autohelp.py``: Provide our own sanitizer.
* ``autohelp.py``: Use the oslo sample_default if available.
* ``openstack-doc-test``: Correctly handle SIGINT.
* Various smaller fixes and improvements.
0.16.1
------
* Fix includes of rackbook.rng to unbreak syntax checking.
0.16
----
* ``openstack-doc-test``: Fix handling of ignore-dir parameter.
* ``autohelp-wrapper``: New tool to simplify the setup of an autohelp.py
environment.
* ``diff_branches.py``: Generates a listing of the configuration options
changes that occurred between 2 openstack releases.
* ``autohelp.py``: Add the 'dump' subcommand, include swift.
* ``jsoncheck.py``: Add public API.
* Added tool to generate a sitemap.xml file.
* Added script to prettify HTML and XML syntax.
0.15
----
* ``openstack-doc-test``: Output information about tested patch,
special case entity files for book building. Remove special handling
for high-availability-guide, as it is not using asciidoc anymore.
* New script in cleanup/retf for spell checking using the RETF rules.
* Fix entity handling in ``openstack-generate-docbook``.
0.14
----
* ``openstack-auto-commands``: Improved screen generation and swift
subcommand xml output.
* ``openstack-doc-test``: Warn about non-breaking space, enhance
-v output, special case building of localized high-availability
guide, fix for building changed identity-api repository.
* New command ``openstack-jsoncheck`` to check for niceness of JSON
files and reformat them.
* ``openstack-autohelp``: Update the default parameters. The tables
are generated in the doc/common/tables/ dir by default, and the git
repository for the project being worked on is looked for in a sources/
dir by default.
0.13
----
* ``extract_swift_flags``: Correctly parse existing tables and
improve the output to ease table editing.
* ``openstack-generate-docbook`` now handles the api-site project:
Parameter --root gives root directory to use.
* Remove obsoleted commands ``generatedocbook`` and
``generatepot``. They have been obsoleted in 0.7.
0.12
----
* ``openstack-doc-test``: Handle changes in api-site project, new
option --print-unused-files.
* ``openstack-autohelp``: Handle keystone_authtoken options.
0.11
----
* Add ``--publish`` option to ``openstack-doc-test`` that does not
publish the www directory to the wrong location.
* Improvements for generation of option tables.
0.10
----
* Fix ``openstack-doc-test`` to handle changes in ``api-site`` repository:
Do not publish wadls directory, ``*.fo`` files and add api-ref-guides
PDF files to index file for docs-draft.
* Many improvements for generation of option tables.
* Improvements for ``openstack-auto-commands``: handle ironic, sahara;
improve generated output.
0.9
---
Fixes for openstack-doc-test:
* openstack-doc-test now validates JSON files for well-formedness and
whitespace.
* Create proper chapter title for markdown files.
* Ignore publish-docs directory completely.
* Do not check for xml:ids in wadl resource.
* New option build_file_exception to ignore invalid XML files for
dependency checking in build and syntax checks.
Fixes for autodoc-tools to sanitize values and handle projects.
Client version number is output by openstack-auto-commands.
0.8.2
-----
Fixes for openstack-doc-test:
* Fix error handling, now really abort if an error occurs.
* Avoid races in initial maven setup that broke build.
* Add --parallel/noparallel flags to disable parallel building.
0.8.1
-----
* Fix openstack-doc-test building of image-api.
* Fix publishing of api-ref.
* Improve markdown conversion.
0.8
---
* Improved openstack-auto-commands output
* Fix script invocation in openstack-doc-test.
0.7.1
-----
* Fix openstack-doc-test niceness and syntax checks that always
failed in api projects.
* Fix building of image-api-v2.
0.7
---
* openstack-doc-test:
- Fix building of identity-api and image-api books.
- Add option --debug.
- Generate log file for each build.
- Do not install build-ha-guide.sh and markdown-docbook.sh in
/usr/bin, use special scripts dir instead.
- Allow configuring the directory used under publish-docs
* generatedocbook and generatepot have been merged into a single
file, the command has been renamed to
openstack-generate-docbook/openstack-generate-pot. For
compatibility, wrapper scripts are installed that will be removed
in version 0.8.
0.6
---
* Fix python packaging bugs that prevented sitepackages usage and
installed .gitignore in packages
0.5
---
* Test that resources in wadl files have an xml:id (lp:bug 1275007).
* Improve formatting of python command line clients (lp:bug 1274699).
* Copy all generated books to directory publish-docs in the git
top-level (lp:blueprint draft-docs-on-docs-draft).
* Now requires a config file in the top-level git directory named
doc-test.conf.
* Allow building of translated manuals; these need to be set up first
with "generatedocbook -l LANGUAGE -b BOOK".
0.4
---
* New option --exceptions-file to pass list of files to ignore
completely.
* Major improvements for automatic generation of option tables.
* New tool openstack-auto-commands to document python
command line clients.
0.3
---
* Fixes path for automated translation toolchain to fix lp:bug 1216153.
* Validates .xsd, .xsl, and .xjb files in addition to .xml.
* Fixes validation of WADL files to validate properly against XML schema.
0.2
---
* Enables local copies of RNG schema for validation.
* Enables ignoring directories when checking.
0.1
---
Initial release.

@@ -1,121 +0,0 @@
.. _autogenerate_config_docs:

autogenerate_config_docs
========================
Automatically generate configuration tables to document OpenStack.
Using the wrapper
-----------------
``autohelp-wrapper`` is the recommended tool to generate the configuration
tables. Don't bother using ``autohelp.py`` manually.
The ``autohelp-wrapper`` script installs a virtual environment and all the
needed dependencies, clones or updates the projects and manuals repositories,
then runs the ``autohelp.py`` script in the virtual environment.
New and updated flagmappings are generated in the ``openstack-manuals``
repository (``tools/autogenerate-config-flagmappings/`` directory).
Prior to running the following commands, you need to install several development
packages.
On Ubuntu:

.. code-block:: console

   $ sudo apt-get install python-dev python-pip python-virtualenv \
       libxml2-dev libxslt1-dev zlib1g-dev \
       libmysqlclient-dev libpq-dev libffi-dev \
       libsqlite3-dev libldap2-dev libsasl2-dev \
       libjpeg-dev
On RHEL 7 and CentOS 7:

.. code-block:: console

   $ sudo yum install https://www.rdoproject.org/repos/rdo-release.rpm
   $ sudo yum update
   $ sudo yum install python-devel python-pip python-virtualenv \
       libxml2-devel libxslt-devel zlib-devel \
       mariadb-devel postgresql-devel libffi-devel \
       sqlite-devel openldap-devel cyrus-sasl-devel \
       libjpeg-turbo-devel gcc git
.. note::

   * libjpeg is needed for ironic
The workflow is:

.. code-block:: console

   $ pip install -rrequirements.txt
   $ ./autohelp-wrapper update
   $ $EDITOR sources/openstack-manuals/tools/autogenerate-config-flagmappings/*.flagmappings
   $ ./autohelp-wrapper rst
   $ # check the results in sources/openstack-manuals
This will generate the tables for all the known projects.
Note for neutron project: If the driver/plugin resides outside the neutron
repository, then the driver/plugin has to be explicitly installed within the
virtual environment to generate the configuration options.
To generate the mappings and tables for a subset of projects, use the code
names as arguments:
.. code-block:: console

   $ ./autohelp-wrapper update cinder heat
   $ # edit the mappings files
   $ ./autohelp-wrapper rst cinder heat
Flagmappings files
------------------
The tool uses flagmapping files to map options to custom categories. Flag
mapping files can be found in the ``tools/autogenerate-config-flagmappings``
folder of the openstack-manuals project. Not all projects use flagmapping
files; those that do not are disabled by the presence of a
``$project.disable`` file in that folder. For the projects that do, the
files use the following format::

   OPTION_SECTION/OPTION_NAME group1 [group2, ...]
Groups need to be defined manually to organize the configuration tables.
The group values can only contain alphanumeric characters, _ and - (they will
be used as document IDs).
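As a sketch of that format in action (the file name, option names, and group names below are invented for illustration, not taken from a real flagmappings file):

```shell
set -e
# Write a toy flagmappings file in the
# "OPTION_SECTION/OPTION_NAME group1 [group2, ...]" format...
cat > demo.flagmappings <<'EOF'
DEFAULT/compute_driver hypervisor
libvirt/virt_type hypervisor libvirt
EOF
# ...then look up which groups a given option is mapped to:
grep '^libvirt/virt_type ' demo.flagmappings | cut -d' ' -f2-
# prints: hypervisor libvirt
```

Each group name would become a document ID for the generated table, which is why the format restricts them to alphanumerics, ``_``, and ``-``.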
To make the table titles more user friendly, create or edit the
PROJECT.headers file in the manuals repository. Each line of this file
is of the form::

   GROUP A Nice Title
Working with branches
---------------------
``autohelp-wrapper`` works on the master branch by default, but you can tell it
to work on another branch:
.. code-block:: console

   $ ./autohelp-wrapper -b stable/liberty update
.. note::

   The ``-b`` switch doesn't apply to the ``openstack-manuals`` repository,
   which will be left untouched (no ``git branch``, no ``git update``).
Generate configuration difference
---------------------------------
To generate "New, updated, and deprecated options" for each service,
run ``diff_branches.py``. For example:
.. code-block:: console

   $ ./diff_branches.py stable/liberty stable/mitaka nova

os_doc_tools/__init__.py

@@ -1,18 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo('openstack-doc-tools').version_string()

autohelp-wrapper

@@ -1,273 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
set -e
HERE=$(pwd)
VENVDIR=$HERE/venv
SOURCESDIR=$HERE/sources
MANUALSREPO=$SOURCESDIR/openstack-manuals
MAPPINGS_DIR=$MANUALSREPO/tools/autogenerate-config-flagmappings
AUTOHELP="python $HERE/autohelp.py"
GITBASE=${GITBASE:-git://git.openstack.org/openstack}
GITPROJ=${GITPROJ:-git://git.openstack.org/openstack}
PROJECTS="aodh ceilometer cinder glance heat ironic keystone manila \
murano neutron nova sahara senlin trove zaqar"
MANUALS_PROJECTS="openstack-manuals"
BRANCH=master
FAST=0
QUIET=0
CLONE_MANUALS=1
usage() {
echo "Wrapper for autohelp.py"
echo "Usage:"
echo " $(basename $0) [ OPTIONS ] dump|update|rst|setup [ project... ]"
echo
echo "Subcommands:"
echo " dump: Dumps the list of options with their attributes"
echo " update: Update or create the flagmapping files"
echo " rst: Generate the options tables in RST format"
echo " setup: Install the environment only"
echo
echo "Options:"
echo " -b BRANCH: Work on this branch (defaults to master)"
echo " -g GITPROJ: Use this location for the project git repos "
echo " (defaults to git://git.openstack.org/openstack)"
echo " -c: Recreate the virtual environment"
echo " -f: Work offline: Do not change environment or sources"
echo " -e PATH: Create the virtualenv in PATH"
echo " -v LEVEL: Verbose message (1 or 2)"
echo " (check various python modules imported or not)"
echo " -o OUTDIR: Path to output openstack-manuals directory "
echo " (defaults to ./sources/openstack-manuals)"
}
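Typical invocations of the wrapper look like this (the branch and project names below are only examples, not part of the script):

```shell
# Install the virtualenvs and project sources, nothing else:
./autohelp-wrapper setup nova

# Regenerate the RST option tables for nova from a stable branch:
./autohelp-wrapper -b stable/newton rst nova

# Refresh the flagmapping files for two projects at once:
./autohelp-wrapper update cinder glance
```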
setup_venv() {
project=$1
if [ ! -e $VENVDIR/$project/bin/activate ]; then
mkdir -p $VENVDIR/$project
virtualenv $VENVDIR/$project
fi
activate_venv $project
}
activate_venv() {
project=$1
. $VENVDIR/$project/bin/activate
pip install --upgrade pip setuptools
}
get_project() {
project=$1
git_url=$GITPROJ
if [ ! -e $SOURCESDIR/$project ]; then
if [[ $MANUALS_PROJECTS =~ (^| )$project($| ) ]]; then
git_url=$GITBASE
fi
git clone $git_url/$project $SOURCESDIR/$project
if [ -e $MAPPINGS_DIR/$project.extra_repos ]; then
while read extra; do
git clone $git_url/$extra $SOURCESDIR/$extra
done < $MAPPINGS_DIR/$project.extra_repos
fi
else
if [ $project != openstack-manuals ]; then
(cd $SOURCESDIR/$project && git pull)
fi
if [ -e $MAPPINGS_DIR/$project.extra_repos ]; then
while read extra; do
(cd $SOURCESDIR/$extra && git pull)
done < $MAPPINGS_DIR/$project.extra_repos
fi
fi
}
setup_tools() {
pip install -rrequirements.txt
}
while getopts :b:g:e:o:v:cfq opt; do
case $opt in
b)
BRANCH=$OPTARG
;;
g)
GITPROJ=$OPTARG
;;
c)
rm -rf $VENVDIR
;;
e)
VENVDIR=$OPTARG
;;
o)
MANUALSREPO=$OPTARG
MAPPINGS_DIR=$OPTARG/tools/autogenerate-config-flagmappings
CLONE_MANUALS=0
;;
f)
FAST=1
;;
q)
QUIET=1
;;
v)
AUTOOPT="-v"
if [ $OPTARG = 2 ]; then
AUTOOPT="-vv"
fi
;;
\?)
usage
exit 1
;;
esac
done
shift $(($OPTIND - 1))
if [ $# -lt 1 ]; then
usage
exit 1
fi
ACTION=$1
shift
if [ $QUIET -eq 1 ]; then
exec 3>&1 >/dev/null
exec 4>&2 2>/dev/null
fi
case $ACTION in
update|rst|dump|setup) ;;
*)
usage
exit 1
;;
esac
if [ ! -e autohelp.py ]; then
echo "Execute this script in the autogenerate_config_docs directory."
exit 1
fi
[ $# != 0 ] && PROJECTS="$*"
RELEASE=$(echo $BRANCH | sed 's,^stable.,,')
if [ "$FAST" -eq 0 ] ; then
if [ "$CLONE_MANUALS" -eq 1 ] ; then
get_project openstack-manuals
fi
for project in $PROJECTS; do
setup_venv $project
setup_tools
if [ -e $MAPPINGS_DIR/$project.requirements ]; then
pip install -r $MAPPINGS_DIR/$project.requirements \
--allow-all-external
fi
get_project $project
(
pushd $SOURCESDIR/$project
module=$(echo $project | tr - _ )
find $module -name "*.pyc" -delete
GIT_CMD="git show-ref --verify --quiet refs/heads/$BRANCH"
if $GIT_CMD; then
git checkout $BRANCH
else
git checkout -b $BRANCH remotes/origin/$BRANCH
fi
pip install -rrequirements.txt
[ -e "test-requirements.txt" ] && \
pip install -rtest-requirements.txt
python setup.py install
popd
if [ -e $MAPPINGS_DIR/$project.extra_repos ]; then
while read extra; do
(
cd $SOURCESDIR/$extra
pip install -rrequirements.txt
[ -e "test-requirements.txt" ] && \
pip install -rtest-requirements.txt
python setup.py install
)
done < $MAPPINGS_DIR/$project.extra_repos
fi
)
done
fi
for project in $PROJECTS; do
echo "Working on $project..."
activate_venv $project
if [ "$ACTION" = "setup" ]; then
break
fi
if [ -e $MAPPINGS_DIR/$project.extra_repos ]; then
extra_flags=
while read extra; do
package=$(echo $extra | tr - _)
if [ $package = "networking_midonet" ]; then
package="midonet"
fi
if [ $package = "networking_hyperv" ]; then
package="hyperv"
fi
if [ $package = "networking_edge_vpn" ]; then
package="networking-edge-vpn"
fi
if [ $package = "networking_zvm" ]; then
package="neutron"
cp -r $SOURCESDIR/networking-zvm/neutron/plugins/zvm \
$SOURCESDIR/neutron/neutron/plugins
cp -r \
$SOURCESDIR/networking-zvm/neutron/plugins/ml2/drivers/zvm\
$SOURCESDIR/neutron/neutron/plugins/ml2/drivers
fi
extra_flags="$extra_flags -i $SOURCESDIR/$extra/$package"
done < $MAPPINGS_DIR/$project.extra_repos
fi
cd $MAPPINGS_DIR
case $ACTION in
update)
$AUTOHELP update $project -i $SOURCESDIR/$project/$project \
$extra_flags $AUTOOPT
mv $project.flagmappings.new $project.flagmappings
;;
rst)
$AUTOHELP rst $project -i $SOURCESDIR/$project/$project \
$extra_flags $AUTOOPT
;;
dump)
if [ $QUIET -eq 1 ]; then
exec 1>&3
exec 2>&4
fi
$AUTOHELP dump $project -i $SOURCESDIR/$project/$project \
$extra_flags $AUTOOPT
;;
esac
done

View File

@ -1,696 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# A collection of tools for working with flags from OpenStack
# packages and documentation.
#
# For an example of usage, run this program with the -h switch.
#
# Must import this before argparse
from oslo_config import cfg
import argparse
import importlib
import os
import pickle
import re
import sys
import jinja2
import stevedore
try:
from sqlalchemy import exc
except Exception:
pass
sys.path.insert(0, '.')
try:
from hooks import HOOKS # noqa
except ImportError:
pass
EXTENSIONS = ['oslo.cache',
'oslo.concurrency',
'oslo.db',
'oslo.log',
'oslo.messaging',
'oslo.middleware',
'oslo.policy',
'oslo.service']
_TYPE_DESCRIPTIONS = {
cfg.StrOpt: 'String',
cfg.BoolOpt: 'Boolean',
cfg.IntOpt: 'Integer',
cfg.FloatOpt: 'Floating point',
cfg.ListOpt: 'List',
cfg.DictOpt: 'Dict',
cfg.MultiStrOpt: 'Multi-valued',
cfg.IPOpt: 'IP address',
cfg.PortOpt: 'Port number',
cfg.HostnameOpt: 'Hostname',
cfg.URIOpt: 'URI',
cfg.HostAddressOpt: 'Host address',
cfg._ConfigFileOpt: 'List of filenames',
cfg._ConfigDirOpt: 'List of directory names',
}
register_re = re.compile(r'''^ +.*\.register_opts\((?P<opts>[^,)]+)'''
r'''(, (group=)?["'](?P<group>.*)["'])?\)''')
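As a quick sanity check, the pattern above matches indented `register_opts()` calls, with or without a group argument. The sample source lines below are invented for illustration:

```python
import re

# Same pattern as register_re above, reproduced for a standalone check.
register_re = re.compile(r'''^ +.*\.register_opts\((?P<opts>[^,)]+)'''
                         r'''(, (group=)?["'](?P<group>.*)["'])?\)''')

# A call with an explicit group: both named groups are captured.
m = register_re.search(
    "    CONF.register_opts(service_opts, group='upgrade_levels')")
print(m.group('opts'), m.group('group'))

# A call without a group: 'group' comes back as None.
m = register_re.search("    CONF.register_opts(global_opts)")
print(m.group('opts'), m.group('group'))
```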
def import_modules(repo_location, package_name, verbose=0):
"""Import modules.
Loops through the repository, importing module by module to
populate the configuration object (cfg.CONF) created from Oslo.
"""
with open('ignore.list') as fd:
ignore_list = [l for l in fd.read().split('\n')
if l and (l[0] != '#')]
pkg_location = os.path.join(repo_location, package_name)
for root, dirs, files in os.walk(pkg_location):
skipdir = False
for excludedir in ('tests', 'locale',
os.path.join('db', 'migration'), 'transfer'):
if ((os.path.sep + excludedir + os.path.sep) in root or (
root.endswith(os.path.sep + excludedir))):
skipdir = True
break
if skipdir:
continue
for pyfile in files:
if pyfile.endswith('.py'):
abs_path = os.path.join(root, pyfile)
modfile = abs_path.split(repo_location, 1)[1]
modname = os.path.splitext(modfile)[0].split(os.path.sep)
modname = [m for m in modname if m != '']
modname = '.'.join(modname)
if modname.endswith('.__init__'):
modname = modname[:modname.rfind(".")]
if modname in ignore_list:
continue
try:
module = importlib.import_module(modname)
if verbose >= 1:
print("imported %s" % modname)
except ImportError as e:
"""
work around modules that don't like being imported in
this way FIXME This could probably be better, but does
not affect the configuration options found at this stage
"""
if verbose >= 2:
print("Failed to import: %s (%s)" % (modname, e))
continue
except cfg.DuplicateOptError as e:
"""
oslo.cfg doesn't allow redefinition of a config option, but
we don't mind. Don't fail if this happens.
"""
if verbose >= 2:
print(e)
continue
except cfg.NoSuchGroupError as e:
"""
If a group doesn't exist, we ignore the import.
"""
if verbose >= 2:
print(e)
continue
except exc.InvalidRequestError as e:
if verbose >= 2:
print(e)
continue
except Exception as e:
print("Impossible to import module %s" % modname)
raise
_register_runtime_opts(module, abs_path, verbose)
_run_hook(modname)
# All the components provide keystone token authentication, usually using a
# pipeline. Since the auth_token options can only be discovered at runtime
# in this configuration, we force their discovery by importing the module.
try:
import keystonemiddleware.auth_token # noqa
except cfg.DuplicateOptError:
pass
def _run_hook(modname):
try:
HOOKS[modname]()
except KeyError:
pass
def _register_runtime_opts(module, abs_path, verbose):
"""Handle options not registered on module import.
This function parses the .py files to discover calls to register_opts in
functions and methods. It then explicitly calls cfg.register_opt on each
option to register (most of) them.
"""
with open(abs_path) as fd:
for line in fd:
m = register_re.search(line)
if not m:
continue
opts_var = m.group('opts')
opts_group = m.group('group')
# Get the object (an options list) from the opts_var string.
# This requires parsing the string which can be of the form
# 'foo.bar'. We treat each element as an attribute of the previous.
register = True
obj = module
for item in opts_var.split('.'):
try:
obj = getattr(obj, item)
except AttributeError:
# FIXME(gpocentek): AttributeError is raised when a part of
# the opts_var string is not an actual attribute. This will
# need more parsing tricks.
register = False
if verbose >= 2:
print("Ignoring %(obj)s in %(module)s" %
{'obj': opts_var, 'module': module})
break
if register and isinstance(obj, list):
for opt in obj:
if not isinstance(opt, cfg.Opt):
continue
try:
cfg.CONF.register_opt(opt, opts_group)
except cfg.DuplicateOptError:
# ignore options that have already been registered
pass
def _sanitize_default(opt):
"""Adapts unrealistic default values."""
# If the Oslo version is recent enough, we can use the 'sample_default'
# attribute
if (hasattr(opt, 'sample_default') and opt.sample_default is not None):
return str(opt.sample_default)
if ((type(opt).__name__ == "ListOpt") and (type(opt.default) == list)):
return ", ".join(str(item) for item in opt.default)
default = str(opt.default)
if default == os.uname()[1]:
return 'localhost'
if opt.name == 'bindir':
return '/usr/local/bin'
if opt.name == 'my_ip':
return '10.0.0.1'
if isinstance(opt, cfg.StrOpt) and default.strip() != default:
return '"%s"' % default
for pathelm in sys.path:
if pathelm in ('.', ''):
continue
if pathelm.endswith('/'):
pathelm = pathelm[:-1]
if pathelm in default:
default = re.sub(r'%s(/sources)?' % pathelm,
'/usr/lib/python/site-packages', default)
return default
def _get_overrides(package_name):
overrides_file = '%s.overrides' % package_name
if not os.path.exists(overrides_file):
return {}
overrides = {}
with open(overrides_file) as fd:
for line in fd:
if not line.strip() or line.startswith('#'):
continue
try:
opt, sections = line.strip().split(' ', 1)
sections = [x.strip() for x in sections.split(' ')]
except ValueError:
continue
overrides[opt] = sections
return overrides
class OptionsCache(object):
def __init__(self, overrides={}, verbose=0):
self._verbose = verbose
self._opts_by_name = {}
self._opts_by_group = {}
self._opt_names = []
self._overrides = overrides
for optname in cfg.CONF._opts:
opt = cfg.CONF._opts[optname]['opt']
# We ignore some CLI opts by excluding SubCommandOpt objects
if not isinstance(opt, cfg.SubCommandOpt):
self._add_opt(optname, 'DEFAULT', opt)
for group in cfg.CONF._groups:
for optname in cfg.CONF._groups[group]._opts:
self._add_opt(group + '/' + optname, group,
cfg.CONF._groups[group]._opts[optname]['opt'])
self._opt_names.sort(OptionsCache._cmpopts)
def _add_opt(self, optname, group, opt):
if optname in self._opts_by_name:
if self._verbose >= 2:
print("Duplicate option name %s" % optname)
return
opt.default = _sanitize_default(opt)
def fill(optname, group, opt):
if optname in self._opts_by_name:
return
self._opts_by_name[optname] = (group, opt)
self._opt_names.append(optname)
if group not in self._opts_by_group:
self._opts_by_group[group] = []
self._opts_by_group[group].append(opt)
if optname in self._overrides:
for new_group in self._overrides[optname]:
if new_group == 'DEFAULT':
new_optname = opt.name
else:
new_optname = new_group + '/' + opt.name
fill(new_optname, new_group, opt)
else:
fill(optname, group, opt)
def __len__(self):
return len(self._opt_names)
def load_extension_options(self, module):
# Note that options loaded this way aren't added to _opts_by_module
loader = stevedore.named.NamedExtensionManager(
'oslo.config.opts',
names=(module,),
invoke_on_load=False
)
for ext in loader:
for group, opts in ext.plugin():
for opt in opts:
if group is None:
self._add_opt(opt.dest, 'DEFAULT', opt)
else:
self._add_opt(group + '/' + opt.dest, group, opt)
self._opt_names.sort(OptionsCache._cmpopts)
def maybe_load_extensions(self, repositories):
# Use the requirements.txt of the project to guess if an oslo module
# needs to be imported
needed_exts = set()
for repo in repositories:
base_path = os.path.dirname(repo)
for ext in EXTENSIONS:
requirements = os.path.join(base_path, 'requirements.txt')
with open(requirements) as fd:
for line in fd:
if line.startswith(ext):
needed_exts.add(ext)
for ext in needed_exts:
self.load_extension_options(ext)
def get_group_names(self):
return self._opts_by_group.keys()
def get_option_names(self):
return self._opt_names
def get_group(self, name):
return self._opts_by_group[name]
def get_option(self, name):
return self._opts_by_name[name]
def dump(self):
"""Dumps the list of options with their attributes.
This output is consumed by the diff_branches script.
"""
for name, (group, option) in self._opts_by_name.items():
deprecated_opts = [{'group': deprecated.group,
'name': deprecated.name}
for deprecated in option.deprecated_opts]
help_str = option.help.strip() if option.help else "None"
new_option = {
'default': option.default,
'help': help_str,
'deprecated_opts': deprecated_opts,
'type': option.__class__.__name__.split('.')[-1]
}
self._opts_by_name[name] = (group, new_option)
print(pickle.dumps(self._opts_by_name))
@staticmethod
def _cmpopts(x, y):
if '/' in x and '/' in y:
prex = x[:x.find('/')]
prey = y[:y.find('/')]
if prex != prey:
return cmp(prex, prey)
return cmp(x, y)
elif '/' in x:
return 1
elif '/' in y:
return -1
else:
return cmp(x, y)
def _use_categories(package_name):
return not os.path.isfile(package_name + '.disable')
def _get_options_by_cat(package_name):
options_by_cat = {}
with open(package_name + '.flagmappings') as f:
for line in f:
if not line.strip() or line.startswith('#'):
continue
opt, categories = line.split(' ', 1)
for category in categories.split():
options_by_cat.setdefault(category, []).append(opt)
return options_by_cat
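A flagmappings file is just `option category...` lines; the sketch below runs the same parsing loop on invented content (the real files live under openstack-manuals/tools/autogenerate-config-flagmappings/):

```python
# Invented flagmappings content for illustration only.
sample = """\
# comments and blank lines are skipped

debug logging
DEFAULT/my_ip common network
"""

options_by_cat = {}
for line in sample.splitlines():
    if not line.strip() or line.startswith('#'):
        continue
    # First token is the option name, the rest are its categories.
    opt, categories = line.split(' ', 1)
    for category in categories.split():
        options_by_cat.setdefault(category, []).append(opt)

print(options_by_cat)
# {'logging': ['debug'], 'common': ['DEFAULT/my_ip'], 'network': ['DEFAULT/my_ip']}
```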
def _get_category_names(package_name):
package_headers = package_name + '.headers'
category_names = {}
for headers_file in ('shared.headers', package_headers):
try:
with open(headers_file) as f:
for line in f:
if not line.strip() or line.startswith('#'):
continue
cat, nice_name = line.split(' ', 1)
category_names[cat] = nice_name.strip()
except IOError:
print("Cannot open %s (ignored)" % headers_file)
return category_names
def _format_opt(option):
def _remove_prefix(text, prefix):
if text.startswith(prefix):
return text[len(prefix):].lstrip(':')
return text
def _reflow_text(text):
text = re.sub(r'\n+\s*\* ', '$sentinel$* ', text)
text = text.replace('\n\n', '$sentinel$')
text = text.replace('\n', ' ')
text = ' '.join(text.split())
return text.split('$sentinel$')
def _strip_indentation(text):
return ' '.join([x.strip() for x in text.split('\n')]).strip()
help_text = option.help or "No help text available for this option."
help_text = _remove_prefix(help_text.strip(), 'DEPRECATED')
help_text = _reflow_text(help_text)
deprecated_text = (option.deprecated_reason or
'No deprecation reason provided for this option.')
deprecated_text = _strip_indentation(deprecated_text)
opt_type = _TYPE_DESCRIPTIONS.get(type(option), 'Unknown')
flags = []
if (option.deprecated_for_removal or
(option.help and option.help.startswith('DEPRECATED'))):
flags.append(('Deprecated', deprecated_text))
if option.mutable:
flags.append(('Mutable', 'This option can be changed without'
' restarting.'))
return {
'name': option.dest,
'type': opt_type,
'default': _sanitize_default(option),
'help': help_text,
'flags': flags
}
def write_files(package_name, options, target):
"""Write tables.
Some projects make use of flagmapping files, while others make use
of oslo.config's OptGroup to do the same in code. The function will
handle both, printing a list of options by either category or group.
"""
target = target or '../../doc/config-reference/source/tables'
if not os.path.isdir(target):
os.makedirs(target)
if _use_categories(package_name):
_write_files_by_category(package_name, options, target)
else:
_write_files_by_group(package_name, options, target)
def _write_files_by_category(package_name, options, target):
options_by_cat = _get_options_by_cat(package_name)
category_names = _get_category_names(package_name)
for cat in options_by_cat.keys():
env = {
'pkg': package_name,
'cat': cat,
'label': '-'.join([package_name, cat]),
'groups': [],
'items': [],
}
# Skip options that are explicitly marked as disabled;
# this is used for common configuration options.
if cat == 'disable':
continue
if cat in category_names:
env['nice_cat'] = category_names[cat]
else:
env['nice_cat'] = cat
print("No nicename for %s" % cat)
curgroup = None
items = None
for optname in options_by_cat[cat]:
group, option = options.get_option(optname)
if group != curgroup:
if group is not None:
curgroup = group
env['groups'].append(group)
if items is not None:
env['items'].append(items)
items = []
items.append(_format_opt(option))
env['items'].append(items)
file_path = ("%(target)s/%(package_name)s-%(cat)s.rst" %
{'target': target, 'package_name': package_name,
'cat': cat})
tmpl_file = os.path.join(os.path.dirname(__file__),
'templates/autohelp-category.rst.j2')
_write_template(tmpl_file, file_path, env)
def _write_files_by_group(package_name, options, target):
for group in options.get_group_names():
env = {
'pkg': package_name,
'group': group,
'label': '-'.join([package_name, group]),
'items': [],
}
for option in options.get_group(group):
env['items'].append(_format_opt(option))
file_path = ("%(target)s/%(package_name)s-%(group)s.rst" %
{'target': target, 'package_name': package_name,
'group': group})
tmpl_file = os.path.join(os.path.dirname(__file__),
'templates/autohelp-group.rst.j2')
_write_template(tmpl_file, file_path, env)
def _write_template(template_path, output_path, env):
with open(template_path) as fd:
template = jinja2.Template(fd.read(), trim_blocks=True)
output = template.render(filename=output_path, **env)
with open(output_path, 'w') as fd:
fd.write(output)
def update_flagmappings(package_name, options, verbose=0):
"""Update flagmappings file.
Update a flagmappings file, adding or removing entries as needed.
This will create a new file $package_name.flagmappings.new with
category information merged from the existing $package_name.flagmappings.
"""
if not _use_categories(package_name):
print("This project does not use flagmappings. Nothing to update.")
return
original_flags = {}
try:
with open(package_name + '.flagmappings') as f:
for line in f:
try:
flag, category = line.split(' ', 1)
except ValueError:
flag = line.strip()
category = "Unknown"
original_flags.setdefault(flag, []).append(category.strip())
except IOError:
# If the flags file doesn't exist we'll create it
pass
updated_flags = []
for opt in options.get_option_names():
if len(original_flags.get(opt, [])) == 1:
updated_flags.append((opt, original_flags[opt][0]))
continue
updated_flags.append((opt, 'Unknown'))
with open(package_name + '.flagmappings.new', 'w') as f:
for flag, category in updated_flags:
f.write(flag + ' ' + category + '\n')
if verbose >= 1:
removed_flags = (set(original_flags.keys()) -
set([x[0] for x in updated_flags]))
added_flags = (set([x[0] for x in updated_flags]) -
set(original_flags.keys()))
print("\nRemoved Flags\n")
for line in sorted(removed_flags, OptionsCache._cmpopts):
print(line)
print("\nAdded Flags\n")
for line in sorted(added_flags, OptionsCache._cmpopts):
print(line)
def main():
parser = argparse.ArgumentParser(
description='Manage flag files, to aid in updating documentation.',
usage='%(prog)s <cmd> <package> [options]')
parser.add_argument('subcommand',
help='Action (update, rst, dump).',
choices=['update', 'rst', 'dump'])
parser.add_argument('package',
help='Name of the top-level package.')
parser.add_argument('-v', '--verbose',
action='count',
default=0,
dest='verbose',
required=False,)
parser.add_argument('-i', '--input',
dest='repos',
help='Path to a python package in which options '
'should be discovered. Can be used multiple '
'times.',
required=False,
type=str,
action='append')
parser.add_argument('-o', '--output',
dest='target',
help='Directory or file in which data will be saved.\n'
'Defaults to ../../doc/config-reference/'
'source/tables for "rst".\n'
'Defaults to stdout for "dump".',
required=False,
type=str,)
args = parser.parse_args()
if args.repos is None:
args.repos = ['./sources/%s/%s' % (args.package, args.package)]
for repository in args.repos:
package_name = os.path.basename(repository)
base_path = os.path.dirname(repository)
sys.path.insert(0, base_path)
try:
__import__(package_name)
except ImportError as e:
if args.verbose >= 1:
print(str(e))
print("Failed to import: %s (%s)" % (package_name, e))
import_modules(base_path, package_name, verbose=args.verbose)
sys.path.pop(0)
overrides = _get_overrides(package_name)
options = OptionsCache(overrides, verbose=args.verbose)
options.maybe_load_extensions(args.repos)
if args.verbose > 0:
print("%s options imported from package %s." % (len(options),
str(package_name)))
if args.subcommand == 'update':
update_flagmappings(args.package, options, verbose=args.verbose)
elif args.subcommand == 'rst':
write_files(args.package, options, args.target)
elif args.subcommand == 'dump':
options.dump()
if __name__ == "__main__":
main()

View File

@ -1,289 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# A collection of tools for working with flags from OpenStack
# packages and documentation.
#
# For an example of usage, run this program with the -h switch.
#
import argparse
import os
import pickle
import subprocess
import sys
import jinja2
PROJECTS = ['aodh', 'ceilometer', 'cinder', 'glance', 'heat', 'ironic',
'keystone', 'manila', 'neutron', 'nova', 'sahara', 'trove']
MASTER_RELEASE = 'Ocata'
CODENAME_TITLE = {'aodh': 'Alarming',
'ceilometer': 'Telemetry',
'cinder': 'Block Storage',
'glance': 'Image service',
'heat': 'Orchestration',
'ironic': 'Bare Metal service',
'keystone': 'Identity service',
'manila': 'Shared File Systems service',
'murano': 'Application Catalog service',
'neutron': 'Networking',
'nova': 'Compute',
'sahara': 'Data Processing service',
'senlin': 'Clustering service',
'trove': 'Database service',
'zaqar': 'Message service'}
def setup_venv(projects, branch, novenvupdate):
"""Setup a virtual environment for `branch`."""
dirname = os.path.join('venv', branch.replace('/', '_'))
if novenvupdate and os.path.exists(dirname):
return
if not os.path.exists('venv'):
os.mkdir('venv')
args = ["./autohelp-wrapper", "-b", branch, "-e", dirname, "setup"]
args.extend(projects)
if subprocess.call(args) != 0:
print("Impossible to create the %s environment." % branch)
sys.exit(1)
def _get_packages(project, branch):
release = branch if '/' not in branch else branch.split('/')[1]
packages = [project]
try:
with open('extra_repos/%s-%s.txt' % (project, release)) as f:
packages.extend([p.strip() for p in f])
except IOError:
pass
return packages
def get_options(project, branch):
"""Get the list of known options for a project."""
print("Working on %(project)s (%(branch)s)" % {'project': project,
'branch': branch})
# And run autohelp script to get a serialized dict of the discovered
# options
dirname = os.path.join('venv', branch.replace('/', '_'))
args = ["./autohelp-wrapper", "-q", "-b", branch, "-e", dirname,
"dump", project]
path = os.environ.get("PATH")
bin_path = os.path.abspath(os.path.join(dirname, "bin"))
path = "%s:%s" % (bin_path, path)
serialized = subprocess.check_output(args,
env={'VIRTUAL_ENV': dirname,
'PATH': path})
ret = pickle.loads(serialized)
return ret
def _cmpopts(x, y):
"""Compare to option names.
The options can be of 2 forms: option_name or group/option_name. Options
without a group always comes first. Options are sorted alphabetically
inside a group.
"""
if '/' in x and '/' in y:
prex = x[:x.find('/')]
prey = y[:y.find('/')]
if prex != prey:
return cmp(prex, prey)
return cmp(x, y)
elif '/' in x:
return 1
elif '/' in y:
return -1
else:
return cmp(x, y)
def diff(old_list, new_list):
"""Compare the old and new lists of options."""
new_opts = []
new_defaults = []
deprecated_opts = []
for name, (group, option) in new_list.items():
# Find the new options
if name not in old_list.viewkeys():
new_opts.append(name)
# Find the options for which the default value has changed
elif option['default'] != old_list[name][1]['default']:
new_defaults.append(name)
# Find options that have been deprecated in the new release.
# If an option name is a key in the old_list dict, it means that it
# wasn't deprecated.
# Some options are deprecated, but not replaced with a new option.
# These options usually contain 'DEPRECATED' in their help string.
if 'DEPRECATED' in option['help']:
deprecated_opts.append((name, None))
for deprecated in option['deprecated_opts']:
# deprecated_opts is a list which always holds at least 1 invalid
# dict. Forget it.
if deprecated['name'] is None:
continue
if deprecated['group'] in [None, 'DEFAULT']:
full_name = deprecated['name']
else:
full_name = deprecated['group'] + '/' + deprecated['name']
if full_name in old_list.viewkeys():
deprecated_opts.append((full_name, name))
return new_opts, new_defaults, deprecated_opts
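A minimal, hand-rolled example of the comparison, with invented option dicts shaped like the pickled output of autohelp.py's `dump` subcommand; deprecation tracking is omitted for brevity and the Python 2 `viewkeys()` is replaced by a plain membership test:

```python
def diff_simple(old_list, new_list):
    # Same core logic as diff() above, minus deprecated-option handling.
    new_opts, new_defaults = [], []
    for name, (group, option) in new_list.items():
        if name not in old_list:
            new_opts.append(name)
        elif option['default'] != old_list[name][1]['default']:
            new_defaults.append(name)
    return new_opts, new_defaults

# Invented before/after snapshots: one changed default, one new option.
old = {'debug': ('DEFAULT', {'default': 'false'})}
new = {'debug': ('DEFAULT', {'default': 'true'}),
       'ssl/ca_file': ('ssl', {'default': ''})}
print(diff_simple(old, new))  # (['ssl/ca_file'], ['debug'])
```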
def format_option_name(name):
"""Return a formatted string for the option path."""
if name is None:
return "None"
try:
section, name = name.split('/')
except ValueError:
# name without a section ('log_dir')
return "[DEFAULT] %s" % name
return "[%s] %s" % (section, name)
def release_from_branch(branch):
if branch == 'master':
return MASTER_RELEASE
else:
return branch.replace('stable/', '').title()
def get_env(project, new_branch, old_list, new_list):
"""Generate the jinja2 environment for the defined branch and project."""
new_opts, new_defaults, deprecated_opts = diff(old_list, new_list)
release = release_from_branch(new_branch)
env = {
'release': release,
'project': project,
'codename': CODENAME_TITLE[project],
'new_opts': [],
'new_defaults': [],
'deprecated_opts': []
}
# New options
if new_opts:
for name in sorted(new_opts, _cmpopts):
opt = new_list[name][1]
name = format_option_name(name)
helptext = opt['help'].strip().replace('\n', ' ')
helptext = ' '.join(helptext.split())
cells = (("%(name)s = %(default)s" %
{'name': name,
'default': opt['default']}).strip(),
"(%(type)s) %(help)s" % {'type': opt['type'],
'help': helptext})
env['new_opts'].append(cells)
# New defaults
if new_defaults:
for name in sorted(new_defaults, _cmpopts):
old_default = old_list[name][1]['default']
new_default = new_list[name][1]['default']
if isinstance(old_default, list):
old_default = ", ".join(old_default)
if isinstance(new_default, list):
new_default = ", ".join(new_default)
name = format_option_name(name)
cells = (name, old_default, new_default)
env['new_defaults'].append(cells)
# Deprecated options
if deprecated_opts:
for deprecated, new in sorted(deprecated_opts, cmp=_cmpopts,
key=lambda tup: tup[0]):
deprecated = format_option_name(deprecated)
new = format_option_name(new)
env['deprecated_opts'].append((deprecated, new))
return env
def main():
parser = argparse.ArgumentParser(
description='Generate a summary of configuration option changes.',
usage='%(prog)s [options] <old_branch> <new_branch> [projects]')
parser.add_argument('old_branch',
help='Name of the old branch.')
parser.add_argument('new_branch',
help='Name of the new branch.')
parser.add_argument('projects',
help='List of projects to work on.',
nargs='*',
default=PROJECTS)
parser.add_argument('-i', '--input',
dest='sources',
help='Path to a folder containing the git '
'repositories.',
required=False,
default='./sources',
type=str,)
parser.add_argument('-o', '--output',
dest='target',
help='Directory or file in which data will be saved.\n'
'Defaults to "."',
required=False,
default='.',
type=str,)
parser.add_argument('-n', '--no-venv-update',
dest='novenvupdate',
help='Don\'t update the virtual envs.',
required=False,
action='store_true',
default=False,)
args = parser.parse_args()
setup_venv(args.projects, args.old_branch, args.novenvupdate)
setup_venv(args.projects, args.new_branch, args.novenvupdate)
for project in args.projects:
old_list = get_options(project, args.old_branch)
new_list = get_options(project, args.new_branch)
release = args.new_branch.replace('stable/', '')
env = get_env(project, release, old_list, new_list)
filename = ("%(project)s-conf-changes.rst" %
{'project': project})
tmpl_file = 'templates/changes.rst.j2'
if not os.path.exists(args.target):
os.makedirs(args.target)
dest = os.path.join(args.target, filename)
with open(tmpl_file) as fd:
template = jinja2.Template(fd.read(), trim_blocks=True)
output = template.render(**env)
with open(dest, 'w') as fd:
fd.write(output)
return 0
if __name__ == "__main__":
sys.exit(main())

View File

@ -1,5 +0,0 @@
docutils
jinja2
lxml
oslo.config
oslo.i18n

View File

@ -1,45 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _{{ label }}:
.. list-table:: Description of {{ nice_cat }} configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
{% for group in groups %}
* - **[{{ group }}]**
-
{% for item in items[loop.index0] %}
{% if item['default'] is equalto '' %}
* - ``{{ item['name'] }}`` =
{% else %}
* - ``{{ item['name'] }}`` = ``{{ item['default'] }}``
{% endif %}
{% for paragraph in item['help'] %}
{% if loop.first %}
- ({{ item['type'] }}) {{ paragraph }}
{% else %}
{{ paragraph }}
{% endif %}
{% endfor %}
{% for flagname, flagdesc in item['flags'] %}
- **{{ flagname }}**
{{ flagdesc }}
{% endfor %}
{% endfor %}
{% endfor %}

View File

@ -1,40 +0,0 @@
..
Warning: Do not edit this file. It is automatically generated from the
software project's code and your changes will be overwritten.
The tool to generate this file lives in openstack-doc-tools repository.
Please make any changes needed in the code, then run the
autogenerate-config-doc tool from the openstack-doc-tools repository, or
ask for help on the documentation mailing list, IRC channel or meeting.
.. _{{ label }}:
.. list-table:: Description of {{ group }} configuration options
:header-rows: 1
:class: config-ref-table
* - Configuration option = Default value
- Description
{% for item in items %}
{% if item['default'] is equalto '' %}
* - ``{{ item['name'] }}`` =
{% else %}
* - ``{{ item['name'] }}`` = ``{{ item['default'] }}``
{% endif %}
{% for paragraph in item['help'] %}
{% if loop.first %}
- ({{ item['type'] }}) {{ paragraph }}
{% else %}
{{ paragraph }}
{% endif %}
{% endfor %}
{% for flagname, flagdesc in item['flags'] %}
- **{{ flagname }}**
{{ flagdesc }}
{% endfor %}
{% endfor %}


@ -1,61 +0,0 @@
New, updated, and deprecated options in {{ release }} for {{ codename }}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{{ '~' * release|length }}~~~~~{{ '~' * codename|length }}
..
Warning: Do not edit this file. It is automatically generated and your
changes will be overwritten. The tool to do so lives in the
openstack-doc-tools repository.
{% if new_opts %}
.. list-table:: New options
:header-rows: 1
:class: config-ref-table
* - Option = default value
- (Type) Help string
{% for cells in new_opts %}
* - ``{{ cells[0] }}``
- {{ cells[1] }}
{% endfor %}
{% endif %}
{% if new_defaults %}
.. list-table:: New default values
:header-rows: 1
:class: config-ref-table
* - Option
- Previous default value
- New default value
{% for cells in new_defaults %}
* - ``{{ cells[0] }}``
{% if cells[1] is equalto '' %}
-
{% else %}
- ``{{ cells[1] }}``
{% endif %}
{% if cells[2] is equalto '' %}
-
{% else %}
- ``{{ cells[2] }}``
{% endif %}
{% endfor %}
{% endif %}
{% if deprecated_opts %}
.. list-table:: Deprecated options
:header-rows: 1
:class: config-ref-table
* - Deprecated option
- New Option
{% for cells in deprecated_opts %}
* - ``{{ cells[0] }}``
- ``{{ cells[1] }}``
{% endfor %}
{% endif %}
{% if not new_opts and not new_defaults and not deprecated_opts %}
There are no new, updated, or deprecated options
in {{ release }} for {{ codename }}.
{% endif %}


@ -1,113 +0,0 @@
#!/bin/bash -e
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
DIRECTORY=$1
if [ -z "$DIRECTORY" ] ; then
echo "usage: $0 DIRECTORY options"
echo "Options are:"
echo "--tag TAG: Use given tag for building"
echo "--target TARGET: Copy files to publish-docs/\$TARGET"
echo "--build BUILD: Name of build directory"
echo "--linkcheck: Check validity of links instead of building"
echo "--pdf: PDF file generation"
exit 1
fi
TARGET=""
TAG=""
TAG_OPT=""
BUILD=""
LINKCHECK=""
PDF=""
while [[ $# -gt 0 ]] ; do
option="$1"
case $option in
--build)
BUILD="$2"
shift
;;
--linkcheck)
LINKCHECK=1
;;
--tag)
TAG="$2"
TAG_OPT="-t $2"
shift
;;
--target)
TARGET="$2"
shift
;;
--pdf)
PDF=1
;;
esac
shift
done
if [ -z "$BUILD" ] ; then
if [ -z "$TAG" ] ; then
BUILD_DIR="$DIRECTORY/build/html"
BUILD_DIR_PDF="$DIRECTORY/build/pdf"
else
BUILD_DIR="$DIRECTORY/build-${TAG}/html"
BUILD_DIR_PDF="$DIRECTORY/build-${TAG}/pdf"
fi
else
BUILD_DIR="$DIRECTORY/$BUILD/html"
BUILD_DIR_PDF="$DIRECTORY/$BUILD/pdf"
fi
DOCTREES="${BUILD_DIR}.doctrees"
if [ -z "$TAG" ] ; then
echo "Checking $DIRECTORY..."
else
echo "Checking $DIRECTORY with tag $TAG..."
fi
if [ "$LINKCHECK" = "1" ] ; then
# Show sphinx-build invocation for easy reproduction
set -x
sphinx-build -E -W -d $DOCTREES -b linkcheck \
$TAG_OPT $DIRECTORY/source $BUILD_DIR
set +x
else
# Show sphinx-build invocation for easy reproduction
set -x
sphinx-build -E -W -d $DOCTREES -b html \
$TAG_OPT $DIRECTORY/source $BUILD_DIR
set +x
# PDF generation
if [ "$PDF" = "1" ] ; then
set -x
sphinx-build -E -W -d $DOCTREES -b latex \
$TAG_OPT $DIRECTORY/source $BUILD_DIR_PDF
make -C $BUILD_DIR_PDF
cp $BUILD_DIR_PDF/*.pdf $BUILD_DIR/
set +x
fi
# Copy RST (and PDF)
if [ "$TARGET" != "" ] ; then
mkdir -p publish-docs/$TARGET
rsync -a $BUILD_DIR/ publish-docs/$TARGET/
# Remove unneeded build artefact
rm -f publish-docs/$TARGET/.buildinfo
fi
fi
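The option handling at the top of this script follows the common shift-based pattern; a standalone sketch with hypothetical values:

```shell
#!/bin/bash
# Minimal sketch of the shift-based option parsing used above.
# Option names and values here are hypothetical.
TAG=""
BUILD=""
parse_options() {
    while [[ $# -gt 0 ]] ; do
        option="$1"
        case $option in
            --tag)
                TAG="$2"
                shift   # consume the option's value
                ;;
            --build)
                BUILD="$2"
                shift
                ;;
        esac
        shift           # consume the option itself
    done
}
parse_options --tag mitaka --build build-dir
echo "TAG=$TAG BUILD=$BUILD"
```

Options that take a value call `shift` twice in total: once for the value inside the case arm, once for the flag at the bottom of the loop.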


@ -1,337 +0,0 @@
#!/bin/bash -xe
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
INSTALL_TAGS="obs rdo ubuntu debian debconf"
FIRSTAPP_TAGS="libcloud dotnet fog openstacksdk pkgcloud shade"
# This marker is needed for Infra publishing and needs to go into the
# root directory of each translated manual as file ".root-marker".
MARKER_TEXT="Project: $ZUUL_PROJECT Ref: $ZUUL_REFNAME Build: $ZUUL_UUID Revision: $ZUUL_NEWREV"
function build_rst {
language=$1
book=$2
local ret
# First build all the single po files
# Note that we need to run inside a venv: the venv we are run in
# uses SitePackages=True, so the global Sphinx would be used, and
# that does not work with a locally installed openstackdocstheme.
# We therefore install Sphinx together with openstackdocstheme in
# the venv.
TAG=""
# We need to extract all strings, so add all supported tags
if [ ${book} = "firstapp" ] ; then
TAG="-t libcloud -t fog -t dotnet -t openstacksdk -t pkgcloud -t shade"
fi
if [ ${book} = "install-guide" ] ; then
TAG="-t obs -t rdo -t ubuntu -t debian"
fi
COMMON="common"
LOCALE_DIR="${DOC_DIR}${book}/source/locale/"
COMMON_DIR="${DOC_DIR}${COMMON}/source/locale/"
tox -evenv -- sphinx-build -q -E -W -b gettext $TAG \
${DOC_DIR}${book}/source/ ${LOCALE_DIR}
# Merge the common po file
if [[ -e ${COMMON_DIR}${language}/LC_MESSAGES/${COMMON}.po ]] ; then
msgcat --use-first -o ${LOCALE_DIR}${language}/${book}.po \
${LOCALE_DIR}${language}/LC_MESSAGES/${book}.po \
${COMMON_DIR}${language}/LC_MESSAGES/${COMMON}.po
mv -f ${LOCALE_DIR}${language}/${book}.po \
${LOCALE_DIR}${language}/LC_MESSAGES/${book}.po
fi
# Now run msgmerge on all files
for f in ${LOCALE_DIR}*.pot ; do
# Skip the master file
if [ $f = "${LOCALE_DIR}${book}.pot" ] ; then
continue
fi
bf=$(basename $f)
# Remove .pot
bfname=${bf%.pot}
msgmerge --silent \
-o ${LOCALE_DIR}${language}/LC_MESSAGES/${bfname}.po \
${LOCALE_DIR}${language}/LC_MESSAGES/${book}.po \
${LOCALE_DIR}${bf}
msgfmt ${LOCALE_DIR}${language}/LC_MESSAGES/${bfname}.po \
-o ${LOCALE_DIR}${language}/LC_MESSAGES/${bfname}.mo
done
# Set the bug project to I18n project
set +e
grep 'bug_project' ${DOC_DIR}${book}/source/conf.py > /dev/null
ret=$?
set -e
if [ "$ret" -eq 0 ] ; then
# Replace the existing "bug_project" html context
sed -i -e \
's/"bug_project" *: *[^ ,}]*/"bug_project": "openstack-i18n"/' \
${DOC_DIR}${book}/source/conf.py
else
# Add the "bug_project" html context
sed -i -e \
's/html_context *= *{/html_context = { \
"bug_project": "openstack-i18n", /' \
${DOC_DIR}${book}/source/conf.py
fi
# Build all books
if [ ${book} = "firstapp" ] ; then
# Firstapp has several variations, build all of them
for tag in $FIRSTAPP_TAGS ; do
BUILD_DIR="${DOC_DIR}${book}/build-${tag}/html"
DOCTREES="${BUILD_DIR}.doctrees"
tox -evenv -- sphinx-build -q -E \
-t $tag -D language=${language} \
-d ${DOCTREES} \
${DOC_DIR}${book}/source/ \
${BUILD_DIR}
PUBLISH_DIR=publish-docs/${language}/${book}-${tag}
mkdir -p ${PUBLISH_DIR}
rsync -a ${DOC_DIR}${book}/build-${tag}/html/ ${PUBLISH_DIR}
echo $MARKER_TEXT > ${PUBLISH_DIR}/.root-marker
done
elif [ ${book} = "install-guide" ] ; then
# Install Guide has several variations, build all of them
INDEX=${DOC_DIR}${book}/source/index.rst
# For translation work, we should have only one index file,
# because our tools generate translation resources from
# only one index file.
# Therefore, this tool uses one combined index file
# while processing title and toctree for each distribution.
# Save and restore the index file
cp -f ${INDEX} ${INDEX}.save
trap "mv -f ${INDEX}.save ${INDEX}" EXIT
for tag in $INSTALL_TAGS; do
if [[ "$tag" == "debconf" ]]; then
# Not all branches have this directory
if [[ -d ${DOC_DIR}${book}-${tag}/source ]] ; then
# Build the guide with debconf
# To use debian only contents, use "debian" tag.
BUILD_DIR="${DOC_DIR}${book}-${tag}/build-${tag}/html"
DOCTREES="${BUILD_DIR}.doctrees"
tox -evenv -- sphinx-build -q -E -t debian \
-D language=${language} \
-d ${DOCTREES} \
${DOC_DIR}${book}-${tag}/source/ \
${BUILD_DIR}
PUBLISH_DIR=publish-docs/${language}/${book}-${tag}
mkdir -p ${PUBLISH_DIR}
rsync -a ${DOC_DIR}${book}-${tag}/build-${tag}/html/ \
${PUBLISH_DIR}
echo $MARKER_TEXT > ${PUBLISH_DIR}/.root-marker
fi
else
##
# Because Sphinx uses the first heading as the title regardless of
# the "only" directive, replace the title directive with the proper
# title for each distribution to set the title explicitly.
title=$(grep -m 1 -A 5 "^.. only:: ${tag}" ${INDEX} | \
sed -n 4p | sed -e 's/^ *//g')
sed -i -e "s/\.\. title::.*/.. title:: ${title}/" ${INDEX}
##
# Sphinx builds the navigation before processing directives,
# so the conditional toctree does not work.
# We need to prepare a single toctree specific to each
# distribution before executing sphinx-build.
# Build the guide
BUILD_DIR="${DOC_DIR}${book}/build-${tag}/html"
DOCTREES="${BUILD_DIR}.doctrees"
tox -evenv -- sphinx-build -q -E -t $tag \
-D language=${language} \
-d ${DOCTREES} \
${DOC_DIR}${book}/source/ \
${BUILD_DIR}
PUBLISH_DIR=publish-docs/${language}/${book}-${tag}
mkdir -p ${PUBLISH_DIR}
rsync -a ${DOC_DIR}${book}/build-${tag}/html/ \
${PUBLISH_DIR}
echo $MARKER_TEXT > ${PUBLISH_DIR}/.root-marker
fi
done
else
BUILD_DIR="${DOC_DIR}${book}/build/html"
DOCTREES="${BUILD_DIR}.doctrees"
tox -evenv -- sphinx-build \
-q -E -D language=${language} \
-d ${DOCTREES} \
${DOC_DIR}${book}/source/ \
${BUILD_DIR}
PUBLISH_DIR=publish-docs/${language}/${book}/
mkdir -p ${PUBLISH_DIR}
rsync -a ${DOC_DIR}${book}/build/html/ ${PUBLISH_DIR}
echo $MARKER_TEXT > ${PUBLISH_DIR}/.root-marker
fi
# Remove newly created files
git clean -f -q ${LOCALE_DIR}${language}/LC_MESSAGES/*.po
git clean -f -x -q ${LOCALE_DIR}${language}/LC_MESSAGES/*.mo
git clean -f -q ${LOCALE_DIR}*.pot
# Revert changes to po file
git reset -q ${LOCALE_DIR}${language}/LC_MESSAGES/${book}.po
git checkout -- ${LOCALE_DIR}${language}/LC_MESSAGES/${book}.po
# Revert changes to conf.py
git reset -q ${DOC_DIR}${book}/source/conf.py
git checkout -- ${DOC_DIR}${book}/source/conf.py
}
function test_language {
language=$1
echo
echo "Building for language $language"
echo
args=("-v")
if [[ $PURPOSE == "publish" ]]; then
args+=("--publish")
fi
args+=("--check-build" "-l $language")
for book in ${BOOKS["$language"]}; do
if [ ${SPECIAL_BOOKS[$book]+_} ] ; then
if [ ${SPECIAL_BOOKS[$book]} = "RST" ] ; then
echo "Building translated RST book $book for $language"
build_rst $language $book
continue
fi
fi
done
}
function handle_draft_language {
language=$1
echo
echo "Moving drafts for language $language"
echo
mkdir -p publish-docs/draft/$language
for book in ${DRAFTS["$language"]}; do
case "${book}" in
config-reference)
mv publish-docs/$language/draft/$book \
publish-docs/draft/$language/$book
rmdir --ignore-fail-on-non-empty publish-docs/$language/draft
;;
firstapp)
for tag in $FIRSTAPP_TAGS; do
mv publish-docs/$language/$book-${tag} \
publish-docs/draft/$language/$book-${tag}
done
rmdir --ignore-fail-on-non-empty publish-docs/$language/
;;
install-guide)
for tag in $INSTALL_TAGS ; do
# Not all tags might be built on all branches
if [[ -d publish-docs/$language/$book-${tag} ]] ; then
mv publish-docs/$language/$book-${tag} \
publish-docs/draft/$language/$book-${tag}
fi
done
rmdir --ignore-fail-on-non-empty publish-docs/$language/
;;
*)
mv publish-docs/$language/$book \
publish-docs/draft/$language/$book
;;
esac
done
}
function usage {
echo "usage: $0 CONF_FILE PURPOSE LANGUAGE1 LANGUAGE2 ..."
echo
echo "CONF_FILE is the path to the configuration file."
echo
echo "PURPOSE is either 'test' or 'publish'."
echo
echo "LANGUAGE is either 'all' or 'LANG'."
echo "LANG is a language code like 'fr' or 'ja'."
}
# Declare in case it's not in the file
declare -A SPECIAL_BOOKS
declare -A DRAFTS
CONF_FILE=$1
shift
if [[ -z $CONF_FILE ]]; then
usage
exit 1
fi
if [[ ! -e $CONF_FILE ]]; then
echo "Error: the configuration file '$CONF_FILE' does not exist"
exit 1
fi
source $CONF_FILE
if [[ -z $(declare -p BOOKS 2> /dev/null | grep 'declare -A BOOKS') || \
-z $(declare -p DIRECTORIES 2> /dev/null | \
grep 'declare -A DIRECTORIES') || \
-z $DOC_DIR ]]; then
echo "Error: the configuration file '$CONF_FILE' is invalid"
exit 1
fi
case "$1" in
test|publish)
PURPOSE=$1
shift
;;
*)
usage
exit 1
;;
esac
for language in "$@" ; do
case "$language" in
all)
for language in "${!BOOKS[@]}"; do
test_language $language
done
# Move draft language guides
for language in "${!DRAFTS[@]}"; do
handle_draft_language $language
done
;;
*)
if [[ -n ${BOOKS[$language]} ]]; then
test_language $language
if [ ${DRAFTS["${language}"]+_} ] ; then
handle_draft_language $language
fi
else
echo "Error: language $language not handled"
fi
;;
esac
done
exit 0


@ -1,36 +0,0 @@
# Example configuration for the languages 'ja' and 'fr'.
# Directories to set up
declare -A DIRECTORIES=(
["ja"]="common glossary high-availability-guide image-guide install-guide user-guide"
["fr"]="common glossary user-guide"
)
# Books to build
declare -A BOOKS=(
["ja"]="high-availability-guide image-guide install-guide user-guide"
["fr"]="user-guide"
)
# draft books
declare -A DRAFTS=(
["fr"]="image-guide"
["ja"]="install-guide"
["pt_BR"]="install-guide"
["zh_CN"]="install-guide"
)
# Where does the top-level pom live?
# Set to empty to not copy it.
POM_FILE=doc/pom.xml
# Location of doc dir
DOC_DIR="doc/"
# Books with special handling
# Values need to match content in project-config/jenkins/scripts/common_translation_update.sh
declare -A SPECIAL_BOOKS
SPECIAL_BOOKS=(
["user-guides"]="RST"
["networking-guide"]="skip"
)
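The driver script sources a configuration file like this one and iterates over the associative arrays it declares; a minimal sketch of that lookup (with a trimmed-down BOOKS array):

```shell
#!/bin/bash
# Sketch of how a driver script consumes the associative arrays
# declared in a configuration file like the one above.
declare -A BOOKS=(
    ["ja"]="install-guide user-guide"
    ["fr"]="user-guide"
)
# "${!BOOKS[@]}" expands to the keys (languages); each value is a
# whitespace-separated list of book names.
for language in "${!BOOKS[@]}"; do
    for book in ${BOOKS[$language]}; do
        echo "building $book for $language"
    done
done
```

Note that associative arrays require bash 4 or later; the script must not be run with plain `sh`.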


@ -1,89 +0,0 @@
#!/bin/bash
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
if [[ $# -ne 1 ]]; then
echo "usage: $0 PROJECT"
echo
echo "PROJECT = something like nova, neutron or glance"
exit 1
fi
project=$1
# check whether a command exists
function does_exist {
which "$@" > /dev/null 2>&1
local status=$?
if [[ $status -ne 0 ]]; then
echo "error: $1 not installed"
exit 1
fi
}
does_exist virtualenv
does_exist pip
does_exist git
if [[ ! -e $HOME/.gitconfig ]]; then
echo "note: ~/.gitconfig does not exist"
fi
if [[ ! -e .venv ]]; then
virtualenv -p python2.7 .venv
fi
source .venv/bin/activate
pip install --upgrade openstack-doc-tools
pip install --upgrade pbr
# Cliff can optionally output HTML if cliff-tablib is installed.
pip install --upgrade cliff-tablib
# OSProfiler is an OpenStack cross-project profiling library.
pip install --upgrade osprofiler
if [[ $project == 'aodh' ]]; then
pip install --upgrade ${project}client
elif [[ $project == 'gnocchi' ]]; then
pip install --upgrade ${project}client
else
pip install --upgrade python-${project}client
fi
rm -rf output
mkdir output
# Work around until python-cinderclient uses 3.latest as a default
OS_VOLUME_API_VERSION=3.latest \
openstack-auto-commands --output-dir output $project
if [[ ! -e openstack-manuals ]]; then
git clone git://git.openstack.org/openstack/openstack-manuals
fi
cd openstack-manuals
( git remote -v | grep -q gerrit ) || git review -s
git checkout master
git pull
branch=cli-reference
git branch --list $branch && git branch -D $branch
git checkout -b $branch
mv ../output/${project}.rst "doc/cli-reference/source"
rm -rf ../output
version=$($project --version 2>&1)
version=${version##*\)}
git commit -a -m "[cli-ref] Update python-${project}client to ${version##* }"
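The `does_exist` helper above fails fast when a required command is missing; the same check can be sketched with the `command -v` builtin, a more portable stand-in for `which`:

```shell
#!/bin/bash
# Sketch of a command-existence check, using the portable
# "command -v" builtin instead of "which".
does_exist() {
    command -v "$1" > /dev/null 2>&1
    local status=$?
    if [[ $status -ne 0 ]]; then
        echo "error: $1 not installed"
        return 1
    fi
}
# "sh" is expected on any POSIX system; the second name is made up.
does_exist sh && echo "sh found"
```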


@ -1,9 +0,0 @@
# This is a cross-platform list tracking distribution packages needed by tests;
# see http://docs.openstack.org/infra/bindep/ for additional information.
libssl-dev [platform:dpkg]
openssl-devel [platform:rpm]
python-dev [platform:dpkg]
python3-all [platform:dpkg !platform:ubuntu-precise]
python3-all-dev [platform:dpkg !platform:ubuntu-precise]
python3-devel [platform:fedora]


@ -1,72 +0,0 @@
#!/usr/bin/env python
"""A script to prettify HTML and XML syntax.
Some examples of the prettified syntax are available
in the following changes:
* https://review.openstack.org/#/c/98652/
* https://review.openstack.org/#/c/98653/
* https://review.openstack.org/#/c/98655/
"""
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import print_function
import argparse
import sys
from bs4 import BeautifulSoup
def parse_command_line_arguments():
"""Parse the command line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument("--write-changes", action="store_true", default=False,
help="Write prettified XML or HTML syntax "
"back to file.")
parser.add_argument("file", type=str, default=None,
help="An XML or HTML file to prettify.")
return parser.parse_args()
def main():
"""Entry point for this script."""
args = parse_command_line_arguments()
try:
soup = BeautifulSoup(open(args.file))
except IOError as exception:
print("ERROR: File '%s' could not be parsed: %s"
% (args.file, exception))
return 1
if args.write_changes:
try:
with open(args.file, 'wb') as output:
prettified = soup.prettify(encoding="utf8")
output.write(prettified)
except IOError as exception:
print("ERROR: File '%s' could not be written: %s"
% (args.file, exception))
return 1
else:
prettified = soup.prettify(encoding="utf8")
print(prettified)
return 0
if __name__ == '__main__':
sys.exit(main())
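The script relies on BeautifulSoup's `prettify`, which handles HTML as well as XML. For XML input alone, the same effect can be sketched with the standard library (a stdlib stand-in, not what the script itself uses):

```python
from xml.dom import minidom

# Stdlib sketch of pretty-printing XML; the input string here is a
# made-up example. The script above uses BeautifulSoup instead so it
# can also cope with (possibly malformed) HTML.
ugly = "<book><title>Install Guide</title><chapter/></book>"
pretty = minidom.parseString(ugly).toprettyxml(indent="  ")
print(pretty)
```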


@ -1,24 +0,0 @@
#!/bin/sh
## copyright: B1 Systems GmbH <info@b1-systems.de>, 2013.
## author: Christian Berendt <berendt@b1-systems.de>, 2013.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Call ./tools/cleanup/remove_trailing_whitespaces.sh in the
# root of openstack-manuals.
files=$(find doc -name '*.xml' -not -name pom.xml)
for file in $files; do
sed -i -e 's/[[:space:]]*$//' $file
done


@ -1,69 +0,0 @@
#!/usr/bin/env python
# copyright: B1 Systems GmbH <info@b1-systems.de>, 2013.
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Call ./tools/cleanup/remove_unnecessary_spaces.py in the
# root of openstack-manuals.
import os
import re
import shutil
import tempfile
# should be the same like in tools/validate.py
FILE_EXCEPTIONS = ['ha-guide-docinfo.xml',
'bk001-ch003-associate-general.xml']
elements = [
'listitem',
'para',
'td',
'th',
'command',
'literal',
'title',
'caption',
'filename',
'userinput',
'programlisting'
]
checks = []
for element in elements:
checks.append(re.compile(r"(.*<%s>)\s+([\w\-().:!?{}\[\]]+.*\n)"
% element))
checks.append(re.compile(r"(.*[\w\-().:!?{}\[\]]+)\s+(<\/%s>.*\n)"
% element))
for root, dirs, files in os.walk('doc/'):
for f in files:
if (not (f.endswith('.xml') and
f != 'pom.xml' and
f not in FILE_EXCEPTIONS)):
continue
docfile = os.path.abspath(os.path.join(root, f))
tmpfile = tempfile.mkstemp()
tmpfd = os.fdopen(tmpfile[0], "w")
match = False
for line in open(docfile, 'r'):
for check in checks:
if check.match(line):
line = check.sub(r"\1\2", line)
match = True
tmpfd.write(line)
tmpfd.close()
if match:
shutil.copyfile(tmpfile[1], docfile)
os.unlink(tmpfile[1])
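Each compiled check strips whitespace that directly follows an opening tag or precedes a closing tag; applied to a single hypothetical line:

```python
import re

# One of the generated checks above, instantiated for <para>.
# The sample line is a made-up example.
check = re.compile(r"(.*<para>)\s+([\w\-().:!?{}\[\]]+.*\n)")

line = "<para>   Compute hosts require networking.\n"
# The substitution joins the two captured groups, dropping the
# whitespace that sat between them.
fixed = check.sub(r"\1\2", line)
print(repr(fixed))
```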


@ -1,69 +0,0 @@
# retf.py
This script applies a set of regular expressions onto a set of files
to automatically identify and fix typographical errors.
## What does RETF mean?
RETF means RegExTypoFix or Regular Expression Typographical error Fixer
and is a set of regular expressions to find and fix common misspellings
and grammatical errors.
The regular expressions are available at
https://en.wikipedia.org/wiki/Wikipedia:AutoWikiBrowser/Typos.
## Usage
There are two ways to define the set of files. First you can simply add
single files using the parameter ```--file```.
```$ ./retf.py --file path/to/file1 path/to/file2 path/to/file3```
You can also specify paths using the parameter ```--path```; these paths
are scanned for files.
```$ ./retf.py --path path/with/files/1 path/with/files/2```
To avoid using all files inside the specified paths, it's possible to filter
by file extension.
```$ ./retf.py --path path/with/files --extension xml txt rst```
It's possible to use the parameters ```--path``` and ```--file``` together.
By default the script will only check for findings in all specified files.
To automatically write back resolved findings add the parameter
```--write-changes```. Findings will then be written to a copy with
the ending ```.retf```.
To fix findings directly in the files add the parameter
```--in-place```. Findings will then be fixed directly in the files. A backup file
with the ending ```.orig``` will be created. To disable backups add the
parameter ```--no-backup```.
To download the latest RETF rules from Wikipedia use the parameter ```--download```.
## Needed Python modules
* beautifulsoup4 / bs4 (https://pypi.python.org/pypi/beautifulsoup4)
* glob2 (https://pypi.python.org/pypi/glob2)
* pyyaml (https://pypi.python.org/pypi/pyaml)
* regex (https://pypi.python.org/pypi/regex)
* six (https://pypi.python.org/pypi/six)
To install the needed modules you can use pip or the package management system included
in your distribution. When using the package management system, the package names may
differ. When using pip, it may be necessary to install some development packages first.
For example, on Ubuntu 14.04 LTS you have to install ```libyaml-dev``` for ```pyyaml```
and ```python-dev``` for ```regex```.
```
$ pip install beautifulsoup4
$ pip install glob2
$ pip install pyyaml
$ pip install regex
$ pip install six
```
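At its core, each RETF rule is just a find/replace pair applied with `subn`; a minimal sketch using a made-up rule (the real script uses the third-party `regex` module and rules downloaded from Wikipedia):

```python
import re

# Hypothetical RETF-style rule; real rules are compiled from the
# downloaded Wikipedia listing and use the third-party regex module.
rule = {
    'description': 'teh -> the',
    'regex': re.compile(r'\bteh\b'),
    'replace': 'the',
}

content = "Fix teh typo in teh docs."
# subn returns the rewritten text plus the number of findings,
# which the script accumulates per file.
fixed, findings = rule['regex'].subn(rule['replace'], content)
print(findings, fixed)
```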


@ -1 +0,0 @@
---


@ -1,307 +0,0 @@
#!/usr/bin/env python
"""This script applies a set of regular expressions onto a set of files
to automatically identify and fix typographical errors.
"""
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Based on the idea of 'Topy' written by Marti Raudsepp <marti@juffo.org>.
# Topy is available on Github at https://github.com/intgr/topy.
import argparse
import logging
import os
import shutil
import sys
from six.moves.urllib import error as urlerr
from six.moves.urllib import request as urlreq
from bs4 import BeautifulSoup
import glob2
import regex
import six
import yaml
class DownloadRetfListingFailed(Exception):
"""Exception for failed downloads of the RETF listing.
Exception will be raised when the download of the RETF
listing failed or the destination file could not be written.
"""
pass
def download_listing(dest):
"""Download the latest RETF listing from Wikipedia."""
logger = logging.getLogger('retf')
try:
url = ('https://en.wikipedia.org/wiki/Wikipedia:AutoWikiBrowser/'
'Typos?action=raw')
logger.debug("Downloading latest RETF listing from %s into %s.",
url, dest)
response = urlreq.urlopen(url)
data = response.read()
logger.info("Downloading latest RETF listing from %s succeeded.", url)
except urlerr.HTTPError as ex:
raise DownloadRetfListingFailed(six.text_type(ex))
except urlerr.URLError as ex:
raise DownloadRetfListingFailed(six.text_type(ex))
try:
with open(dest, 'w+') as write:
write.write(data)
logger.info("Writing RETF listing to file %s succeeded.", dest)
except IOError as ex:
raise DownloadRetfListingFailed(six.text_type(ex))
def soupify_listing(src):
"""Parse a RETF listing."""
return BeautifulSoup(open(src))
def generate_listing(src):
"""Compile all regular expressions in a RETF listing."""
logger = logging.getLogger('retf')
result = []
soup = soupify_listing(src)
for typo in soup.findAll('typo'):
try:
word = typo.attrs.get('word').encode('utf8')
find = typo.attrs.get('find').encode('utf8')
replace = typo.attrs.get('replace').encode('utf8')
replace = replace.replace(b'$', b'\\')
except AttributeError:
continue
# pylint: disable=W0703
try:
logger.debug("Compiling regular expression: %s.", find)
compiled = regex.compile(find, flags=regex.V1)
except Exception:
logger.error("Compilation of regular expression %s failed.", find)
continue
# pylint: enable=W0703
entry = {
'description': word,
'find': find,
'replace': replace,
'regex': compiled
}
result.append(entry)
logger.debug("Compiled %d regular expression(s).", len(result))
return result
def load_text_from_file(src):
"""Load content from a file."""
logger = logging.getLogger('retf')
logger.debug("Loading text from file %s.", src)
with open(src, 'rb') as fpointer:
text = fpointer.read()
return text
def write_text_to_file(dest, text, no_backup, in_place):
"""Write content into a file."""
logger = logging.getLogger('retf')
if not no_backup:
logger.debug("Copying %s to backup file %s.orig.", dest, dest)
shutil.copy2(dest, "%s.orig" % dest)
if not in_place:
dest = "%s.retf" % dest
logger.debug("Writing text to file %s.", dest)
with open(dest, 'wb') as fpointer:
fpointer.write(text)
def initialize_logging(debug, less_verbose):
"""Initialize the Logger."""
logger = logging.getLogger(name='retf')
formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
if less_verbose:
logger.setLevel(logging.WARN)
if debug:
logger.setLevel(logging.DEBUG)
return logging.getLogger('retf')
def parse_command_line_arguments():
"""Parse the command line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument("--debug", help="Print debugging messages.",
action="store_true", default=False)
parser.add_argument("--download", help="Download the latest RETF listing.",
action="store_true", default=False)
parser.add_argument("--less-verbose", help="Be less verbose.",
action="store_true", default=False)
parser.add_argument("--no-backup", help="Don't backup files.",
action="store_true", default=False)
parser.add_argument("--in-place", help="Resolve found errors in place.",
action="store_true", default=False)
parser.add_argument("--write-changes", action="store_true", default=False,
help="Write resolved findings back to files.")
parser.add_argument("--disabled", type=str, default=None,
help="File containing the disabled rules.")
parser.add_argument("--listing", help="File containing the RETF listing.",
type=str, default=os.path.join(
os.path.dirname(os.path.realpath(__file__)),
'retf.lst'))
parser.add_argument("--path", type=str, nargs='*', default=[],
help="Path(s) that should be checked.")
parser.add_argument("--extension", type=str, nargs='*', default=[],
help="Only check files with specified extension(s).")
parser.add_argument("--file", nargs='*', type=str, default=[],
help="File(s) to check for typographical errors.")
return (parser, parser.parse_args())
def load_disabled_rules(src):
"""Load disabled rules from YAML file."""
logger = logging.getLogger('retf')
listing = []
if src:
try:
listing = yaml.safe_load(open(src))
for rule in listing:
logger.debug("Rule '%s' is disabled.", rule)
except IOError:
logger.error("loading disabled rules from file %s failed", src)
return listing
def get_file_listing(paths, files, extensions):
"""Generate a listing of all files that should be checked."""
result = []
if files:
result += files
# pylint: disable=E1101
for path in paths:
if extensions:
for extension in extensions:
result += glob2.glob("%s/**/*.%s" % (path, extension))
else:
result += glob2.glob("%s/**/*" % path)
# pylint: enable=E1101
return result
def check_file(src, rules, disabled):
"""Apply a set of rules to a file."""
logger = logging.getLogger('retf')
logger.info("Checking file %s for typographical errors.", src)
content = load_text_from_file(src)
findings = 0
for rule in rules:
if rule.get('description') in disabled:
continue
logger.debug("%s: checking rule '%s'.", src,
rule.get('description'))
logger.debug(rule.get('find'))
newcontent, count = rule.get('regex').subn(
rule.get('replace'), content
)
if count > 0:
logger.warning("%d match(es) in file %s: %s.", count, src,
rule.get('description'))
findings += count
content = newcontent
return (findings, content)
def main():
"""Entry point for this script."""
parser, args = parse_command_line_arguments()
logger = initialize_logging(args.debug, args.less_verbose)
result = 0
if args.download:
try:
download_listing(args.listing)
except DownloadRetfListingFailed as ex:
logger.error("Downloading latest RETF listing failed: %s.", ex)
result = 1
if not args.path and not args.file and not args.download:
parser.print_help()
result = 2
if not result and not os.path.isfile(args.listing):
logger.error("RETF listing not found at %s.", args.listing)
logger.info("Please download the RETF listing first by using the "
"parameter --download.")
result = 1
if not result:
files = get_file_listing(args.path, args.file, args.extension)
rules = generate_listing(args.listing)
disabled = load_disabled_rules(args.disabled)
all_findings = 0
for check in files:
if not os.path.isfile(check):
continue
(findings, content) = check_file(check, rules, disabled)
if findings > 0:
all_findings += findings
logger.warning("%s finding(s) in file %s.", findings, check)
if findings > 0 and args.write_changes:
write_text_to_file(check, content, args.no_backup,
args.in_place)
if all_findings > 0:
logger.warning("%s finding(s) in all checked files.", all_findings)
result = 1
return result
if __name__ == "__main__":
sys.exit(main())


@ -1,21 +0,0 @@
[DEFAULT]
repo_name = openstack-doc-tools
api_site=True
# From api-ref/src/wadls/object-api/src/
file_exception=os-object-api-1.0.wadl
ignore_dir=incubation
ignore_dir=openstack-compute-api-1.0
# These two (or three) options need to come as pairs/triplets.
# Either add publish_pair for each book/target_dir pair or not at all.
# If publish_dir is not specified, book is used as publish_dir.
book = api-quick-start
target_dir = target/docbkx/webhelp/api-quick-start-onepager-external
#publish_dir = api-quick-start
book = api-ref
target_dir = target/docbkx/html
#publish_dir = api-ref

View File

@@ -1 +0,0 @@
.. include:: ../../autogenerate_config_docs/README.rst

View File

@@ -1,94 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'openstackdocstheme'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
repository_name = 'openstack/openstack-doc-tools'
bug_tag = u'openstack-doc-tools'
project = u'OpenStack-doc-tools'
copyright = u'2017, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = [openstackdocstheme.get_html_theme_path()]
# html_static_path = ['static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# So that we can enable "log-a-bug" links from each output HTML page, this
# variable must be set to a format that includes year, month, day, hours and
# minutes.
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'http://docs.python.org/': None}
# -- Options for manual page output -------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = []
# If true, show URL addresses after external links.
# man_show_urls = False

View File

@@ -1 +0,0 @@
.. include:: ../../README.rst

View File

@@ -1,21 +0,0 @@
==============================================
Welcome to openstack-doc-tools' documentation!
==============================================
Contents:
.. toctree::
:maxdepth: 2
doc-tools-readme
installation
usage
autogenerate_config_docs
man/openstack-doc-test
sitemap-readme
release_notes
Search
~~~~~~
* :ref:`search`

View File

@@ -1,16 +0,0 @@
============
Installation
============
At the command line:
.. code-block:: console
$ pip install openstack-doc-tools
Or, if you have virtualenvwrapper installed:
.. code-block:: console
$ mkvirtualenv openstack-doc-tools
$ pip install openstack-doc-tools

View File

@@ -1,117 +0,0 @@
==================
openstack-doc-test
==================
------------------------------------------------------
OpenStack Validation tool
------------------------------------------------------
SYNOPSIS
========
openstack-doc-test [options]
DESCRIPTION
===========
openstack-doc-test validates the OpenStack documentation
content.
OPTIONS
=======
**General options**
**--api-site**
Special handling for api-site and other API repositories
to handle WADL.
**--build-file-exception BUILD_FILE_EXCEPTION**
File that will be skipped during delete and build checks to
generate dependencies. This should be done for invalid XML files
only.
**--check-build**
Try to build books using modified files.
**--check-deletions**
Check that deleted files are not used.
**--check-links**
Check that linked URLs are valid and reachable.
**--check-niceness**
Check the niceness of files, for example whitespace.
**--check-syntax**
Check the syntax of modified files.
**--check-all**
Run all checks (default if no arguments are given).
**--config-file PATH**
Path to a config file to use. Multiple config files can be
specified, with values in later files taking precedence.
**--debug**
Enable debug code.
**--file-exception FILE_EXCEPTION**
File that will be skipped during niceness and syntax validation.
**--force**
Force the validation of all files and build all books.
**-h, --help**
Show help message and exit.
**--ignore-dir IGNORE_DIR**
Directory to ignore for building of manuals. The parameter can
be passed multiple times to add several directories.
**--language LANGUAGE, -l LANGUAGE**
Build translated manual for language in path generate/$LANGUAGE.
**--only-book ONLY_BOOK**
Build each specified manual.
**--parallel**
Build books in parallel (default).
**--print-unused-files**
Print list of files that are not included anywhere as part of
check-build.
**--publish**
Setup content in publish-docs directory for publishing to
external website.
**--verbose**
Verbose execution.
**--version**
Output version number.
FILES
=====
Reads the file `doc-test.conf` in the top-level directory of the git
repository for option processing.
Building of books will generate in the top-level directory of the git
repository:
* a directory `publish-docs` with a copy of the build results.
* for each book built, a log file named `build-${book}.log.gz`.
SEE ALSO
========
* `OpenStack Documentation <http://wiki.openstack.org/wiki/Documentation>`__
Bugs
====
* openstack-doc-tools is hosted on Launchpad so you can view current
bugs at
`Bugs : openstack-doc-tools <https://bugs.launchpad.net/openstack-doc-tools/>`__

View File

@@ -1 +0,0 @@
.. include:: ../../RELEASE_NOTES.rst

View File

@@ -1 +0,0 @@
.. include:: ../../sitemap/README.rst

View File

@@ -1,9 +0,0 @@
=====
Usage
=====
To use openstack-doc-tools in a project:
.. code-block:: python
import os_doc_tools

View File

@@ -1,8 +0,0 @@
[DEFAULT]
# The list of modules to copy from oslo-incubator.git
module=log
# The base module to hold the copy of openstack.common
base=os_doc_tools

View File

@@ -1,18 +0,0 @@
# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo('openstack-doc-tools').version_string()

View File

@@ -1,772 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import os
import subprocess
import sys
import yaml
import os_doc_tools
DEVNULL = open(os.devnull, 'wb')
MAXLINELENGTH = 78
def use_help_flag(os_command):
"""Use --help flag (instead of help keyword)
Returns true if the command requires a --help flag instead
of a help keyword.
"""
return os_command == "swift" or "-manage" in os_command
def quote_rst(line):
"""Convert special characters for RST output."""
line = line.replace('\\', '\\\\').replace('`', '\\`').replace('*', '\\*')
if 'DEPRECATED!' in line:
line = line.replace('DEPRECATED!', '**DEPRECATED!**')
elif 'DEPRECATED' in line:
line = line.replace('DEPRECATED', '**DEPRECATED**')
if 'env[' in line:
line = line.replace('env[', '``env[').replace(']', ']``')
# work around for "Default=env[...]" at cinder
line = line.replace('=``', '= ``')
return line
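A standalone copy of the escaping helper shows the transformations it performs (the sample strings are invented):

```python
def quote_rst(line):
    """Convert special characters for RST output (copy of the helper above)."""
    line = line.replace('\\', '\\\\').replace('`', '\\`').replace('*', '\\*')
    if 'DEPRECATED!' in line:
        line = line.replace('DEPRECATED!', '**DEPRECATED!**')
    elif 'DEPRECATED' in line:
        line = line.replace('DEPRECATED', '**DEPRECATED**')
    if 'env[' in line:
        line = line.replace('env[', '``env[').replace(']', ']``')
        # work around for "Default=env[...]" at cinder
        line = line.replace('=``', '= ``')
    return line

escaped = quote_rst('Use *wildcards* here')      # asterisks get backslashes
env_line = quote_rst('Default=env[OS_USERNAME]')  # env[...] gets RST literals
```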
def generate_heading(os_command, api_name, title,
output_dir, os_filename, continue_on_error):
"""Write RST file header.
:param os_command: client command to document
:param api_name: string description of the API of os_command
:param output_dir: directory to write output file to
:param os_filename: name to create current output file as
:param continue_on_error: continue even if there's an error
"""
try:
version = subprocess.check_output([os_command, "--version"],
universal_newlines=True,
stderr=subprocess.STDOUT)
except OSError as e:
if e.errno == os.errno.ENOENT:
action = 'skipping' if continue_on_error else 'aborting'
print("Command %s not found, %s." % (os_command, action))
if continue_on_error:
return
else:
sys.exit(1)
# Extract version from "swift 0.3"
version = version.splitlines()[-1].strip().rpartition(' ')[2]
print("Documenting '%s help (version %s)'" % (os_command, version))
os_file = open(os.path.join(output_dir, os_filename), 'w')
os_file.write(".. ###################################################\n")
os_file.write(".. ## WARNING ######################################\n")
os_file.write(".. ############## WARNING ##########################\n")
os_file.write(".. ########################## WARNING ##############\n")
os_file.write(".. ###################################### WARNING ##\n")
os_file.write(".. ###################################################\n")
os_file.write(".. ###################################################\n")
os_file.write(".. ##\n")
os_file.write(".. This file is tool-generated. Do not edit manually.\n")
os_file.write(".. http://docs.openstack.org/contributor-guide/\n")
os_file.write(".. doc-tools/cli-reference.html\n")
os_file.write(".. ##\n")
os_file.write(".. ## WARNING ######################################\n")
os_file.write(".. ############## WARNING ##########################\n")
os_file.write(".. ########################## WARNING ##############\n")
os_file.write(".. ###################################### WARNING ##\n")
os_file.write(".. ###################################################\n\n")
format_heading(title, 1, os_file)
if os_command == "heat":
os_file.write(".. warning::\n\n")
os_file.write(" The " + os_command + " CLI is deprecated\n")
os_file.write(" in favor of python-openstackclient.\n\n")
os_file.write("The " + os_command + " client is the command-line ")
os_file.write("interface (CLI) for the\n")
os_file.write(api_name + "\n")
os_file.write("and its extensions.\n\n")
os_file.write("This chapter documents :command:`" + os_command + "` ")
os_file.write("version ``" + version + "``.\n\n")
os_file.write("For help on a specific :command:`" + os_command + "` ")
os_file.write("command, enter:\n\n")
os_file.write(".. code-block:: console\n\n")
if use_help_flag(os_command):
os_file.write(" $ " + os_command + " COMMAND --help\n\n")
else:
os_file.write(" $ " + os_command + " help COMMAND\n\n")
os_file.write(".. _" + os_command + "_command_usage:\n\n")
format_heading(os_command + " usage", 2, os_file)
return os_file
def is_option(string):
"""Returns True if string specifies an argument."""
for x in string:
if not (x.isupper() or x == '_' or x == ','):
return False
if string.startswith('DEPRECATED'):
return False
return True
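The predicate accepts only all-caps placeholder tokens (underscores and commas allowed) and rejects DEPRECATED markers; a standalone copy:

```python
def is_option(string):
    """Return True if string looks like an all-caps argument placeholder."""
    for x in string:
        if not (x.isupper() or x == '_' or x == ','):
            return False
    if string.startswith('DEPRECATED'):
        return False
    return True

# Sample tokens, invented for illustration.
results = {s: is_option(s)
           for s in ('PORT', 'NAME,ID', 'port', 'DEPRECATED_OPT')}
```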
def extract_options(line):
"""Extract command or option from line."""
# We have a command or parameter to handle
# Differentiate:
# 1. --version
# 2. --timeout <seconds>
# 3. --service <service>, --service-id <service>
# 4. -v, --verbose
# 5. -p PORT, --port PORT
# 6. <backup> ID of the backup to restore.
# 7. --alarm-action <Webhook URL>
# 8. <NAME or ID> Name or ID of stack to resume.
# 9. --json JSON JSON representation of node group template.
# 10. --id <cluster_id> ID of the cluster to show.
# 11. --instance "<opt=value,opt=value,...>"
split_line = line.split(None, 2)
if split_line[0].startswith("-"):
last_was_option = True
else:
last_was_option = False
if (len(split_line) > 1 and
('<' in split_line[0] or
'<' in split_line[1] or
'--' in split_line[1] or
split_line[1].startswith(("-", '<', '{', '[')) or
is_option(split_line[1]))):
words = line.split(None)
i = 0
while i < len(words) - 1:
if (('<' in words[i] and
'>' not in words[i]) or
('[' in words[i] and
']' not in words[i])):
words[i] += ' ' + words[i + 1]
del words[i + 1]
else:
i += 1
skip_is_option = False
while len(words) > 1:
if words[1].startswith('DEPRECATED'):
break
if last_was_option:
if (words[1].startswith(("-", '<', '{', '[', '"')) or
(is_option(words[1]) and skip_is_option is False)):
skip_is_option = False
if words[1].isupper() or words[1].startswith('<'):
skip_is_option = True
words[0] = words[0] + ' ' + words[1]
del words[1]
else:
break
else:
if words[1].startswith("-"):
words[0] = words[0] + ' ' + words[1]
del words[1]
else:
break
w0 = words[0]
del words[0]
w1 = ''
if words:
w1 = words[0]
del words[0]
for w in words:
w1 += " " + w
if not w1:
split_line = [w0]
else:
split_line = [w0, w1]
else:
split_line = line.split(None, 1)
return split_line
def format_heading(heading, level, os_file):
"""Nicely print heading.
:param heading: heading strings
:param level: heading level
:param os_file: open filehandle for output of RST file
"""
if level == 1:
os_file.write("=" * len(heading) + "\n")
os_file.write(heading + "\n")
if level == 1:
os_file.write("=" * len(heading) + "\n\n")
elif level == 2:
os_file.write("~" * len(heading) + "\n\n")
elif level == 3:
os_file.write("-" * len(heading) + "\n\n")
else:
os_file.write("\n")
return
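Writing to an in-memory buffer shows the RST the heading writer emits; a standalone copy exercised with `io.StringIO`:

```python
import io

def format_heading(heading, level, os_file):
    """Write an RST heading at the given level (copy of the helper above)."""
    if level == 1:
        os_file.write("=" * len(heading) + "\n")
    os_file.write(heading + "\n")
    if level == 1:
        os_file.write("=" * len(heading) + "\n\n")
    elif level == 2:
        os_file.write("~" * len(heading) + "\n\n")
    elif level == 3:
        os_file.write("-" * len(heading) + "\n\n")
    else:
        os_file.write("\n")

buf = io.StringIO()
format_heading("nova usage", 2, buf)  # level 2 underlines with '~'
rendered = buf.getvalue()
```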
def format_help(title, lines, os_file):
"""Nicely print section of lines.
:param title: help title, if exist
:param lines: strings to format
:param os_file: open filehandle for output of RST file
"""
close_entry = False
if title:
os_file.write("**" + title + ":**" + "\n\n")
continued_line = ''
for line in lines:
if not line or line[0] != ' ':
break
# We have to handle these cases:
# 1. command Explanation
# 2. command
# Explanation on next line
# 3. command Explanation continued
# on next line
# If there are more than 8 spaces, let's treat it as
# explanation.
if line.startswith(' '):
# Explanation
xline = continued_line + quote_rst(line.lstrip(' '))
continued_line = ''
# Concatenate the command options with "-"
# For example:
# see 'glance image-
# show'
if xline.endswith('-'):
continued_line = xline
continue
# check niceness
if len(xline) > (MAXLINELENGTH - 2):
xline = xline.replace(' ', '\n ')
os_file.write(" " + xline + "\n")
continue
# Now we have a command or parameter to handle
split_line = extract_options(line)
if not close_entry:
close_entry = True
else:
os_file.write("\n")
xline = split_line[0]
# check niceness work around for long option name, glance
xline = xline.replace('[<RESOURCE_TYPE_ASSOCIATIONS> ...]',
'[...]')
os_file.write("``" + xline + "``\n")
if len(split_line) > 1:
# Explanation
xline = continued_line + quote_rst(split_line[1])
continued_line = ''
# Concatenate the command options with "-"
# For example:
# see 'glance image-
# show'
if xline.endswith('-'):
continued_line = xline
continue
# check niceness
if len(xline) > (MAXLINELENGTH - 2):
# check niceness
xline = xline.replace(' ', '\n ')
os_file.write(" " + xline + "\n")
os_file.write("\n")
return
def generate_command(os_command, os_file):
"""Convert os_command --help to RST.
:param os_command: client command to document
:param os_file: open filehandle for output of RST file
"""
if use_help_flag(os_command):
help_lines = subprocess.check_output([os_command, "--help"],
universal_newlines=True,
stderr=DEVNULL).split('\n')
else:
help_lines = subprocess.check_output([os_command, "help"],
universal_newlines=True,
stderr=DEVNULL).split('\n')
ignore_next_lines = False
next_line_screen = True
line_index = -1
in_screen = False
subcommands = 'complete'
for line in help_lines:
line_index += 1
if line and line[0] != ' ':
# XXX: Might have whitespace before!!
if '<subcommands>' in line:
ignore_next_lines = False
continue
if 'Positional arguments' in line:
ignore_next_lines = True
next_line_screen = True
os_file.write("\n\n")
in_screen = False
if os_command != "glance":
format_help('Subcommands',
help_lines[line_index + 2:], os_file)
continue
if line.startswith(('Optional arguments:', 'Optional:',
'Options:', 'optional arguments')):
if in_screen:
os_file.write("\n\n")
in_screen = False
os_file.write(".. _" + os_command + "_command_options:\n\n")
format_heading(os_command + " optional arguments", 2, os_file)
format_help('', help_lines[line_index + 1:], os_file)
next_line_screen = True
ignore_next_lines = True
continue
# magnum
if line.startswith('Common auth options'):
if in_screen:
os_file.write("\n\n")
in_screen = False
os_file.write("\n")
os_file.write(os_command)
os_file.write(".. _" + os_command + "_common_auth:\n\n")
format_heading(os_command + " common authentication arguments",
2, os_file)
format_help('', help_lines[line_index + 1:], os_file)
next_line_screen = True
ignore_next_lines = True
continue
# neutron
if line.startswith('Commands for API v2.0:'):
if in_screen:
os_file.write("\n\n")
in_screen = False
os_file.write(".. _" + os_command + "_common_api_v2:\n\n")
format_heading(os_command + " API v2.0 commands", 2, os_file)
format_help('', help_lines[line_index + 1:], os_file)
next_line_screen = True
ignore_next_lines = True
continue
# swift
if line.startswith('Examples:'):
os_file.write(".. _" + os_command + "_examples:\n\n")
format_heading(os_command + " examples", 2, os_file)
next_line_screen = True
ignore_next_lines = False
continue
# all
if not line.startswith('usage'):
continue
if not ignore_next_lines:
if next_line_screen:
os_file.write(".. code-block:: console\n\n")
os_file.write(" " + line)
next_line_screen = False
in_screen = True
elif line:
os_file.write("\n " + line.rstrip())
# subcommands (select bash-completion, complete for bash-completion)
if 'bash-completion' in line:
subcommands = 'bash-completion'
if in_screen:
os_file.write("\n\n")
return subcommands
def generate_subcommand(os_command, os_subcommand, os_file, extra_params,
suffix, title_suffix):
"""Convert os_command help os_subcommand to RST.
:param os_command: client command to document
:param os_subcommand: client subcommand to document
:param os_file: open filehandle for output of RST file
:param extra_params: Extra parameter to pass to os_command
:param suffix: Extra suffix to add to link ID
:param title_suffix: Extra suffix for title
"""
print("Documenting subcommand '%s'..." % os_subcommand)
args = [os_command]
if extra_params:
args.extend(extra_params)
if use_help_flag(os_command):
args.append(os_subcommand)
args.append("--help")
else:
args.append("help")
args.append(os_subcommand)
help_lines = subprocess.check_output(args,
universal_newlines=True,
stderr=DEVNULL)
help_lines_lower = help_lines.lower()
if 'positional arguments' in help_lines_lower:
index = help_lines_lower.index('positional arguments')
elif 'optional arguments' in help_lines_lower:
index = help_lines_lower.index('optional arguments')
else:
index = len(help_lines_lower)
if 'deprecated' in (help_lines_lower[0:index]):
print("Subcommand '%s' is deprecated, skipping." % os_subcommand)
return
help_lines = help_lines.split('\n')
os_subcommandid = os_subcommand.replace(' ', '_')
os_file.write(".. _" + os_command + "_" + os_subcommandid + suffix)
os_file.write(":\n\n")
format_heading(os_command + " " + os_subcommand + title_suffix, 3, os_file)
if os_command == "swift":
next_line_screen = False
os_file.write(".. code-block:: console\n\n")
os_file.write("Usage: swift " + os_subcommand + "\n\n")
in_para = True
else:
next_line_screen = True
in_para = False
if extra_params:
extra_paramstr = ' '.join(extra_params)
help_lines[0] = help_lines[0].replace(os_command, "%s %s" %
(os_command, extra_paramstr))
line_index = -1
# Content is:
# usage...
#
# Description
#
# Arguments
skip_lines = False
for line in help_lines:
line_index += 1
if line.startswith('Usage:') and os_command == "swift":
line = line[len("Usage: "):]
if line.startswith(('Arguments:',
'Positional arguments:', 'positional arguments',
'Optional arguments', 'optional arguments',
'Required arguments', 'required arguments')):
if in_para:
in_para = False
os_file.write("\n")
if line.startswith(('Positional arguments',
'positional arguments')):
format_help('Positional arguments',
help_lines[line_index + 1:], os_file)
skip_lines = True
continue
elif line.startswith(('Optional arguments:',
'optional arguments')):
format_help('Optional arguments',
help_lines[line_index + 1:], os_file)
skip_lines = True
continue
elif line.startswith(('Required arguments:',
'required arguments')):
format_help('Required arguments',
help_lines[line_index + 1:], os_file)
skip_lines = True
continue
else:
format_help('Arguments', help_lines[line_index + 1:], os_file)
break
if skip_lines:
continue
if not line:
if not in_para:
os_file.write("\n")
in_para = True
continue
if next_line_screen:
os_file.write(".. code-block:: console\n\n")
os_file.write(" " + line + "\n")
next_line_screen = False
elif line.startswith(' '):
# ceilometer alarm-gnocchi-aggregation-by-metrics-threshold-create
# has 7 white space indentation
if not line.isspace():
# skip blank line, such as "trove help cluster-grow" command.
os_file.write(" " + line + "\n")
else:
xline = quote_rst(line)
if (len(xline) > MAXLINELENGTH):
# check niceness
xline = xline.replace(' ', '\n')
os_file.write(xline + "\n")
if in_para:
os_file.write("\n")
def discover_subcommands(os_command, subcommands, extra_params):
"""Discover all help subcommands for the given command"
:param os_command: client command whose subcommands need to be discovered
:param subcommands: list of subcommands to document, or a string
naming the discovery mode ('complete' or 'bash-completion')
:param extra_params: Extra parameter to pass to os_command.
:return: the list of subcommands discovered
:rtype: list(str)
"""
if extra_params is None:
extra_params = ''
print(("Discovering subcommands of '%s' %s ..."
% (os_command, extra_params)))
blacklist = ['bash-completion', 'complete', 'help']
if type(subcommands) is str:
args = [os_command]
if extra_params:
args.extend(extra_params)
if subcommands == 'complete':
subcommands = []
args.append('complete')
lines = subprocess.check_output(
args, universal_newlines=True, stderr=DEVNULL).split('\n')
delim = ' '
# if the cmds= line contains '-' then use that as a delim
for line in lines:
if '-' in line and 'cmds=' in line:
delim = '-'
break
for line in [x.strip() for x in lines
if x.strip().startswith('cmds_') and '-' in x]:
subcommand, _ = line.split('=')
subcommand = subcommand.replace(
'cmds_', '').replace('_', delim)
subcommands.append(subcommand)
else:
args.append('bash-completion')
subcommands = subprocess.check_output(
args,
universal_newlines=True).strip().split('\n')[-1].split()
subcommands = sorted([o for o in subcommands if not (o.startswith('-') or
o in blacklist)])
print("%d subcommands discovered." % len(subcommands))
return subcommands
def generate_subcommands(os_command, os_file, subcommands, extra_params,
suffix, title_suffix):
"""Convert os_command help subcommands for all subcommands to RST.
:param os_command: client command to document
:param os_file: open filehandle for output of RST file
:param subcommands: list of subcommands to document, or a string
naming the discovery mode ('complete' or 'bash-completion')
:param extra_params: Extra parameter to pass to os_command.
:param suffix: Extra suffix to add to link ID
:param title_suffix: Extra suffix for title
"""
for subcommand in subcommands:
generate_subcommand(os_command, subcommand, os_file, extra_params,
suffix, title_suffix)
print("%d subcommands documented." % len(subcommands))
def discover_and_generate_subcommands(os_command, os_file, subcommands,
extra_params, suffix, title_suffix):
"""Convert os_command help subcommands for all subcommands to RST.
:param os_command: client command to document
:param os_file: open filehandle for output of RST file
:param subcommands: list of subcommands to document, or a string
naming the discovery mode ('complete' or 'bash-completion')
:param extra_params: Extra parameter to pass to os_command.
:param suffix: Extra suffix to add to link ID
:param title_suffix: Extra suffix for title
"""
subcommands = discover_subcommands(os_command, subcommands, extra_params)
generate_subcommands(os_command, os_file, subcommands, extra_params,
suffix, title_suffix)
def _get_clients_filename():
return os.path.join(os.path.dirname(__file__),
'resources/clients.yaml')
def get_clients():
"""Load client definitions from the resource file."""
fname = _get_clients_filename()
clients = yaml.safe_load(open(fname, 'r'))
return clients
def document_single_project(os_command, output_dir, continue_on_error):
"""Create documentation for os_command."""
clients = get_clients()
if os_command not in clients:
print("'%s' command not yet handled" % os_command)
print("(Command must be defined in '%s')" % _get_clients_filename())
if continue_on_error:
return False
else:
sys.exit(-1)
print("Documenting '%s'" % os_command)
data = clients[os_command]
if 'name' in data:
api_name = "%s API" % data['name']
title = "%s command-line client" % data.get('title', data['name'])
else:
api_name = ''
title = data.get('title', '')
out_filename = os_command + ".rst"
out_file = generate_heading(os_command, api_name, title,
output_dir, out_filename,
continue_on_error)
if not out_file:
if continue_on_error:
return False
else:
sys.exit(-1)
subcommands = generate_command(os_command, out_file)
if subcommands == 'complete' and data.get('subcommands'):
subcommands = data.get('subcommands')
discover_and_generate_subcommands(os_command, out_file, subcommands,
None, "", "")
print("Finished.\n")
out_file.close()
return True
def main():
clients = get_clients()
api_clients = sorted([x for x in clients if not x.endswith('-manage')])
manage_clients = sorted([x for x in clients if x.endswith('-manage')])
all_clients = api_clients + manage_clients
parser = argparse.ArgumentParser(description="Generate RST files "
"to document python-PROJECTclients.")
parser.add_argument('clients', metavar='client', nargs='*',
help="OpenStack command to document. Specify "
"multiple times to generate documentation for "
"multiple clients. One of: " +
", ".join(all_clients) + ".")
parser.add_argument("--all", help="Document all clients. "
"Namely " + ", ".join(all_clients) + ".",
action="store_true")
parser.add_argument("--all-api", help="Document all API clients. "
"Namely " + ", ".join(clients.keys()) + ".",
action="store_true")
parser.add_argument("--all-manage", help="Document all manage clients. "
"Namely " + ", ".join(manage_clients) + ".",
action="store_true")
parser.add_argument("--output-dir", default=".",
help="Directory to write generated files to")
parser.add_argument("--continue-on-error", default=False,
help="Continue with remaining clients even if an "
"error occurs generating a client file.",
action="store_true")
parser.add_argument("--version", default=False,
help="Show program's version number and exit.",
action="store_true")
prog_args = parser.parse_args()
client_list = []
if prog_args.all or prog_args.all_api or prog_args.all_manage:
if prog_args.all or prog_args.all_api:
client_list = api_clients
if prog_args.all or prog_args.all_manage:
client_list.extend(manage_clients)
elif prog_args.clients:
client_list = prog_args.clients
if 'help' in [client.lower() for client in client_list]:
parser.print_help()
sys.exit(0)
elif prog_args.version:
print(os_doc_tools.__version__)
sys.exit(0)
if not client_list:
parser.print_help()
sys.exit(1)
print("OpenStack Auto Documenting of Commands (using "
"openstack-doc-tools version %s)\n"
% os_doc_tools.__version__)
success_list = []
error_list = []
for client in client_list:
if document_single_project(
client, prog_args.output_dir, prog_args.continue_on_error):
success_list.append(client)
else:
error_list.append(client)
if success_list:
print("Generated documentation for: %s" % ", ".join(success_list))
if error_list:
print("Generation failed for: %s" % ", ".join(error_list))
sys.exit(1)
if __name__ == "__main__":
sys.exit(main())

View File

@@ -1,92 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import argparse
import glob
import os
import sys
def get_pdf_link(root, publish_path):
pattern = '%s/*.pdf' % root
matches = glob.glob(pattern)
if not matches:
return ''
filename = os.path.basename(matches[0])
path = os.path.relpath(root, publish_path)
return ' <a href="%s/%s">(pdf)</a>' % (path, filename)
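The link builder can be tried against a throwaway directory tree; a standalone sketch (directory and file names invented):

```python
import glob
import os
import tempfile

def get_pdf_link(root, publish_path):
    """Return an HTML link to the first PDF under root, or '' if none."""
    matches = glob.glob('%s/*.pdf' % root)
    if not matches:
        return ''
    filename = os.path.basename(matches[0])
    path = os.path.relpath(root, publish_path)
    return ' <a href="%s/%s">(pdf)</a>' % (path, filename)

# Build publish/user-guide/guide.pdf in a temporary tree.
publish = tempfile.mkdtemp()
book = os.path.join(publish, 'user-guide')
os.mkdir(book)
open(os.path.join(book, 'guide.pdf'), 'w').close()
link = get_pdf_link(book, publish)
```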
def generate_index_file(publish_path):
"""Generate index.html file in publish_path."""
if not os.path.isdir(publish_path):
os.mkdir(publish_path)
index_file = open(os.path.join(publish_path, 'index.html'), 'w')
index_file.write(
'<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"\n'
'"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">\n'
'<html lang="en" xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">\n'
'<body>\n'
'<h1>Generated documents</h1>\n')
links = {}
for root, dirs, files in os.walk(publish_path):
dirs[:] = [d for d in dirs if d not in ['common', 'webapp', 'content',
'www', 'samples']]
# Ignore top-level index.html files
if root == publish_path:
continue
pdf_link = get_pdf_link(root, publish_path)
if os.path.isfile(os.path.join(root, 'index.html')):
path = os.path.relpath(root, publish_path)
links[path] = ('<a href="%s/index.html">%s</a>%s\n' %
(path, path.replace('draft/', ''), pdf_link))
for entry in sorted([s for s in links if not s.startswith('draft/')]):
index_file.write(links[entry])
index_file.write('<br/>\n')
drafts = [s for s in links if s.startswith('draft/')]
if drafts:
index_file.write('<h2>Draft guides</h2>\n')
for entry in sorted(drafts):
index_file.write(links[entry])
index_file.write('<br/>\n')
if os.path.isfile(os.path.join(publish_path, 'www-index.html')):
index_file.write('<h2>WWW index pages</h2>\n')
index_file.write('<a href="www-index.html">List of generated '
'WWW pages</a>\n')
index_file.write('</body>\n'
'</html>\n')
index_file.close()
def main():
parser = argparse.ArgumentParser(description="Generate index file.")
parser.add_argument('directory', metavar='DIR',
help="Directory to search.")
args = parser.parse_args()
generate_index_file(args.directory)
if __name__ == "__main__":
sys.exit(main())
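The index generator above links each guide's PDF via a path relative to the publish root. A minimal, self-contained sketch of that relative-path logic (the directory names below are hypothetical, not from the repo):

```python
import os

def pdf_href(root, publish_path, filename):
    # os.path.relpath turns an absolute guide directory into the
    # path fragment used inside the generated index.html, as in
    # get_pdf_link() above.
    path = os.path.relpath(root, publish_path)
    return '%s/%s' % (path, filename)

print(pdf_href('/srv/publish/admin-guide', '/srv/publish', 'admin-guide.pdf'))
# admin-guide/admin-guide.pdf
```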

View File

@ -1,162 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Usage:
jsoncheck.py [-f] FILES
Checks JSON syntax and optionally reformats (pretty-prints) the valid files.
Optional:
- demjson Python library (better diagnostics for invalid JSON syntax)
"""
from __future__ import print_function
import argparse
import collections
import json
import sys
import textwrap
try:
import demjson
except ImportError:
demjson = None
sys.stderr.write("Cannot import the demjson Python module. Diagnostics "
"for invalid JSON files\nwill be limited.\n")
# -----------------------------------------------------------------------------
# Public interface
# -----------------------------------------------------------------------------
def check_syntax(path):
"""Check syntax of one JSON file."""
_process_file(path)
def check_formatting(path):
"""Check formatting of one JSON file."""
_process_file(path, formatting='check')
def fix_formatting(path, verbose=False):
"""Fix formatting of one JSON file."""
_process_file(path, formatting='fix', verbose=verbose)
# -----------------------------------------------------------------------------
# Implementation details
# -----------------------------------------------------------------------------
def _indent_note(note):
"""Indents and wraps a string."""
indented_note = []
# Split into single lines in case the argument is pre-formatted.
for line in note.splitlines():
indented_note.append(textwrap.fill(line, initial_indent=4 * ' ',
subsequent_indent=12 * ' ',
width=80))
return "\n".join(indented_note)
def _get_demjson_diagnostics(raw):
"""Get diagnostics string for invalid JSON files from demjson."""
errstr = None
try:
demjson.decode(raw, strict=True)
except demjson.JSONError as err:
errstr = err.pretty_description()
return errstr
class ParserException(Exception):
pass
def _parse_json(raw):
"""Parse raw JSON file."""
try:
parsed = json.loads(raw, object_pairs_hook=collections.OrderedDict)
except ValueError as err:
note = str(err)
# if demjson is available, print its diagnostic string as well
if demjson:
demerr = _get_demjson_diagnostics(raw)
if demerr:
note += "\n" + demerr
raise ParserException(note)
else:
return parsed
def _format_parsed_json(parsed):
"""Pretty-print JSON file content while retaining key order."""
return json.dumps(parsed, sort_keys=False, separators=(',', ': '),
indent=4) + "\n"
def _process_file(path, formatting=None, verbose=False):
"""Check syntax/formatting and fix formatting of a JSON file.
:param formatting: one of 'check' or 'fix' (default: None)
Raises ValueError if JSON syntax is invalid or reformatting is needed.
"""
with open(path, 'r') as infile:
raw = infile.read()
try:
parsed = _parse_json(raw)
except ParserException as err:
raise ValueError(err)
else:
if formatting in ('check', 'fix'):
formatted = _format_parsed_json(parsed)
if formatted != raw:
if formatting == "fix":
with open(path, 'w') as outfile:
outfile.write(formatted)
if verbose:
print("%s\n%s" % (path,
_indent_note("Reformatted")))
else:
raise ValueError("Reformatting needed")
elif formatting is not None:
# for the benefit of external callers
raise ValueError("Called with invalid formatting value.")
def main():
parser = argparse.ArgumentParser(description="Validate and reformat "
"JSON files.")
parser.add_argument('files', metavar='FILES', nargs='+')
parser.add_argument('-f', '--formatting', choices=['check', 'fix'],
help='check or fix formatting of JSON files')
args = parser.parse_args()
exit_status = 0
for path in args.files:
try:
_process_file(path, args.formatting, verbose=True)
except ValueError as err:
print("%s\n%s" % (path, _indent_note(str(err))))
exit_status = 1
return exit_status
if __name__ == "__main__":
sys.exit(main())
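The formatting check in ``_process_file()`` above boils down to a parse/re-dump round trip: parse with key order preserved, re-dump with the same separators and indent, and compare against the raw input. A minimal standalone sketch:

```python
import collections
import json

# A one-line JSON document; key order must survive the round trip.
raw = '{"b": 1, "a": [1, 2]}\n'

# object_pairs_hook=OrderedDict preserves key order, as in _parse_json().
parsed = json.loads(raw, object_pairs_hook=collections.OrderedDict)

# Re-dump with the exact settings used by _format_parsed_json().
formatted = json.dumps(parsed, sort_keys=False,
                       separators=(',', ': '), indent=4) + "\n"

# The compact input differs from the pretty-printed form, so a
# "check" run would raise ValueError("Reformatting needed").
print(formatted != raw)  # True
```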

View File

@ -1,134 +0,0 @@
---
aodh:
name: Telemetry Alarming service (aodh)
subcommands:
- alarm create
- alarm delete
- alarm list
- alarm show
- alarm state get
- alarm state set
- alarm update
- alarm-history search
- alarm-history show
- capabilities list
barbican:
name: Key Manager service (barbican)
subcommands: complete
ceilometer:
name: Telemetry Data Collection service (ceilometer)
cinder:
name: Block Storage service (cinder)
cloudkitty:
name: Rating service (cloudkitty)
congress:
name: Governance service (congress)
designate:
name: DNS service (designate)
subcommands: complete
freezer:
name: Backup, Restore, and Disaster Recovery service (freezer)
glance:
name: Image service (glance)
gnocchi:
name: A time series storage and resources index service (gnocchi)
subcommands:
- archive-policy create
- archive-policy delete
- archive-policy list
- archive-policy show
- archive-policy update
- archive-policy-rule create
- archive-policy-rule delete
- archive-policy-rule list
- archive-policy-rule show
- benchmark measures add
- benchmark measures show
- benchmark metric create
- benchmark metric show
- capabilities list
- complete
- help
- measures add
- measures aggregation
- measures batch-metrics
- measures batch-resources-metrics
- measures show
- metric create
- metric delete
- metric list
- metric show
- resource batch delete
- resource create
- resource delete
- resource history
- resource list
- resource show
- resource update
- resource-type create
- resource-type delete
- resource-type list
- resource-type show
- resource-type update
- status
heat:
name: Orchestration service (heat)
ironic:
name: Bare Metal service (ironic)
kite:
name: A service for managing and distributing message encryption keys (kite)
magnum:
name: Container Infrastructure Management service (magnum)
manila:
name: Shared File Systems service (manila)
mistral:
name: Workflow service (mistral)
subcommands: complete
monasca:
name: Monitoring (monasca)
murano:
name: Application Catalog service (murano)
neutron:
name: Networking service (neutron)
nova:
name: Compute service (nova)
senlin:
name: Clustering service (senlin)
solum:
name: Software Development Lifecycle Automation service (solum)
swift:
name: Object Storage service (swift)
subcommands:
- auth
- capabilities
- delete
- download
- list
- post
- stat
- tempurl
- upload
tacker:
name: NFV Orchestration service (tacker)
tripleo:
name: Deployment service (tripleo)
trove:
name: Database service (trove)
trove-manage:
name: Database service management utility
subcommands:
- datastore_update
- datastore_version_flavor_add
- datastore_version_flavor_delete
- datastore_version_update
- db_downgrade
- db_load_datastore_config_parameters
- db_recreate
- db_sync
- db_upgrade
vitrage:
name: RCA (Root Cause Analysis) service (vitrage)
watcher:
name: Infrastructure Optimization service (watcher)
zaqar:
name: Message service (zaqar)

View File

@ -1,17 +0,0 @@
# Copyright 2015 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
version_info = pbr.version.VersionInfo('openstack-doc-tools')

View File

@ -1,31 +0,0 @@
# The format of this file isn't really documented; just use --generate-rcfile
[Messages Control]
# C0111: Don't require docstrings on every method
# W0511: TODOs in code comments are fine.
# W0142: *args and **kwargs are fine.
# W0622: Redefining id is fine.
disable=C0111,W0511,W0142,W0622
[Basic]
# Variable names can be 1 to 31 characters long, with lowercase and underscores
variable-rgx=[a-z_][a-z0-9_]{0,30}$
# Argument names can be 2 to 31 characters long, with lowercase and underscores
argument-rgx=[a-z_][a-z0-9_]{1,30}$
# Method names should be at least 3 characters long
# and be lowercased with underscores
method-rgx=([a-z_][a-z0-9_]{2,50}|setUp|tearDown)$
[Design]
max-public-methods=100
min-public-methods=0
max-args=6
[Variables]
# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
# _ is used by our localization
additional-builtins=_
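The naming rules above are plain regular expressions; a quick sketch of what ``variable-rgx`` accepts (lowercase names with underscores, 1 to 31 characters), using example names that are purely illustrative:

```python
import re

# The variable-rgx pattern from the pylintrc above.
variable_rgx = re.compile(r'[a-z_][a-z0-9_]{0,30}$')

print(bool(variable_rgx.match('publish_path')))  # True: lowercase + underscore
print(bool(variable_rgx.match('PublishPath')))   # False: uppercase rejected
print(bool(variable_rgx.match('a' * 32)))        # False: longer than 31 chars
```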

View File

@ -1,3 +0,0 @@
---
other:
- Use reno for release note management.

View File

@ -1,7 +0,0 @@
other:
- |
For translated manuals, the command ``doc-tools-check-languages``
now places a marker file in the root of each directory. This
marker file is needed for proper publishing in the OpenStack CI
environment. For details, see
http://specs.openstack.org/openstack-infra/infra-specs/specs/doc-publishing.html.

View File

@ -1,3 +0,0 @@
---
other:
- "``autohelp.py`` now allows overrides of sections, defined in ``<project>.overrides`` configuration files."

View File

@ -1,6 +0,0 @@
---
features:
- Update the CLI Reference generation tool for RST.
To migrate the CLI Reference from DocBook to RST,
the tool outputs the documentation in RST format,
with a few workarounds for RST/Sphinx-specific issues.

View File

@ -1,4 +0,0 @@
---
other:
- The configuration items for autohelp are now in the openstack-manuals
repository.

View File

@ -1,6 +0,0 @@
---
prelude: >
The most important change in this release is the removal
of DocBook XML support. Support for checking, building, and
translation of DocBook XML files has been removed. The tools now
only handle RST files.

View File

@ -1,4 +0,0 @@
---
fixes:
- Fix building of translations on older branches where the DebConf
Install Guide does not exist.

View File

@ -1,4 +0,0 @@
features:
- The documentation for this repo has been improved and is now
published at
https://docs.openstack.org/developer/openstack-doc-tools.

View File

@ -1,5 +0,0 @@
---
fixes:
- Fix ``doc-tools-check-languages``: it passed arguments
to tox incorrectly and therefore failed with
tox 2.5.0.

View File

@ -1,4 +0,0 @@
---
features:
- Set the bug report project to openstack-i18n
for the translated documents.

View File

@ -1,11 +0,0 @@
---
upgrade:
- |
The `extract_swift_flags` tool no longer supports the `docbook` command.
This previously allowed output of Docbook-formatted text, but the
feature was broken and unsupported by the community. reStructuredText
should be used for everything.
- |
The `extract_swift_flags` tool no longer supports the `--from` argument.
All Swift documentation has been converted to reStructuredText meaning
there is no reason to keep this around.

View File

@ -1,9 +0,0 @@
---
upgrade:
- |
The `extract_swift_flags.py` script has been removed, and the `autohelp.py`
and `diff_branches.py` scripts no longer build config file documentation
for the swift project. The swift dev team had been manually maintaining
their config files in-tree and to avoid duplication, doc and swift have
agreed to link the config ref out to the dev docs. As such, there is
no reason to keep this tooling around.

View File

@ -1,4 +0,0 @@
---
fixes:
- Rework the install guide translation build tool
to process the toctree for each distribution dynamically.

View File

@ -1,3 +0,0 @@
---
features:
- The autohelp tools now also export RST tables.

View File

@ -1,4 +0,0 @@
---
features:
- New command ``openstack-indexpage`` generates only the HTML
index page. The index page layout has also been improved.

View File

@ -1,3 +0,0 @@
---
other:
- ``extract_swift_flags.py`` now reads the option help strings from RST tables.

View File

@ -1,3 +0,0 @@
---
other:
- Use the Jinja templating system to generate configuration reference tables.

View File

@ -1,4 +0,0 @@
---
other:
- Remove support for OpenStackClient in the CLI Reference
in favor of the docs in the OSC project itself.

View File

@ -1,3 +0,0 @@
---
other:
- ``autohelp.py update`` now creates the flagmappings file if it does not exist yet.

View File

@ -1,3 +0,0 @@
---
other:
- The outdated virtual build and test environment has been removed.

View File

@ -1,279 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'openstackdocstheme',
'reno.sphinxext',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
repository_name = 'openstack/openstack-doc-tools'
bug_tag = u'openstack-doc-tools'
project = u'OpenStack-doc-tools Release Notes'
copyright = u'2015-2017, OpenStack Documentation team'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
from os_doc_tools.version import version_info as doc_tools_version
# The full version, including alpha/beta/rc tags.
release = doc_tools_version.version_string_with_vcs()
# The short X.Y version.
version = doc_tools_version.canonical_version_string()
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'openstackdocs'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
html_last_updated_fmt = '%Y-%m-%d %H:%M'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'OpenStack-Doc-Tools-ReleaseNotesdoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'OpenStack-Doc-Tools-ReleaseNotes.tex',
u'OpenStack-Doc-Tools Release Notes Documentation',
u'Documentation Team', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'openstack-doc-tools-releasenotes',
u'OpenStack-Doc-Tools Release Notes Documentation',
[u'Documentation team'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'OpenStack-Doc-Tools-ReleaseNotes',
u'OpenStack-Doc-Tools Release Notes Documentation',
u'Documentation Team', 'OpenStack-Doc-Tools-ReleaseNotes',
'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
# -- Options for Internationalization output ------------------------------
locale_dirs = ['locale/']

View File

@ -1,5 +0,0 @@
==================================
OpenStack Doc Tools Release Notes
==================================
.. release-notes::

View File

@ -1,13 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr!=2.1.0,>=2.0.0 # Apache-2.0
iso8601>=0.1.11 # MIT
lxml!=3.7.0,>=2.3 # BSD
oslo.config!=4.3.0,!=4.4.0,>=4.0.0 # Apache-2.0
docutils>=0.11 # OSI-Approved Open Source, Public Domain
sphinx>=1.6.2 # BSD
demjson # LGPLv3+
PyYAML>=3.10.0 # MIT
cliff-tablib>=1.0 # Apache-2.0

View File

@ -1,53 +0,0 @@
[metadata]
name = openstack-doc-tools
summary = Tools for OpenStack Documentation
description-file =
README.rst
author = OpenStack Documentation
author-email = openstack-doc@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.4
Programming Language :: Python :: 3.5
[files]
packages =
os_doc_tools
autogenerate_config_docs
data_files =
share/openstack-doc-tools/sitemap = sitemap/*
share/openstack-doc-tools/cleanup = cleanup/*
scripts =
bin/doc-tools-check-languages
bin/doc-tools-update-cli-reference
bin/doc-tools-build-rst
[global]
setup-hooks =
pbr.hooks.setup_hook
[entry_points]
console_scripts =
openstack-auto-commands = os_doc_tools.commands:main
openstack-jsoncheck = os_doc_tools.jsoncheck:main
openstack-indexpage = os_doc_tools.index:main
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[wheel]
universal = 1

View File

@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr>=2.0.0'],
pbr=True)

View File

@ -1,81 +0,0 @@
=================
Sitemap Generator
=================
This script crawls all available sites on http://docs.openstack.org and
extracts all URLs. Based on the URLs the script generates a sitemap for search
engines according to the `sitemaps protocol
<http://www.sitemaps.org/protocol.html>`_.
Installation
~~~~~~~~~~~~
To install the needed modules you can use pip or the package management system
included in your distribution. When using the package management system,
the package names may differ. Installation in a virtual environment is
recommended.
.. code-block:: console
$ virtualenv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
When using pip, you may also need to install some development packages. For
example, on Ubuntu 16.04 install the following packages:
.. code-block:: console
$ sudo apt install gcc libssl-dev python-dev python-virtualenv
Usage
~~~~~
To generate a new sitemap file, change into your local clone of the
``openstack/openstack-doc-tools`` repository and run the following commands:
.. code-block:: console
$ cd sitemap
$ scrapy crawl sitemap
The script takes several minutes to crawl all available
sites on http://docs.openstack.org. The result is available in the
``sitemap_docs.openstack.org.xml`` file.
Options
~~~~~~~
domain=URL
Sets the ``domain`` to crawl. Default is ``docs.openstack.org``.
For example, to crawl http://developer.openstack.org use the following
command:
.. code-block:: console
$ scrapy crawl sitemap -a domain=developer.openstack.org
The result is available in the ``sitemap_developer.openstack.org.xml`` file.
urls=URL
You can define a set of additional start URLs using the ``urls`` attribute.
Separate multiple URLs with ``,``.
For example:
.. code-block:: console
$ scrapy crawl sitemap -a domain=developer.openstack.org -a urls="http://developer.openstack.org/de/api-guide/quick-start/"
LOG_FILE=FILE
Write log messages to the specified file.
For example, to write to ``scrapy.log``:
.. code-block:: console
$ scrapy crawl sitemap -s LOG_FILE=scrapy.log

View File

View File

@ -1,90 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import lxml
import scrapy
from scrapy import exporters
class SitemapItemExporter(exporters.XmlItemExporter):
'''XmlItemExporter with adjusted attributes for the root element.'''
def start_exporting(self):
'''Set namespace / schema attributes for the root element.'''
self.xg.startDocument()
self.xg.startElement(self.root_element, {
"xmlns": "http://www.sitemaps.org/schemas/sitemap/0.9",
"xmlns:xsi": "http://www.w3.org/2001/XMLSchema-instance",
"xsi:schemaLocation":
"http://www.sitemaps.org/schemas/sitemap/0.9 "
"http://www.sitemaps.org/schemas/sitemap/0.9/sitemap.xsd"
})
class IgnoreDuplicateUrls(object):
'''Ignore duplicated URLs.'''
def __init__(self):
self.processed = set()
def process_item(self, item, spider):
'''Check if a URL was already found.'''
if item['loc'] in self.processed:
raise scrapy.exceptions.DropItem("Duplicate URL found: %s."
% item['loc'])
else:
self.processed.add(item['loc'])
return item
class ExportSitemap(object):
'''Write found URLs to a sitemap file.
Based on http://doc.scrapy.org/en/latest/topics/exporters.html.
'''
def __init__(self):
self.files = {}
self.exporter = None
@classmethod
def from_crawler(cls, crawler):
pipeline = cls()
crawler.signals.connect(pipeline.spider_opened,
scrapy.signals.spider_opened)
crawler.signals.connect(pipeline.spider_closed,
scrapy.signals.spider_closed)
return pipeline
def spider_opened(self, spider):
output = open(os.path.join(os.getcwd(), 'sitemap_%s.xml'
% spider.domain), 'w')
self.files[spider] = output
self.exporter = SitemapItemExporter(output, item_element='url',
root_element='urlset')
self.exporter.start_exporting()
def spider_closed(self, spider):
self.exporter.finish_exporting()
output = self.files.pop(spider)
output.close()
tree = lxml.etree.parse(os.path.join(os.getcwd(), "sitemap_%s.xml"
% spider.domain))
with open(os.path.join(os.getcwd(), "sitemap_%s.xml" % spider.domain),
'w') as pretty:
pretty.write(lxml.etree.tostring(tree, pretty_print=True))
def process_item(self, item, spider):
self.exporter.export_item(item)
return item
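The duplicate filtering in ``IgnoreDuplicateUrls`` above is just a seen-set check; a standalone sketch without the Scrapy machinery (a real pipeline raises ``DropItem`` instead of silently skipping):

```python
def dedupe(urls):
    # Track URLs already emitted, exactly like the pipeline's
    # self.processed set; keep only the first occurrence of each.
    processed = set()
    kept = []
    for url in urls:
        if url in processed:
            continue  # IgnoreDuplicateUrls raises DropItem here
        processed.add(url)
        kept.append(url)
    return kept

print(dedupe(['http://docs.openstack.org/',
              'http://docs.openstack.org/',
              'http://docs.openstack.org/a/']))
```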

View File

@ -1,33 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Configuration variables used inside Scrapy to enable modules/pipelines
# and to affect the behavior of several parts.
from scrapy import linkextractors
BOT_NAME = 'sitemap'
SPIDER_MODULES = ['generator.spiders']
ITEM_PIPELINES = {
'generator.pipelines.IgnoreDuplicateUrls': 500,
'generator.pipelines.ExportSitemap': 100,
}
CONCURRENT_REQUESTS = 32
CONCURRENT_REQUESTS_PER_DOMAIN = 32
CONCURRENT_REQUESTS_PER_IP = 32
DOWNLOAD_WARNSIZE = 67108864
LOG_LEVEL = 'INFO'
LOGGING_ENABLED = True
RANDOMIZE_DOWNLOAD_DELAY = False
ROBOTSTXT_OBEY = True
TELNETCONSOLE_ENABLED = False
linkextractors.IGNORED_EXTENSIONS.remove('pdf')

View File

@ -1,103 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import time
try:
import urlparse
except ImportError:
import urllib.parse as urlparse
from scrapy import item
from scrapy.linkextractors import LinkExtractor
from scrapy import spiders
class SitemapItem(item.Item):
'''Class to represent an item in the sitemap.'''
loc = item.Field()
lastmod = item.Field()
priority = item.Field()
changefreq = item.Field()
class SitemapSpider(spiders.CrawlSpider):
name = 'sitemap'
old_releases = tuple(["/%s" % old_release for old_release in [
'austin',
'bexar',
'cactus',
'diablo',
'essex',
'folsom',
'grizzly',
'havana',
'icehouse',
'juno',
'kilo',
'liberty',
'mitaka',
'newton'
]])
rules = [
spiders.Rule(
LinkExtractor(
allow=[
r'.*\.html',
r'.*\.pdf',
r'.*\.xml',
r'.*\.txt',
r'.*/',
],
deny=[
r'/trunk/',
r'/draft/',
r'/api/',
r'/juno/',
r'/icehouse/'
]
),
follow=True, callback='parse_item'
)
]
def __init__(self, domain='docs.openstack.org', urls='', *args, **kwargs):
super(SitemapSpider, self).__init__(*args, **kwargs)
self.domain = domain
self.allowed_domains = [domain]
self.start_urls = ['http://%s' % domain]
for url in urls.split(','):
if not url:
continue
self.start_urls.append(url)
def parse_item(self, response):
item = SitemapItem()
item['loc'] = response.url
path = urlparse.urlsplit(response.url).path
if path.startswith(self.old_releases):
# weekly changefrequency and lower priority for old files
item['priority'] = '0.5'
item['changefreq'] = 'weekly'
else:
# daily changefrequency and highest priority for current files
item['priority'] = '1.0'
item['changefreq'] = 'daily'
if 'Last-Modified' in response.headers:
timestamp = response.headers['Last-Modified']
else:
timestamp = response.headers['Date']
lastmod = time.strptime(timestamp, "%a, %d %b %Y %H:%M:%S %Z")
item['lastmod'] = time.strftime("%Y-%m-%dT%H:%M:%S%z", lastmod)
return item
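The branching in parse_item above can be exercised standalone. A stdlib-only sketch of the same priority/changefreq decision and the Last-Modified formatting (release list abbreviated; the original also appends a `%z` offset, omitted here because struct_time carries no timezone):

```python
import time

# Abbreviated stand-in for the spider's old_releases tuple.
OLD_RELEASES = tuple('/%s' % r for r in ('austin', 'mitaka', 'newton'))


def classify(path):
    """Mirror parse_item: old releases get weekly/0.5, current pages daily/1.0."""
    if path.startswith(OLD_RELEASES):
        return {'priority': '0.5', 'changefreq': 'weekly'}
    return {'priority': '1.0', 'changefreq': 'daily'}


def format_lastmod(http_date):
    """Convert an HTTP date header to the W3C datetime form used in sitemaps."""
    parsed = time.strptime(http_date, "%a, %d %b %Y %H:%M:%S %Z")
    return time.strftime("%Y-%m-%dT%H:%M:%S", parsed)
```

For example, `classify('/mitaka/index.html')` takes the old-release branch, while any path outside the tuple gets the highest priority.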


@ -1,5 +0,0 @@
[settings]
default = generator.settings
[deploy]
project = generator


@ -1,20 +0,0 @@
<?xml version="1.0"?>
<!-- Use this transform on a sitemap.xml file generated from freesitemapgenerator.com -->
<!-- It removes the /trunk and /draft URLs, plus the release URLs you list below (folsom in this example) -->
<xsl:stylesheet version="1.0"
xmlns:sm="http://www.sitemaps.org/schemas/sitemap/0.9"
xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<!-- template match equals any other url element, keep -->
<xsl:template match="node() | @*">
<xsl:copy>
<xsl:apply-templates select="node() | @*"/>
</xsl:copy>
</xsl:template>
<!-- discard any url/loc that refer to trunk or folsom -->
<xsl:template match="sm:url[starts-with(./sm:loc,'http://docs.openstack.org/trunk')]"/>
<xsl:template match="sm:url[starts-with(./sm:loc,'http://docs.openstack.org/draft')]"/>
<xsl:template match="sm:url[starts-with(./sm:loc,'http://docs.openstack.org/folsom')]"/>
</xsl:stylesheet>
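The same pruning can be expressed with the standard library; a rough Python equivalent of the XSLT above, using xml.etree.ElementTree (illustrative only, not part of the original tooling):

```python
import xml.etree.ElementTree as ET

SM = 'http://www.sitemaps.org/schemas/sitemap/0.9'
# Prefixes matched by the three discard templates in the stylesheet.
DENY = tuple('http://docs.openstack.org/%s' % p
             for p in ('trunk', 'draft', 'folsom'))


def prune(xml_text):
    """Drop <url> entries whose <loc> starts with a denied prefix."""
    root = ET.fromstring(xml_text)
    for url in list(root.findall('{%s}url' % SM)):
        loc = url.findtext('{%s}loc' % SM, default='')
        if loc.startswith(DENY):
            root.remove(url)
    return root
```

Like the identity-plus-discard pattern in the stylesheet, everything is kept except the matched url elements.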


@ -1,18 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
# Hacking already pins down pep8, pyflakes and flake8
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0
bashate>=0.2 # Apache-2.0
doc8 # Apache-2.0
pylint==1.7.1 # GPLv2
reno!=2.3.1,>=1.8.0 # Apache-2.0
openstackdocstheme>=1.11.0 # Apache-2.0
testrepository>=0.0.18 # Apache-2.0/BSD
# mock object framework
mock>=2.0 # BSD



@ -1,37 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from os_doc_tools import index
import unittest
class TestGenerateIndex(unittest.TestCase):
def test_dir_created(self):
path = 'path'
with mock.patch.object(index, 'open'):
with mock.patch.object(index.os, 'mkdir') as mock_mkdir:
index.generate_index_file(path)
self.assertTrue(mock_mkdir.called)
def test_dir_not_created_when_exists(self):
path = 'path'
with mock.patch.object(index, 'open'):
with mock.patch.object(index.os, 'mkdir') as mock_mkdir:
with mock.patch.object(index.os.path, 'isdir',
return_value=True):
index.generate_index_file(path)
self.assertFalse(mock_mkdir.called)
if __name__ == '__main__':
unittest.main()


@ -1,98 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from os_doc_tools import jsoncheck
import unittest
class MockOpen(object):
def read(self):
return "raw"
def write(self):
return True
class TestFileFunctions(unittest.TestCase):
def test_indent_note(self):
note = "Hello\nWorld"
with mock.patch.object(jsoncheck.textwrap, 'fill') as mock_fill:
mock_fill.return_value = "Hello World"
jsoncheck._indent_note(note)
mock_fill.assert_any_call('Hello', initial_indent=' ',
subsequent_indent=' ',
width=80)
mock_fill.assert_any_call('World', initial_indent=' ',
subsequent_indent=' ',
width=80)
def test_get_demjson_diagnostics(self):
raw = "raw"
with mock.patch.object(jsoncheck.demjson, 'decode', return_value=True):
errstr = jsoncheck._get_demjson_diagnostics(raw)
self.assertTrue(errstr is None)
with mock.patch.object(jsoncheck.demjson, 'decode') as mock_decode:
mock_decode.side_effect = jsoncheck.demjson.JSONError(raw)
errstr = jsoncheck._get_demjson_diagnostics(raw)
expected_error_str = " Error: raw"
self.assertEqual(errstr, expected_error_str)
def test_parse_json(self):
raw = "raw"
with mock.patch.object(jsoncheck.json, 'loads',
return_value="Success"):
parsed = jsoncheck._parse_json(raw)
self.assertEqual(parsed, "Success")
with mock.patch.object(jsoncheck.json, 'loads') as mock_loads:
mock_loads.side_effect = ValueError()
with self.assertRaises(jsoncheck.ParserException):
parsed = jsoncheck._parse_json(raw)
def test_format_parsed_json(self):
with mock.patch.object(jsoncheck.json, 'dumps') as mock_dumps:
mock_dumps.return_value = "Success"
returned_value = jsoncheck._format_parsed_json('raw')
self.assertEqual(returned_value, "Success\n")
self.assertTrue(mock_dumps.called)
def test_process_file(self):
with mock.patch.object(jsoncheck, 'open', return_value=MockOpen()):
with mock.patch.object(jsoncheck, '_parse_json') as mock_parse:
mock_parse.side_effect = jsoncheck.ParserException
with self.assertRaises(ValueError):
jsoncheck._process_file('path')
with mock.patch.object(jsoncheck, 'open', return_value=MockOpen()):
with mock.patch.object(jsoncheck, '_parse_json',
return_value="Success"):
with mock.patch.object(jsoncheck, '_format_parsed_json',
return_value="not_raw"):
with self.assertRaises(ValueError):
jsoncheck._process_file('path', 'check')
with mock.patch.object(jsoncheck, 'open', return_value=MockOpen()):
with mock.patch.object(jsoncheck, '_parse_json',
return_value="Success"):
with mock.patch.object(jsoncheck, '_format_parsed_json',
return_value="not_raw"):
with self.assertRaises(ValueError):
jsoncheck._process_file('path', 'formatting')
if __name__ == '__main__':
unittest.main()


@ -1,197 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from sitemap.generator import pipelines
import unittest
class TestSitemapItemExporter(unittest.TestCase):
def test_start_exporting(self):
output = mock.MagicMock()
item_exporter = pipelines.SitemapItemExporter(output)
with mock.patch.object(item_exporter.xg, 'startDocument',
return_value=None) as mock_start_document:
with mock.patch.object(item_exporter.xg, 'startElement',
return_value=None) as mock_start_element:
item_exporter.start_exporting()
self.assertTrue(mock_start_document.called)
self.assertTrue(mock_start_element.called)
class TestIgnoreDuplicateUrls(unittest.TestCase):
def setUp(self):
self.ignore_urls = pipelines.IgnoreDuplicateUrls()
def test_set_is_set_at_init(self):
self.assertIsInstance(self.ignore_urls.processed, set)
def test_set_is_empty_at_init(self):
self.assertEqual(len(self.ignore_urls.processed), 0)
def test_duplicate_url(self):
self.ignore_urls.processed.add('url')
item = {'loc': 'url'}
spider = mock.MagicMock()
with self.assertRaises(pipelines.scrapy.exceptions.DropItem):
self.ignore_urls.process_item(item, spider)
def test_url_added_to_processed(self):
self.assertFalse('url' in self.ignore_urls.processed)
item = {'loc': 'url'}
spider = mock.MagicMock()
self.ignore_urls.process_item(item, spider)
self.assertTrue('url' in self.ignore_urls.processed)
def test_item_is_returned(self):
item = {'loc': 'url'}
spider = mock.MagicMock()
returned_item = self.ignore_urls.process_item(item, spider)
self.assertEqual(item, returned_item)
class TestExportSitemap(unittest.TestCase):
def setUp(self):
self.export_sitemap = pipelines.ExportSitemap()
self.spider = mock.MagicMock()
def test_variables_set_at_init(self):
self.assertIsInstance(self.export_sitemap.files, dict)
self.assertTrue(self.export_sitemap.exporter is None)
def test_spider_opened_calls_open(self):
with mock.patch.object(pipelines, 'open',
return_value=None) as mocked_open:
with mock.patch.object(pipelines, 'SitemapItemExporter'):
self.export_sitemap.spider_opened(self.spider)
self.assertTrue(mocked_open.called)
def test_spider_opened_assigns_spider(self):
prev_len = len(self.export_sitemap.files)
with mock.patch.object(pipelines, 'open', return_value=None):
with mock.patch.object(pipelines, 'SitemapItemExporter'):
self.export_sitemap.spider_opened(self.spider)
after_len = len(self.export_sitemap.files)
self.assertTrue(after_len - prev_len, 1)
def test_spider_opened_instantiates_exporter(self):
with mock.patch.object(pipelines, 'open', return_value=None):
with mock.patch.object(pipelines,
'SitemapItemExporter') as mocked_exporter:
self.export_sitemap.spider_opened(self.spider)
self.assertTrue(mocked_exporter.called)
def test_spider_opened_exporter_starts_exporting(self):
with mock.patch.object(pipelines, 'open', return_value=None):
with mock.patch.object(pipelines.SitemapItemExporter,
'start_exporting') as mocked_start:
self.export_sitemap.spider_opened(self.spider)
self.assertTrue(mocked_start.called)
def test_spider_closed_calls_finish(self):
self.export_sitemap.exporter = mock.MagicMock()
self.export_sitemap.exporter.finish_exporting = mock.MagicMock()
self.export_sitemap.files[self.spider] = mock.MagicMock()
with mock.patch.object(pipelines, 'lxml'):
with mock.patch.object(pipelines, 'open'):
self.export_sitemap.spider_closed(self.spider)
self.assertTrue(self.export_sitemap.exporter.finish_exporting.called)
def test_spider_closed_pops_spider(self):
self.export_sitemap.exporter = mock.MagicMock()
self.export_sitemap.files[self.spider] = mock.MagicMock()
self.assertTrue(self.spider in self.export_sitemap.files)
with mock.patch.object(pipelines, 'lxml'):
with mock.patch.object(pipelines, 'open'):
self.export_sitemap.spider_closed(self.spider)
self.assertFalse(self.spider in self.export_sitemap.files)
def test_spider_closed_parses_with_lxml(self):
self.export_sitemap.exporter = mock.MagicMock()
self.export_sitemap.exporter.finish_exporting = mock.MagicMock()
self.export_sitemap.files[self.spider] = mock.MagicMock()
with mock.patch.object(pipelines.lxml, 'etree'):
with mock.patch.object(pipelines.lxml.etree,
'parse') as mocked_lxml_parse:
with mock.patch.object(pipelines, 'open'):
self.export_sitemap.spider_closed(self.spider)
self.assertTrue(mocked_lxml_parse.called)
def test_spider_closed_opens_xml_files(self):
self.export_sitemap.exporter = mock.MagicMock()
self.export_sitemap.exporter.finish_exporting = mock.MagicMock()
self.export_sitemap.files[self.spider] = mock.MagicMock()
with mock.patch.object(pipelines, 'lxml'):
with mock.patch.object(pipelines, 'open') as mocked_open:
self.export_sitemap.spider_closed(self.spider)
self.assertTrue(mocked_open.called)
def test_spider_closed_writes_tree(self):
self.export_sitemap.exporter = mock.MagicMock()
self.export_sitemap.exporter.finish_exporting = mock.MagicMock()
self.export_sitemap.files[self.spider] = mock.MagicMock()
with mock.patch.object(pipelines.lxml, 'etree'):
with mock.patch.object(pipelines.lxml.etree,
'tostring') as mocked_lxml_tostring:
with mock.patch.object(pipelines, 'open'):
self.export_sitemap.spider_closed(self.spider)
self.assertTrue(mocked_lxml_tostring.called)
def test_process_item_exports_item(self):
item = spider = self.export_sitemap.exporter = mock.MagicMock()
self.export_sitemap.exporter.export_item = mock.MagicMock()
self.export_sitemap.process_item(item, spider)
self.assertTrue(self.export_sitemap.exporter.export_item.called)
def test_process_item_returns_item(self):
spider = self.export_sitemap.exporter = mock.MagicMock()
item = {'random': 'item'}
returned_item = self.export_sitemap.process_item(item, spider)
self.assertEqual(item, returned_item)
def test_from_crawler_exists(self):
attr_exists = hasattr(pipelines.ExportSitemap, 'from_crawler')
attr_callable = callable(getattr(pipelines.ExportSitemap,
'from_crawler'))
self.assertTrue(attr_exists and attr_callable)
def test_from_crawler_assigns_pipeline(self):
crawler = mock.MagicMock()
pipelines.ExportSitemap.from_crawler(crawler)
# TODO: decide what to assert here; from_crawler should return a
# pipeline instance wired to the crawler's signals.
if __name__ == '__main__':
unittest.main()


@ -1,132 +0,0 @@
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import scrapy
from sitemap.generator.spiders import sitemap_file
import unittest
class TestSitemapItem(unittest.TestCase):
def test_class_type(self):
self.assertTrue(type(sitemap_file.SitemapItem) is scrapy.item.ItemMeta)
def test_class_supports_fields(self):
with mock.patch.object(scrapy.item, 'Field'):
a = sitemap_file.SitemapItem()
supported_fields = ['loc', 'lastmod', 'priority', 'changefreq']
for field in supported_fields:
a[field] = field
not_supported_fields = ['some', 'random', 'fields']
for field in not_supported_fields:
with self.assertRaises(KeyError):
a[field] = field
class TestSitemapSpider(unittest.TestCase):
def setUp(self):
self.spider = sitemap_file.SitemapSpider()
def test_set_vars_on_init(self):
domain = 'docs.openstack.org'
self.assertEqual(self.spider.domain, domain)
self.assertEqual(self.spider.allowed_domains, [domain])
self.assertEqual(self.spider.start_urls, ['http://%s' % domain])
def test_start_urls_get_appended(self):
urls = 'new.openstack.org,old.openstack.org'
urls_len = len(urls.split(','))
spider_len = len(self.spider.start_urls)
spider_with_urls = sitemap_file.SitemapSpider(urls=urls)
spider_with_urls_len = len(spider_with_urls.start_urls)
self.assertEqual(spider_with_urls_len, (urls_len + spider_len))
def test_parse_items_inits_sitemap(self):
response = mock.MagicMock()
with mock.patch.object(sitemap_file,
'SitemapItem') as mocked_sitemap_item:
with mock.patch.object(sitemap_file.urlparse,
'urlsplit'):
with mock.patch.object(sitemap_file, 'time'):
self.spider.parse_item(response)
self.assertTrue(mocked_sitemap_item.called)
def test_parse_items_gets_path(self):
response = mock.MagicMock()
with mock.patch.object(sitemap_file, 'SitemapItem'):
with mock.patch.object(sitemap_file.urlparse,
'urlsplit') as mocked_urlsplit:
with mock.patch.object(sitemap_file, 'time'):
self.spider.parse_item(response)
self.assertTrue(mocked_urlsplit.called)
def test_parse_items_low_priority_weekly_freq(self):
response = mock.MagicMock()
path = sitemap_file.urlparse.SplitResult(
scheme='https',
netloc='docs.openstack.com',
path='/mitaka',
query='',
fragment=''
)
with mock.patch.object(sitemap_file.urlparse, 'urlsplit',
return_value=path):
with mock.patch.object(sitemap_file, 'time'):
returned_item = self.spider.parse_item(response)
self.assertEqual('0.5', returned_item['priority'])
self.assertEqual('weekly', returned_item['changefreq'])
def test_parse_items_high_priority_daily_freq(self):
response = mock.MagicMock()
path = sitemap_file.urlparse.SplitResult(
scheme='https',
netloc='docs.openstack.com',
path='/ocata',
query='',
fragment=''
)
with mock.patch.object(sitemap_file.urlparse, 'urlsplit',
return_value=path):
with mock.patch.object(sitemap_file, 'time'):
returned_item = self.spider.parse_item(response)
self.assertEqual('1.0', returned_item['priority'])
self.assertEqual('daily', returned_item['changefreq'])
def test_parse_returns_populated_item(self):
response = mock.MagicMock()
path = sitemap_file.urlparse.SplitResult(
scheme='https',
netloc='docs.openstack.com',
path='/ocata',
query='',
fragment=''
)
with mock.patch.object(sitemap_file.urlparse, 'urlsplit',
return_value=path):
with mock.patch.object(sitemap_file, 'time'):
returned_item = self.spider.parse_item(response)
self.assertEqual(4, len(returned_item))
if __name__ == '__main__':
unittest.main()


@ -1,30 +0,0 @@
#!/usr/bin/env bash
# Client constraint file contains this client version pin that is in conflict
# with installing the client from source. We should remove the version pin in
# the constraints file before applying it for from-source installation.
CONSTRAINTS_FILE=$1
shift 1
set -e
# NOTE(tonyb): Place this in the tox environment's log dir so it will get
# published to logs.openstack.org for easy debugging.
localfile="$VIRTUAL_ENV/log/upper-constraints.txt"
if [[ $CONSTRAINTS_FILE != http* ]]; then
CONSTRAINTS_FILE=file://$CONSTRAINTS_FILE
fi
# NOTE(tonyb): need to add curl to bindep.txt if the project supports bindep
curl $CONSTRAINTS_FILE --insecure --progress-bar --output $localfile
pip install -c$localfile openstack-requirements
# This is the main purpose of the script: Allow local installation of
# the current repo. It is listed in constraints file and thus any
# install will be constrained and we need to unconstrain it.
edit-constraints $localfile -- $CLIENT_NAME
pip install -c$localfile -U $*
exit $?
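edit-constraints comes from the openstack-requirements package installed just above; its effect of unconstraining one project can be sketched in plain Python (a hypothetical helper, not the real tool, which also handles extras and markers):

```python
def unconstrain(constraints_text, name):
    """Drop the line pinning `name` from a pip constraints file's text."""
    kept = []
    for line in constraints_text.splitlines():
        requirement = line.split('#')[0].strip()
        # Upper-constraints lines look like "name===1.2.3"; compare names only.
        pkg = requirement.split('===')[0].split('==')[0].strip()
        if pkg.lower() == name.lower():
            continue
        kept.append(line)
    return '\n'.join(kept)
```

With the pin removed, `pip install -c $localfile -U .` can install the checked-out repo itself without the constraints file forcing the released version back in.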

tox.ini

@ -1,64 +0,0 @@
[tox]
minversion = 2.0
envlist = py3,py27,pep8
skipsdist = True
[testenv]
usedevelop = True
install_command = {toxinidir}/tools/tox_install.sh {env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
CLIENT_NAME=openstack-doc-tools
# Install also sitemap scraping tool, not installed by default
# therefore not in requirements file
deps = scrapy>=1.0.0
-r{toxinidir}/test-requirements.txt
-r{toxinidir}/requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'
[testenv:pep8]
commands =
flake8
# Run doc8 to check .rst and .txt files.
# HACKING.rst is the only file that is not referenced from
# doc/source, so add it explicitly.
doc8 -e txt -e rst doc/source/ HACKING.rst
# Run bashate during pep8 runs to ensure violations are caught by
# the check and gate queues.
bashate autogenerate_config_docs/autohelp-wrapper \
bin/doc-tools-check-languages \
cleanup/remove_trailing_whitespaces.sh
[testenv:pylint]
commands = pylint os_doc_tools cleanup sitemap
[testenv:releasenotes]
commands = sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html
[testenv:sitemap]
# commands = functional test command goes here
[testenv:venv]
commands = {posargs}
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:bindep]
# Do not install any requirements. We want this to be fast and work even if
# system dependencies are missing, since it's used to tell you what system
# dependencies are missing! This also means that bindep must be installed
# separately, outside of the requirements files, and develop mode disabled
# explicitly to avoid unnecessarily installing the checked-out repo too (this
# further relies on "tox.skipsdist = True" above).
deps = bindep
commands = bindep test
usedevelop = False
[flake8]
show-source = True
builtins = _
exclude=.venv,.git,.tox,dist,*lib/python*,*egg,build,*autogenerate_config_docs/venv,*autogenerate_config_docs/sources,doc/source/conf.py
# 28 is currently the most complex thing we have
max-complexity=29
ignore = H101