Retire aeromancer

This hasn't been touched in four years.

Change-Id: I2c633e760b1bcd94b2f591e4fbafbbb15a6d127c
Signed-off-by: Stephen Finucane <stephenfin@redhat.com>
Depends-On: I089397a3d945b762df86a0dd144644e9926acdea
Stephen Finucane 2019-05-03 15:14:11 -06:00
parent 48273e5993
commit 43876c6388
68 changed files with 7 additions and 2426 deletions

@@ -1,7 +0,0 @@
[run]
branch = True
source = aeromancer
omit = aeromancer/tests/*,aeromancer/openstack/*
[report]
ignore-errors = True

.gitignore
@@ -1,53 +0,0 @@
*.py[cod]
# C extensions
*.so
# Packages
*.egg
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
.coverage
.tox
nosetests.xml
.testrepository
.venv
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?

@@ -1,3 +0,0 @@
# Format is:
# <preferred e-mail> <other e-mail 1>
# <preferred e-mail> <other e-mail 2>

@@ -1,7 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list

@@ -1,17 +0,0 @@
If you would like to contribute to the development of OpenStack,
you must follow the steps in the "If you're a developer, start here"
section of this page:
http://wiki.openstack.org/HowToContribute
Once those steps have been completed, changes to OpenStack
should be submitted for review via the Gerrit tool, following
the workflow documented at:
http://wiki.openstack.org/GerritWorkflow
Pull requests submitted through GitHub will be ignored.
Bugs should be filed on Launchpad, not GitHub:
https://bugs.launchpad.net/aeromancer

@@ -1,4 +0,0 @@
aeromancer Style Commandments
===============================================
Read the OpenStack Style Commandments http://docs.openstack.org/developer/hacking/

LICENSE
@@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

@@ -1,15 +1,9 @@
-============
-aeromancer
-============
+This project is no longer maintained.
-OpenStack Source Explorer
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
-* Free software: Apache license
-* Documentation: http://docs.openstack.org/developer/aeromancer
-* Source: http://git.openstack.org/cgit/openstack/aeromancer
-* Bugs: http://bugs.launchpad.net/aeromancer
-Features
---------
-* TODO
+For any further questions, please email
+openstack-discuss@lists.openstack.org.

@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo(
'aeromancer').version_string()

@@ -1,81 +0,0 @@
import logging
import os
import sys
from cliff.app import App
from cliff.commandmanager import CommandManager
import pkg_resources
from sqlalchemy.orm import sessionmaker
from aeromancer.db import connect, migrations
class Aeromancer(App):
log = logging.getLogger(__name__)
# CONSOLE_MESSAGE_FORMAT = \
# '[%(asctime)s] %(levelname)-8s %(name)s %(message)s'
def __init__(self):
dist = pkg_resources.get_distribution('aeromancer')
super(Aeromancer, self).__init__(
description='OpenStack source divination',
version=dist.version,
command_manager=CommandManager('aeromancer.cli'),
)
def build_option_parser(self, description, version,
argparse_kwargs=None):
parser = super(Aeromancer, self).build_option_parser(
description,
version,
argparse_kwargs,
)
default_repo_root = os.environ.get('AEROMANCER_REPOS', '~/repos')
parser.add_argument(
'--repo-root',
default=os.path.expanduser(default_repo_root),
help=('directory where repositories are checked out; '
'set with AEROMANCER_REPOS environment variable; '
'defaults to %(default)s'),
)
return parser
def configure_logging(self):
super(Aeromancer, self).configure_logging()
if self.options.verbose_level < 2:
# Quiet the logger that talks about updating the database.
alembic_logger = logging.getLogger('alembic.migration')
alembic_logger.setLevel(logging.WARN)
return
def initialize_app(self, argv):
# Make sure our application directory exists, so we have a
# place to put the database and any other configuration files.
self.app_dir = os.path.expanduser('~/.aeromancer')
if not os.path.exists(self.app_dir):
os.mkdir(self.app_dir)
self.log.debug('updating database')
migrations.run_migrations()
self.engine = connect.connect()
self._session_maker = sessionmaker(bind=self.engine)
def get_db_session(self):
return self._session_maker()
# def prepare_to_run_command(self, cmd):
# self.log.debug('prepare_to_run_command %s', cmd.__class__.__name__)
# def clean_up(self, cmd, result, err):
# self.log.debug('clean_up %s', cmd.__class__.__name__)
# if err:
# self.log.debug('got an error: %s', err)
def main(argv=sys.argv[1:]):
myapp = Aeromancer()
return myapp.run(argv)
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))
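
The app resolves its subcommands through CommandManager('aeromancer.cli'), i.e. from setuptools entry points in that namespace. A minimal sketch of what such a command could look like; the Hello class and its entry-point name are hypothetical, invented for illustration:

# Hypothetical cliff command; a real one would be registered in setup.cfg under
#   aeromancer.cli =
#       hello = aeromancer.cli.hello:Hello
import logging

from cliff.command import Command


class Hello(Command):
    """Print a greeting (illustration only)."""

    log = logging.getLogger(__name__)

    def get_parser(self, prog_name):
        parser = super(Hello, self).get_parser(prog_name)
        parser.add_argument('name', help='who to greet')
        return parser

    def take_action(self, parsed_args):
        # self.app is the Aeromancer instance above, so a real command
        # could call self.app.get_db_session() here.
        print('hello, %s' % parsed_args.name)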

@@ -1,25 +0,0 @@
from __future__ import print_function
import logging
import os
from aeromancer import project
from aeromancer import project_filter
from aeromancer.cli.run import ProjectShellCommandBase
class Grep(ProjectShellCommandBase):
"""Search the contents of files
Accepts most of the arguments of git-grep, unless they conflict
with other arguments to this command.
"""
log = logging.getLogger(__name__)
DEFAULT_SEP = '/'
def _get_command(self, parsed_args):
return ['git', 'grep'] + self._extra

@@ -1,131 +0,0 @@
import logging
import os
from aeromancer.db.models import *
from aeromancer import project
from aeromancer import utils
from cliff.command import Command
from cliff.lister import Lister
class Add(Command):
"(Re)register a project to be scanned"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
parser = super(Add, self).get_parser(prog_name)
parser.add_argument(
'--force',
action='store_true',
default=False,
help='force re-reading the project files',
)
parser.add_argument(
'project',
nargs='+',
default=[],
help=('project directory names under the project root, '
'for example: "stackforge/aeromancer"'),
)
return parser
def take_action(self, parsed_args):
session = self.app.get_db_session()
pm = project.ProjectManager(session)
for project_name in parsed_args.project:
project_path = os.path.join(self.app.options.repo_root,
project_name)
pm.add_or_update(project_name, project_path,
force=parsed_args.force)
session.commit()
class List(Lister):
"""Show the registered projects"""
log = logging.getLogger(__name__)
def take_action(self, parsed_args):
session = self.app.get_db_session()
query = session.query(Project).order_by(Project.name)
return (('Name', 'Path'),
((p.name, p.path) for p in query.all()))
class Rescan(Command):
"Rescan all known projects"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
parser = super(Rescan, self).get_parser(prog_name)
parser.add_argument(
'--force',
action='store_true',
default=False,
help='force re-reading the project files',
)
return parser
def take_action(self, parsed_args):
session = self.app.get_db_session()
query = session.query(Project).order_by(Project.name)
pm = project.ProjectManager(session)
for proj_obj in query.all():
pm.update(proj_obj, force=parsed_args.force)
session.commit()
class Discover(Command):
"Find all projects in the repository root"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
parser = super(Discover, self).get_parser(prog_name)
parser.add_argument(
'--organization', '--org', '-o',
action='append',
default=[],
help=('organization directory names under the project root, '
'for example: "stackforge", defaults to "openstack"'),
)
return parser
def take_action(self, parsed_args):
session = self.app.get_db_session()
pm = project.ProjectManager(session)
orgs = parsed_args.organization
if not orgs:
orgs = ['openstack']
for project_name in project.discover(self.app.options.repo_root, orgs):
full_path = os.path.join(self.app.options.repo_root,
project_name)
pm.add_or_update(project_name, full_path)
session.commit()
class Remove(Command):
"Remove a project from the database"
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
parser = super(Remove, self).get_parser(prog_name)
parser.add_argument(
'project',
nargs='+',
default=[],
help=('project directory names under the project root, '
'for example: "stackforge/aeromancer"'),
)
return parser
def take_action(self, parsed_args):
session = self.app.get_db_session()
pm = project.ProjectManager(session)
for project_name in parsed_args.project:
pm.remove(project_name)
session.commit()

@@ -1,84 +0,0 @@
from __future__ import print_function
import logging
import os
import shlex
from aeromancer import project
from aeromancer import project_filter
from cliff.command import Command
class ArgumentParserWrapper(object):
"""Wrap a regular argument parser to replace
parse_args with parse_known_args.
Cliff calls parse_args() for subcommands, but we want
parse_known_args() so any extra values that look like command
switches will be ignored.
"""
def __init__(self, parser):
self._parser = parser
def parse_args(self, argv):
return self._parser.parse_known_args(argv)
class ProjectShellCommandBase(Command):
"""Run a command for each project"""
log = logging.getLogger(__name__)
DEFAULT_SEP = ''
def get_parser(self, prog_name):
parser = super(ProjectShellCommandBase, self).get_parser(prog_name)
project_filter.ProjectFilter.add_arguments(parser)
parser.add_argument(
'--sep',
action='store',
default=self.DEFAULT_SEP,
help=('separator between project name and command output, '
'defaults to %(default)r'),
)
return ArgumentParserWrapper(parser)
def _show_text_output(self, parsed_args, project, out):
for line in out.decode('utf-8').splitlines():
print(project.name + parsed_args.sep + line)
def _get_command(self, parsed_args):
raise NotImplementedError()
def _show_output(self, parsed_args, proj_obj, out, err):
self._show_text_output(parsed_args, proj_obj, err or out)
def take_action(self, parsed_args_tuple):
# Handle the tuple we'll get from the parser wrapper.
parsed_args, extra = parsed_args_tuple
self._extra = extra
session = self.app.get_db_session()
pm = project.ProjectManager(session)
prj_filt = project_filter.ProjectFilter.from_parsed_args(parsed_args)
command = self._get_command(parsed_args)
results = pm.run(command, prj_filt)
for proj_obj, out, err in results:
self._show_output(parsed_args, proj_obj, out, err)
class Run(ProjectShellCommandBase):
"""Run a command for each project"""
log = logging.getLogger(__name__)
DEFAULT_SEP = ':'
def get_parser(self, prog_name):
parser = super(Run, self).get_parser(prog_name)
return parser
def _get_command(self, parsed_args):
return self._extra
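
The wrapper above swaps parse_args() for parse_known_args() so that unrecognized switches fall through to the shell command being run. A standalone sketch of that argparse behaviour, independent of cliff (the option values are made up):

import argparse

parser = argparse.ArgumentParser(prog='run')
parser.add_argument('--sep', default=':')

# parse_known_args() returns (namespace, leftovers) instead of erroring
# out on options it does not recognize.
parsed, extra = parser.parse_known_args(['--sep', '/', 'git', 'log', '--oneline'])
print(parsed.sep)  # '/'
print(extra)       # ['git', 'log', '--oneline']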

@@ -1,59 +0,0 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# sqlalchemy.url = driver://user:pass@localhost/dbname
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

@@ -1 +0,0 @@
Generic single-database configuration.

@@ -1,72 +0,0 @@
from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
#from logging.config import fileConfig
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
#fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(url=url, target_metadata=target_metadata)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
engine = engine_from_config(
config.get_section(config.config_ini_section),
prefix='sqlalchemy.',
poolclass=pool.NullPool)
connection = engine.connect()
context.configure(
connection=connection,
target_metadata=target_metadata
)
try:
with context.begin_transaction():
context.run_migrations()
finally:
connection.close()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

@@ -1,22 +0,0 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}

@@ -1,29 +0,0 @@
"""add line table
Revision ID: 1fb08a62dd91
Revises: 5123eb59e1bb
Create Date: 2014-10-30 17:52:17.984359
"""
# revision identifiers, used by Alembic.
revision = '1fb08a62dd91'
down_revision = '5123eb59e1bb'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'line',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('file_id', sa.Integer,
sa.ForeignKey('file.id', name='fk_line_file_id')),
sa.Column('number', sa.Integer, nullable=False),
sa.Column('content', sa.String()),
)
def downgrade():
op.drop_table('line')

@@ -1,22 +0,0 @@
"""track file hash
Revision ID: 22e0aa22ab8e
Revises: 1fb08a62dd91
Create Date: 2014-11-13 00:32:24.909035
"""
# revision identifiers, used by Alembic.
revision = '22e0aa22ab8e'
down_revision = '1fb08a62dd91'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.add_column('file', sa.Column('sha', sa.String))
def downgrade():
op.drop_column('file', 'sha')

@@ -1,29 +0,0 @@
"""add file table
Revision ID: 5123eb59e1bb
Revises: 575c6e7ef2ea
Create Date: 2014-10-30 12:54:15.087307
"""
# revision identifiers, used by Alembic.
revision = '5123eb59e1bb'
down_revision = '575c6e7ef2ea'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'file',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('project_id', sa.Integer,
sa.ForeignKey('project.id', name='fk_file_project_id')),
sa.Column('name', sa.String(), nullable=False),
sa.Column('path', sa.String()),
)
def downgrade():
op.drop_table('file')

@@ -1,27 +0,0 @@
"""create project table
Revision ID: 575c6e7ef2ea
Revises: None
Create Date: 2014-10-27 22:32:51.240215
"""
# revision identifiers, used by Alembic.
revision = '575c6e7ef2ea'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'project',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('name', sa.String(), nullable=False, unique=True),
sa.Column('path', sa.String()),
)
def downgrade():
op.drop_table('project')

@@ -1,24 +0,0 @@
"""add indexes
Revision ID: a3d002d161a
Revises: 22e0aa22ab8e
Create Date: 2014-11-24 14:24:29.824147
"""
# revision identifiers, used by Alembic.
revision = 'a3d002d161a'
down_revision = '22e0aa22ab8e'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_index('file_project_idx', 'file', ['project_id'])
op.create_index('line_file_idx', 'line', ['file_id'])
def downgrade():
op.drop_index('line_file_idx', 'line')
op.drop_index('file_project_idx', 'file')

@@ -1,28 +0,0 @@
import os
import re
from sqlalchemy import create_engine
from sqlalchemy import event
def get_url():
"""Return the database URL"""
db_file_path = os.path.expanduser('~/.aeromancer/aeromancer.db')
return "sqlite:///%s" % db_file_path
def _re_fn(expr, item):
"Registered as the regexp function with sqlite."
reg = re.compile(expr, re.I)
return reg.search(item) is not None
def connect():
"""Return a database engine"""
engine = create_engine(get_url())
@event.listens_for(engine, "begin")
def do_begin(conn):
conn.connection.create_function('regexp', 2, _re_fn)
return engine
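
connect() registers a Python callable as SQLite's REGEXP function each time a transaction begins, which is what lets later queries use the REGEXP operator against stored lines. A self-contained sketch of the same trick against a throwaway in-memory database (the table and rows here are invented for illustration):

import re

from sqlalchemy import create_engine, event, text


def _re_fn(expr, item):
    return re.search(expr, item, re.I) is not None


engine = create_engine('sqlite://')  # in-memory database


@event.listens_for(engine, 'begin')
def do_begin(conn):
    # conn.connection is the underlying sqlite3 connection.
    conn.connection.create_function('regexp', 2, _re_fn)


with engine.begin() as conn:
    conn.execute(text('CREATE TABLE project (name TEXT)'))
    conn.execute(text("INSERT INTO project VALUES ('openstack/nova'), ('openstack/glance')"))
    rows = conn.execute(text("SELECT name FROM project WHERE name REGEXP 'nov.'")).fetchall()
    print(rows)  # [('openstack/nova',)]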

@@ -1,58 +0,0 @@
import logging
from alembic import config
from alembic import command
from alembic import environment
from alembic import script
from sqlalchemy import engine_from_config, pool
from aeromancer.db import connect
from aeromancer import filehandler
LOG = logging.getLogger(__name__)
def _run_migrations_in_location(location):
LOG.debug('loading migrations from %s', location)
url = connect.get_url()
# We need a unique version_table for each set of migrations.
version_table = location.replace('.', '_') + '_versions'
# Modified version of alembic.command.upgrade().
# command.upgrade(cfg, 'head')
revision = 'head'
cfg = config.Config()
cfg.set_main_option('script_location', location + ':alembic')
cfg.set_main_option("sqlalchemy.url", url)
script_dir = script.ScriptDirectory.from_config(cfg)
def upgrade(rev, context):
return script_dir._upgrade_revs(revision, rev)
with environment.EnvironmentContext(
cfg,
script_dir,
fn=upgrade,
as_sql=False,
starting_rev=None,
destination_rev=revision,
tag=None,
version_table=version_table,
):
script_dir.run_env()
def run_migrations():
_run_migrations_in_location("aeromancer.db")
file_handlers = filehandler.load_handlers()
for fh in file_handlers:
_run_migrations_in_location(fh.entry_point.module_name)
if __name__ == '__main__':
run_migrations()
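
Each migration location gets its own Alembic version table so plugin migrations do not clobber the core ones; the table name is derived from the dotted package path. A tiny illustration of that naming rule (the second location is only an example of a plugin package):

# Mirrors the version_table derivation in _run_migrations_in_location().
def version_table_for(location):
    return location.replace('.', '_') + '_versions'


print(version_table_for('aeromancer.db'))            # aeromancer_db_versions
print(version_table_for('aeromancer.requirements'))  # aeromancer_requirements_versions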

@@ -1,39 +0,0 @@
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, backref
Base = declarative_base()
class Project(Base):
__tablename__ = 'project'
id = Column(Integer, primary_key=True)
name = Column(String, unique=True, nullable=False)
path = Column(String)
files = relationship('File',
backref='project',
cascade="all, delete, delete-orphan")
class File(Base):
__tablename__ = 'file'
id = Column(Integer, primary_key=True)
project_id = Column(Integer, ForeignKey('project.id'))
name = Column(String, nullable=False)
path = Column(String)
sha = Column(String)
lines = relationship('Line',
backref='file',
cascade="all, delete, delete-orphan")
@property
def project_path(self):
return '%s/%s' % (self.project.name, self.name)
class Line(Base):
__tablename__ = 'line'
id = Column(Integer, primary_key=True)
file_id = Column(Integer, ForeignKey('file.id'))
number = Column(Integer, nullable=False)
content = Column(String)
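
The three models form a Project → File → Line hierarchy with delete-orphan cascades, so removing a project or file takes its children with it. A quick sketch exercising the schema against a throwaway in-memory database (the project name and content are invented; assumes the aeromancer package is importable):

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

from aeromancer.db import models

engine = create_engine('sqlite://')
models.Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

proj = models.Project(name='openstack/example', path='/tmp/example')
readme = models.File(project=proj, name='README.rst',
                     path='/tmp/example/README.rst', sha='abc123')
readme.lines.append(models.Line(number=1, content='hello world'))
session.add(proj)
session.commit()

print(readme.project_path)                 # openstack/example/README.rst
print(session.query(models.Line).count())  # 1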

@@ -1,13 +0,0 @@
from stevedore import extension
def _load_error_handler(*args, **kwds):
raise
def load_handlers():
return extension.ExtensionManager(
'aeromancer.filehandler',
invoke_on_load=True,
on_load_failure_callback=_load_error_handler,
)

@@ -1,24 +0,0 @@
import abc
import fnmatch
import os
class FileHandler(object):
__metaclass__ = abc.ABCMeta
INTERESTING_PATTERNS = []
def supports_file(self, file_obj):
"""Does this plugin want to process the file?
"""
base_filename = os.path.basename(file_obj.path)
return any(fnmatch.fnmatch(base_filename, ip)
for ip in self.INTERESTING_PATTERNS)
@abc.abstractmethod
def process_file(self, session, file_obj):
return
@abc.abstractmethod
def delete_data_for_file(self, session, file_obj):
return
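
FileHandler is the plugin contract: declare the filename patterns you care about, then implement process_file() and delete_data_for_file(). A skeletal sketch of a conforming handler; the class, pattern, and behaviour are invented for illustration (the real handlers are the requirements and Oslo ones elsewhere in this tree):

from aeromancer.filehandler import base


class TodoHandler(base.FileHandler):
    """Hypothetical handler that would index TODO files."""

    INTERESTING_PATTERNS = ['TODO*']

    def process_file(self, session, file_obj):
        # file_obj is an aeromancer.db.models.File whose .lines are loaded;
        # a real handler would add rows to its own table here.
        for line in file_obj.lines:
            pass

    def delete_data_for_file(self, session, file_obj):
        # Called before the file's cached lines are removed.
        pass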

@@ -1,59 +0,0 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# sqlalchemy.url = driver://user:pass@localhost/dbname
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

@@ -1 +0,0 @@
Generic single-database configuration.

@@ -1,72 +0,0 @@
from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
#fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(url=url, target_metadata=target_metadata)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
engine = engine_from_config(
config.get_section(config.config_ini_section),
prefix='sqlalchemy.',
poolclass=pool.NullPool)
connection = engine.connect()
context.configure(
connection=connection,
target_metadata=target_metadata
)
try:
with context.begin_transaction():
context.run_migrations()
finally:
connection.close()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

@@ -1,22 +0,0 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}

@@ -1,30 +0,0 @@
"""add oslo module table
Revision ID: 28d0cdc12de0
Revises: None
Create Date: 2014-11-12 20:38:44.826444
"""
# revision identifiers, used by Alembic.
revision = '28d0cdc12de0'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'oslo_module',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('line_id', sa.Integer,
sa.ForeignKey('line.id', name='fk_oslo_module_line_id')),
sa.Column('project_id', sa.Integer,
sa.ForeignKey('project.id', name='fk_oslo_module_project_id')),
sa.Column('name', sa.String()),
)
def downgrade():
op.drop_table('oslo_module')

@@ -1,58 +0,0 @@
import logging
import os
from aeromancer.db import models
from aeromancer.oslo import models as oslo_models
from aeromancer import project
from aeromancer import utils
from cliff.lister import Lister
from sqlalchemy import distinct
from sqlalchemy.orm import aliased
class List(Lister):
"""List the Oslo modules used by a project"""
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
parser = super(List, self).get_parser(prog_name)
parser.add_argument(
'project',
help=('project directory name under the project root, '
'for example: "stackforge/aeromancer"'),
)
return parser
def take_action(self, parsed_args):
session = self.app.get_db_session()
query = session.query(oslo_models.Module).join(models.Project).filter(
models.Project.name == parsed_args.project
).order_by(oslo_models.Module.name)
return (('Name',),
((r.name,)
for r in query.all()))
class Uses(Lister):
"""List the projects that use the Oslo module"""
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
parser = super(Uses, self).get_parser(prog_name)
parser.add_argument(
'module',
help='the module name',
)
return parser
def take_action(self, parsed_args):
session = self.app.get_db_session()
query = session.query(oslo_models.Module).join(models.Project).filter(
oslo_models.Module.name == parsed_args.module
).order_by(models.Project.name)
return (('Project',),
((r.project.name,) for r in query.all()))

@@ -1,51 +0,0 @@
import logging
import pkg_resources
from aeromancer.db import models as models
from aeromancer.filehandler import base
from aeromancer.oslo import models as oslo_models
LOG = logging.getLogger(__name__)
def read_sync_file(file_obj):
for line in file_obj.lines:
text = line.content.strip()
if not text or text.startswith('#'):
continue
if not text.startswith('module'):
continue
text = text[len('module'):]
text = text.lstrip('= ')
modules = text.split(',')
for m in modules:
yield m.strip(), line
class OsloSyncHandler(base.FileHandler):
INTERESTING_PATTERNS = [
'openstack-common.conf',
]
def process_file(self, session, file_obj):
LOG.info('loading Oslo settings from %s', file_obj.project_path)
parent_project = file_obj.project
for module_name, line in read_sync_file(file_obj):
LOG.debug('module: %s', module_name)
new_r = oslo_models.Module(
name=module_name,
line=line,
project=parent_project,
)
session.add(new_r)
def delete_data_for_file(self, session, file_obj):
LOG.debug('deleting Oslo modules from %r', file_obj.path)
query = session.query(oslo_models.Module).join(models.Line).filter(
models.Line.file_id == file_obj.id
)
for r in query.all():
session.delete(r)
return
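
read_sync_file() pulls module names out of an openstack-common.conf by taking lines that start with "module" and splitting the comma-separated value. A small worked example of that rule over plain strings (the sample file contents are invented):

def parse_modules(lines):
    # Same parsing rule as read_sync_file(), applied to plain strings.
    for line in lines:
        text = line.strip()
        if not text or text.startswith('#'):
            continue
        if not text.startswith('module'):
            continue
        text = text[len('module'):].lstrip('= ')
        for module in text.split(','):
            yield module.strip()


sample = [
    '[DEFAULT]',
    '# The list of modules to copy from oslo-incubator',
    'module=log,importutils',
    'module = jsonutils',
]
print(list(parse_modules(sample)))  # ['log', 'importutils', 'jsonutils']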

@@ -1,20 +0,0 @@
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, backref
from aeromancer.db import models
class Module(models.Base):
__tablename__ = 'oslo_module'
id = Column(Integer, primary_key=True)
name = Column(String, nullable=False)
line_id = Column(Integer, ForeignKey('line.id'))
line = relationship(
models.Line,
uselist=False,
)
project_id = Column(Integer, ForeignKey('project.id'))
project = relationship(
models.Project,
backref='oslo_modules',
)

@@ -1,222 +0,0 @@
import fnmatch
import glob
import io
import itertools
import logging
import os
import subprocess
from sqlalchemy.orm.exc import NoResultFound
from aeromancer.db.models import Project, File, Line
from aeromancer import filehandler
from aeromancer import utils
LOG = logging.getLogger(__name__)
def discover(repo_root, organizations):
"""Discover project-like directories under the repository root"""
glob_patterns = ['%s/*' % o for o in organizations]
with utils.working_dir(repo_root):
return itertools.ifilter(
lambda x: os.path.isdir(os.path.join(repo_root, x)),
itertools.chain(*(glob.glob(g) for g in glob_patterns))
)
def _find_files_in_project(path):
"""Return a list of the files managed in the project and their sha hash.
Uses 'git ls-files -s'
"""
with utils.working_dir(path):
# Ask git to tell us the sha hash so we can tell if the file
# has changed since we looked at it last.
cmd = subprocess.Popen(['git', 'ls-files', '-z', '-s'],
stdout=subprocess.PIPE)
output = cmd.communicate()[0]
entries = output.split('\0')
for e in entries:
if not e:
continue
metadata, ignore, filename = e.partition('\t')
sha = metadata.split(' ')[1]
yield (filename, sha)
class ProjectManager(object):
_DO_NOT_READ = [
'*.doc', '*.docx', '*.graffle', '*.odp', '*.pptx', '*.vsd', '*.xsd',
'*.gif', '*.ico', '*.jpeg', '*.jpg', '*.png', '*.tiff', '*.JPG',
'*.gpg',
'*.jar', # Why do we check in jar files?!
'*.swf', '*.eot',
'*.ttf', '*.woff', # webfont; horizon
'*.xml',
'*.gz', '*.zip', '*.z',
'*.mo', '*.db',
]
def __init__(self, session):
self.file_handlers = filehandler.load_handlers()
self.session = session
def get_project(self, name):
"""Return an existing project, if there is one"""
query = self.session.query(Project).filter(Project.name == name)
try:
return query.one()
except NoResultFound:
return None
def add_or_update(self, name, path, force=False):
"""Create a new project definition or update an existing one"""
proj_obj = self.get_project(name)
if proj_obj:
proj_obj.path = path
LOG.info('updating project %s from %s', name, path)
else:
proj_obj = Project(name=name, path=path)
LOG.info('adding project %s from %s', name, path)
self.session.add(proj_obj)
self.session.flush()
assert proj_obj.id, 'No id for new project'
self.update(proj_obj, force=force)
return proj_obj
def update(self, proj_obj, force=False):
"""Update the settings for an existing project"""
self._update_project_files(proj_obj, force=force)
def remove(self, name):
"""Delete stored data for the named project"""
query = self.session.query(Project).filter(Project.name == name)
try:
proj_obj = query.one()
LOG.info('removing project %s', name)
except NoResultFound:
return
for file_obj in proj_obj.files:
self._remove_plugin_data_for_file(file_obj)
self.session.delete(proj_obj)
def _remove_plugin_data_for_file(self, file_obj):
# We have to explicitly have the handlers delete their data
# because the parent-child relationship of the tables is reversed
# because the plugins define the relationships.
for fh in self.file_handlers:
if fh.obj.supports_file(file_obj):
LOG.debug('removing %s plugin data for %s', fh.name, file_obj.name)
fh.obj.delete_data_for_file(self.session, file_obj)
def _remove_file_data(self, file_obj, reason='file has changed'):
"""Delete the data associated with the file, including plugin data and
file contents.
"""
LOG.debug('removing cached contents of %s: %s', file_obj.name, reason)
self._remove_plugin_data_for_file(file_obj)
self.session.query(Line).filter(Line.file_id == file_obj.id).delete()
self.session.delete(file_obj)
def _update_project_files(self, proj_obj, force):
"""Update the files stored for each project"""
LOG.debug('reading file contents in %s', proj_obj.name)
# Collect the known files in a project so we can test their
# SHAs quickly.
known = {f.name: f for f in proj_obj.files}
# Track the files we've seen so we can delete any files that
# are no longer present.
seen = set()
# Now load the files currently being managed by git.
for filename, sha in _find_files_in_project(proj_obj.path):
# Remember that we have seen the file in the project.
seen.add(filename)
# Skip things that are not files (usually symlinks).
fullname = os.path.join(proj_obj.path, filename)
if not os.path.isfile(fullname):
continue
try:
existing_file = known[filename]
if existing_file.sha == sha and not force:
# File has not changed, we can use the content we
# already have.
LOG.debug('using cached version of %s', filename)
continue
self._remove_file_data(existing_file)
except KeyError:
pass
new_file = File(project=proj_obj, name=filename, path=fullname, sha=sha)
self.session.add(new_file)
self.session.flush() # make sure new_file gets an id
assert new_file.id, 'No id for new file'
if any(fnmatch.fnmatch(filename, dnr) for dnr in self._DO_NOT_READ):
LOG.debug('ignoring contents of %s', fullname)
else:
LOG.debug('reading %s', fullname)
with io.open(fullname, mode='r', encoding='utf-8') as f:
try:
body = f.read()
except UnicodeDecodeError:
# FIXME(dhellmann): Be smarter about trying other
# encodings?
LOG.warn('Could not read %s as a UTF-8 encoded file, ignoring',
fullname)
continue
lines = body.splitlines()
# Use SQLalchemy's core mode to bulk insert the lines.
if lines:
self.session.execute(
Line.__table__.insert(),
[{'file_id': new_file.id,
'number': num,
'content': content}
for num, content in enumerate(lines, 1)]
)
LOG.debug('%s/%s has %s lines', proj_obj.name, filename, len(lines))
# Invoke plugins for processing files in special ways
for fh in self.file_handlers:
if fh.obj.supports_file(new_file):
fh.obj.process_file(self.session, new_file)
self.session.flush()
# Remove files that we have in the database but that were no
# longer seen in the git repository.
for name, obj in known.items():
if name not in seen:
self._remove_file_data(obj, reason='file no longer exists')
self.session.flush()
def run(self, command, prj_filter):
"""Given a command, run it for all projects.
Returns sequence of tuples containing project objects, the
output, and the errors from the command.
"""
# TODO: Would it be more efficient to register the regexp
# function on the db session here instead of when we connect?
# We could pre-compile the regex and not pass it to each
# invocation of the function.
query = self.session.query(Project)
if prj_filter:
query = prj_filter.update_query(query)
query = query.order_by(Project.name)
#return query.yield_per(20).all()
for project in query.all():
cmd = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=project.path,
env={'PAGER': ''}, # override pager for git commands
)
out, err = cmd.communicate()
yield (project, out, err)
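
_find_files_in_project() relies on `git ls-files -z -s`, whose NUL-separated records look like "<mode> <sha> <stage>\t<path>"; keeping the blob sha is what lets a rescan skip unchanged files. A small sketch of parsing one such record (the record itself is a made-up sample with a shortened sha):

def parse_ls_files_entry(entry):
    # entry looks like: '100644 9ae9e86ab21b 0\tREADME.rst'
    metadata, _, filename = entry.partition('\t')
    sha = metadata.split(' ')[1]
    return filename, sha


print(parse_ls_files_entry('100644 9ae9e86ab21b 0\tREADME.rst'))
# ('README.rst', '9ae9e86ab21b')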

@@ -1,52 +0,0 @@
import argparse
import logging
from aeromancer.db.models import Project
LOG = logging.getLogger(__name__)
class ProjectFilter(object):
"""Manage the arguments for filtering queries by project.
"""
@staticmethod
def add_arguments(parser):
"""Given an argparse.ArgumentParser add arguments.
"""
grp = parser.add_argument_group('Project Filter')
grp.add_argument(
'--project',
action='append',
default=[],
dest='projects',
help=('projects to limit search, '
'by exact name or glob-style patterns'),
)
@classmethod
def from_parsed_args(cls, parsed_args):
return cls(projects=parsed_args.projects)
def __init__(self, projects):
self.exact = []
self.patterns = []
for p in projects:
if '*' in p:
self.patterns.append(p.replace('*', '%'))
else:
self.exact.append(p)
self.projects = projects
def update_query(self, query):
the_filter = ()
if self.exact:
LOG.debug('filtering on projects in %s', self.exact)
the_filter += (Project.name.in_(self.exact),)
if self.patterns:
LOG.debug('filtering on projects matching %s', self.patterns)
the_filter += tuple(Project.name.ilike(p)
for p in self.patterns)
if the_filter:
query = query.filter(*the_filter)
return query
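
ProjectFilter splits --project values into exact names (matched with IN) and glob-style patterns, whose '*' is rewritten to the SQL wildcard '%' and matched case-insensitively with ILIKE; note that filter(*criteria) combines the criteria with AND. A short sketch of the translation step only, with invented project names and no database needed:

from aeromancer.project_filter import ProjectFilter

pf = ProjectFilter(projects=['openstack/nova', 'openstack/oslo.*'])
print(pf.exact)     # ['openstack/nova']
print(pf.patterns)  # ['openstack/oslo.%']
# update_query() would then add roughly:
#   project.name IN ('openstack/nova') AND project.name ILIKE 'openstack/oslo.%'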

@@ -1,59 +0,0 @@
# A generic, single database configuration.
[alembic]
# path to migration scripts
script_location = alembic
# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s
# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40
# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false
# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false
# sqlalchemy.url = driver://user:pass@localhost/dbname
# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic
[handlers]
keys = console
[formatters]
keys = generic
[logger_root]
level = WARN
handlers = console
qualname =
[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
[logger_alembic]
level = INFO
handlers =
qualname = alembic
[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic
[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S

@@ -1 +0,0 @@
Generic single-database configuration.

@@ -1,72 +0,0 @@
from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig
# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config
# Interpret the config file for Python logging.
# This line sets up loggers basically.
#fileConfig(config.config_file_name)
# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None
# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.
def run_migrations_offline():
"""Run migrations in 'offline' mode.
This configures the context with just a URL
and not an Engine, though an Engine is acceptable
here as well. By skipping the Engine creation
we don't even need a DBAPI to be available.
Calls to context.execute() here emit the given string to the
script output.
"""
url = config.get_main_option("sqlalchemy.url")
context.configure(url=url, target_metadata=target_metadata)
with context.begin_transaction():
context.run_migrations()
def run_migrations_online():
"""Run migrations in 'online' mode.
In this scenario we need to create an Engine
and associate a connection with the context.
"""
engine = engine_from_config(
config.get_section(config.config_ini_section),
prefix='sqlalchemy.',
poolclass=pool.NullPool)
connection = engine.connect()
context.configure(
connection=connection,
target_metadata=target_metadata
)
try:
with context.begin_transaction():
context.run_migrations()
finally:
connection.close()
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()

@@ -1,22 +0,0 @@
"""${message}
Revision ID: ${up_revision}
Revises: ${down_revision}
Create Date: ${create_date}
"""
# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}
def upgrade():
${upgrades if upgrades else "pass"}
def downgrade():
${downgrades if downgrades else "pass"}

@@ -1,30 +0,0 @@
"""add global_requirement table
Revision ID: 17770a76e3c7
Revises: 203851643975
Create Date: 2014-11-12 12:58:13.309422
"""
# revision identifiers, used by Alembic.
revision = '17770a76e3c7'
down_revision = '203851643975'
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'global_requirement',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('line_id', sa.Integer,
sa.ForeignKey('line.id', name='fk_requirement_line_id')),
sa.Column('name', sa.String()),
)
pass
def downgrade():
op.drop_table('global_requirement')
pass

@@ -1,30 +0,0 @@
"""add requirements table
Revision ID: 203851643975
Revises: None
Create Date: 2014-11-04 14:02:12.847385
"""
# revision identifiers, used by Alembic.
revision = '203851643975'
down_revision = None
from alembic import op
import sqlalchemy as sa
def upgrade():
op.create_table(
'requirement',
sa.Column('id', sa.Integer, primary_key=True),
sa.Column('line_id', sa.Integer,
sa.ForeignKey('line.id', name='fk_requirement_line_id')),
sa.Column('project_id', sa.Integer,
sa.ForeignKey('project.id', name='fk_requirement_project_id')),
sa.Column('name', sa.String()),
)
def downgrade():
op.drop_table('requirement')

@@ -1,99 +0,0 @@
import logging
import os
from aeromancer.db import models
from aeromancer.requirements import models as req_models
from aeromancer import project
from aeromancer import utils
from cliff.lister import Lister
from sqlalchemy import distinct
from sqlalchemy.orm import aliased
class List(Lister):
"""List the requirements for a project"""
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
parser = super(List, self).get_parser(prog_name)
parser.add_argument(
'project',
help=('project directory name under the project root, '
'for example: "stackforge/aeromancer"'),
)
return parser
def take_action(self, parsed_args):
session = self.app.get_db_session()
query = session.query(req_models.Requirement).join(models.Project).filter(
models.Project.name == parsed_args.project
).order_by(req_models.Requirement.name)
return (('Name', 'Spec', 'File'),
((r.name, r.line.content.strip(), r.line.file.name)
for r in query.all()))
class Uses(Lister):
"""List the projects that use requirement"""
log = logging.getLogger(__name__)
def get_parser(self, prog_name):
parser = super(Uses, self).get_parser(prog_name)
parser.add_argument(
'requirement',
help='the dist name for the requirement',
)
return parser
def take_action(self, parsed_args):
session = self.app.get_db_session()
query = session.query(req_models.Requirement).join(models.Project).filter(
req_models.Requirement.name.ilike(parsed_args.requirement)
).order_by(models.Project.name)
return (('Name', 'Spec', 'File'),
((r.project.name, r.line.content.strip(), r.line.file.name) for r in query.all()))
class Unused(Lister):
"""List global requirements not used by any projects"""
log = logging.getLogger(__name__)
def take_action(self, parsed_args):
session = self.app.get_db_session()
used_requirements = session.query(distinct(req_models.Requirement.name))
query = session.query(req_models.GlobalRequirement).filter(
req_models.GlobalRequirement.name.notin_(used_requirements)
).order_by(req_models.GlobalRequirement.name)
return (('Name', 'Spec'),
((r.name, r.line.content.strip()) for r in query.all()))
class Outdated(Lister):
"""List requirements in projects that do not match the global spec"""
log = logging.getLogger(__name__)
def take_action(self, parsed_args):
session = self.app.get_db_session()
used_requirements = session.query(distinct(req_models.Requirement.name))
global_line = aliased(models.Line)
project_line = aliased(models.Line)
query = session.query(req_models.Requirement,
models.Project,
global_line,
project_line,
req_models.GlobalRequirement).filter(
req_models.Requirement.project_id == models.Project.id,
req_models.Requirement.name == req_models.GlobalRequirement.name,
project_line.id == req_models.Requirement.line_id,
global_line.id == req_models.GlobalRequirement.line_id,
project_line.content != global_line.content,
).order_by(models.Project.name, req_models.Requirement.name)
return (('Project', 'Local', 'Global'),
((r[1].name, r[3].content.strip(), r[2].content.strip())
for r in query.all()))

@@ -1,79 +0,0 @@
import logging
import pkg_resources
from aeromancer.db import models as models
from aeromancer.filehandler import base
from aeromancer.requirements import models as req_models
LOG = logging.getLogger(__name__)
def read_requirements_file(file_obj):
for line in file_obj.lines:
text = line.content.strip()
if not text or text.startswith('#'):
continue
try:
# FIXME(dhellmann): Use pbr's requirements parser.
dist_name = pkg_resources.Requirement.parse(text).project_name
except ValueError:
LOG.warn('could not parse dist name from %r',
line.content)
continue
yield dist_name, line
class RequirementsHandler(base.FileHandler):
INTERESTING_PATTERNS = [
'requirements.txt',
'requirements-py*.txt',
'test-requirements.txt',
'test-requirements-py*.txt',
]
def process_file(self, session, file_obj):
LOG.info('loading requirements from %s', file_obj.project_path)
parent_project = file_obj.project
for dist_name, line in read_requirements_file(file_obj):
LOG.debug('requirement: %s', dist_name)
new_r = req_models.Requirement(
name=dist_name,
line=line,
project=parent_project,
)
session.add(new_r)
def delete_data_for_file(self, session, file_obj):
LOG.debug('deleting requirements from %r', file_obj.path)
query = session.query(req_models.Requirement).join(models.Line).filter(
models.Line.file_id == file_obj.id
)
for r in query.all():
session.delete(r)
return
class GlobalRequirementsHandler(base.FileHandler):
INTERESTING_PATTERNS = [
'global-requirements.txt',
]
def process_file(self, session, file_obj):
LOG.info('loading global requirements from %s', file_obj.project_path)
parent_project = file_obj.project
for dist_name, line in read_requirements_file(file_obj):
LOG.debug('global requirement: %s', dist_name)
new_r = req_models.GlobalRequirement(
name=dist_name,
line=line,
)
session.add(new_r)
def delete_data_for_file(self, session, file_obj):
LOG.debug('deleting global requirements from %r', file_obj.path)
query = session.query(req_models.GlobalRequirement)
query.delete()
return
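
read_requirements_file() reduces every requirement line to a bare distribution name with pkg_resources, so different version pins of the same dist all index under one name. A small worked example of that normalization (the specifier strings are invented samples):

import pkg_resources

for spec in ['pbr>=0.6,!=0.7,<1.0', 'SQLAlchemy>=0.9.7', 'six>=1.7.0']:
    print(pkg_resources.Requirement.parse(spec).project_name)
# pbr
# SQLAlchemy
# six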

@@ -1,33 +0,0 @@
from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, backref
from aeromancer.db import models
class Requirement(models.Base):
__tablename__ = 'requirement'
id = Column(Integer, primary_key=True)
name = Column(String, nullable=False)
line_id = Column(Integer, ForeignKey('line.id'))
line = relationship(
models.Line,
uselist=False,
single_parent=True,
)
project_id = Column(Integer, ForeignKey('project.id'))
project = relationship(
models.Project,
backref='requirements',
)
class GlobalRequirement(models.Base):
__tablename__ = 'global_requirement'
id = Column(Integer, primary_key=True)
name = Column(String, nullable=False)
line_id = Column(Integer, ForeignKey('line.id'))
line = relationship(
models.Line,
uselist=False,
single_parent=True,
)

@@ -1,23 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2010-2011 OpenStack Foundation
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslotest import base
class TestCase(base.BaseTestCase):
    """Test case base class for all unit tests."""

@ -1,28 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
test_aeromancer
----------------------------------
Tests for `aeromancer` module.
"""
from aeromancer.tests import base
class TestAeromancer(base.TestCase):

    def test_something(self):
        pass

@ -1,10 +0,0 @@
import contextlib
import os
@contextlib.contextmanager
def working_dir(new_dir):
    before = os.getcwd()
    os.chdir(new_dir)
    try:
        yield
    finally:
        # Restore the previous working directory even if the body raises.
        os.chdir(before)
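
Typical use is a with-block around anything that has to run from inside a project checkout; the directory below is only an example:

with working_dir('/tmp'):
    print(os.getcwd())   # the target directory (possibly symlink-resolved)
print(os.getcwd())       # back where we started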

@ -1 +0,0 @@
[python: **.py]

@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'aeromancer'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../CONTRIBUTING.rst

@ -1,24 +0,0 @@
.. aeromancer documentation master file, created by
   sphinx-quickstart on Tue Jul 9 22:26:36 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
Welcome to aeromancer's documentation!
========================================================
Contents:

.. toctree::
   :maxdepth: 2

   readme
   installation
   usage
   contributing

Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

@ -1,12 +0,0 @@
============
Installation
============

At the command line::

    $ pip install aeromancer

Or, if you have virtualenvwrapper installed::

    $ mkvirtualenv aeromancer
    $ pip install aeromancer

@ -1 +0,0 @@
.. include:: ../../README.rst

@ -1,7 +0,0 @@
========
Usage
========

To use aeromancer in a project::

    import aeromancer

@ -1,6 +0,0 @@
[DEFAULT]
# The list of modules to copy from oslo-incubator.git
# The base module to hold the copy of openstack.common
base=aeromancer

@ -1,10 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=0.6,!=0.7,<1.0
Babel>=1.3
SQLAlchemy>=0.9.7,<=0.9.99
alembic>=0.6.4
stevedore>=1.0.0 # Apache-2.0
cliff>=1.7.0 # Apache-2.0

@ -1,70 +0,0 @@
[metadata]
name = aeromancer
summary = OpenStack Source Explorer
description-file =
    README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
    Environment :: OpenStack
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.7
    Programming Language :: Python :: 3
    Programming Language :: Python :: 3.3
    Programming Language :: Python :: 3.4
[files]
packages =
    aeromancer
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = aeromancer/locale
domain = aeromancer
[update_catalog]
domain = aeromancer
output_dir = aeromancer/locale
input_file = aeromancer/locale/aeromancer.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = aeromancer/locale/aeromancer.pot
[entry_points]
console_scripts =
    aeromancer = aeromancer.cli.app:main
aeromancer.cli =
    add = aeromancer.cli.project:Add
    list = aeromancer.cli.project:List
    remove = aeromancer.cli.project:Remove
    rescan = aeromancer.cli.project:Rescan
    discover = aeromancer.cli.project:Discover
    requirements list = aeromancer.requirements.cli:List
    requirements outdated = aeromancer.requirements.cli:Outdated
    requirements unused = aeromancer.requirements.cli:Unused
    requirements uses = aeromancer.requirements.cli:Uses
    oslo list = aeromancer.oslo.cli:List
    oslo uses = aeromancer.oslo.cli:Uses
    grep = aeromancer.cli.grep:Grep
    run = aeromancer.cli.run:Run
aeromancer.filehandler =
    requirements = aeromancer.requirements.handler:RequirementsHandler
    global_requirements = aeromancer.requirements.handler:GlobalRequirementsHandler
    oslo_sync = aeromancer.oslo.handler:OsloSyncHandler
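
The aeromancer.filehandler entry points above are what the file-handler plugin loading consumes; with stevedore (pulled in via requirements.txt) the lookup would have been roughly the following sketch, with only the namespace name taken from the section above:

from stevedore import extension

# Discover every handler registered under the aeromancer.filehandler
# namespace, without instantiating any of them.
mgr = extension.ExtensionManager(
    namespace='aeromancer.filehandler',
    invoke_on_load=False,
)
for ext in mgr:
    print(ext.name, ext.plugin)

The aeromancer.cli namespace works the same way, except it is cliff's command manager that walks it to build the aeromancer sub-commands.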

@ -1,22 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

@ -1,15 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking>=0.9.2,<0.10
coverage>=3.6
discover
python-subunit
sphinx>=1.1.2
oslosphinx
oslotest>=1.1.0.0a1
testrepository>=0.0.18
testscenarios>=0.4
testtools>=0.9.34

34
tox.ini
@ -1,34 +0,0 @@
[tox]
minversion = 1.6
envlist = py33,py34,py27,pypy,pep8
skipsdist = True
[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv =
    VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python setup.py testr --slowest --testr-args='{posargs}'
[testenv:pep8]
commands = flake8
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py testr --coverage --testr-args='{posargs}'
[testenv:docs]
commands = python setup.py build_sphinx
[flake8]
# H803 skipped on purpose per list discussion.
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125,H803
builtins = _
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build