Drop files are not related to packetary

This commit is contained in:
Bulat Gaifullin 2015-12-24 13:00:48 +03:00
parent e9b9e24ce2
commit 2873c724bf
81 changed files with 14 additions and 4566 deletions

View File

@ -26,7 +26,7 @@ description:
maintainers:
- contrib/:
- ./:
- name: Bulat Gaifullin
email: bgaifullin@mirantis.com
IRC: bgaifullin
@ -34,21 +34,3 @@ maintainers:
- name: Vladimir Kozhukalov
email: vkozhukalov@mirantis.com
IRC: kozhukalov
- packetary/:
- name: Bulat Gaifullin
email: bgaifullin@mirantis.com
IRC: bgaifullin
- name: Vladimir Kozhukalov
email: vkozhukalov@mirantis.com
IRC: kozhukalov
- perestroika/: &build_team
- name: Dmitry Burmistrov
email: dburmistrov@mirantis.com
IRC: dburmistrov
- name: Sergey Kulanov
email: skulanov@mirantis.com
IRC: SergK

View File

@ -2,21 +2,6 @@
Repository structure
====================
* contrib/fuel_mirror
It is a command line utility that provides the same functionality
and user interface as the deprecated fuel-createmirror. It provides
two major features:
* clone/build a mirror (full or partial)
* update the repository configuration in nailgun
The first is a matter for packetary, while the second should be left
entirely to fuelclient, so this module is to be deprecated soon
in favor of packetary and fuelclient.
WARNING: It is not designed to be used on 'live' repositories
that are available to clients during synchronization. That means
repositories will be inconsistent during the update. Please use these
scripts in conjunction with snapshots, on inactive repos, etc.
* debian
Specs for DEB packages.
@ -24,16 +9,17 @@ Repository structure
Documentation for packetary module.
* packetary
It is a Python library and command line utility that allows
one to clone and build rpm/deb repositories.
The package provides an object model and API for dealing with deb
and rpm repositories. One can use this framework to
implement operations such as building a repository
from a set of packages, cloning a repository, finding package
dependencies, mixing repositories, pulling out a subset of
packages into a separate repository, etc.
Features:
* Common interface for different package-managers.
* Utility to build a dependency graph for package(s).
* Utility to create a mirror of a repository according to the dependency graph.
* perestroika
It is a set of shell/python scripts that are used to build DEB/RPM
packages. These scripts are widely used by Fuel Packaging CI.
* specs
Specs for RPM packages.

View File

@ -1,8 +0,0 @@
include AUTHORS
include ChangeLog
recursive-include etc *
exclude .gitignore
exclude .gitreview
global-exclude *.pyc

View File

@ -1,16 +0,0 @@
===========
fuel_mirror
===========
fuel-mirror is a utility that allows one to create local repositories
with the packages required for OpenStack deployment.
* Free software: Apache license
* Documentation: http://docs.openstack.org/developer/fuel-mirror
* Source: http://git.openstack.org/cgit/openstack/fuel-mirror/
* Bugs: http://bugs.launchpad.net/fuel
Features
--------
* TODO

View File

@ -1,2 +0,0 @@
[python: **.py]

View File

@ -1,55 +0,0 @@
fuel_release_match:
version: $openstack_version
operating_system: CentOS
repos:
- &centos
name: "centos"
uri: "http://mirror.centos.org/centos/6/os/x86_64"
type: "rpm"
priority: null
- &centos_updates
name: "centos-updates"
uri: "http://mirror.centos.org/centos/6/updates/x86_64"
type: "rpm"
priority: null
- &mos
name: "mos"
uri: "http://mirror.fuel-infra.org/mos-repos/centos/mos$mos_version-centos6-fuel/os/x86_64"
type: "rpm"
priority: null
- &mos_updates
name: "mos-updates"
uri: "http://mirror.fuel-infra.org/mos-repos/centos/mos$mos_version-centos6-fuel/updates/x86_64"
type: "rpm"
priority: null
- &mos_security
name: "mos-security"
uri: "http://mirror.fuel-infra.org/mos-repos/centos/mos$mos_version-centos6-fuel/security/x86_64"
type: "rpm"
priority: null
- &mos_holdback
name: "mos-holdback"
uri: "http://mirror.fuel-infra.org/mos-repos/centos/mos$mos_version-centos6-fuel/holdback/x86_64"
type: "rpm"
priority: null
groups:
mos:
- *mos
- *mos_updates
- *mos_security
- *mos_holdback
centos:
- *centos
- *centos_updates
inheritance:
centos: mos

View File

@ -1,140 +0,0 @@
# GLOBAL variables
ubuntu_baseurl: &ubuntu_baseurl http://archive.ubuntu.com/ubuntu
mos_baseurl: &mos_baseurl http://mirror.fuel-infra.org/mos-repos/ubuntu/$mos_version
fuel_release_match:
version: $openstack_version
operating_system: Ubuntu
repos:
- &ubuntu
name: "ubuntu"
uri: *ubuntu_baseurl
suite: "trusty"
section: "main multiverse restricted universe"
type: "deb"
priority: null
- &ubuntu_updates
name: "ubuntu-updates"
uri: *ubuntu_baseurl
suite: "trusty-updates"
section: "main multiverse restricted universe"
type: "deb"
priority: null
- &ubuntu_security
name: "ubuntu-security"
uri: *ubuntu_baseurl
suite: "trusty-security"
section: "main multiverse restricted universe"
type: "deb"
priority: null
- &mos
name: "mos"
uri: *mos_baseurl
suite: "mos$mos_version"
section: "main restricted"
type: "deb"
priority: 1000
- &mos_updates
name: "mos-updates"
uri: *mos_baseurl
suite: "mos$mos_version-updates"
section: "main restricted"
type: "deb"
priority: 1000
- &mos_security
name: "mos-security"
uri: *mos_baseurl
suite: "mos$mos_version-security"
section: "main restricted"
type: "deb"
priority: 1000
- &mos_holdback
name: "mos-holdback"
uri: *mos_baseurl
suite: "mos$mos_version"
section: "main restricted"
type: "deb"
priority: 1000
packages: &packages
- "acpi-support"
- "anacron"
- "aptitude"
- "atop"
- "bash-completion"
- "bc"
- "build-essential"
- "cloud-init"
- "conntrackd"
- "cpu-checker"
- "cpufrequtils"
- "debconf-utils"
- "devscripts"
- "fping"
- "git"
- "grub-pc"
- "htop"
- "ifenslave"
- "iperf"
- "iptables-persistent"
- "irqbalance"
- "language-pack-en"
- "linux-firmware-nonfree"
- "linux-headers-generic-lts-trusty"
- "linux-image-generic-lts-trusty"
- "livecd-rootfs"
- "memcached"
- "monit"
- "nginx"
- "ntp"
- "openssh-server"
- "percona-toolkit"
- "percona-xtrabackup"
- "pm-utils"
- "python-lesscpy"
- "python-pip"
- "puppet"
- "rsyslog-gnutls"
- "rsyslog-relp"
- "screen"
- "swift-plugin-s3"
- "sysfsutils"
- "sysstat"
- "telnet"
- "tmux"
- "traceroute"
- "ubuntu-standard"
- "vim"
- "virt-what"
- "xinetd"
groups:
mos:
- *mos
- *mos_updates
- *mos_security
- *mos_holdback
ubuntu:
- *ubuntu
- *ubuntu_updates
- *ubuntu_security
inheritance:
ubuntu: mos
osnames:
mos: ubuntu
requirements:
ubuntu: *packages

View File

@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
#'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'fuel_mirror'
copyright = u'2015, Mirantis, Inc'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
#intersphinx_mapping = {'http://docs.python.org/': None}

View File

@ -1,4 +0,0 @@
============
Contributing
============
.. include:: ../../../../CONTRIBUTING.rst

View File

@ -1,25 +0,0 @@
.. fuel_mirror documentation master file, created by
sphinx-quickstart on Tue Jul 9 22:26:36 2013.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to fuel_mirror's documentation!
========================================================
Contents:
.. toctree::
:maxdepth: 2
readme
installation
usage
contributing
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`

View File

@ -1,12 +0,0 @@
============
Installation
============
At the command line::
$ pip install fuel_mirror
Or, if you have virtualenvwrapper installed::
$ mkvirtualenv fuel_mirror
$ pip install fuel_mirror

View File

@ -1 +0,0 @@
.. include:: ../../README.rst

View File

@ -1,7 +0,0 @@
========
Usage
========
To use fuel_mirror in a project::
import fuel_mirror

View File

@ -1,11 +0,0 @@
threads_num: 10
ignore_errors_num: 2
retries_num: 3
target_dir: "/var/www/nailgun/mirrors"
pattern_dir: "/usr/share/fuel-mirror"
base_url: "http://{FUEL_SERVER_IP}:8080/mirrors/"
# uncomment if needed
# http_proxy: null
# https_proxy: null
# fuel_server: 10.20.0.2
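The `{FUEL_SERVER_IP}` placeholder in `base_url` is filled in at runtime from the host part of the configured fuel server address (the port, if any, is stripped). A minimal sketch of that substitution, using a made-up address:

```python
# The application fills {FUEL_SERVER_IP} with the host part of the
# fuel_server address; a trailing ":port" is stripped first.
base_url = "http://{FUEL_SERVER_IP}:8080/mirrors/"
fuel_server = "10.20.0.2:8000"  # hypothetical address, host:port

url = base_url.format(FUEL_SERVER_IP=fuel_server.split(":", 1)[0])
# url == "http://10.20.0.2:8080/mirrors/"
```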

View File

@ -1,22 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
__version__ = pbr.version.VersionInfo(
'fuel_mirror').version_string()

View File

@ -1,155 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from cliff import app
from cliff.commandmanager import CommandManager
import yaml
import fuel_mirror
from fuel_mirror.common import accessors
from fuel_mirror.common import utils
class Application(app.App):
"""Main cliff application class.
Performs initialization of the command manager and
configuration of basic engines.
"""
config = None
fuel = None
repo_manager_accessor = None
sources = None
versions = None
def build_option_parser(self, description, version, argparse_kwargs=None):
"""Specifies common cmdline arguments."""
p_inst = super(Application, self)
parser = p_inst.build_option_parser(description=description,
version=version,
argparse_kwargs=argparse_kwargs)
parser.add_argument(
"--config",
default="/etc/fuel-mirror/config.yaml",
metavar="PATH",
help="Path to config file."
)
parser.add_argument(
"-S", "--fuel-server",
metavar="FUEL-SERVER",
help="The public address of Fuel Master."
)
parser.add_argument(
"--fuel-user",
help="Fuel Master admin login."
" Alternatively, use env var KEYSTONE_USER)."
)
parser.add_argument(
"--fuel-password",
help="Fuel Master admin password."
" Alternatively, use env var KEYSTONE_PASSWORD)."
)
return parser
def initialize_app(self, argv):
"""Initialises common options."""
with open(self.options.config, "r") as stream:
self.config = yaml.load(stream)
self._initialize_fuel_accessor()
self._initialize_repo_manager()
def _initialize_repo_manager(self):
self.repo_manager_accessor = accessors.get_packetary_accessor(
threads_num=int(self.config.get('threads_num', 0)),
retries_num=int(self.config.get('retries_num', 0)),
ignore_errors_num=int(self.config.get('ignore_errors_num', 0)),
http_proxy=self.config.get('http_proxy'),
https_proxy=self.config.get('https_proxy'),
)
def _initialize_fuel_accessor(self):
fuel_default = utils.get_fuel_settings()
fuel_server = utils.first(
self.options.fuel_server,
self.config.get("fuel_server"),
fuel_default.get("server")
)
fuel_user = utils.first(
self.options.fuel_user,
fuel_default.get("user")
)
fuel_password = utils.first(
self.options.fuel_password,
fuel_default.get("password")
)
if not fuel_server:
for option in ("mos_version", "openstack_version"):
if not self.config.setdefault(option, ''):
self.LOG.warning(
"The option '{0}' is not defined."
"Please specify the option 'fuel-server' or '{0}'."
.format(option)
)
return
self.config["base_url"] = self.config["base_url"].format(
FUEL_SERVER_IP=fuel_server.split(":", 1)[0]
)
self.fuel = accessors.get_fuel_api_accessor(
fuel_server,
fuel_user,
fuel_password
)
fuel_ver = self.fuel.FuelVersion.get_all_data()
self.config.setdefault(
'mos_version', fuel_ver['release']
)
self.config.setdefault(
'openstack_version', fuel_ver['openstack_version']
)
def main(argv=None):
"""Entry point."""
return Application(
description="The utility to create local mirrors.",
version=fuel_mirror.__version__,
command_manager=CommandManager("fuel_mirror", convert_underscores=True)
).run(argv)
def debug(name, cmd_class, argv=None):
"""Helps to debug command."""
import sys
if argv is None:
argv = sys.argv[1:]
argv = [name] + argv + ["-v", "-v", "--debug"]
cmd_mgr = CommandManager("test_fuel_mirror", convert_underscores=True)
cmd_mgr.add_command(name, cmd_class)
return Application(
description="The fuel mirror utility test.",
version="0.0.1",
command_manager=cmd_mgr
).run(argv)

View File

@ -1,161 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from packetary.library.utils import localize_repo_url
from fuel_mirror.commands.base import BaseCommand
from fuel_mirror.common.utils import is_subdict
from fuel_mirror.common.utils import lists_merge
class ApplyCommand(BaseCommand):
"""Applies local mirrors for Fuel-environments."""
def get_parser(self, prog_name):
parser = super(ApplyCommand, self).get_parser(prog_name)
parser.add_argument(
"--default",
dest="set_default",
action="store_true",
default=False,
help="Set as default repository."
)
parser.add_argument(
"-e", "--env",
dest="env", nargs="+",
help="Fuel environment ID to update, "
"by default applies for all environments."
)
return parser
def take_action(self, parsed_args):
if self.app.fuel is None:
raise ValueError("Please specify the fuel-server option.")
data = self.load_data(parsed_args)
base_url = self.app.config["base_url"]
localized_repos = []
for _, repos in self.get_groups(parsed_args, data):
for repo_data in repos:
new_data = repo_data.copy()
new_data['uri'] = localize_repo_url(
base_url, repo_data['uri']
)
localized_repos.append(new_data)
release_match = data["fuel_release_match"]
self.update_clusters(parsed_args.env, localized_repos, release_match)
if parsed_args.set_default:
self.update_default_repos(localized_repos, release_match)
self.app.stdout.write(
"Operations have been completed successfully.\n"
)
def update_clusters(self, ids, repositories, release_match):
"""Applies repositories for existing clusters.
:param ids: the cluster ids.
:param repositories: the meta information of repositories
:param release_match: The pattern to check Fuel Release
"""
self.app.stdout.write("Updating the Cluster repositories...\n")
if ids:
clusters = self.app.fuel.Environment.get_by_ids(ids)
else:
clusters = self.app.fuel.Environment.get_all()
for cluster in clusters:
releases = six.moves.filter(
lambda x: is_subdict(release_match, x.data),
self.app.fuel.Release.get_by_ids([cluster.data["release_id"]])
)
if next(releases, None) is None:
continue
modified = self._update_repository_settings(
cluster.get_settings_data(),
repositories
)
if modified:
self.app.LOG.info(
"Try to update the Cluster '%s'",
cluster.data['name']
)
self.app.LOG.debug(
"The modified cluster attributes: %s",
modified
)
cluster.set_settings_data(modified)
def update_default_repos(self, repositories, release_match):
"""Applies repositories for existing default settings.
:param repositories: the meta information of repositories
:param release_match: The pattern to check Fuel Release
"""
self.app.stdout.write("Updating the default repositories...\n")
releases = six.moves.filter(
lambda x: is_subdict(release_match, x.data),
self.app.fuel.Release.get_all()
)
for release in releases:
if self._update_repository_settings(
release.data["attributes_metadata"], repositories
):
self.app.LOG.info(
"Try to update the Release '%s'",
release.data['name']
)
self.app.LOG.debug(
"The modified release attributes: %s",
release.data
)
# TODO(need to add method for release object)
release.connection.put_request(
release.instance_api_path.format(release.id),
release.data
)
def _update_repository_settings(self, settings, repositories):
"""Updates repository settings.
:param settings: the target settings
:param repositories: the meta of repositories
"""
editable = settings["editable"]
if 'repo_setup' not in editable:
self.app.LOG.info('Attributes are read-only.')
return
repos_attr = editable["repo_setup"]["repos"]
lists_merge(repos_attr['value'], repositories, "name")
return {"editable": {"repo_setup": {"repos": repos_attr}}}
def debug(argv=None):
"""Helper for debugging Apply command."""
from fuel_mirror.app import debug
return debug("apply", ApplyCommand, argv)
if __name__ == "__main__":
debug()
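The release filtering in `update_clusters` and `update_default_repos` is a subdict check: a release is touched only if every key/value of `fuel_release_match` appears in the release data. A standalone sketch with made-up release records:

```python
def is_subdict(d1, d2):
    # True if every (key, value) pair of d1 is also present in d2
    # (restated from fuel_mirror.common.utils.is_subdict).
    return all(k in d2 and d2[k] == v for k, v in d1.items())

release_match = {"operating_system": "Ubuntu"}
releases = [
    {"name": "Kilo on Ubuntu", "operating_system": "Ubuntu"},
    {"name": "Kilo on CentOS", "operating_system": "CentOS"},
]

# Keep only releases whose data contains the whole match pattern.
matched = [r for r in releases if is_subdict(release_match, r)]
```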

View File

@ -1,97 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os.path
from string import Template
from cliff import command
import yaml
class BaseCommand(command.Command):
"""The Base command for fuel-mirror."""
REPO_ARCH = "x86_64"
@property
def stdout(self):
"""Shortcut for self.app.stdout."""
return self.app.stdout
def get_parser(self, prog_name):
"""Specifies common options."""
parser = super(BaseCommand, self).get_parser(prog_name)
input_group = parser.add_mutually_exclusive_group(required=True)
input_group.add_argument(
'-I', '--input-file',
metavar='PATH',
help='The path to file with input data.')
input_group.add_argument(
'-P', '--pattern',
metavar='NAME',
help='The builtin input file name.'
)
parser.add_argument(
"-G", "--group",
dest="groups",
required=True,
nargs='+',
help="The name of repository groups."
)
return parser
def resolve_input_pattern(self, pattern):
"""Gets the full path to input file by pattern.
:param pattern: the config file name without ext
:return: the full path
"""
return os.path.join(
self.app.config['pattern_dir'], pattern + ".yaml"
)
def load_data(self, parsed_args):
"""Load the input data.
:param parsed_args: the command-line arguments
:return: the input data
"""
if parsed_args.pattern:
input_file = self.resolve_input_pattern(parsed_args.pattern)
else:
input_file = parsed_args.input_file
# TODO(add input data validation scheme)
with open(input_file, "r") as fd:
return yaml.load(Template(fd.read()).safe_substitute(
mos_version=self.app.config["mos_version"],
openstack_version=self.app.config["openstack_version"],
))
@classmethod
def get_groups(cls, parsed_args, data):
"""Gets repository groups from input data.
:param parsed_args: the command-line arguments
:param data: the input data
:return: the sequence of pairs (group_name, repositories)
"""
all_groups = data['groups']
return (
(x, all_groups[x]) for x in parsed_args.groups if x in all_groups
)
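Before parsing YAML, `load_data` expands `$mos_version` and `$openstack_version` placeholders with `string.Template.safe_substitute`; the substitution step alone looks like this (the version values are illustrative):

```python
from string import Template

raw = 'suite: "mos$mos_version"\nversion: $openstack_version'

# safe_substitute leaves any unknown $placeholders untouched
# instead of raising KeyError, which suits partially-templated configs.
expanded = Template(raw).safe_substitute(
    mos_version="8.0",
    openstack_version="2015.1.0-8.0",
)
# expanded == 'suite: "mos8.0"\nversion: 2015.1.0-8.0'
```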

View File

@ -1,75 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from fuel_mirror.commands.base import BaseCommand
from fuel_mirror.common.url_builder import get_url_builder
class CreateCommand(BaseCommand):
"""Creates a new local mirrors."""
def take_action(self, parsed_args):
"""See the Command.take_action."""
data = self.load_data(parsed_args)
repos_reqs = data.get('requirements', {})
inheritance = data.get('inheritance', {})
target_dir = self.app.config["target_dir"]
total_stats = None
for group_name, repos in self.get_groups(parsed_args, data):
url_builder = get_url_builder(repos[0]["type"])
repo_manager = self.app.repo_manager_accessor(
repos[0]["type"], self.REPO_ARCH
)
if group_name in inheritance:
child_group = inheritance[group_name]
dependencies = [
url_builder.get_repo_url(x)
for x in data['groups'][child_group]
]
else:
dependencies = None
stat = repo_manager.clone_repositories(
[url_builder.get_repo_url(x) for x in repos],
target_dir,
dependencies,
repos_reqs.get(group_name)
)
if total_stats is None:
total_stats = stat
else:
total_stats += stat
if total_stats is not None:
self.stdout.write(
"Packages processed: {0.copied}/{0.total}\n"
.format(total_stats)
)
else:
self.stdout.write(
"No packages.\n"
)
def debug(argv=None):
"""Helper for debugging Create command."""
from fuel_mirror.app import debug
return debug("create", CreateCommand, argv)
if __name__ == "__main__":
debug()

View File

@ -1,60 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
import os
def get_packetary_accessor(**kwargs):
"""Gets the configured repository manager.
:param kwargs: The packetary configuration parameters.
"""
import packetary
return functools.partial(
packetary.RepositoryApi.create,
packetary.Context(packetary.Configuration(**kwargs))
)
def get_fuel_api_accessor(address=None, user=None, password=None):
"""Gets the fuel client api accessor.
:param address: The address of Fuel Master node.
:param user: The username to access the Fuel Master node.
:param password: The password to access the Fuel Master node.
"""
if address:
host_and_port = address.split(":")
os.environ["SERVER_ADDRESS"] = host_and_port[0]
if len(host_and_port) > 1:
os.environ["LISTEN_PORT"] = host_and_port[1]
if user is not None:
os.environ["KEYSTONE_USER"] = user
if password is not None:
os.environ["KEYSTONE_PASS"] = password
# import fuelclient.ClientAPI after configuring
# environment variables
try:
from fuelclient import objects
except ImportError:
raise RuntimeError(
"fuelclient module seems not installed. "
"This action requires it to be available."
)
return objects
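`get_packetary_accessor` above is just a partially-applied constructor: the context is bound once, then one API object is created per repository type. A self-contained sketch of that pattern, with a stub standing in for packetary:

```python
import functools

class StubRepositoryApi:
    """Stand-in for packetary.RepositoryApi (packetary is not required here)."""

    @classmethod
    def create(cls, context, repotype):
        # Return the bound context together with the requested repo type.
        return (context, repotype)

# Same shape as get_packetary_accessor: bind the shared context once,
# then the caller creates one API object per repository type.
accessor = functools.partial(StubRepositoryApi.create, "shared-context")
deb_api = accessor("deb")
rpm_api = accessor("rpm")
```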

View File

@ -1,68 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def get_url_builder(repotype):
"""Gets the instance of RepoUrlBuilder.
:param repotype: the type of repository: rpm|deb
:return: the RepoBuilder implementation
"""
return {
"deb": AptRepoUrlBuilder,
"rpm": YumRepoUrlBuilder
}[repotype]
class RepoUrlBuilder(object):
REPO_FOLDER = "mirror"
@classmethod
def get_repo_url(cls, repo_data):
"""Gets the url with replaced variable holders.
:param repo_data: the repository's meta data
:return: the full repository's url
"""
raise NotImplementedError
class AptRepoUrlBuilder(RepoUrlBuilder):
"""URL builder for apt-repository(es)."""
@classmethod
def get_repo_url(cls, repo_data):
return " ".join(
repo_data[x] for x in ("uri", "suite", "section")
)
class YumRepoUrlBuilder(RepoUrlBuilder):
"""URL builder for Yum repository(es)."""
@classmethod
def split_url(cls, url, maxsplit=2):
"""Splits url to baseurl, reponame adn architecture.
:param url: the repository`s URL
:param maxsplit: the number of expected components
:return the components of url
"""
# TODO(need generic url building algorithm)
# it is assumed that the url has the following format
# $baseurl/$reponame/$repoarch
return url.rstrip("/").rsplit("/", maxsplit)
@classmethod
def get_repo_url(cls, repo_data):
return cls.split_url(repo_data["uri"], 1)[0]
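Both builders above reduce to simple string operations; a standalone sketch with sample repository data:

```python
# APT: the "url" is the sources.list-style triple "uri suite section".
deb_repo = {"uri": "http://localhost/ubuntu", "suite": "trusty",
            "section": "main restricted"}
apt_url = " ".join(deb_repo[x] for x in ("uri", "suite", "section"))

# Yum: strip the trailing $repoarch component, assuming the layout
# $baseurl/$reponame/$repoarch, keeping $baseurl/$reponame.
rpm_uri = "http://localhost/centos/os/x86_64"
yum_url = rpm_uri.rstrip("/").rsplit("/", 1)[0]
```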

View File

@ -1,90 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import yaml
def lists_merge(main, patch, key):
"""Merges the list of dicts with same keys.
>>> lists_merge([{"a": 1, "c": 2}], [{"a": 1, "c": 3}], key="a")
[{'a': 1, 'c': 3}]
:param main: the main list
:type main: list
:param patch: the list of additional elements
:type patch: list
:param key: the key to match elements by
"""
main_idx = dict(
(x[key], i) for i, x in enumerate(main)
)
patch_idx = dict(
(x[key], i) for i, x in enumerate(patch)
)
for k in sorted(patch_idx):
if k in main_idx:
main[main_idx[k]].update(patch[patch_idx[k]])
else:
main.append(patch[patch_idx[k]])
return main
def is_subdict(dict1, dict2):
"""Checks that dict1 is subdict of dict2.
>>> is_subdict({"a": 1}, {'a': 1, 'b': 1})
True
:param dict1: the candidate
:param dict2: the super dict
:return: True if all keys from dict1 are present
and have the same values in dict2, otherwise False
"""
for k, v in six.iteritems(dict1):
if k not in dict2 or dict2[k] != v:
return False
return True
def first(*args):
"""Get first not empty value.
>>> first(0, 1) == next(iter(filter(None, [0, 1])))
True
:param args: the list of arguments
:return: the first value for which bool(v) is True, or None if not found.
"""
for arg in args:
if arg:
return arg
def get_fuel_settings():
"""Gets the fuel settings from astute container, if it is available."""
try:
with open("/etc/fuel/astute.yaml", "r") as fd:
settings = yaml.load(fd)
return {
"server": settings.get("ADMIN_NETWORK", {}).get("ipaddress"),
"user": settings.get("FUEL_ACCESS", {}).get("user"),
"password": settings.get("FUEL_ACCESS", {}).get("password")
}
except (OSError, IOError):
return {}
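A runnable sketch of how `lists_merge` and `first` behave together, with repo-style sample data (the logic is restated from the helpers above, slightly simplified):

```python
def lists_merge(main, patch, key):
    # Update entries of `main` that share `key` with a patch entry,
    # append the rest; `main` is modified in place and returned.
    main_idx = {x[key]: i for i, x in enumerate(main)}
    for item in patch:
        if item[key] in main_idx:
            main[main_idx[item[key]]].update(item)
        else:
            main.append(item)
    return main

def first(*args):
    # First truthy argument, or None.
    for arg in args:
        if arg:
            return arg

repos = [{"name": "mos", "priority": 10}]
lists_merge(repos, [{"name": "mos", "priority": 1000},
                    {"name": "centos", "priority": 5}], key="name")
# repos: "mos" updated to priority 1000, "centos" appended

# first() mirrors how the app picks the fuel server:
# CLI option, then config file, then astute defaults.
server = first(None, "", "10.20.0.2")
```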

View File

@ -1,24 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
try:
import unittest2 as unittest
except ImportError:
import unittest
class TestCase(unittest.TestCase):
"""Test case base class for all unit tests."""

View File

@ -1,22 +0,0 @@
fuel_release_match:
operating_system: CentOS
inheritance:
centos: mos
groups:
mos:
- name: "mos"
type: "rpm"
uri: "http://localhost/mos$mos_version/x86_64"
priority: 10
centos:
- name: "centos"
type: "rpm"
uri: "http://localhost/centos/os/x86_64"
priority: 5
requirements:
centos:
- "package_rpm"

View File

@ -1,7 +0,0 @@
threads_num: 1
ignore_errors_num: 2
retries_num: 3
http_proxy: "http://localhost"
https_proxy: "https://localhost"
target_dir: "/var/www/"
base_url: "http://{FUEL_SERVER_IP}:8080/"

View File

@ -1,26 +0,0 @@
fuel_release_match:
operating_system: Ubuntu
inheritance:
ubuntu: mos
groups:
mos:
- name: "mos"
type: "deb"
uri: "http://localhost/mos"
suite: "mos$mos_version"
section: "main restricted"
priority: 1000
ubuntu:
- name: "ubuntu"
type: "deb"
uri: "http://localhost/ubuntu"
suite: "trusty"
section: "main multiverse restricted universe"
priority: 500
requirements:
ubuntu:
- "package_deb"

View File

@ -1,90 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from fuel_mirror.common import accessors
from fuel_mirror.tests import base
class TestAccessors(base.TestCase):
def test_get_packetary_accessor(self):
packetary = mock.MagicMock()
with mock.patch.dict("sys.modules", packetary=packetary):
accessor = accessors.get_packetary_accessor(
http_proxy="http://localhost",
https_proxy="https://localhost",
retries_num=1,
threads_num=2,
ignore_errors_num=3
)
accessor("deb")
accessor("yum")
packetary.Configuration.assert_called_once_with(
http_proxy="http://localhost",
https_proxy="https://localhost",
retries_num=1,
threads_num=2,
ignore_errors_num=3
)
packetary.Context.assert_called_once_with(
packetary.Configuration()
)
self.assertEqual(2, packetary.RepositoryApi.create.call_count)
packetary.RepositoryApi.create.assert_any_call(
packetary.Context(), "deb"
)
packetary.RepositoryApi.create.assert_any_call(
packetary.Context(), "yum"
)
@mock.patch("fuel_mirror.common.accessors.os")
def test_get_fuel_api_accessor(self, os):
fuelclient = mock.MagicMock()
patch = {
"fuelclient": fuelclient,
"fuelclient.objects": fuelclient.objects
}
with mock.patch.dict("sys.modules", patch):
accessor = accessors.get_fuel_api_accessor(
"localhost:8080", "guest", "123"
)
accessor.Environment.get_all()
os.environ.__setitem__.assert_any_call(
"SERVER_ADDRESS", "localhost"
)
os.environ.__setitem__.assert_any_call(
"LISTEN_PORT", "8080"
)
os.environ.__setitem__.assert_any_call(
"KEYSTONE_USER", "guest"
)
os.environ.__setitem__.assert_any_call(
"KEYSTONE_PASS", "123"
)
fuelclient.objects.Environment.get_all.assert_called_once_with()
@mock.patch("fuel_mirror.common.accessors.os")
def test_get_fuel_api_accessor_with_default_parameters(self, os):
fuelclient = mock.MagicMock()
patch = {
"fuelclient": fuelclient,
"fuelclient.objects": fuelclient.objects
}
with mock.patch.dict("sys.modules", patch):
accessors.get_fuel_api_accessor()
os.environ.__setitem__.assert_not_called()

View File

@ -1,351 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import os.path
import subprocess
# The cmd2 library does not work with Python 3.5
# because it tries to access the subprocess.mswindows attribute,
# which was removed in 3.5.
subprocess.mswindows = False
from fuel_mirror.commands import apply
from fuel_mirror.commands import create
from fuel_mirror.tests import base
CONFIG_PATH = os.path.join(
os.path.dirname(__file__), "data", "test_config.yaml"
)
UBUNTU_PATH = os.path.join(
os.path.dirname(__file__), "data", "test_ubuntu.yaml"
)
CENTOS_PATH = os.path.join(
os.path.dirname(__file__), "data", "test_centos.yaml"
)
@mock.patch.multiple(
"fuel_mirror.app",
accessors=mock.DEFAULT
)
class TestCliCommands(base.TestCase):
common_argv = [
"--config", CONFIG_PATH,
"--fuel-server=10.25.0.10",
"--fuel-user=test",
"--fuel-password=test1"
]
def start_cmd(self, cmd, argv, data_file):
cmd.debug(
argv + self.common_argv + ["--input-file", data_file]
)
def _setup_fuel_versions(self, fuel_mock):
fuel_mock.FuelVersion.get_all_data.return_value = {
"release": "1",
"openstack_version": "2"
}
def _create_fuel_release(self, fuel_mock, osname):
release = mock.MagicMock(data={
"name": "test release",
"operating_system": osname,
"attributes_metadata": {
"editable": {"repo_setup": {"repos": {"value": []}}}
}
})
fuel_mock.Release.get_by_ids.return_value = [release]
fuel_mock.Release.get_all.return_value = [release]
return release
def _create_fuel_env(self, fuel_mock):
env = mock.MagicMock(data={
"name": "test",
"release_id": 1
})
env.get_settings_data.return_value = {
"editable": {"repo_setup": {"repos": {"value": []}}}
}
fuel_mock.Environment.get_by_ids.return_value = [env]
fuel_mock.Environment.get_all.return_value = [env]
return env
def test_create_mos_ubuntu(self, accessors):
self._setup_fuel_versions(accessors.get_fuel_api_accessor())
packetary = accessors.get_packetary_accessor()
self.start_cmd(create, ["--group", "mos"], UBUNTU_PATH)
accessors.get_packetary_accessor.assert_called_with(
threads_num=1,
ignore_errors_num=2,
retries_num=3,
http_proxy="http://localhost",
https_proxy="https://localhost",
)
packetary.assert_called_with("deb", "x86_64")
api = packetary()
api.clone_repositories.assert_called_once_with(
['http://localhost/mos mos1 main restricted'],
'/var/www/',
None, None
)
def test_create_partial_ubuntu(self, accessors):
self._setup_fuel_versions(accessors.get_fuel_api_accessor())
packetary = accessors.get_packetary_accessor()
self.start_cmd(create, ["--group", "ubuntu"], UBUNTU_PATH)
accessors.get_packetary_accessor.assert_called_with(
threads_num=1,
ignore_errors_num=2,
retries_num=3,
http_proxy="http://localhost",
https_proxy="https://localhost",
)
packetary.assert_called_with("deb", "x86_64")
api = packetary()
api.clone_repositories.assert_called_once_with(
['http://localhost/ubuntu trusty '
'main multiverse restricted universe'],
'/var/www/',
['http://localhost/mos mos1 main restricted'],
['package_deb']
)
def test_create_mos_centos(self, accessors):
self._setup_fuel_versions(accessors.get_fuel_api_accessor())
packetary = accessors.get_packetary_accessor()
self.start_cmd(create, ["--group", "mos"], CENTOS_PATH)
accessors.get_packetary_accessor.assert_called_with(
threads_num=1,
ignore_errors_num=2,
retries_num=3,
http_proxy="http://localhost",
https_proxy="https://localhost",
)
packetary.assert_called_with("rpm", "x86_64")
api = packetary()
api.clone_repositories.assert_called_once_with(
['http://localhost/mos1'],
'/var/www/',
None, None
)
def test_create_partial_centos(self, accessors):
self._setup_fuel_versions(accessors.get_fuel_api_accessor())
packetary = accessors.get_packetary_accessor()
self.start_cmd(create, ["--group", "centos"], CENTOS_PATH)
accessors.get_packetary_accessor.assert_called_with(
threads_num=1,
ignore_errors_num=2,
retries_num=3,
http_proxy="http://localhost",
https_proxy="https://localhost",
)
packetary.assert_called_with("rpm", "x86_64")
api = packetary()
api.clone_repositories.assert_called_once_with(
['http://localhost/centos/os'],
'/var/www/',
['http://localhost/mos1'],
["package_rpm"]
)
def test_apply_for_ubuntu_based_env(self, accessors):
fuel = accessors.get_fuel_api_accessor()
self._setup_fuel_versions(fuel)
env = self._create_fuel_env(fuel)
self._create_fuel_release(fuel, "Ubuntu")
self.start_cmd(
apply, ['--group', 'mos', 'ubuntu', '--env', '1'],
UBUNTU_PATH
)
accessors.get_fuel_api_accessor.assert_called_with(
"10.25.0.10", "test", "test1"
)
fuel.FuelVersion.get_all_data.assert_called_once_with()
env.set_settings_data.assert_called_with(
{'editable': {'repo_setup': {'repos': {'value': [
{
'priority': 1000,
'name': 'mos',
'suite': 'mos1',
'section': 'main restricted',
'type': 'deb',
'uri': 'http://10.25.0.10:8080/mos'
},
{
'priority': 500,
'name': 'ubuntu',
'suite': 'trusty',
'section': 'main multiverse restricted universe',
'type': 'deb',
'uri': 'http://10.25.0.10:8080/ubuntu'
}
]}}}}
)
def test_apply_for_centos_based_env(self, accessors):
fuel = accessors.get_fuel_api_accessor()
self._setup_fuel_versions(fuel)
env = self._create_fuel_env(fuel)
self._create_fuel_release(fuel, "CentOS")
self.start_cmd(
apply, ['--group', 'mos', 'centos', '--env', '1'],
CENTOS_PATH
)
accessors.get_fuel_api_accessor.assert_called_with(
"10.25.0.10", "test", "test1"
)
fuel.FuelVersion.get_all_data.assert_called_once_with()
env.set_settings_data.assert_called_with(
{'editable': {'repo_setup': {'repos': {'value': [
{
'priority': 5,
'name': 'centos',
'type': 'rpm',
'uri': 'http://10.25.0.10:8080/centos/os/x86_64'
},
{
'priority': 10,
'name': 'mos',
'type': 'rpm',
'uri': 'http://10.25.0.10:8080/mos1/x86_64'
}]
}}}}
)
def test_apply_for_ubuntu_release(self, accessors):
fuel = accessors.get_fuel_api_accessor()
self._setup_fuel_versions(fuel)
env = self._create_fuel_env(fuel)
release = self._create_fuel_release(fuel, "Ubuntu")
self.start_cmd(
apply, ['--group', 'mos', 'ubuntu', '--default'],
UBUNTU_PATH
)
accessors.get_fuel_api_accessor.assert_called_with(
"10.25.0.10", "test", "test1"
)
fuel.FuelVersion.get_all_data.assert_called_once_with()
self.assertEqual(1, env.set_settings_data.call_count)
release.connection.put_request.assert_called_once_with(
release.instance_api_path.format(),
{
'name': "test release",
'operating_system': 'Ubuntu',
'attributes_metadata': {
'editable': {'repo_setup': {'repos': {'value': [
{
'name': 'mos',
'priority': 1000,
'suite': 'mos1',
'section': 'main restricted',
'type': 'deb',
'uri': 'http://10.25.0.10:8080/mos'
},
{
'name': 'ubuntu',
'priority': 500,
'suite': 'trusty',
'section': 'main multiverse restricted universe',
'type': 'deb',
'uri': 'http://10.25.0.10:8080/ubuntu'
}
]}}}
}
}
)
def test_apply_for_centos_release(self, accessors):
fuel = accessors.get_fuel_api_accessor()
self._setup_fuel_versions(fuel)
env = self._create_fuel_env(fuel)
release = self._create_fuel_release(fuel, "CentOS")
self.start_cmd(
apply, ['--group', 'mos', 'centos', '--default'],
CENTOS_PATH
)
accessors.get_fuel_api_accessor.assert_called_with(
"10.25.0.10", "test", "test1"
)
fuel.FuelVersion.get_all_data.assert_called_once_with()
self.assertEqual(1, env.set_settings_data.call_count)
release.connection.put_request.assert_called_once_with(
release.instance_api_path.format(),
{
'name': "test release",
'operating_system': 'CentOS',
'attributes_metadata': {
'editable': {'repo_setup': {'repos': {'value': [
{
'name': 'centos',
'priority': 5,
'type': 'rpm',
'uri': 'http://10.25.0.10:8080/centos/os/x86_64'
},
{
'name': 'mos',
'priority': 10,
'type': 'rpm',
'uri': 'http://10.25.0.10:8080/mos1/x86_64'
},
]}}}
}
}
)
@mock.patch("fuel_mirror.app.utils.get_fuel_settings")
def test_apply_fail_if_no_fuel_address(self, m_get_settings, accessors):
m_get_settings.return_value = {}
with self.assertRaisesRegexp(
ValueError, "Please specify the fuel-server option"):
apply.debug(
["--config", CONFIG_PATH, "-G", "mos", "-I", UBUNTU_PATH]
)
self.assertFalse(accessors.get_fuel_api_accessor.called)
@mock.patch("fuel_mirror.app.utils.get_fuel_settings")
def test_create_without_fuel_address(self, m_get_settings, accessors):
m_get_settings.return_value = {}
packetary = accessors.get_packetary_accessor()
create.debug(
["--config", CONFIG_PATH, "-G", "mos", "-I", UBUNTU_PATH]
)
self.assertFalse(accessors.get_fuel_api_accessor.called)
accessors.get_packetary_accessor.assert_called_with(
threads_num=1,
ignore_errors_num=2,
retries_num=3,
http_proxy="http://localhost",
https_proxy="https://localhost",
)
packetary.assert_called_with("deb", "x86_64")
api = packetary()
api.clone_repositories.assert_called_once_with(
['http://localhost/mos mos main restricted'],
'/var/www/',
None,
None
)

View File

@ -1,68 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from fuel_mirror.common import url_builder
from fuel_mirror.tests import base
class TestUrlBuilder(base.TestCase):
def test_get_url_builder(self):
self.assertTrue(issubclass(
url_builder.get_url_builder("deb"),
url_builder.AptRepoUrlBuilder
))
self.assertTrue(issubclass(
url_builder.get_url_builder("rpm"),
url_builder.YumRepoUrlBuilder
))
with self.assertRaises(KeyError):
url_builder.get_url_builder("unknown")
class TestAptUrlBuilder(base.TestCase):
@classmethod
def setUpClass(cls):
cls.builder = url_builder.get_url_builder("deb")
cls.repo_data = {
"name": "ubuntu",
"suite": "trusty",
"section": "main restricted",
"type": "deb",
"uri": "http://localhost/ubuntu"
}
def test_get_repo_url(self):
self.assertEqual(
"http://localhost/ubuntu trusty main restricted",
self.builder.get_repo_url(self.repo_data)
)
class TestYumUrlBuilder(base.TestCase):
@classmethod
def setUpClass(cls):
cls.builder = url_builder.get_url_builder("rpm")
cls.repo_data = {
"name": "centos",
"type": "rpm",
"uri": "http://localhost/os/x86_64"
}
def test_get_repo_url(self):
self.assertEqual(
"http://localhost/os",
self.builder.get_repo_url(self.repo_data)
)
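
The two builders exercised above encode simple conventions: the apt builder joins `uri`, `suite`, and `section` into a sources.list-style line, while the yum builder strips the trailing architecture component from the URI. A minimal self-contained sketch of those semantics (the stand-in classes only mirror what the tests assert; the real implementations live in `fuel_mirror.common.url_builder`):

```python
# Stand-ins illustrating the URL conventions asserted by the tests above.

class AptRepoUrlBuilder(object):
    """Joins a deb repo description into a sources.list-style line."""

    @classmethod
    def get_repo_url(cls, repo_data):
        # Extra keys in repo_data (name, type, ...) are simply ignored.
        return "{uri} {suite} {section}".format(**repo_data)


class YumRepoUrlBuilder(object):
    """Strips the trailing architecture component from an rpm repo URI."""

    @classmethod
    def get_repo_url(cls, repo_data):
        # "http://localhost/os/x86_64" -> "http://localhost/os"
        return repo_data["uri"].rstrip("/").rsplit("/", 1)[0]


def get_url_builder(repotype):
    """Raises KeyError for unknown repository types, as the tests expect."""
    return {"deb": AptRepoUrlBuilder, "rpm": YumRepoUrlBuilder}[repotype]
```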

View File

@ -1,102 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
import six
from fuel_mirror.common import utils
from fuel_mirror.tests import base
class DictAsObj(object):
def __init__(self, d):
self.__dict__.update(d)
def __eq__(self, other):
return self.__dict__ == other.__dict__
class TestUtils(base.TestCase):
def test_lists_merge(self):
main = [{"a": 1, "b": 2, "c": 0}, {"a": 2, "b": 3, "c": 1}]
patch = [{"a": 2, "b": 4}, {"a": 3, "b": 5}]
utils.lists_merge(
main,
patch,
key="a"
)
self.assertItemsEqual(
[{"a": 1, "b": 2, "c": 0},
{"a": 2, "b": 4, "c": 1},
{"a": 3, "b": 5}],
main
)
def test_first(self):
self.assertEqual(
1,
utils.first(0, 1, 0),
)
self.assertEqual(
1,
utils.first(None, [], '', 1),
)
self.assertIsNone(
utils.first(None, [], 0, ''),
)
self.assertIsNone(
utils.first(),
)
def test_is_subdict(self):
self.assertFalse(utils.is_subdict({"c": 1}, {"a": 1, "b": 1}))
self.assertFalse(utils.is_subdict({"a": 1, "b": 2}, {"a": 1, "b": 1}))
self.assertFalse(
utils.is_subdict({"a": 1, "b": 1, "c": 2}, {"a": 1, "b": 1})
)
self.assertFalse(
utils.is_subdict({"a": 1, "b": None}, {"a": 1})
)
self.assertTrue(utils.is_subdict({}, {"a": 1}))
self.assertTrue(utils.is_subdict({"a": 1}, {"a": 1, "b": 1}))
self.assertTrue(utils.is_subdict({"a": 1, "b": 1}, {"a": 1, "b": 1}))
@mock.patch("fuel_mirror.common.utils.open")
def test_get_fuel_settings(self, m_open):
m_open().__enter__.side_effect = [
six.StringIO(
'ADMIN_NETWORK:\n'
' ipaddress: "10.20.0.4"\n'
'FUEL_ACCESS:\n'
' user: "test"\n'
' password: "test_pwd"\n',
),
OSError
]
self.assertEqual(
{
"server": "10.20.0.4",
"user": "test",
"password": "test_pwd",
},
utils.get_fuel_settings()
)
self.assertEqual(
{},
utils.get_fuel_settings()
)
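
The behaviours asserted above are compact enough to state directly: `first` returns the first truthy argument (or `None`), `lists_merge` merges two lists of dicts in place keyed on a field, and `is_subdict` checks dict containment. A self-contained sketch of implementations satisfying those expectations (the real versions live in `fuel_mirror.common.utils`):

```python
# Minimal implementations matching the expectations in the tests above.

def first(*args):
    """Return the first truthy argument, or None if there is none."""
    for arg in args:
        if arg:
            return arg
    return None


def lists_merge(main, patch, key):
    """Merge patch into main in place, matching dicts on `key`."""
    index = {item[key]: item for item in main}
    for item in patch:
        if item[key] in index:
            index[item[key]].update(item)
        else:
            main.append(item)
    return main


def is_subdict(sub, d):
    """True if every key/value pair of sub is present in d."""
    return all(k in d and d[k] == v for k, v in sub.items())
```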

View File

@ -1,33 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os.path
import yaml
from fuel_mirror.tests import base
DATA_DIR = os.path.join(os.path.dirname(__file__), "..", "..", "data")
class TestValidateConfigs(base.TestCase):
def test_validate_data_files(self):
for f in os.listdir(DATA_DIR):
with open(os.path.join(DATA_DIR, f), "r") as fd:
data = yaml.safe_load(fd)
# TODO(add input data validation scheme)
self.assertIn("groups", data)
self.assertIn("fuel_release_match", data)

View File

@ -1,11 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
pbr>=0.8
Babel>=1.3
cliff>=1.7.0
six>=1.5.2
PyYAML>=3.10
packetary>=0.1.0
python-fuelclient>=7.0.0

View File

@ -1,80 +0,0 @@
#!/bin/bash
echo "This script is DEPRECATED. Please use fuel-mirror utility!"
# This shell script wraps the fuel-mirror utility to provide backward
# compatibility with the previous version of the tool.
usage() {
cat <<EOF
Usage: `basename $0` [options]
Create and update local mirrors of MOS and/or Ubuntu.
IMPORTANT!
If NO parameters specified, this script will:
- Create/Update both MOS and Ubuntu local mirrors
- Set them as repositories for existing NEW environments in Fuel UI
- Set them as DEFAULT repositories for new environments
Options:
-h| --help This help screen.
-d| --no-default Don't change default repositories for new environments
-a| --no-apply Don't apply changes to Fuel environments
-N| --dry-run Print the commands that would be executed without running them
-M| --mos Create/Update MOS local mirror only
-U| --ubuntu Create/Update Ubuntu local mirror only
-p| --password Fuel Master admin password (defaults to admin)
EOF
}
# Parse options
OPTS=`getopt -o hdaMUNp: -l help,no-default,no-apply,mos,ubuntu,password:,dry-run -- "$@"`
if [ $? != 0 ]; then
usage
exit 1
fi
eval set -- "$OPTS"
CMD_OPTS="--pattern=ubuntu"
REPO_GROUPS=""
while true ; do
case "$1" in
-h| --help ) usage ; exit 0;;
-d | --no-default ) OPT_NO_DEFAULT=1; shift;;
-a | --no-apply ) OPT_NO_APPLY=1; shift;;
-N | --dry-run ) EXEC_PREFIX="echo EXEC "; shift;;
-M | --mos ) REPO_GROUPS="$REPO_GROUPS mos"; shift;;
-U | --ubuntu ) REPO_GROUPS="$REPO_GROUPS ubuntu"; shift;;
-p | --password ) CMD_OPTS="$CMD_OPTS --fuel-password=$2"; shift; shift;;
-- ) shift; break;;
* ) break;;
esac
done
if [[ "$@" != "" ]]; then
echo "Invalid option -- $@"
usage
exit 1
fi
if [[ "$REPO_GROUPS" == "" ]]; then
REPO_GROUPS="mos ubuntu"
fi
CMD_OPTS="$CMD_OPTS --group $REPO_GROUPS"
$EXEC_PREFIX fuel-mirror create ${CMD_OPTS}
if [[ "$OPT_NO_DEFAULT" == "" ]]; then
CMD_OPTS="$CMD_OPTS --default"
fi
if [[ "$OPT_NO_APPLY" == "" ]]; then
CMD_OPTS="$CMD_OPTS --apply"
fi
$EXEC_PREFIX fuel-mirror apply ${CMD_OPTS}

View File

@ -1,67 +0,0 @@
[metadata]
name = fuel_mirror
version = 8.0.0
summary = Utility to create the local package repositories required
for OpenStack deployment.
description-file =
README.rst
author = Mirantis Inc.
author_email = product@mirantis.com
url = http://mirantis.com
home-page = http://mirantis.com
classifier =
Development Status :: 4 - Beta
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: GNU General Public License v2 (GPLv2)
Operating System :: POSIX :: Linux
Programming Language :: Python
Programming Language :: Python :: 2
Programming Language :: Python :: 2.7
Programming Language :: Python :: 3
Programming Language :: Python :: 3.3
Programming Language :: Python :: 3.4
Topic :: Utilities
[files]
packages =
fuel_mirror
data_files =
etc/fuel-mirror = etc/*
share/fuel-mirror = data/*
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[entry_points]
console_scripts =
fuel-mirror=fuel_mirror.app:main
fuel_mirror =
apply=fuel_mirror.commands.apply:ApplyCommand
create=fuel_mirror.commands.create:CreateCommand
[upload_sphinx]
upload-dir = doc/build/html
[compile_catalog]
directory = locale
domain = fuel_mirror
[update_catalog]
domain = fuel_mirror
output_dir = locale
input_file = locale/fuel_mirror.pot
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = locale/fuel_mirror.pot
[global]
setup-hooks =
pbr.hooks.setup_hook
setup_hooks.setup_hook

View File

@ -1,28 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
import multiprocessing # noqa
except ImportError:
pass
setuptools.setup(
setup_requires=['pbr'],
pbr=True)

View File

@ -1,21 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
def setup_hook(config):
import pbr
import pbr.packaging
# this monkey patch avoids appending the git version
# to the package version
pbr.packaging._get_version_from_git = lambda pre_version: pre_version

View File

@ -1,17 +0,0 @@
# The order of packages is significant, because pip processes them in the order
# of appearance. Changing the order has an impact on the overall integration
# process, which may cause wedges in the gate later.
hacking<0.11,>=0.10.0
coverage>=3.6
discover
python-subunit>=0.0.18
sphinx!=1.2.0,!=1.3b1,<1.3,>=1.1.2
oslosphinx>=2.5.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0
testrepository>=0.0.18
testscenarios>=0.4
testtools>=1.4.0
cliff>=1.7.0
six>=1.5.2

View File

@ -1,35 +0,0 @@
[tox]
minversion = 1.6
envlist = py34,py27,py26,pep8
skipsdist = True
[testenv]
usedevelop = True
install_command = pip install -U {opts} {packages}
setenv =
VIRTUAL_ENV={envdir}
deps = -r{toxinidir}/test-requirements.txt
commands = python setup.py test --slowest --testr-args='{posargs:fuel_mirror}'
[testenv:pep8]
commands = flake8
[testenv:venv]
commands = {posargs}
[testenv:cover]
commands = python setup.py test --coverage --testr-args='{posargs:fuel_mirror}'
[testenv:docs]
commands = python setup.py build_sphinx
[testenv:debug]
commands = oslo_debug_helper {posargs}
[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
builtins = _
exclude=.venv,.git,.tox,dist,doc,*openstack/common*,*lib/python*,*egg,build

5
debian/changelog vendored
View File

@ -1,5 +0,0 @@
fuel-mirror (8.0.0-1) experimental; urgency=low
* Initial release.
-- bgaifullin <bgaifullin@mirantis.com> Fri, 27 Nov 2015 00:28:26 +0300

1
debian/compat vendored
View File

@ -1 +0,0 @@
9

54
debian/control vendored
View File

@ -1,54 +0,0 @@
Source: fuel-mirror
Section: utils
Priority: extra
Maintainer: Mirantis Product <product@mirantis.com>
Build-Depends: debhelper (>= 9),
dh-python,
openstack-pkg-tools (>= 23~),
python-all,
python-pbr (>= 0.8),
python-setuptools
Standards-Version: 3.9.6
Homepage: mirantis.com
Package: fuel-mirror
Architecture: all
Section: python
Depends: python-babel,
python-cliff (>= 1.7.0),
python-packetary (= ${binary:Version}),
python-pbr (>= 0.8),
python-six,
python-yaml,
python-tz,
${python:Depends}
Recommends: python-fuelclient (>= 7.0.0)
Description: Utility to create RPM and DEB mirrors
Provides two commands: fuel-mirror and fuel-createmirror.
The second one exists for backward compatibility with the
previous generation of the utility. These commands can be
used to create local copies of MOS and upstream deb and rpm
repositories.
Package: python-packetary
Architecture: all
Depends: createrepo,
python-babel,
python-bintrees (>= 2.0.2),
python-chardet,
python-cliff (>= 1.7.0),
python-debian (>= 0.1.21),
python-eventlet (>= 0.15),
python-lxml,
python-pbr (>= 0.8),
python-six,
python-stevedore (>= 1.1.0),
python-tz,
${python:Depends}
Description: Library for building and cloning deb and rpm repos
Provides object model and API for dealing with deb
and rpm repositories. One can use this framework to
implement operations like building repository
from a set of packages, clone repository, find package
dependencies, mix repositories, pull out a subset of
packages into a separate repository, etc.
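
The object model described above is driven through a small sequence of calls: build a `Configuration`, wrap it in a `Context`, then create a `RepositoryApi` for a repository type. The names and call shapes below come from the fuel_mirror unit tests earlier in this commit; the classes here are hypothetical stand-ins for illustration, since the real ones ship in the packetary package itself:

```python
# Hypothetical stand-ins sketching the packetary call shape
# (Configuration -> Context -> RepositoryApi.create); the real
# classes are provided by the packetary package.

class Configuration(object):
    def __init__(self, **options):
        # proxies, retries_num, threads_num, ignore_errors_num, ...
        self.options = options


class Context(object):
    def __init__(self, config):
        self.config = config


class RepositoryApi(object):
    def __init__(self, context, repotype):
        self.context = context
        self.repotype = repotype
        self.cloned = []

    @classmethod
    def create(cls, context, repotype):
        return cls(context, repotype)

    def clone_repositories(self, origins, destination,
                           dependencies=None, requirements=None):
        # A real implementation downloads the packages and rebuilds
        # the repository metadata under `destination`.
        self.cloned.append((origins, destination,
                            dependencies, requirements))
        return len(origins)


# Typical driving sequence, mirroring fuel_mirror's usage:
config = Configuration(http_proxy="http://localhost", retries_num=3)
api = RepositoryApi.create(Context(config), "deb")
count = api.clone_repositories(
    ["http://localhost/mos mos8.0 main restricted"], "/var/www/")
```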

43
debian/copyright vendored
View File

@ -1,43 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: fuel-mirror
Source: git://github.com/openstack/fuel-mirror.git
Files: debian/*
Copyright: (c) 2014, Mirantis
License: GPL-2
Files: *
Copyright: (c) 2014, Mirantis
License: Apache-2
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at:
.
http://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
.
On Debian-based systems the full text of the Apache version 2.0 license can be
found in /usr/share/common-licenses/Apache-2.0.
License: GPL-2
Licensed under the GPL License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at:
.
http://www.opensource.org/licenses/GPL-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
.
On Debian-based systems the full text of the GPL version 2.0 license can be
found in /usr/share/common-licenses/GPL-2.

View File

@ -1,2 +0,0 @@
contrib/fuel_mirror/etc/config.yaml /etc/fuel-mirror
contrib/fuel_mirror/scripts/fuel-createmirror /usr/bin/

39
debian/rules vendored
View File

@ -1,39 +0,0 @@
#!/usr/bin/make -f
PYTHONS:=$(shell pyversions -vr)
include /usr/share/openstack-pkg-tools/pkgos.make
export OSLO_PACKAGE_VERSION=$(shell dpkg-parsechangelog | grep Version: | cut -d' ' -f2 | sed -e 's/^[[:digit:]]*://' -e 's/[-].*//' -e 's/~/.0/' | head -n 1)
%:
dh $@ --buildsystem=python_distutils --with python2
override_dh_clean:
rm -rf build
dh_clean -O--buildsystem=python_distutils
override_dh_auto_install:
set -e ; for pyvers in $(PYTHONS); do \
python$$pyvers setup.py install --install-layout=deb \
--root $(CURDIR)/debian/python-packetary; \
done
set -e ; cd contrib/fuel_mirror/; \
for pyvers in $(PYTHONS); do \
python$$pyvers ./setup.py install --install-layout=deb \
--root $(CURDIR)/debian/fuel-mirror; \
done
override_dh_fixperms:
set -e; chmod 755 $(CURDIR)/debian/fuel-mirror/usr/bin/fuel-createmirror
override_dh_python2:
dh_python2 --no-guessing-deps
override_dh_installcatalogs:
override_dh_installemacsen override_dh_installifupdown:
override_dh_installinfo override_dh_installmenu override_dh_installmime:
override_dh_installmodules override_dh_installlogcheck:
override_dh_installpam override_dh_installppp override_dh_installudev override_dh_installwm:
override_dh_installxfonts override_dh_gconf override_dh_icons override_dh_perl override_dh_usrlocal:
override_dh_installgsettings:

View File

@ -1 +0,0 @@
3.0 (quilt)

View File

@ -36,6 +36,7 @@ class TestDebDriver(base.TestCase):
def setUpClass(cls):
super(TestDebDriver, cls).setUpClass()
cls.driver = deb_driver.DebRepositoryDriver()
cls.driver.logger = mock.MagicMock()
def setUp(self):
self.connection = mock.MagicMock()

View File

@ -45,6 +45,7 @@ class TestRpmDriver(base.TestCase):
super(TestRpmDriver, cls).setUpClass()
cls.driver = rpm_driver.RpmRepositoryDriver()
cls.driver.logger = mock.MagicMock()
def setUp(self):
self.createrepo.reset_mock()

View File

@ -1,2 +0,0 @@
.package-defaults
.publisher-defaults

View File

@ -1,151 +0,0 @@
#!/bin/bash
set -o xtrace
set -o errexit
[ -f ".packages-defaults" ] && source .packages-defaults
BINDIR=$(dirname `readlink -e $0`)
source "${BINDIR}"/build-functions.sh
main () {
set_default_params
# Get package tree from gerrit
fetch_upstream
local _srcpath="${MYOUTDIR}/${PACKAGENAME}-src"
local _specpath=$_srcpath
local _testspath=$_srcpath
[ "$IS_OPENSTACK" == "true" ] && _specpath="${MYOUTDIR}/${PACKAGENAME}-spec${SPEC_PREFIX_PATH}" && _testspath="${MYOUTDIR}/${PACKAGENAME}-spec"
local _debianpath=$_specpath
if [ -d "${_debianpath}/debian" ] ; then
# Unpacked sources and specs
local srcpackagename=`head -1 ${_debianpath}/debian/changelog | cut -d' ' -f1`
local version=`head -1 ${_debianpath}/debian/changelog | sed 's|^.*(||;s|).*$||' | awk -F "-" '{print $1}'`
local binpackagenames="`cat ${_debianpath}/debian/control | grep ^Package | cut -d' ' -f 2 | tr '\n' ' '`"
local epochnumber=`head -1 ${_debianpath}/debian/changelog | grep -o "(.:" | sed 's|(||'`
local distro=`head -1 ${_debianpath}/debian/changelog | awk -F'[ ;]' '{print $3}'`
# Get last commit info
# $message $author $email $cdate $commitsha $lastgitlog
get_last_commit_info ${_srcpath}
TAR_NAME="${srcpackagename}_${version#*:}.orig.tar.gz"
if [ "$IS_OPENSTACK" == "true" ] ; then
# Get version number from the latest git tag for openstack packages
local release_tag=$(git -C $_srcpath describe --abbrev=0)
# PyPI versions like 2015.1.0rc1 break Debian version
# comparison, so convert them to 2015.1.0~rc1
local convert_version_py="$(dirname $(readlink -e $0))/convert-version.py"
version=$(python ${convert_version_py} --tag ${release_tag})
local TAR_NAME="${srcpackagename}_${version}.orig.tar.gz"
# Get revision number as commit count from tag to head of source branch
local _rev=$(git -C $_srcpath rev-list --no-merges ${release_tag}..origin/${SOURCE_BRANCH} | wc -l)
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && _rev=$(( $_rev + 1 ))
local release=$(dpkg-parsechangelog --show-field Version -l${_debianpath}/debian/changelog | cut -d'-' -f2 | sed -r 's|[0-9]+$||')
local release="${release}${_rev}"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && release="${release}+git.${gitshasrc}.${gitshaspec}"
local fullver=${epochnumber}${version}-${release}
# Update version and changelog
local firstline=1
local _dchopts="-c ${_debianpath}/debian/changelog"
echo "$lastgitlog" | while read LINE; do
[ $firstline == 1 ] && local cmd="dch $_dchopts -D $distro -b --force-distribution -v $fullver" || local cmd="dch $_dchopts -a"
firstline=0
local commitid=`echo "$LINE" | cut -d'|' -f1`
local email=`echo "$LINE" | cut -d'|' -f2`
local author=`echo "$LINE" | cut -d'|' -f3`
local subject=`echo "$LINE" | cut -d'|' -f4`
DEBFULLNAME="$author" DEBEMAIL="$email" $cmd "$commitid $subject"
done
# Prepare source tarball
pushd $_srcpath &>/dev/null
if [ "$PACKAGENAME" == "murano-apps" -o "$PACKAGENAME" == "rally" ]; then
# Do not perform `setup.py sdist` for murano-apps and rally packages
tar -czf ${BUILDDIR}/$TAR_NAME $EXCLUDES .
else
python setup.py --version # this will download pbr if it's not available
PBR_VERSION=$release_tag python setup.py sdist -d ${BUILDDIR}/
# Fix source folder name at sdist tarball
local sdist_tarball=$(find ${BUILDDIR}/ -maxdepth 1 -name "*.gz")
if [ "$(tar -tf $sdist_tarball | head -n 1 | cut -d'/' -f1)" != "${srcpackagename}-${version}" ] ; then
# rename source folder
local tempdir=$(mktemp -d)
tar -C $tempdir -xf $sdist_tarball
mv $tempdir/* $tempdir/${srcpackagename}-${version}
tar -C $tempdir -czf ${BUILDDIR}/$TAR_NAME ${srcpackagename}-${version}
rm -f $sdist_tarball
[ -d "$tempdir" ] && rm -rf $tempdir
else
mv $sdist_tarball ${BUILDDIR}/$TAR_NAME || :
fi
fi
popd &>/dev/null
else
# Update changelog
DEBFULLNAME="$author" DEBEMAIL="$email" dch -c ${_debianpath}/debian/changelog -a "$commitsha $message"
# Prepare source tarball
# Exclude debian and tests dir
mv ${_srcpath}/debian ${_srcpath}/renameforexcludedebian
[ -d "${_srcpath}/tests" ] && mv ${_srcpath}/tests ${_srcpath}/renameforexcludetests
pushd ${_srcpath} &>/dev/null
tar -czf "${BUILDDIR}/${TAR_NAME}" $EXCLUDES --exclude=renameforexcludedebian --exclude=renameforexcludetests *
popd &>/dev/null
mv ${_srcpath}/renameforexcludedebian ${_srcpath}/debian
[ -d "${_srcpath}/renameforexcludetests" ] && mv ${_srcpath}/renameforexcludetests ${_srcpath}/tests
fi
mkdir -p ${BUILDDIR}/$srcpackagename
cp -R ${_debianpath}/debian ${BUILDDIR}/${srcpackagename}/
else
# Packed sources (.dsc + .gz )
cp ${_srcpath}/* $BUILDDIR
fi
# Prepare tests folder to provide as parameter
rm -f ${WRKDIR}/tests.envfile
[ -d "${_testspath}/tests" ] && echo "TESTS_CONTENT='`tar -cz -C ${_testspath} tests | base64 -w0`'" > ${WRKDIR}/tests.envfile
# Build stage
local REQUEST=$REQUEST_NUM
[ -n "$LP_BUG" ] && REQUEST=$LP_BUG
COMPONENTS="main restricted"
EXTRAREPO="http://${REMOTE_REPO_HOST}/${DEB_REPO_PATH} ${DEB_DIST_NAME} ${COMPONENTS}"
[ "$IS_UPDATES" == 'true' ] \
&& EXTRAREPO="${EXTRAREPO}|http://${REMOTE_REPO_HOST}/${DEB_REPO_PATH} ${DEB_PROPOSED_DIST_NAME} ${COMPONENTS}"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && [ "$IS_UPDATES" != "true" ] \
&& EXTRAREPO="${EXTRAREPO}|http://${REMOTE_REPO_HOST}/${REPO_REQUEST_PATH_PREFIX}/${REQUEST}/${DEB_REPO_PATH} ${DEB_DIST_NAME} ${COMPONENTS}"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && [ "$IS_UPDATES" == "true" ] \
&& EXTRAREPO="${EXTRAREPO}|http://${REMOTE_REPO_HOST}/${REPO_REQUEST_PATH_PREFIX}/${REQUEST}/${DEB_REPO_PATH} ${DEB_PROPOSED_DIST_NAME} ${COMPONENTS}"
export EXTRAREPO
pushd $BUILDDIR &>/dev/null
echo "BUILD_SUCCEEDED=false" > ${WRKDIR}/buildresult.params
bash -ex ${BINDIR}/docker-builder/build-deb-package.sh
local exitstatus=`cat buildresult/exitstatus.sbuild || echo 1`
rm -f buildresult/exitstatus.sbuild
[ -f "buildresult/buildlog.sbuild" ] && mv buildresult/buildlog.sbuild ${WRKDIR}/buildlog.txt
fill_buildresult $exitstatus 0 $PACKAGENAME DEB
if [ "$exitstatus" == "0" ] ; then
tmpdir=`mktemp -d ${PKG_DIR}/build-XXXXXXXX`
rm -f ${WRKDIR}/buildresult.params
cat >${WRKDIR}/buildresult.params<<-EOL
BUILD_HOST=`hostname -f`
PKG_PATH=$tmpdir
GERRIT_CHANGE_STATUS=$GERRIT_CHANGE_STATUS
REQUEST_NUM=$REQUEST_NUM
LP_BUG=$LP_BUG
IS_SECURITY=$IS_SECURITY
EXTRAREPO="$EXTRAREPO"
REPO_TYPE=deb
DIST=$DIST
EOL
mv buildresult/* $tmpdir/
fi
popd &>/dev/null
exit $exitstatus
}
main "$@"
exit 0
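The `dch` loop above splits each `git log --pretty="%h|%ae|%an|%s"` line on `|` with `cut`; a minimal standalone sketch of that field extraction, using an invented sample line:

```shell
# Each log line has the shape "%h|%ae|%an|%s"; cut -d'|' pulls the
# individual fields apart. The sample data is made up.
LINE='abc1234|jdoe@example.com|John Doe|Fix tarball name'
commitid=$(echo "$LINE" | cut -d'|' -f1)
email=$(echo "$LINE" | cut -d'|' -f2)
author=$(echo "$LINE" | cut -d'|' -f3)
subject=$(echo "$LINE" | cut -d'|' -f4)
echo "$commitid $subject"   # -> abc1234 Fix tarball name
```

Note that `cut -f4` truncates a subject that itself contains `|`; `-f4-` would keep the remainder.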


@@ -1,129 +0,0 @@
#!/bin/bash
set -o xtrace
set -o errexit
[ -f .fuel-default ] && source .fuel-default
BINDIR=$(dirname `readlink -e $0`)
source "${BINDIR}"/build-functions.sh
main () {
set_default_params
[ -n "$GERRIT_BRANCH" ] && SOURCE_BRANCH=$GERRIT_BRANCH && SOURCE_REFSPEC=$GERRIT_REFSPEC
[ -n "$GERRIT_PROJECT" ] && SRC_PROJECT=$GERRIT_PROJECT
PACKAGENAME=${SRC_PROJECT##*/}
local DEBSPECFILES="${PACKAGENAME}-src/debian"
# If we are triggered from gerrit env, let's keep current workflow,
# and fetch code from upstream
# otherwise let's define custom path to already prepared source code
# using $CUSTOM_SRC_PATH variable
if [ -n "${GERRIT_BRANCH}" ]; then
# Get package tree from gerrit
fetch_upstream
local _srcpath="${MYOUTDIR}/${PACKAGENAME}-src"
else
local _srcpath="${CUSTOM_SRC_PATH}"
fi
local _specpath=$_srcpath
local _debianpath=$_specpath
if [ -d "${_debianpath}/debian" ] ; then
# Unpacked sources and specs
local srcpackagename=`head -1 ${_debianpath}/debian/changelog | cut -d' ' -f1`
local version=`head -1 ${_debianpath}/debian/changelog | sed 's|^.*(||;s|).*$||' | awk -F "-" '{print $1}'`
local binpackagenames="`cat ${_debianpath}/debian/control | grep ^Package | cut -d' ' -f 2 | tr '\n' ' '`"
local epochnumber=`head -1 ${_debianpath}/debian/changelog | grep -o "(.:" | sed 's|(||'`
local distro=`head -1 ${_debianpath}/debian/changelog | awk -F'[ ;]' '{print $3}'`
# Get last commit info
# $message $author $email $cdate $commitsha $lastgitlog
get_last_commit_info ${_srcpath}
# Get revision number as commit count for src+spec projects
local _rev=`git -C $_srcpath rev-list --no-merges origin/${SOURCE_BRANCH} | wc -l`
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && _rev=$(( $_rev + 1 ))
local release="1~u14.04+mos${_rev}"
# if gitshasrc is not defined (we are not using fetch_upstream), let's do it
[ -n "${gitshasrc}" ] || local gitshasrc=$(git -C $_srcpath log -1 --pretty="%h")
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && release="${release}+git.${gitshasrc}"
local fullver=${epochnumber}${version}-${release}
# Update version and changelog
local firstline=1
local _dchopts="-c ${_debianpath}/debian/changelog"
echo "$lastgitlog" | while read LINE; do
[ $firstline == 1 ] && local cmd="dch $_dchopts -D $distro -b --force-distribution -v $fullver" || local cmd="dch $_dchopts -a"
firstline=0
local commitid=`echo "$LINE" | cut -d'|' -f1`
local email=`echo "$LINE" | cut -d'|' -f2`
local author=`echo "$LINE" | cut -d'|' -f3`
local subject=`echo "$LINE" | cut -d'|' -f4`
DEBFULLNAME="$author" DEBEMAIL="$email" $cmd "$commitid $subject"
done
TAR_NAME="${srcpackagename}_${version#*:}.orig.tar.gz"
# Update changelog
DEBFULLNAME="$author" DEBEMAIL="$email" dch -c ${_debianpath}/debian/changelog -a "$commitsha $message"
# Prepare source tarball
# Exclude debian dir
pushd $_srcpath &>/dev/null
cat >.gitattributes<<-EOF
/debian export-ignore
/.gitignore export-ignore
/.gitreview export-ignore
EOF
git archive --prefix=./ --format=tar.gz --worktree-attributes HEAD --output="${BUILDDIR}/${TAR_NAME}"
popd &>/dev/null
mkdir -p ${BUILDDIR}/$srcpackagename
cp -R ${_debianpath}/debian ${BUILDDIR}/${srcpackagename}/
fi
# Build stage
local REQUEST=$REQUEST_NUM
[ -n "$LP_BUG" ] && REQUEST=$LP_BUG
COMPONENTS="main restricted"
[ -n "${EXTRAREPO}" ] && EXTRAREPO="${EXTRAREPO}|"
EXTRAREPO="${EXTRAREPO}http://${REMOTE_REPO_HOST}/${DEB_REPO_PATH} ${DEB_DIST_NAME} ${COMPONENTS}"
[ "$IS_UPDATES" == 'true' ] \
&& EXTRAREPO="${EXTRAREPO}|http://${REMOTE_REPO_HOST}/${DEB_REPO_PATH} ${DEB_PROPOSED_DIST_NAME} ${COMPONENTS}"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && [ "$IS_UPDATES" != "true" ] \
&& EXTRAREPO="${EXTRAREPO}|http://${REMOTE_REPO_HOST}/${REPO_REQUEST_PATH_PREFIX}/${REQUEST}/${DEB_REPO_PATH} ${DEB_DIST_NAME} ${COMPONENTS}"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && [ "$IS_UPDATES" == "true" ] \
&& EXTRAREPO="${EXTRAREPO}|http://${REMOTE_REPO_HOST}/${REPO_REQUEST_PATH_PREFIX}/${REQUEST}/${DEB_REPO_PATH} ${DEB_PROPOSED_DIST_NAME} ${COMPONENTS}"
export EXTRAREPO
pushd $BUILDDIR &>/dev/null
echo "BUILD_SUCCEEDED=false" > ${WRKDIR}/buildresult.params
bash -ex ${BINDIR}/docker-builder/build-deb-package.sh
local exitstatus=`cat buildresult/exitstatus.sbuild || echo 1`
rm -f buildresult/exitstatus.sbuild
[ -f "buildresult/buildlog.sbuild" ] && mv buildresult/buildlog.sbuild ${WRKDIR}/buildlog.txt
fill_buildresult $exitstatus 0 $PACKAGENAME DEB
if [ "$exitstatus" == "0" ] && [ -n "${GERRIT_BRANCH}" ]; then
tmpdir=`mktemp -d ${PKG_DIR}/build-XXXXXXXX`
rm -f ${WRKDIR}/buildresult.params
cat >${WRKDIR}/buildresult.params<<-EOL
BUILD_HOST=`hostname -f`
PKG_PATH=$tmpdir
GERRIT_CHANGE_STATUS=$GERRIT_CHANGE_STATUS
REQUEST_NUM=$REQUEST_NUM
LP_BUG=$LP_BUG
IS_SECURITY=$IS_SECURITY
EXTRAREPO="$EXTRAREPO"
REPO_TYPE=deb
DIST=$DIST
EOL
mv buildresult/* $tmpdir/
fi
popd &>/dev/null
echo "Packages: $PACKAGENAME"
exit $exitstatus
}
main "$@"
exit 0
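The `.gitattributes`/`git archive --worktree-attributes` trick above is the standard way to keep `debian/` and review files out of the orig tarball; a self-contained sketch in a throwaway repository:

```shell
# Build a source archive that omits paths marked export-ignore.
# The repository layout here is a disposable example.
tmprepo=$(mktemp -d)
cd "$tmprepo"
git init -q .
git config user.email demo@example.com
git config user.name demo
mkdir debian
touch debian/control main.c
git add -A
git commit -qm 'initial'
printf '/debian export-ignore\n' > .gitattributes
# --worktree-attributes honors the uncommitted .gitattributes
git archive --worktree-attributes --format=tar HEAD -o source.tar
tar -tf source.tar   # lists main.c but no debian/ entries
```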


@@ -1,130 +0,0 @@
#!/bin/bash
set -o xtrace
set -o errexit
[ -f .fuel-default ] && source .fuel-default
BINDIR=$(dirname `readlink -e $0`)
source "${BINDIR}"/build-functions.sh
main () {
set_default_params
[ -n "$GERRIT_BRANCH" ] && SOURCE_BRANCH=$GERRIT_BRANCH && SOURCE_REFSPEC=$GERRIT_REFSPEC
[ -n "$GERRIT_PROJECT" ] && SRC_PROJECT=$GERRIT_PROJECT
PACKAGENAME=${SRC_PROJECT##*/}
# If we are triggered from gerrit env, let's keep current workflow,
# and fetch code from upstream
# otherwise let's define custom path to already prepared source code
# using $CUSTOM_SRC_PATH variable
if [ -n "${GERRIT_BRANCH}" ]; then
# Get package tree from gerrit
fetch_upstream
local _srcpath="${MYOUTDIR}/${PACKAGENAME}-src"
else
local _srcpath="${CUSTOM_SRC_PATH}"
fi
local _specpath="${_srcpath}/specs"
# Get last commit info
# $message $author $email $cdate $commitsha $lastgitlog
get_last_commit_info ${_srcpath}
# Update specs
local specfile=`find $_specpath -name "*.spec"`
local version=`rpm -q --specfile $specfile --queryformat '%{VERSION}\n' | head -1`
local release=`rpm -q --specfile $specfile --queryformat '%{RELEASE}\n' | head -1`
## Add changelog section if it doesn't exist
[ `cat ${specfile} | grep -c '^%changelog'` -eq 0 ] && echo "%changelog" >> ${specfile}
local _rev=`git -C $_srcpath rev-list --no-merges origin/${SOURCE_BRANCH} | wc -l`
# if gitshasrc is not defined (we are not using fetch_upstream), let's do it
[ -n "${gitshasrc}" ] || local gitshasrc=$(git -C $_srcpath log -1 --pretty="%h")
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && _rev=$(( $_rev + 1 ))
local release="1.mos${_rev}"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && release="${release}.git.${gitshasrc}"
local TAR_NAME=${PACKAGENAME}-${version}.tar.gz
# Update version and changelog
sed -i "s|Version:.*$|Version: ${version}|" $specfile
sed -i "s|Release:.*$|Release: ${release}|" $specfile
sed -i "s|Source0:.*$|Source0: ${TAR_NAME}|" $specfile
## Update changelog
local firstline=1
if [ ! -z "$lastgitlog" ]; then
sed -i "/%changelog/i%newchangelog" ${specfile}
echo "$lastgitlog" | while read LINE; do
local commitid=`echo "$LINE" | cut -d'|' -f1`
local email=`echo "$LINE" | cut -d'|' -f2`
local author=`echo "$LINE" | cut -d'|' -f3`
# Get current date to avoid wrong chronological order in %changelog section
local date=`LC_TIME=C date +"%a %b %d %Y"`
local subject=`echo "$LINE" | cut -d'|' -f4`
[ $firstline == 1 ] && sed -i "/%changelog/i\* $date $author \<${email}\> \- ${version}-${release}" ${specfile}
sed -i "/%changelog/i\- $commitid $subject" ${specfile}
firstline=0
done
fi
sed -i '/%changelog/i\\' ${specfile}
sed -i '/^%changelog/d' ${specfile}
sed -i 's|^%newchangelog|%changelog|' ${specfile}
cp ${specfile} ${BUILDDIR}/
# Prepare source tarball
pushd $_srcpath &>/dev/null
git archive --format tar --worktree-attributes HEAD > ${BUILDDIR}/${PACKAGENAME}.tar
git rev-parse HEAD > ${BUILDDIR}/version.txt
pushd $BUILDDIR &>/dev/null
tar -rf ${PACKAGENAME}.tar version.txt
gzip -9 ${PACKAGENAME}.tar
mv ${PACKAGENAME}.tar.gz ${PACKAGENAME}-${version}.tar.gz
[ -f version.txt ] && rm -f version.txt
popd &>/dev/null
popd &>/dev/null
# Build stage
local REQUEST=$REQUEST_NUM
[ -n "$LP_BUG" ] && REQUEST=$LP_BUG
[ -n "${EXTRAREPO}" ] && EXTRAREPO="${EXTRAREPO}|"
EXTRAREPO="${EXTRAREPO}repo1,http://${REMOTE_REPO_HOST}/${RPM_OS_REPO_PATH}/x86_64"
[ "$IS_UPDATES" == 'true' ] && \
EXTRAREPO="${EXTRAREPO}|repo2,http://${REMOTE_REPO_HOST}/${RPM_PROPOSED_REPO_PATH}/x86_64"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && [ "$IS_UPDATES" != "true" ] && \
EXTRAREPO="${EXTRAREPO}|repo3,http://${REMOTE_REPO_HOST}/${REPO_REQUEST_PATH_PREFIX}/${REQUEST}/${RPM_OS_REPO_PATH}/x86_64"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && [ "$IS_UPDATES" == "true" ] && \
EXTRAREPO="${EXTRAREPO}|repo3,http://${REMOTE_REPO_HOST}/${REPO_REQUEST_PATH_PREFIX}/${REQUEST}/${RPM_PROPOSED_REPO_PATH}/x86_64"
export EXTRAREPO
pushd $BUILDDIR &>/dev/null
echo "BUILD_SUCCEEDED=false" > ${WRKDIR}/buildresult.params
bash -x ${BINDIR}/docker-builder/build-rpm-package.sh
local exitstatus=`cat build/exitstatus.mock || echo 1`
rm -f build/exitstatus.mock build/state.log
[ -f "build/build.log" ] && mv build/build.log ${WRKDIR}/buildlog.txt
[ -f "build/root.log" ] && mv build/root.log ${WRKDIR}/rootlog.txt
fill_buildresult $exitstatus 0 $PACKAGENAME RPM
if [ "$exitstatus" == "0" ] && [ -n "${GERRIT_BRANCH}" ]; then
tmpdir=`mktemp -d ${PKG_DIR}/build-XXXXXXXX`
rm -f ${WRKDIR}/buildresult.params
cat >${WRKDIR}/buildresult.params<<-EOL
BUILD_HOST=`hostname -f`
PKG_PATH=$tmpdir
GERRIT_CHANGE_STATUS=$GERRIT_CHANGE_STATUS
REQUEST_NUM=$REQUEST_NUM
LP_BUG=$LP_BUG
IS_SECURITY=$IS_SECURITY
EXTRAREPO="$EXTRAREPO"
REPO_TYPE=rpm
DIST=$DIST
EOL
mv build/* $tmpdir/
fi
popd &>/dev/null
echo "Packages: $PACKAGENAME"
exit $exitstatus
}
main "$@"
exit 0
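The `%newchangelog` shuffle above keeps the freshly generated entries at the top of `%changelog` without disturbing older ones; a minimal sketch of the same sed sequence on a made-up spec fragment:

```shell
# Insert a marker before %changelog, queue new entries in front of the
# old header, drop the old header, then rename the marker back.
specfile=$(mktemp)
printf 'Name: demo\n%%changelog\n* old entry\n' > "$specfile"
sed -i '/^%changelog/i%newchangelog' "$specfile"
sed -i '/^%changelog/i\* Mon Jan 01 2024 Jane Doe <jane@example.com> - 1.0-1' "$specfile"
sed -i '/^%changelog/d' "$specfile"
sed -i 's|^%newchangelog|%changelog|' "$specfile"
cat "$specfile"
```

The result keeps `%changelog` as the header with the new entry first, followed by the pre-existing ones.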


@@ -1,262 +0,0 @@
#!/bin/bash
[ -z "$GERRIT_USER" ] && GERRIT_USER='openstack-ci-jenkins'
[ -z "$GERRIT_HOST" ] && GERRIT_HOST=$gerrit_host
[ -z "$GERRIT_PORT" ] && GERRIT_PORT=$gerrit_port
[ -z "$GERRIT_PORT" ] && GERRIT_PORT=29418
[ -z "$GERRIT_SCHEME" ] && GERRIT_SCHEME="ssh"
URL="${GERRIT_SCHEME}://${GERRIT_USER}@${GERRIT_HOST}:${GERRIT_PORT}"
GITDATA=${HOME}/gitdata/$GERRIT_HOST
METADATA=${HOME}/repometadata
PKG_DIR=${HOME}/built_packages
EXCLUDES='--exclude-vcs'
WRKDIR=`pwd`
MYOUTDIR=${WRKDIR}/wrk-build
BUILDDIR=${MYOUTDIR}/src-to-build
rm -rf $BUILDDIR
mkdir -p $BUILDDIR
[ ! -d "$PKG_DIR" ] && mkdir -p $PKG_DIR
[ -f "${WRKDIR}/buildlog.txt" ] && rm -f ${WRKDIR}/buildlog.txt
error () {
echo
echo -e "ERROR: $*"
echo
exit 1
}
info () {
echo
echo -e "INFO: $*"
echo
}
job_lock() {
local LOCKFILE=$1
local TIMEOUT=600
shift
fd=15
eval "exec $fd>$LOCKFILE"
if [ "$1" = "set" ]; then
flock --timeout $TIMEOUT -x $fd
elif [ "$1" = "unset" ]; then
flock -u $fd
fi
}
request_is_merged () {
local REF=$1
local CHANGENUMBER=`echo $REF | cut -d '/' -f4`
local result=1
local status=`ssh ${GERRIT_USER}@${GERRIT_HOST} -p $GERRIT_PORT gerrit query --format=TEXT $CHANGENUMBER | egrep -o " +status:.*" | awk -F': ' '{print $2}'`
[ "$status" == "MERGED" ] && local result=0
return $result
}
set_default_params () {
[ -z "$PROJECT_NAME" ] && error "Project name is not defined! Exiting!"
[ -z "$PROJECT_VERSION" ] && error "Project version is not defined! Exiting!"
[ -z "$SECUPDATETAG" ] && local SECUPDATETAG="^Security-update"
[ -z "$IS_SECURITY" ] && IS_SECURITY='false'
if [ -n "$GERRIT_PROJECT" ]; then
GERRIT_CHANGE_STATUS="NEW"
if [ -n "$GERRIT_REFSPEC" ]; then
request_is_merged $GERRIT_REFSPEC && GERRIT_CHANGE_STATUS="MERGED"
else
# Support ref-updated gerrit event
GERRIT_CHANGE_STATUS="REF_UPDATED"
GERRIT_BRANCH=$GERRIT_REFNAME
fi
if [ -n "$GERRIT_CHANGE_COMMIT_MESSAGE" ] ; then
local GERRIT_MESSAGE="$(echo "$GERRIT_CHANGE_COMMIT_MESSAGE" | base64 -d || :)"
fi
if [ "$GERRIT_CHANGE_STATUS" == "NEW" ] ; then
REQUEST_NUM="CR-$GERRIT_CHANGE_NUMBER"
local _LP_BUG=`echo "$GERRIT_TOPIC" | egrep -o "group/[0-9]+" | cut -d'/' -f2`
#[ -z "$_LP_BUG" ] && _LP_BUG=`echo "$GERRIT_MESSAGE" | egrep -i -o "(closes|partial|related)-bug: ?#?[0-9]+" | sort -u | head -1 | awk -F'[: #]' '{print $NF}'`
[ -n "$_LP_BUG" ] && LP_BUG="LP-$_LP_BUG"
else
if [ -n "$GERRIT_MESSAGE" ] ; then
if [ $(echo "$GERRIT_MESSAGE" | grep -c "$SECUPDATETAG") -gt 0 ] ; then
IS_SECURITY='true'
fi
fi
fi
# Detect packagename
PACKAGENAME=${GERRIT_PROJECT##*/}
[ "${PACKAGENAME##*-}" == "build" ] && PACKAGENAME=${PACKAGENAME%-*}
SRC_PROJECT=${SRC_PROJECT_PATH}/$PACKAGENAME
[ "$IS_OPENSTACK" == "true" ] && SPEC_PROJECT=${SPEC_PROJECT_PATH}/${PACKAGENAME}${SPEC_PROJECT_SUFFIX}
case $GERRIT_PROJECT in
"$SRC_PROJECT" ) SOURCE_REFSPEC=$GERRIT_REFSPEC ;;
"$SPEC_PROJECT" ) SPEC_REFSPEC=$GERRIT_REFSPEC ;;
esac
SOURCE_BRANCH=$GERRIT_BRANCH
[ "$IS_OPENSTACK" == "true" ] && SPEC_BRANCH=$GERRIT_BRANCH
fi
[ -z "$PACKAGENAME" ] && error "Package name is not defined! Exiting!"
[ -z "$SOURCE_BRANCH" ] && error "Source branch is not defined! Exiting!"
[ "$IS_OPENSTACK" == "true" ] && [ -z "$SPEC_BRANCH" ] && SPEC_BRANCH=$SOURCE_BRANCH
[ "$IS_OPENSTACK" == "true" ] && SPEC_PROJECT=${SPEC_PROJECT_PATH}/${PACKAGENAME}${SPEC_PROJECT_SUFFIX}
SRC_PROJECT=${SRC_PROJECT_PATH}/$PACKAGENAME
}
fetch_upstream () {
# find corresponding requests
if [ -n "$SPEC_PROJECT" -a "${GERRIT_TOPIC%/*}" = "spec" ] ; then
local CORR_GERRIT_PROJECT=$SRC_PROJECT
[ "$GERRIT_PROJECT" == "$SRC_PROJECT" ] && CORR_GERRIT_PROJECT=$SPEC_PROJECT
local search_string="topic:${GERRIT_TOPIC} branch:${GERRIT_BRANCH} project:${CORR_GERRIT_PROJECT} -status:abandoned"
local CORR_CHANGE=`ssh -p $GERRIT_PORT ${GERRIT_USER}@$GERRIT_HOST gerrit query --current-patch-set \'${search_string}\'`
local CORR_CHANGE_REFSPEC="`echo \"${CORR_CHANGE}\" | grep 'ref:' | awk '{print $NF}'`"
local CORR_CHANGE_NUMBER=`echo $CORR_CHANGE_REFSPEC | cut -d'/' -f4`
local CORR_PATCHSET_NUMBER=`echo $CORR_CHANGE_REFSPEC | cut -d'/' -f5`
local CORR_CHANGE_URL=`echo "${CORR_CHANGE}" | grep 'url:' | awk '{print $NF}'`
local CORR_CHANGE_STATUS=`echo "${CORR_CHANGE}" | grep 'status:' | awk '{print $NF}'`
local corr_ref_count=`echo "$CORR_CHANGE_REFSPEC" | wc -l`
[ $corr_ref_count -gt 1 ] && error "ERROR: Multiple corresponding changes found!"
if [ -n "$CORR_CHANGE_NUMBER" ] ; then
# Provide corresponding change to vote script
cat > ${WRKDIR}/corr.setenvfile <<-EOL
CORR_CHANGE_NUMBER=$CORR_CHANGE_NUMBER
CORR_PATCHSET_NUMBER=$CORR_PATCHSET_NUMBER
CORR_CHANGE_URL=$CORR_CHANGE_URL
CORR_CHANGE_REFSPEC=$CORR_CHANGE_REFSPEC
EOL
fi
# Do not perform build stage if corresponding CR is not merged
if [ -n "${CORR_CHANGE_STATUS}" ] && [ "$GERRIT_CHANGE_STATUS" == "MERGED" ] && [ "$CORR_CHANGE_STATUS" != "MERGED" ] ; then
echo "SKIPPED=1" >> ${WRKDIR}/corr.setenvfile
error "Skipping build due to unmerged status of corresponding change ${CORR_CHANGE_URL}"
fi
fi
# Do not clone projects every time. It makes gerrit sad. Cache it!
for prj in $SRC_PROJECT $SPEC_PROJECT; do
# Update code base cache
[ -d ${GITDATA} ] || mkdir -p ${GITDATA}
if [ ! -d ${GITDATA}/$prj ]; then
info "Cache for $prj doesn't exist. Cloning to ${HOME}/gitdata/$prj"
mkdir -p ${GITDATA}/$prj
# Lock cache directory
job_lock ${GITDATA}/${prj}.lock set
pushd ${GITDATA} &>/dev/null
info "Cloning sources from $URL/$prj.git ..."
git clone "$URL/$prj.git" "$prj"
popd &>/dev/null
else
# Lock cache directory
job_lock ${GITDATA}/${prj}.lock set
info "Updating cache for $prj"
pushd ${GITDATA}/$prj &>/dev/null
info "Fetching sources from $URL/$prj.git ..."
# Replace git remote user
local remote=`git remote -v | head -1 | awk '{print $2}' | sed "s|//.*@|//${GERRIT_USER}@|"`
git remote rm origin
git remote add origin $remote
# Update gitdata
git fetch --all
popd &>/dev/null
fi
if [ "$prj" == "$SRC_PROJECT" ]; then
local _DIRSUFFIX=src
local _BRANCH=$SOURCE_BRANCH
[ -n "$SOURCE_REFSPEC" ] && local _REFSPEC=$SOURCE_REFSPEC
fi
if [ "$prj" == "$SPEC_PROJECT" ]; then
local _DIRSUFFIX=spec
local _BRANCH=$SPEC_BRANCH
[ -n "$SPEC_REFSPEC" ] && local _REFSPEC=$SPEC_REFSPEC
fi
[ -e "${MYOUTDIR}/${PACKAGENAME}-${_DIRSUFFIX}" ] && rm -rf "${MYOUTDIR}/${PACKAGENAME}-${_DIRSUFFIX}"
info "Getting $_DIRSUFFIX from $URL/$prj.git ..."
cp -R ${GITDATA}/${prj} ${MYOUTDIR}/${PACKAGENAME}-${_DIRSUFFIX}
# Unlock cache directory
job_lock ${GITDATA}/${prj}.lock unset
pushd ${MYOUTDIR}/${PACKAGENAME}-${_DIRSUFFIX} &>/dev/null
switch_to_revision $_BRANCH
# Get code from HEAD if change is merged
[ "$GERRIT_CHANGE_STATUS" == "MERGED" ] && unset _REFSPEC
# If _REFSPEC specified switch to it
if [ -n "$_REFSPEC" ] ; then
switch_to_changeset $prj $_REFSPEC
else
[ "$prj" == "${CORR_GERRIT_PROJECT}" ] && [ -n "${CORR_CHANGE_REFSPEC}" ] && switch_to_changeset $prj $CORR_CHANGE_REFSPEC
fi
popd &>/dev/null
case $_DIRSUFFIX in
src) gitshasrc=$gitsha
;;
spec) gitshaspec=$gitsha
;;
*) error "Unknown project type"
;;
esac
unset _DIRSUFFIX
unset _BRANCH
unset _REFSPEC
done
}
switch_to_revision () {
info "Switching to branch $*"
if ! git checkout $*; then
error "$* not accessible by default clone/fetch"
else
git reset --hard origin/$*
gitsha=`git log -1 --pretty="%h"`
fi
}
switch_to_changeset () {
info "Switching to changeset $2"
git fetch "$URL/$1.git" $2
git checkout FETCH_HEAD
gitsha=`git log -1 --pretty="%h"`
}
get_last_commit_info () {
if [ -n "$1" ] ; then
pushd $1 &>/dev/null
message="$(git log -n 1 --pretty=format:%B)"
author=$(git log -n 1 --pretty=format:%an)
email=$(git log -n 1 --pretty=format:%ae)
cdate=$(git log -n 1 --pretty=format:%ad | cut -d' ' -f1-3,5)
commitsha=$(git log -n 1 --pretty=format:%h)
lastgitlog=$(git log --pretty="%h|%ae|%an|%s" -n 10)
popd &>/dev/null
fi
}
fill_buildresult () {
#$status $time $PACKAGENAME $pkgtype
local status=$1
local time=$2
local packagename=$3
local pkgtype=$4
local xmlfilename=${WRKDIR}/buildresult.xml
local failcnt=0
local buildstat="Succeeded"
[ "$status" != "0" ] && failcnt=1 && buildstat="Failed"
echo "<testsuite name=\"Package build\" tests=\"Package build\" errors=\"0\" failures=\"$failcnt\" skip=\"0\">" > $xmlfilename
echo -n "<testcase classname=\"$pkgtype\" name=\"$packagename\" time=\"0\"" >> $xmlfilename
if [ "$failcnt" == "0" ] ; then
echo "/>" >> $xmlfilename
else
echo ">" >> $xmlfilename
echo "<failure type=\"Failure\" message=\"$buildstat\">" >> $xmlfilename
if [ -f "${WRKDIR}/buildlog.txt" ] ; then
sed -n '/^dpkg: error/,/^Package installation failed/p' ${WRKDIR}/buildlog.txt | egrep -v '^Get|Selecting|Unpacking|Preparing' >> $xmlfilename || :
sed -n '/^The following information may help to resolve the situation/,/^Package installation failed/p' ${WRKDIR}/buildlog.txt >> $xmlfilename || :
grep -B 20 '^dpkg-buildpackage: error' ${WRKDIR}/buildlog.txt >> $xmlfilename || :
grep -B 20 '^EXCEPTION:' ${WRKDIR}/buildlog.txt >> $xmlfilename || :
fi
if [ -f "${WRKDIR}/rootlog.txt" ] ; then
sed -n '/No Package found/,/Exception/p' ${WRKDIR}/rootlog.txt >> $xmlfilename || :
sed -n '/Error: /,/You could try using --skip-broken to work around the problem/p' ${WRKDIR}/rootlog.txt >> $xmlfilename || :
fi
echo "</failure>" >> $xmlfilename
echo "</testcase>" >> $xmlfilename
fi
echo "</testsuite>" >> $xmlfilename
}
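`job_lock` above serializes access to the git cache with `flock(1)` held on a dedicated file descriptor; a condensed sketch of that pattern (the lock file path here is an example):

```shell
# Take an exclusive lock on fd 15, run the critical section, release.
LOCKFILE=$(mktemp)
exec 15>"$LOCKFILE"
if flock --timeout 5 -x 15; then
    echo "lock held"
    # ... critical section: update the shared cache ...
    flock -u 15
fi
```

Keeping the descriptor open for the life of the process is what lets separate `set`/`unset` calls in the original operate on the same lock.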


@@ -1,167 +0,0 @@
#!/bin/bash -ex
usage() {
cat <<EOF
Usage: $(basename "$0") [options]
If NO parameters specified, this script will:
- search for sources in the local directory
- put built packages to ./buildresult
- use the preconfigured upstream mirror (http://mirror.yandex.ru/ubuntu)
Mandatory arguments to long options are mandatory for short options too.
-h, --help display this help and exit
-b, --build-target distname (currently "trusty" and "centos7" are supported)
-s, --source sources directory
-u, --upstream-repo upstream mirror (default is mirror.yandex.ru/ubuntu)
-r, --ext-repos additional mirrors
-o, --output-dir output directory
Please use the following syntax to add additional repositories:
rpm:
"name1,http://url/to/repo1|name2,http://url/to/repo2"
deb:
"http://url/to/repo1 distro component1 component2|http://url/to/repo2 distro component3 component4"
IMPORTANT:
Sources should be prepared by the maintainer before the build:
rpm:
- srpm file:
./python-amqp-1.4.5-2.mira1.src.rpm
- file tree with .spec file and source tarball:
./python-pbr-0.10.0.tar.gz
./some-patch.patch
./python-pbr.spec
deb:
- packed sources (.dsc, .*z , .diff files):
./websocket-client_0.12.0-ubuntu1.debian.tar.gz
./websocket-client_0.12.0-ubuntu1.dsc
./websocket-client_0.12.0.orig.tar.gz
- file tree with pristine source tarball in the root of tree and debian folder inside some parent folder:
./python-pbr/debian/*
./python-pbr_0.10.0.orig.tar.gz
EOF
}
usage_short() {
echo "Usage: $(basename "$0") [options]"
echo
echo "Try $(basename "$0") --help for more options."
}
die() { echo "$@" 1>&2 ; exit 1; }
OPTS=$(getopt -o b:s:e:o:u:h -l build-target:,source:,ext-repos:,output-dir:,upstream-repo:,help -- "$@")
if [ $? != 0 ]; then
usage_short
exit 1
fi
eval set -- "$OPTS"
WORKING_DIR=${0%/*}
while true ; do
case "$1" in
-h| --help ) usage ; exit 0;;
-b | --build-target ) BUILD_TARGET="$2"; shift; shift;;
-s | --source ) BUILD_SOURCE="$2"; shift; shift;;
-e | --ext-repos ) EXTRAREPO="$2"; export EXTRAREPO; shift; shift;;
-o | --output-dir ) OUTPUT_DIR="$2"; shift; shift;;
-u | --upstream-repo ) UPSTREAM_MIRROR="$2"; export UPSTREAM_MIRROR; shift; shift;;
-- ) shift; break;;
* ) break;;
esac
done
if [[ ${BUILD_SOURCE} = "" ]]; then
BUILD_SOURCE=${PWD}
fi
build_docker_image() {
case "$BUILD_TARGET" in
centos7)
docker build -t mockbuild "${WORKING_DIR}"/docker-builder/mockbuild/
;;
trusty)
docker build -t sbuild "${WORKING_DIR}"/docker-builder/sbuild/
;;
esac
}
create_buildroot() {
case "$BUILD_TARGET" in
centos7)
"${WORKING_DIR}"/docker-builder/create-rpm-chroot.sh
;;
trusty)
"${WORKING_DIR}"/docker-builder/create-deb-chroot.sh
;;
*) die "Unknown build target specified. Currently 'trusty' and 'centos7' are supported"
esac
}
update_buildroot() {
case "$BUILD_TARGET" in
centos7)
"${WORKING_DIR}"/docker-builder/update-rpm-chroot.sh
;;
trusty)
"${WORKING_DIR}"/docker-builder/update-deb-chroot.sh
;;
*) die "Unknown build target specified. Currently 'trusty' and 'centos7' are supported"
esac
}
main () {
case "$BUILD_TARGET" in
trusty)
export DIST="${BUILD_TARGET}"
if [[ "$(docker images -q sbuild 2> /dev/null)" == "" ]]; then
build_docker_image
create_buildroot
else
if [[ ! -d /var/cache/docker-builder/sbuild/"${BUILD_TARGET}"-amd64 ]]; then
create_buildroot
else
update_buildroot
fi
fi
cd "${BUILD_SOURCE}"
bash -ex "${WORKING_DIR}"/docker-builder/build-deb-package.sh
local exitstatus=`cat buildresult/exitstatus.sbuild || echo 1`
if [[ "${OUTPUT_DIR}" != "" ]]; then
mkdir -p "${OUTPUT_DIR}"
mv buildresult/* "${OUTPUT_DIR}"
rm -rf buildresult
fi
;;
centos7)
export DIST="${BUILD_TARGET}"
if [[ "$(docker images -q mockbuild 2> /dev/null)" == "" ]]; then
build_docker_image
create_buildroot
else
if [[ ! -d /var/cache/docker-builder/mock/cache/epel-7-x86_64 ]]; then
create_buildroot
else
update_buildroot
fi
fi
cd "${BUILD_SOURCE}"
bash -ex "${WORKING_DIR}"/docker-builder/build-rpm-package.sh
local exitstatus=`cat build/exitstatus.mock || echo 1`
if [[ "${OUTPUT_DIR}" != "" ]]; then
mkdir -p "${OUTPUT_DIR}"
mv build/* "${OUTPUT_DIR}"
rm -rf build
fi
;;
*) die "Unknown build target specified. Currently 'trusty' and 'centos7' are supported"
esac
exit "${exitstatus}"
}
main "${@}"
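The option handling above follows the usual util-linux `getopt(1)` long-option pattern: normalize the command line once, `eval set --` the result, then walk it with a `case` loop. A trimmed, standalone sketch (option names are illustrative):

```shell
# Normalize options with getopt(1), then consume them pairwise;
# "--" separates options from positional arguments.
parse_args() {
    local OPTS
    OPTS=$(getopt -o b:s:h -l build-target:,source:,help -- "$@") || return 1
    eval set -- "$OPTS"
    while true; do
        case "$1" in
            -h|--help) echo "usage: demo [options]"; return 0;;
            -b|--build-target) BUILD_TARGET="$2"; shift 2;;
            -s|--source) BUILD_SOURCE="$2"; shift 2;;
            --) shift; break;;
        esac
    done
}
parse_args --build-target trusty -s /tmp/src
echo "$BUILD_TARGET $BUILD_SOURCE"   # -> trusty /tmp/src
```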


@@ -1,160 +0,0 @@
#!/bin/bash
set -o xtrace
set -o errexit
[ -f ".packages-defaults" ] && source .packages-defaults
BINDIR=$(dirname `readlink -e $0`)
source "${BINDIR}"/build-functions.sh
main () {
set_default_params
# Get package tree from gerrit
fetch_upstream
local _srcpath="${MYOUTDIR}/${PACKAGENAME}-src"
local _specpath=$_srcpath
local _testspath=$_srcpath
[ "$IS_OPENSTACK" == "true" ] && _specpath="${MYOUTDIR}/${PACKAGENAME}-spec${SPEC_PREFIX_PATH}" && _testspath="${MYOUTDIR}/${PACKAGENAME}-spec"
# Get last commit info
# $message $author $email $cdate $commitsha $lastgitlog
get_last_commit_info ${_srcpath}
# Update specs
local specfile=`find $_specpath -name "*.spec"`
#local binpackagename=`rpm -q $RPMQUERYPARAMS --specfile $specfile --queryformat %{NAME}"\n" | head -1`
local define_macros=(
-D 'kernel_module_package_buildreqs kernel-devel'
-D 'kernel_module_package(n:v:r:s:f:xp:) \
%package -n kmod-%{-n*} \
Summary: %{-n*} kernel module(s) \
Version: %{version} \
Release: %{release} \
%description -n kmod-%{-n*} \
This package provides the %{-n*} kernel modules
' )
local version=`rpm -q "${define_macros[@]}" --specfile $specfile --queryformat %{VERSION}"\n" | head -1`
local release=`rpm -q "${define_macros[@]}" --specfile $specfile --queryformat %{RELEASE}"\n" | head -1`
## Add changelog section if it doesn't exist
[ "`cat ${specfile} | grep -c '^%changelog'`" -eq 0 ] && echo "%changelog" >> ${specfile}
if [ "$IS_OPENSTACK" == "true" ] ; then
# Get version number from the latest git tag for openstack packages
local release_tag=`git -C $_srcpath describe --abbrev=0`
# Deal with PyPi versions like 2015.1.0rc1
# It breaks version comparison
# Change it to 2015.1.0~rc1
local convert_version_py="$(dirname $(readlink -e $0))/convert-version.py"
version=$(python ${convert_version_py} --tag ${release_tag})
# Get revision number as commit count for src+spec projects
local _rev=$(git -C $_srcpath rev-list --no-merges ${version}..origin/${SOURCE_BRANCH} | wc -l)
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && _rev=$(( $_rev + 1 ))
local release="mos8.0.${_rev}"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && release="${release}.git.${gitshasrc}.${gitshaspec}"
local TAR_NAME=${PACKAGENAME}-${version}.tar.gz
# Update version and changelog
sed -i "s|Version:.*$|Version: ${version}|" $specfile
sed -i "/Release/s|%{?dist}.*$|%{?dist}~${release}|" $specfile
sed -i "s|Source0:.*$|Source0: ${TAR_NAME}|" $specfile
# Prepare source tarball
pushd $_srcpath &>/dev/null
if [ "$PACKAGENAME" == "murano-apps" ]; then
# Do not perform `setup.py sdist` for murano-apps package
tar -czf ${BUILDDIR}/$TAR_NAME $EXCLUDES .
else
python setup.py --version # this will download pbr if it's not available
PBR_VERSION=$release_tag python setup.py sdist -d ${BUILDDIR}/
# Fix source folder name at sdist tarball
local sdist_tarball=$(find ${BUILDDIR}/ -maxdepth 1 -name "*.gz")
if [ "$(tar -tf $sdist_tarball | head -n 1 | cut -d'/' -f1)" != "${PACKAGENAME}-${version}" ] ; then
# rename source folder
local tempdir=$(mktemp -d)
tar -C $tempdir -xf $sdist_tarball
mv $tempdir/* $tempdir/${PACKAGENAME}-${version}
tar -C $tempdir -czf ${BUILDDIR}/$TAR_NAME ${PACKAGENAME}-${version}
rm -f $sdist_tarball
[ -d "$tempdir" ] && rm -rf $tempdir
else
mv $sdist_tarball ${BUILDDIR}/$TAR_NAME || :
fi
fi
cp $_specpath/rpm/SOURCES/* ${BUILDDIR}/ &>/dev/null || :
else
# TODO: Support unpacked source tree
# Packed sources (.spec + .gz + stuff)
# Exclude tests folder
cp -R ${_srcpath}/* $BUILDDIR
[ -d "${BUILDDIR}/tests" ] && rm -rf ${BUILDDIR}/tests
fi
## Update changelog
firstline=1
if [ ! -z "$lastgitlog" ]; then
sed -i "/^%changelog/i%newchangelog" ${specfile}
echo "$lastgitlog" | while read LINE; do
commitid=`echo "$LINE" | cut -d'|' -f1`
email=`echo "$LINE" | cut -d'|' -f2`
author=`echo "$LINE" | cut -d'|' -f3`
# Get current date to avoid wrong chronological order in %changelog section
date=`LC_TIME=C date +"%a %b %d %Y"`
subject=`echo "$LINE" | cut -d'|' -f4`
[ $firstline == 1 ] && sed -i "/^%changelog/i\* $date $author \<${email}\> \- ${version}-${release}" ${specfile}
sed -i "/^%changelog/i\- $commitid $subject" ${specfile}
firstline=0
done
sed -i '/^%changelog/i\\' ${specfile}
sed -i '/^%changelog/d' ${specfile}
sed -i 's|^%newchangelog|%changelog|' ${specfile}
fi
echo "Resulting spec-file:"
cat ${specfile}
cp ${specfile} ${BUILDDIR}/
# Prepare tests folder to provide as parameter
rm -f ${WRKDIR}/tests.envfile
[ -d "${_testspath}/tests" ] && echo "TESTS_CONTENT='`tar -cz -C ${_testspath} tests | base64 -w0`'" > ${WRKDIR}/tests.envfile
# Build stage
local REQUEST=$REQUEST_NUM
[ -n "$LP_BUG" ] && REQUEST=$LP_BUG
EXTRAREPO="repo1,http://${REMOTE_REPO_HOST}/${RPM_OS_REPO_PATH}/x86_64"
[ "$IS_UPDATES" == 'true' ] && \
EXTRAREPO="${EXTRAREPO}|repo2,http://${REMOTE_REPO_HOST}/${RPM_PROPOSED_REPO_PATH}/x86_64"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && [ "$IS_UPDATES" != "true" ] && \
EXTRAREPO="${EXTRAREPO}|repo3,http://${REMOTE_REPO_HOST}/${REPO_REQUEST_PATH_PREFIX}/${REQUEST}/${RPM_OS_REPO_PATH}/x86_64"
[ "$GERRIT_CHANGE_STATUS" == "NEW" ] && [ "$IS_UPDATES" == "true" ] && \
EXTRAREPO="${EXTRAREPO}|repo3,http://${REMOTE_REPO_HOST}/${REPO_REQUEST_PATH_PREFIX}/${REQUEST}/${RPM_PROPOSED_REPO_PATH}/x86_64"
export EXTRAREPO
pushd $BUILDDIR &>/dev/null
echo "BUILD_SUCCEEDED=false" > ${WRKDIR}/buildresult.params
bash -x ${BINDIR}/docker-builder/build-rpm-package.sh
local exitstatus=`cat build/exitstatus.mock || echo 1`
rm -f build/exitstatus.mock build/state.log
[ -f "build/build.log" ] && mv build/build.log ${WRKDIR}/buildlog.txt
[ -f "build/root.log" ] && mv build/root.log ${WRKDIR}/rootlog.txt
fill_buildresult $exitstatus 0 $PACKAGENAME RPM
if [ "$exitstatus" == "0" ] ; then
tmpdir=`mktemp -d ${PKG_DIR}/build-XXXXXXXX`
rm -f ${WRKDIR}/buildresult.params
cat >${WRKDIR}/buildresult.params<<-EOL
BUILD_HOST=`hostname -f`
PKG_PATH=$tmpdir
GERRIT_CHANGE_STATUS=$GERRIT_CHANGE_STATUS
REQUEST_NUM=$REQUEST_NUM
LP_BUG=$LP_BUG
IS_SECURITY=$IS_SECURITY
EXTRAREPO="$EXTRAREPO"
REPO_TYPE=rpm
DIST=$DIST
EOL
mv build/* $tmpdir/
fi
popd &>/dev/null
exit $exitstatus
}
main "$@"
exit 0


@@ -1,72 +0,0 @@
#!/usr/bin/env python
##
# Convert pip-style alpha/beta/rc/dev versions to the ones suitable for a
# package manager.
# Does not modify the conventional 3-digit version numbers.
# Examples:
# 1.2.3.0a4 -> 1.2.3~a4
# 1.2.3rc1 -> 1.2.3~rc1
# 1.2.3 -> 1.2.3
import argparse
from pkg_resources import parse_version
import re
def strip_leading_zeros(s):
return re.sub(r"^0+([0-9]+)", r"\1", s)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
'-t', '--tag', dest='tag', action='store', type=str,
help='PyPi version tag', required=True, default='0'
)
params, other_params = parser.parse_known_args()
pip_ver = params.tag
# drop the dashed part from the version string because
# it represents the patch level of the given version
pip_ver = pip_ver.split('-')[0]
# prepend a leading 1 if the tag starts with a letter
if re.match(r"^[a-zA-Z]", pip_ver):
pip_ver = '1' + pip_ver
# parse_version converts the string '12.0.0.0rc1'
# to the tuple ('00000012', '*c', '00000001', '*final')
# details:
# http://galaxy-dist.readthedocs.org/en/latest/lib/pkg_resources.html
pip_ver_parts = parse_version(pip_ver)
_ver = True
pkg_ver_part = []
pkg_alpha = ""
pkg_rev_part = []
for part in pip_ver_parts:
if part == "*final":
continue
if re.match(r"[*a-z]", part):
_ver = False
pkg_alpha = re.sub(r"^\*", "~", part)
continue
if _ver:
pkg_ver_part.append(strip_leading_zeros(part))
else:
pkg_rev_part.append(strip_leading_zeros(part))
# replace 'c' and '@' with 'rc' and 'dev' in pkg_alpha
pkg_alpha = pkg_alpha.replace('c', 'rc')
pkg_alpha = pkg_alpha.replace('@', 'dev')
# expand version to three items
while (len(pkg_ver_part) < 3):
pkg_ver_part.append('0')
print('.'.join(pkg_ver_part) + pkg_alpha + '.'.join(pkg_rev_part))
if __name__ == "__main__":
main()
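The `~` separator produced above exploits dpkg/rpm version comparison, where a tilde sorts before anything, including the end of the string, so a pre-release like `1.2.3~rc1` correctly orders below the final `1.2.3`. The same tag mapping can be sketched without the legacy `pkg_resources` tuple API; the regex below is an assumption of mine and covers only the numeric examples from the header comment, not the letter-prefixed tags the original also handles:

```python
import re

def pkg_version(tag):
    """Map a pip-style pre-release tag to a package-manager version.

    Examples from the original script's header:
    1.2.3.0a4 -> 1.2.3~a4, 1.2.3rc1 -> 1.2.3~rc1, 1.2.3 -> 1.2.3
    """
    tag = tag.split('-')[0]  # drop the dashed patch-level part
    m = re.match(r'^(\d+(?:\.\d+)*?)(?:\.0)?(a|b|rc|dev)(\d*)$', tag)
    if m is None:
        return tag  # plain X.Y.Z versions pass through unchanged
    release, alpha, num = m.groups()
    # '~' sorts before the empty string in dpkg/rpm comparison,
    # so the pre-release orders below the final release.
    return '{0}~{1}{2}'.format(release, alpha, num)
```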


@@ -1,42 +0,0 @@
#!/bin/bash -ex
. $(dirname $(readlink -f $0))/config
CONTAINERNAME=sbuild:latest
CACHEPATH=/var/cache/docker-builder/sbuild
[ -z "$DIST" ] && DIST=trusty
if [ -n "$EXTRAREPO" ] ; then
EXTRACMD=""
OLDIFS="$IFS"
IFS='|'
for repo in $EXTRAREPO; do
IFS="$OLDIFS"
EXTRACMD="${EXTRACMD} --chroot-setup-commands=\"apt-add-repo deb $repo\" "
IFS='|'
done
IFS="$OLDIFS"
fi
dscfile=$(find . -maxdepth 1 -name \*.dsc | head -1)
debianfolder=$(find . -wholename "*debian/changelog*" | head -1 | sed 's|^./||; s|debian/changelog||')
if [ -n "$dscfile" ]; then
SOURCEDEST=$dscfile
SOURCEDEST=`basename $SOURCEDEST`
elif [ -n "$debianfolder" ] ; then
SOURCEDEST=$debianfolder
fi
docker run ${DNSPARAM} --privileged --rm -v ${CACHEPATH}:/srv/images:ro \
-v $(pwd):/srv/source ${CONTAINERNAME} \
bash -c "( sed -i '/debian\/rules/d' /usr/bin/sbuild
DEB_BUILD_OPTIONS=nocheck /usr/bin/sbuild -d ${DIST} --nolog \
--source --force-orig-source \
$EXTRACMD \
--chroot-setup-commands=\"apt-get update\" \
--chroot-setup-commands=\"apt-get upgrade -f -y --force-yes\" \
/srv/source/${SOURCEDEST} 2>&1
echo \$? > /srv/build/exitstatus.sbuild ) \
| tee /srv/build/buildlog.sbuild
rm -rf /srv/source/buildresult
mv /srv/build /srv/source/buildresult
chown -R `id -u`:`id -g` /srv/source"


@@ -1,43 +0,0 @@
#!/bin/bash -ex
. $(dirname $(readlink -f $0))/config
CONTAINERNAME=mockbuild:latest
CACHEPATH=/var/cache/docker-builder/mock
DIST_VERSION=`echo $DIST | sed 's|centos||'`
[ -z "${DIST_VERSION}" ] && DIST_VERSION=7
EXTRACMD=":"
if [ -n "$EXTRAREPO" ] ; then
EXTRACMD="sed -i"
OLDIFS="$IFS"
IFS='|'
for repo in $EXTRAREPO ; do
IFS="$OLDIFS"
reponame=${repo%%,*}
repourl=${repo##*,}
EXTRACMD="$EXTRACMD -e \"$ i[${reponame}]\nname=${reponame}\nbaseurl=${repourl}\ngpgcheck=0\nenabled=1\nskip_if_unavailable=1\""
IFS='|'
done
IFS="$OLDIFS"
EXTRACMD="$EXTRACMD /etc/mock/centos-${DIST_VERSION}-x86_64.cfg"
fi
docker run ${DNSPARAM} --privileged --rm -v ${CACHEPATH}:/srv/mock:ro \
-v $(pwd):/home/abuild/rpmbuild ${CONTAINERNAME} \
bash -x -c "mkdir -p /srv/tmpfs/cache
mount -t tmpfs overlay /srv/tmpfs/cache
mount -t aufs -o br=/srv/tmpfs/cache/:/srv/mock/cache none /var/cache/mock/
$EXTRACMD
su - abuild -c 'mock -r centos-${DIST_VERSION}-x86_64 --verbose --update'
chown -R abuild.mock /home/abuild
[[ \$(ls /home/abuild/rpmbuild/*.src.rpm | wc -l) -eq 0 ]] \
&& su - abuild -c 'mock -r centos-${DIST_VERSION}-x86_64 --no-clean --no-cleanup-after --buildsrpm --verbose \
--sources=/home/abuild/rpmbuild --resultdir=/home/abuild/rpmbuild --buildsrpm \
--spec=\$(ls /home/abuild/rpmbuild/*.spec)'
rm -rf /home/abuild/rpmbuild/build
su - abuild -c 'mock -r centos-${DIST_VERSION}-x86_64 --no-clean --no-cleanup-after --verbose \
--resultdir=/home/abuild/rpmbuild/build \$(ls /home/abuild/rpmbuild/*.src.rpm)'
echo \$? > /home/abuild/rpmbuild/build/exitstatus.mock
umount -f /var/cache/mock /srv/tmpfs/cache
rm -rf /srv/tmpfs
rm -f /home/abuild/rpmbuild/\*.src.rpm /home/abuild/rpmbuild/{build,root,state}.log
chown -R `id -u`:`id -g` /home/abuild"


@@ -1 +0,0 @@
DNSPARAM="--dns 172.18.80.136"


@@ -1,36 +0,0 @@
#!/bin/bash
#
# Prepare the chroot environment (it must exist before starting any builds)
# with `sbuild-createchroot`, which sets up everything for building DEBs
#
# Usage: DIST=trusty ./create-deb-chroot.sh # for Trusty
# DIST=precise ./create-deb-chroot.sh # for Precise
# UPSTREAM_MIRROR=http://ua.archive.ubuntu.com/ubuntu/ ./create-deb-chroot.sh
set -ex
BIN="${0%/*}"
source "${BIN}/config"
CONTAINERNAME=sbuild:latest
CACHEPATH=/var/cache/docker-builder/sbuild
# define upstream Ubuntu mirror
MIRROR=${UPSTREAM_MIRROR:-http://mirror.yandex.ru/ubuntu}
# Use trusty distro by default
[ -z "${DIST}" ] && DIST=trusty
if [ "${DIST}" != "precise" ] && [ "${DIST}" != "trusty" ]; then
echo "Unknown dist version: ${DIST}"
exit 1
fi
docker run ${DNSPARAM} --privileged --rm -v ${CACHEPATH}:/srv/images ${CONTAINERNAME} \
bash -c "rm -f /etc/schroot/chroot.d/*
sbuild-createchroot ${DIST} /srv/images/${DIST}-amd64 ${MIRROR}
echo deb ${MIRROR} ${DIST} main universe multiverse restricted > /srv/images/${DIST}-amd64/etc/apt/sources.list
echo deb ${MIRROR} ${DIST}-updates main universe multiverse restricted >> /srv/images/${DIST}-amd64/etc/apt/sources.list
sbuild-update -udcar ${DIST}
echo '#!/bin/bash' > /srv/images/${DIST}-amd64/usr/bin/apt-add-repo
echo 'echo \$* >> /etc/apt/sources.list' >> /srv/images/${DIST}-amd64/usr/bin/apt-add-repo
chmod +x /srv/images/${DIST}-amd64/usr/bin/apt-add-repo"


@@ -1,32 +0,0 @@
#!/bin/bash
#
# Prepare the chroot environment (it must exist before starting any builds)
# with `mock --init`, which installs all packages (@buildsys-build)
# required for building RPMs
#
# Usage: DIST=6 ./create-rpm-chroot.sh # for CentOS 6
# DIST=7 ./create-rpm-chroot.sh # for CentOS 7
set -ex
BIN="${0%/*}"
source "${BIN}/config"
CONTAINERNAME=mockbuild:latest
CACHEPATH=/var/cache/docker-builder/mock
# handle DIST=centos6, which can be passed from an upstream job or defined in the environment
DIST_VERSION=${DIST/centos/}
# by default we init env for CentOS 7
[ -z "${DIST_VERSION}" ] && DIST_VERSION=7
if [ "${DIST_VERSION}" != 6 ] && [ "${DIST_VERSION}" != 7 ]; then
echo "Unknown dist version: ${DIST_VERSION}"
exit 1
fi
docker run ${DNSPARAM} --privileged --rm -v ${CACHEPATH}/cache:/var/cache/mock ${CONTAINERNAME} \
bash -c "chown -R abuild:mock /var/cache/mock
chmod g+s /var/cache/mock
su - abuild -c 'mock -r centos-${DIST_VERSION}-x86_64 -v --init'"


@@ -1,14 +0,0 @@
FROM centos:centos7
# Authors: Dmitry Burmistrov <dburmistrov@mirantis.com>
# Igor Gnatenko <ignatenko@mirantis.com>
MAINTAINER Igor Gnatenko <ignatenko@mirantis.com>
RUN yum -y --disableplugin=fastestmirror install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm && \
yum -y --disableplugin=fastestmirror install --enablerepo=epel-testing mock && \
yum clean --enablerepo=epel-testing all && \
useradd abuild -g mock
COPY mock_configure.sh /
RUN /mock_configure.sh; \
rm -f /mock_configure.sh


@@ -1,24 +0,0 @@
#!/bin/bash
# Generate mock configuration files:
# /etc/mock/centos-7-x86_64.cfg
# /etc/mock/centos-6-x86_64.cfg
# for both el6 and el7, and add the configuration parameter:
# config_opts['macros']['%dist'] = '.${DIST}${DISTSUFFIX}'
set -e
for cfg in /etc/mock/epel-{6,7}-x86_64.cfg; do
DIST=$(awk -F"'" "/config_opts\['dist'\]/ {print \$4}" "${cfg}")
sed -e "/config_opts\['dist'\]/s/$/\nconfig_opts['macros']['%dist'] = '.${DIST}${DISTSUFFIX}'/" $cfg \
>${cfg/epel/centos}
done
# Enable tmpfs mock plugin
cat > /etc/mock/site-defaults.cfg <<HEREDOC
config_opts['plugin_conf']['tmpfs_enable'] = True
config_opts['plugin_conf']['tmpfs_opts'] = {}
config_opts['plugin_conf']['tmpfs_opts']['required_ram_mb'] = 2048
config_opts['plugin_conf']['tmpfs_opts']['max_fs_size'] = '25g'
config_opts['plugin_conf']['tmpfs_opts']['mode'] = '0755'
config_opts['plugin_conf']['tmpfs_opts']['keep_mounted'] = False
HEREDOC


@@ -1,11 +0,0 @@
#!/bin/sh
. "${SETUP_DATA_DIR}/common-data"
. "${SETUP_DATA_DIR}/common-functions"
#. "$SETUP_DATA_DIR/common-config"
if [ "${STAGE}" = "setup-start" ]; then
mount -t tmpfs overlay /var/lib/schroot/union/overlay
elif [ "${STAGE}" = "setup-recover" ]; then
mount -t tmpfs overlay /var/lib/schroot/union/overlay
elif [ "${STAGE}" = "setup-stop" ]; then
umount -f /var/lib/schroot/union/overlay
fi


@@ -1,41 +0,0 @@
FROM ubuntu:trusty
MAINTAINER dburmistrov@mirantis.com
ENV MIRROR http://mirror.yandex.ru/ubuntu
ENV NAMESERV 172.18.80.136
ENV DIST trusty
ENV DEBIAN_FRONTEND noninteractive
ENV DEBCONF_NONINTERACTIVE_SEEN true
VOLUME ["/srv/images", "/srv/source"]
COPY sbuild-key.pub /var/lib/sbuild/apt-keys/sbuild-key.pub
COPY sbuild-key.sec /var/lib/sbuild/apt-keys/sbuild-key.sec
RUN rm -f /etc/apt/sources.list.d/proposed.list && \
echo -e "\nnameserver $NAMESERV\n" >> /etc/resolv.conf && \
echo "deb $MIRROR $DIST main universe multiverse restricted" > /etc/apt/sources.list && \
echo "deb $MIRROR $DIST-updates main universe multiverse restricted" >> /etc/apt/sources.list && \
apt-get update && apt-get -y install sbuild debhelper && \
apt-get clean && \
mkdir -p /srv/build && \
sed -i '/^1/d' /etc/sbuild/sbuild.conf && \
echo "\$build_arch_all = 1;" >> /etc/sbuild/sbuild.conf && \
echo "\$log_colour = 0;" >> /etc/sbuild/sbuild.conf && \
echo "\$apt_allow_unauthenticated = 1;" >> /etc/sbuild/sbuild.conf && \
echo "\$apt_update = 0;" >> /etc/sbuild/sbuild.conf && \
echo "\$apt_clean = 0;" >> /etc/sbuild/sbuild.conf && \
echo "\$build_source = 1;" >> /etc/sbuild/sbuild.conf && \
echo "\$build_dir = '/srv/build';" >> /etc/sbuild/sbuild.conf && \
echo "\$log_dir = '/srv/build';" >> /etc/sbuild/sbuild.conf && \
echo "\$stats_dir = '/srv/build';" >> /etc/sbuild/sbuild.conf && \
echo "\$verbose = 100;" >> /etc/sbuild/sbuild.conf && \
echo "\$mailprog = '/bin/true';" >> /etc/sbuild/sbuild.conf && \
echo "\$purge_build_deps = 'never';" >> /etc/sbuild/sbuild.conf && \
echo "1;" >> /etc/sbuild/sbuild.conf
COPY ./04tmpfs /etc/schroot/setup.d/04tmpfs
RUN chmod +x /etc/schroot/setup.d/04tmpfs
COPY ./precise-amd64-sbuild /etc/schroot/chroot.d/precise-amd64-sbuild
COPY ./trusty-amd64-sbuild /etc/schroot/chroot.d/trusty-amd64-sbuild


@@ -1,8 +0,0 @@
[precise-amd64-sbuild]
type=directory
description=Ubuntu precise/amd64 build environment
directory=/srv/images/precise-amd64
groups=root,sbuild
root-groups=root,sbuild
profile=sbuild
union-type=aufs


@@ -1,8 +0,0 @@
[trusty-amd64-sbuild]
type=directory
description=Ubuntu trusty/amd64 build environment
directory=/srv/images/trusty-amd64
groups=root,sbuild
root-groups=root,sbuild
profile=sbuild
union-type=aufs


@@ -1,21 +0,0 @@
#!/bin/bash
set -ex
BIN="${0%/*}"
source "${BIN}/config"
CONTAINERNAME=sbuild:latest
CACHEPATH=/var/cache/docker-builder/sbuild
# Use trusty distro by default
[ -z "${DIST}" ] && DIST=trusty
if [ "${DIST}" != "precise" ] && [ "${DIST}" != "trusty" ]; then
echo "Unknown dist version: ${DIST}"
exit 1
fi
docker run ${DNSPARAM} --privileged --rm -v ${CACHEPATH}:/srv/images ${CONTAINERNAME} \
bash -c "sbuild-update -udcar ${DIST}"


@@ -1,23 +0,0 @@
#!/bin/bash
set -ex
BIN="${0%/*}"
source "${BIN}/config"
CONTAINERNAME=mockbuild:latest
CACHEPATH=/var/cache/docker-builder/mock
# handle DIST=centos6, which can be passed from an upstream job or defined in the environment
DIST_VERSION=${DIST/centos/}
# by default we init env for CentOS 7
[ -z "${DIST_VERSION}" ] && DIST_VERSION=7
if [ "${DIST_VERSION}" != 6 ] && [ "${DIST_VERSION}" != 7 ]; then
echo "Unknown dist version: ${DIST_VERSION}"
exit 1
fi
docker run ${DNSPARAM} --privileged --rm -v ${CACHEPATH}/cache:/var/cache/mock ${CONTAINERNAME} \
bash -c "su - abuild -c 'mock -r centos-${DIST_VERSION}-x86_64 -v --update'"


@@ -1,33 +0,0 @@
#!/bin/bash -xe
export LANG=C
function exit_with_error() {
echo "$@"
exit 1
}
function job_lock() {
[ -z "$1" ] && exit_with_error "Lock file is not specified"
local LOCKFILE=$1
shift
local fd=1000
eval "exec $fd>>$LOCKFILE"
case $1 in
"set")
flock -x -n $fd \
|| exit_with_error "Process already running. Lockfile: $LOCKFILE"
;;
"unset")
flock -u $fd
rm -f $LOCKFILE
;;
"wait")
local TIMEOUT=${2:-3600}
echo "Waiting for concurrent process (lockfile: $LOCKFILE, timeout = $TIMEOUT seconds) ..."
flock -x -w $TIMEOUT $fd \
&& echo DONE \
|| exit_with_error "Timeout error (lockfile: $LOCKFILE)"
;;
esac
}
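`job_lock` multiplexes its set/unset/wait operations over one long-lived file descriptor (fd 1000) locked with `flock`. The mutual-exclusion behaviour of `set` and `unset` can be sketched with Python's `fcntl.flock`; the lock-file path and the `try_lock` helper are illustrative, not part of these scripts:

```python
import fcntl
import os
import tempfile

def try_lock(fd):
    """Attempt a non-blocking exclusive lock, like `flock -x -n`."""
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True
    except OSError:
        return False

lockfile = os.path.join(tempfile.mkdtemp(), "demo.lock")

# "set": open (and create) the lock file, then take the exclusive lock.
fd1 = os.open(lockfile, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
first = try_lock(fd1)

# A concurrent "set" through a second open file description must fail
# while the first lock is held -- this is what keeps jobs exclusive.
fd2 = os.open(lockfile, os.O_WRONLY | os.O_CREAT | os.O_APPEND)
second = try_lock(fd2)

# "unset": release the lock, close, and remove the lock file.
fcntl.flock(fd1, fcntl.LOCK_UN)
os.close(fd1)
os.close(fd2)
os.remove(lockfile)
```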


@@ -1,102 +0,0 @@
#!/bin/bash
#[ -z "$RESYNCONLY" ] && RESYNCONLY=false
[ -z "$REPO_BASE_PATH" ] && REPO_BASE_PATH=${HOME}/pubrepos
[ -z "$PKG_PATH" ] && echo "ERROR: Remote path to built packages is not defined" && exit 1
WRK_DIR=`pwd`
TMP_DIR=${WRK_DIR}/.tmpdir
error () {
echo
echo -e "ERROR: $*"
echo
exit 1
}
info () {
echo
echo -e "INFO: $*"
echo
}
check-gpg() {
local RESULT=0
[ -z "$SIGKEYID" ] && echo "WARNING: No secret keys given" && RESULT=1
# Test secret keys
[ $RESULT -eq 0 ] && [ `gpg --list-secret-keys | grep ^sec | grep -c "$SIGKEYID"` -eq 0 ] && error "No secret keys found"
# Check for password
if [ $RESULT -eq 0 ] ; then
timeout 5s bash -c "echo test | gpg -q --no-tty --batch --no-verbose --local-user $SIGKEYID -so - &>/dev/null" \
|| error "Unable to sign with $SIGKEYID key. Passphrase needed!"
fi
[ $RESULT -ne 0 ] && echo "WARNING: Fall back to unsigned mode"
return $RESULT
}
sync-repo() {
local LOCAL_DIR=$1
local REMOTE_DIR=$2
local REQUEST_PATH_PREFIX=$3
[ -n "$4" ] && local REQUEST_NUM=$4
[ -n "$5" ] && local LP_BUG=$5
RSYNC_USER=${RSYNC_USER:-"mirror-sync"}
[ -z "$REMOTE_REPO_HOST" ] && error "Remote host to sync is not defined."
[ ! -d "${LOCAL_DIR}" ] && error "Repository ${LOCAL_DIR} doesn't exist!"
## SYNC
source $(dirname `readlink -e $0`)/functions/rsync_functions.sh
mirrors_fail=""
for host in $REMOTE_REPO_HOST; do
# sync files to remote host
# $1 - remote host
# $2 - rsync user
# $3 - local dir
# $4 - remote dir
if [ "$GERRIT_CHANGE_STATUS" == "NEW" ] ; then
rsync_create_dir $host $RSYNC_USER ${REQUEST_PATH_PREFIX}
if [ -n "$LP_BUG" ] && [ -n "$REQUEST_NUM" ] ; then
# Remove the existing REQUEST_NUM repository and replace it with a symlink to the LP_BUG one
if [ $(rsync_list_links $host $RSYNC_USER ${REQUEST_PATH_PREFIX} | grep -c "^${REQUEST_NUM} ") -eq 0 ] ; then
rsync_delete_dir $host $RSYNC_USER ${REQUEST_PATH_PREFIX}${REQUEST_NUM}
else
rsync_delete_file $host $RSYNC_USER ${REQUEST_PATH_PREFIX}${REQUEST_NUM}
fi
rsync_create_symlink $host $RSYNC_USER ${REQUEST_PATH_PREFIX}${REQUEST_NUM} ${LP_BUG}
REMOTE_DIR=${REQUEST_PATH_PREFIX}${LP_BUG}/${REMOTE_DIR}
else
# The symlinked REQUEST_NUM repository should be removed so that it does not affect the LP_BUG one
[ $(rsync_list_links $host $RSYNC_USER ${REQUEST_PATH_PREFIX} | grep -c "^${REQUEST_NUM} ") -gt 0 ] \
&& rsync_delete_file $host $RSYNC_USER ${REQUEST_PATH_PREFIX}${REQUEST_NUM}
REMOTE_DIR=${REQUEST_PATH_PREFIX}${REQUEST_NUM}/${REMOTE_DIR}
fi
elif [ -n "$REQUEST_PATH_PREFIX" ] ; then
# Remove unused request repos
if [ -n "$REQUEST_NUM" ] ; then
if [ $(rsync_list_links $host $RSYNC_USER ${REQUEST_PATH_PREFIX} | grep -c "^${REQUEST_NUM} ") -eq 0 ] ; then
rsync_delete_dir $host $RSYNC_USER ${REQUEST_PATH_PREFIX}${REQUEST_NUM}
else
rsync_delete_file $host $RSYNC_USER ${REQUEST_PATH_PREFIX}${REQUEST_NUM}
fi
[ $(rsync_list_files $host $RSYNC_USER ${REQUEST_PATH_PREFIX} | grep -cF $REQUEST_NUM) -gt 0 ] \
&& rsync_delete_file $host $RSYNC_USER ${REQUEST_PATH_PREFIX}${REQUEST_NUM}.target.txt
fi
# Do not remove LP_BUG repo until all linked repos removed
[ -n "$LP_BUG" ] \
&& [ $(rsync_list_links $host $RSYNC_USER ${REQUEST_PATH_PREFIX} | grep -cF $LP_BUG) -eq 0 ] \
&& rsync_delete_dir $host $RSYNC_USER ${REQUEST_PATH_PREFIX}/$LP_BUG
fi
rsync_transfer $host $RSYNC_USER $LOCAL_DIR $REMOTE_DIR || mirrors_fail+=" ${host}"
done
#if [[ -n "$mirrors_fail" ]]; then
# echo Some mirrors failed to update: $mirrors_fail
# exit 1
#else
# export MIRROR_VERSION="${TGTDIR}"
# export MIRROR_BASE="http://$RSYNCHOST_MSK/fwm/files/${MIRROR_VERSION}"
# echo "MIRROR = ${mirror}" > ${WORKSPACE:-"."}/mirror_staging.txt
# echo "MIRROR_VERSION = ${MIRROR_VERSION}" >> ${WORKSPACE:-"."}/mirror_staging.txt
# echo "MIRROR_BASE = $MIRROR_BASE" >> ${WORKSPACE:-"."}/mirror_staging.txt
# echo "FUEL_MAIN_BRANCH = ${FUEL_MAIN_BRANCH}" >> ${WORKSPACE:-"."}/mirror_staging.txt
# echo "Updated: ${MIRROR_VERSION}<br> <a href='http://mirror.fuel-infra.org//${FILESROOT}/${TGTDIR}'>ext</a> <a href='http://${RSYNCHOST_MSK}/${FILESROOT}/${TGTDIR}'>msk</a> <a href='http://${RSYNCHOST_SRT}/${FILESROOT}/${TGTDIR}'>srt</a> <a href='http://${RSYNCHOST_KHA}/${FILESROOT}/${TGTDIR}'>kha</a>"
#fi
}


@@ -1,193 +0,0 @@
#!/bin/bash -xe
export LANG=C
# define these vars before use
SNAPSHOT_FOLDER=${SNAPSHOT_FOLDER:-"snapshots"}
LATESTSUFFIX=${LATESTSUFFIX:-"-latest"}
export DATE=$(date "+%Y-%m-%d-%H%M%S")
export SAVE_LAST_DAYS=${SAVE_LAST_DAYS:-61}
export WARN_DATE=$(date "+%Y%m%d" -d "$SAVE_LAST_DAYS days ago")
function get_empty_dir() {
echo $(mktemp -d)
}
function get_symlink() {
local LINKDEST=$1
local LINKNAME=$(mktemp -u)
ln -s --force $LINKDEST $LINKNAME && echo $LINKNAME
}
function rsync_delete_file() {
local RSYNCHOST=$1
local RSYNCUSER=$2
local FILENAME=$(basename $3)
local FILEPATH=$(dirname $3)
local EMPTYDIR=$(get_empty_dir)
rsync -rv --delete --include=$FILENAME '--exclude=*' \
$EMPTYDIR/ $RSYNCHOST::$RSYNCUSER/$FILEPATH/
[ ! -z "$EMPTYDIR" ] && rm -rf $EMPTYDIR
}
function rsync_delete_dir() {
local RSYNCHOST=$1
local RSYNCUSER=$2
local DIR=$3
local EMPTYDIR=$(get_empty_dir)
rsync --delete -a $EMPTYDIR/ $RSYNCHOST::$RSYNCUSER/$DIR/ \
&& rsync_delete_file $RSYNCHOST $RSYNCUSER $DIR
[ ! -z "$EMPTYDIR" ] && rm -rf $EMPTYDIR
}
function rsync_create_dir() {
local RSYNCHOST=$1
local RSYNCUSER=$2
local DIR=$3
local EMPTYDIR=$(get_empty_dir)
local OIFS="$IFS"
IFS='/'
local dir=''
local _dir=''
for _dir in $DIR ; do
IFS="$OIFS"
dir="${dir}/${_dir}"
rsync -a $EMPTYDIR/ $RSYNCHOST::$RSYNCUSER/$dir/
IFS='/'
done
IFS="$OIFS"
[ ! -z "$EMPTYDIR" ] && rm -rf $EMPTYDIR
}
function rsync_create_symlink() {
# Create symlink $3 -> $4
# E.g. "create_symlink repos/6.1 files/6.1-stable"
# will create symlink repos/6.1 -> repos/files/6.1-stable
local RSYNCHOST=$1
local RSYNCUSER=$2
local LINKNAME=$3
local LINKDEST=$4
local SYMLINK_FILE=$(get_symlink "$LINKDEST")
rsync -vl $SYMLINK_FILE $RSYNCHOST::$RSYNCUSER/$LINKNAME
rm $SYMLINK_FILE
# Write a text file recording the symlink target (for dereferencing)
local TARGET_TXT_FILE=$(mktemp)
echo "$LINKDEST" > $TARGET_TXT_FILE
rsync -vl $TARGET_TXT_FILE $RSYNCHOST::$RSYNCUSER/${LINKNAME}.target.txt
rm $TARGET_TXT_FILE
}
function rsync_list() {
local RSYNCHOST=$1
local RSYNCUSER=$2
local DIR=$3
local TEMPFILE=$(mktemp)
set +e
rsync -l $RSYNCHOST::$RSYNCUSER/$DIR/ 2>/dev/null > $TEMPFILE
local RESULT=$?
[ "$RESULT" == "0" ] && cat $TEMPFILE | grep -v '\.$'
rm $TEMPFILE
set -e
return $RESULT
}
function rsync_list_links() {
local RSYNCHOST=$1
local RSYNCUSER=$2
local DIR=$3
local TEMPFILE=$(mktemp)
set +e
rsync_list $RSYNCHOST $RSYNCUSER $DIR > $TEMPFILE
local RESULT=$?
[ "$RESULT" == "0" ] && cat $TEMPFILE | grep '^l' | awk '{print $(NF-2)" "$NF}'
rm $TEMPFILE
set -e
return $RESULT
}
function rsync_list_dirs() {
local RSYNCHOST=$1
local RSYNCUSER=$2
local DIR=$3
local TEMPFILE=$(mktemp)
set +e
rsync_list $RSYNCHOST $RSYNCUSER $DIR > $TEMPFILE
local RESULT=$?
[ "$RESULT" == "0" ] && cat $TEMPFILE | grep '^d' | awk '{print $NF}'
rm $TEMPFILE
set -e
return $RESULT
}
function rsync_list_files() {
local RSYNCHOST=$1
local RSYNCUSER=$2
local DIR=$3
local TEMPFILE=$(mktemp)
set +e
rsync_list $RSYNCHOST $RSYNCUSER ${DIR} > $TEMPFILE
local RESULT=$?
[ "$RESULT" == "0" ] && cat $TEMPFILE | grep -vE '^d|^l' | awk '{print $NF}'
rm $TEMPFILE
set -e
return $RESULT
}
######################################################
function rsync_remove_old_versions() {
# Remove mirrors older than $SAVE_LAST_DAYS days that have no symlinks pointing to them
local RSYNCHOST=$1
local RSYNCUSER=$2
local REMOTEPATH=$3
local FOLDERNAME=$4
DIRS=$(rsync_list_dirs $RSYNCHOST $RSYNCUSER $REMOTEPATH | grep "^$FOLDERNAME\-" )
for dir in $DIRS; do
ddate=$(echo $dir | awk -F '[-]' '{print $(NF-3)$(NF-2)$(NF-1)}')
[ "$ddate" -gt "$WARN_DATE" ] && continue
LINKS=$(rsync_list_links $RSYNCHOST $RSYNCUSER $REMOTEPATH | grep -F $dir ; rsync_list_links $RSYNCHOST $RSYNCUSER $(dirname $REMOTEPATH) | grep -F "$(basename $REMOTEPATH)/$dir")
if [ "$LINKS" = "" ]; then
rsync_delete_dir $RSYNCHOST $RSYNCUSER $REMOTEPATH/$dir
continue
fi
echo "Skip because symlinks $LINKS points to $dir"
done
}
######################################################
function rsync_transfer() {
# sync files to remote host
# $1 - remote host
# $2 - rsync module
# $3 - source dir 1/
# $4 - remote dir 1/2/3/4/5
# snapshots dir 1/2/3/4/snapshots
local RSYNC_HOST=$1
local RSYNC_USER=$2
local SOURCE_DIR=$3
local REMOTE_DIR=$4
local SNAPSHOT_DIR=$(echo $REMOTE_DIR | sed "s|$(basename ${REMOTE_DIR})$|${SNAPSHOT_FOLDER}|")
local SNAPSHOT_FOLDER=$(basename $SNAPSHOT_DIR) # snapshots
local SNAPSHOT_PATH=$(dirname $SNAPSHOT_DIR) # 1/2
local REMOTE_ROOT=$(echo $REMOTE_DIR | sed "s|^$SNAPSHOT_PATH/||")
local REMOTE_ROOT=${REMOTE_ROOT%%/*} # 3
rsync_list_dirs $RSYNC_HOST $RSYNC_USER $SNAPSHOT_DIR/${REMOTE_ROOT}-${DATE} \
|| rsync_create_dir $RSYNC_HOST $RSYNC_USER $SNAPSHOT_DIR/${REMOTE_ROOT}-${DATE}
OPTIONS="--archive --verbose --force --ignore-errors --delete-excluded --no-owner --no-group \
--delete --link-dest=/${SNAPSHOT_DIR}/${REMOTE_ROOT}${LATESTSUFFIX}"
rsync ${OPTIONS} ${SOURCE_DIR}/ ${RSYNC_HOST}::${RSYNC_USER}/${SNAPSHOT_DIR}/${REMOTE_ROOT}-${DATE}/ \
&& rsync_delete_file $RSYNC_HOST $RSYNC_USER ${SNAPSHOT_DIR}/${REMOTE_ROOT}${LATESTSUFFIX} \
&& rsync_create_symlink $RSYNC_HOST $RSYNC_USER ${SNAPSHOT_DIR}/${REMOTE_ROOT}${LATESTSUFFIX} ${REMOTE_ROOT}-${DATE} \
&& rsync_delete_file $RSYNC_HOST $RSYNC_USER ${SNAPSHOT_PATH}/${REMOTE_ROOT} \
&& rsync_create_symlink $RSYNC_HOST $RSYNC_USER ${SNAPSHOT_PATH}/${REMOTE_ROOT} ${SNAPSHOT_FOLDER}/${REMOTE_ROOT}-${DATE} \
&& rsync_remove_old_versions $RSYNC_HOST $RSYNC_USER ${SNAPSHOT_DIR} ${REMOTE_ROOT}
RESULT=$?
[ $RESULT -ne 0 ] && rsync_delete_dir $RSYNC_HOST $RSYNC_USER ${SNAPSHOT_DIR}/${REMOTE_ROOT}-${DATE}
return $RESULT
}
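`rsync_transfer` implements hardlink-based snapshots: each dated snapshot is transferred with `--link-dest` against the previous `-latest` one, so unchanged files cost no extra space, and the `-latest` symlink is flipped only after the transfer succeeds. The space-saving mechanism can be demonstrated locally; the paths and file names below are illustrative, not taken from the scripts:

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
src = os.path.join(base, "src")
os.makedirs(src)
with open(os.path.join(src, "pkg.rpm"), "w") as f:
    f.write("payload")

# The first snapshot is a full copy.
snap1 = os.path.join(base, "snap-1")
shutil.copytree(src, snap1)

# Second snapshot: unchanged files become hardlinks into the previous
# snapshot, which is what `--link-dest` achieves on the rsync side.
snap2 = os.path.join(base, "snap-2")
os.makedirs(snap2)
for name in os.listdir(src):
    os.link(os.path.join(snap1, name), os.path.join(snap2, name))

# Both snapshots expose the file, but it is stored only once on disk.
same = os.path.samefile(os.path.join(snap1, "pkg.rpm"),
                        os.path.join(snap2, "pkg.rpm"))

# Flip the "latest" pointer only once the new snapshot is complete,
# mirroring the delete-then-recreate of the ${REMOTE_ROOT}-latest symlink.
latest = os.path.join(base, "latest")
os.symlink("snap-2", latest)
```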


@@ -1,208 +0,0 @@
#!/bin/bash -ex
[ -f ".publisher-defaults-deb" ] && source .publisher-defaults-deb
source $(dirname $(readlink -e $0))/functions/publish-functions.sh
source $(dirname $(readlink -e $0))/functions/locking.sh
main() {
local SIGN_STRING=""
check-gpg && SIGN_STRING="true"
## Download sources from worker
[ -d $TMP_DIR ] && rm -rf $TMP_DIR
mkdir -p $TMP_DIR
rsync -avPzt \
-e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SSH_OPTS}" \
${SSH_USER}${BUILD_HOST}:${PKG_PATH}/ ${TMP_DIR}/ || error "Can't download packages"
## Resign source package
## FixMe: disabled pending discussion: does the source really need to be signed?
#[ -n "${SIGN_STRING}" ] && \
# for _dscfile in $(find ${TMP_DIR} -name "*.dsc") ; do
# debsign -pgpg --re-sign -k${SIGKEYID} ${_dscfile}
# done
# Create all repositories
# Paths
local URL_PREFIX=""
if [ "${GERRIT_CHANGE_STATUS}" = "NEW" ] ; then
REPO_BASE_PATH=${REPO_BASE_PATH}/${REPO_REQUEST_PATH_PREFIX}
URL_PREFIX=${REPO_REQUEST_PATH_PREFIX}
if [ -n "${LP_BUG}" ] ; then
REPO_BASE_PATH=${REPO_BASE_PATH}${LP_BUG}
URL_PREFIX=${URL_PREFIX}${LP_BUG}/
else
REPO_BASE_PATH=${REPO_BASE_PATH}${REQUEST_NUM}
URL_PREFIX=${URL_PREFIX}${REQUEST_NUM}/
fi
fi
# Repos
for repo_path in ${DEB_REPO_PATH} ; do
local LOCAL_REPO_PATH=${REPO_BASE_PATH}/${repo_path}
local DBDIR="+b/db"
local CONFIGDIR="${LOCAL_REPO_PATH}/conf"
local DISTDIR="${LOCAL_REPO_PATH}/public/dists/"
local OUTDIR="+b/public/"
if [ ! -d "${CONFIGDIR}" ] ; then
mkdir -p ${CONFIGDIR}
job_lock ${CONFIGDIR}.lock wait 3600
for dist_name in ${DEB_DIST_NAME} ${DEB_PROPOSED_DIST_NAME} ${DEB_UPDATES_DIST_NAME} \
${DEB_SECURITY_DIST_NAME} ${DEB_HOLDBACK_DIST_NAME} ; do
cat >> ${CONFIGDIR}/distributions <<- EOF
Origin: ${ORIGIN}
Label: ${DEB_DIST_NAME}
Suite: ${dist_name}
Codename: ${dist_name}
Version: ${PRODUCT_VERSION}
Architectures: amd64 i386 source
Components: main restricted
UDebComponents: main restricted
Contents: . .gz .bz2
EOF
reprepro --basedir ${LOCAL_REPO_PATH} --dbdir ${DBDIR} \
--outdir ${OUTDIR} --distdir ${DISTDIR} --confdir ${CONFIGDIR} \
export ${dist_name}
# Fix Codename field
local release_file="${DISTDIR}/${dist_name}/Release"
sed "s|^Codename:.*$|Codename: ${DEB_DIST_NAME}|" \
-i ${release_file}
rm -f ${release_file}.gpg
# ReSign Release file
[ -n "${SIGN_STRING}" ] \
&& gpg --sign --local-user ${SIGKEYID} -ba \
-o ${release_file}.gpg ${release_file}
done
job_lock ${CONFIGDIR}.lock unset
fi
done
DEB_BASE_DIST_NAME=${DEB_DIST_NAME}
[ -z "${DEB_UPDATES_DIST_NAME}" ] && DEB_UPDATES_DIST_NAME=${DEB_DIST_NAME}
[ -z "${DEB_PROPOSED_DIST_NAME}" ] && DEB_PROPOSED_DIST_NAME=${DEB_DIST_NAME}
[ -z "${DEB_SECURITY_DIST_NAME}" ] && DEB_SECURITY_DIST_NAME=${DEB_DIST_NAME}
[ -z "${DEB_HOLDBACK_DIST_NAME}" ] && DEB_HOLDBACK_DIST_NAME=${DEB_DIST_NAME}
[ -z "${DEB_UPDATES_COMPONENT}" ] && DEB_UPDATES_COMPONENT=${DEB_COMPONENT}
[ -z "${DEB_PROPOSED_COMPONENT}" ] && DEB_PROPOSED_COMPONENT=${DEB_COMPONENT}
[ -z "${DEB_SECURITY_COMPONENT}" ] && DEB_SECURITY_COMPONENT=${DEB_COMPONENT}
[ -z "${DEB_HOLDBACK_COMPONENT}" ] && DEB_HOLDBACK_COMPONENT=${DEB_COMPONENT}
if [ "${IS_UPDATES}" = 'true' ] ; then
DEB_DIST_NAME=${DEB_PROPOSED_DIST_NAME}
DEB_COMPONENT=${DEB_PROPOSED_COMPONENT}
fi
if [ "${IS_HOLDBACK}" = 'true' ] ; then
DEB_DIST_NAME=${DEB_HOLDBACK_DIST_NAME}
DEB_COMPONENT=${DEB_HOLDBACK_COMPONENT}
fi
if [ "${IS_SECURITY}" = 'true' ] ; then
DEB_DIST_NAME=${DEB_SECURITY_DIST_NAME}
DEB_COMPONENT=${DEB_SECURITY_COMPONENT}
fi
[ -z "${DEB_COMPONENT}" ] && local DEB_COMPONENT=main
[ "${IS_RESTRICTED}" = 'true' ] && DEB_COMPONENT=restricted
local LOCAL_REPO_PATH=${REPO_BASE_PATH}/${DEB_REPO_PATH}
local CONFIGDIR="${LOCAL_REPO_PATH}/conf"
local DBDIR="+b/db"
local DISTDIR="${LOCAL_REPO_PATH}/public/dists/"
local OUTDIR="${LOCAL_REPO_PATH}/public/"
local REPREPRO_OPTS="--verbose --basedir ${LOCAL_REPO_PATH} --dbdir ${DBDIR} \
--outdir ${OUTDIR} --distdir ${DISTDIR} --confdir ${CONFIGDIR}"
local REPREPRO_COMP_OPTS="${REPREPRO_OPTS} --component ${DEB_COMPONENT}"
# Parse incoming files
local BINDEBLIST=""
local BINDEBNAMES=""
local BINUDEBLIST=""
local BINSRCLIST=""
for binary in ${TMP_DIR}/* ; do
case ${binary##*.} in
deb) BINDEBLIST="${BINDEBLIST} ${binary}"
BINDEBNAMES="${BINDEBNAMES} ${binary##*/}"
;;
udeb) BINUDEBLIST="${BINUDEBLIST} ${binary}" ;;
dsc) BINSRCLIST="${binary}" ;;
esac
done
job_lock ${CONFIGDIR}.lock wait 3600
local SRC_NAME=$(awk '/^Source:/ {print $2}' ${BINSRCLIST})
local NEW_VERSION=$(awk '/^Version:/ {print $2}' ${BINSRCLIST} | head -n 1)
local OLD_VERSION=$(reprepro ${REPREPRO_OPTS} --list-format '${version}\n' \
listfilter ${DEB_DIST_NAME} "Package (==${SRC_NAME})" | sort -u | head -n 1)
[ "${OLD_VERSION}" == "" ] && OLD_VERSION=none
# Remove existing packages for requests-on-review and downgrades
# TODO: Get rid of removing. Just increase version properly
if [ "${GERRIT_CHANGE_STATUS}" = "NEW" -o "$IS_DOWNGRADE" == "true" ] ; then
reprepro ${REPREPRO_OPTS} removesrc ${DEB_DIST_NAME} ${SRC_NAME} ${OLD_VERSION} || :
fi
# Add .deb binaries
if [ "${BINDEBLIST}" != "" ]; then
reprepro ${REPREPRO_COMP_OPTS} includedeb ${DEB_DIST_NAME} ${BINDEBLIST} \
|| error "Can't include packages"
fi
# Add .udeb binaries
if [ "${BINUDEBLIST}" != "" ]; then
reprepro ${REPREPRO_COMP_OPTS} includeudeb ${DEB_DIST_NAME} ${BINUDEBLIST} \
|| error "Can't include packages"
fi
# Replace sources
# TODO: Get rid of replacing. Just increase version properly
if [ "${BINSRCLIST}" != "" ]; then
reprepro ${REPREPRO_COMP_OPTS} --architecture source \
remove ${DEB_DIST_NAME} ${SRC_NAME} || :
reprepro ${REPREPRO_COMP_OPTS} includedsc ${DEB_DIST_NAME} ${BINSRCLIST} \
|| error "Can't include packages"
fi
# Cleanup files from previous version
[ "${OLD_VERSION}" != "${NEW_VERSION}" ] \
&& reprepro ${REPREPRO_OPTS} removesrc ${DEB_DIST_NAME} ${SRC_NAME} ${OLD_VERSION}
# Fix Codename field
local release_file="${DISTDIR}/${DEB_DIST_NAME}/Release"
sed "s|^Codename:.*$|Codename: ${DEB_BASE_DIST_NAME}|" -i ${release_file}
# Resign Release file
rm -f ${release_file}.gpg
local pub_key_file="${LOCAL_REPO_PATH}/public/archive-${PROJECT_NAME}${PROJECT_VERSION}.key"
if [ -n "${SIGN_STRING}" ] ; then
gpg --sign --local-user ${SIGKEYID} -ba -o ${release_file}.gpg ${release_file}
[ ! -f "${pub_key_file}" ] && touch ${pub_key_file}
gpg -o ${pub_key_file}.tmp --armor --export ${SIGKEYID}
if diff -q ${pub_key_file} ${pub_key_file}.tmp &>/dev/null ; then
rm ${pub_key_file}.tmp
else
mv ${pub_key_file}.tmp ${pub_key_file}
fi
else
rm -f ${pub_key_file}
fi
sync-repo ${OUTDIR} ${DEB_REPO_PATH} ${REPO_REQUEST_PATH_PREFIX} ${REQUEST_NUM} ${LP_BUG}
job_lock ${CONFIGDIR}.lock unset
rm -f ${WRK_DIR}/deb.publish.setenvfile
cat > ${WRK_DIR}/deb.publish.setenvfile<<-EOF
DEB_PUBLISH_SUCCEEDED=true
DEB_DISTRO=${DIST}
DEB_REPO_URL="http://${REMOTE_REPO_HOST}/${URL_PREFIX}${DEB_REPO_PATH} ${DEB_DIST_NAME} ${DEB_COMPONENT}"
DEB_PACKAGENAME=${SRC_NAME}
DEB_VERSION=${NEW_VERSION}
DEB_BINARIES=$(cat ${BINSRCLIST} | grep ^Binary | sed 's|^Binary:||; s| ||g')
DEB_CHANGE_REVISION=${GERRIT_PATCHSET_REVISION}
LP_BUG=${LP_BUG}
EOF
}
main "$@"
exit 0


@@ -1,236 +0,0 @@
#!/bin/bash -ex
[ -f ".publisher-defaults-rpm" ] && source .publisher-defaults-rpm
source $(dirname $(readlink -e $0))/functions/publish-functions.sh
source $(dirname $(readlink -e $0))/functions/locking.sh
[ -z "${DEFAULTCOMPSXML}" ] && DEFAULTCOMPSXML=http://mirror.fuel-infra.org/fwm/6.0/centos/os/x86_64/comps.xml
main() {
    if [ -n "${SIGKEYID}" ] ; then
        check-gpg || :
        gpg --export -a ${SIGKEYID} > RPM-GPG-KEY
        if [ $(rpm -qa | grep gpg-pubkey | grep -ci ${SIGKEYID}) -eq 0 ]; then
            rpm --import RPM-GPG-KEY
        fi
    fi

    # Get built binaries
    [ -d ${TMP_DIR} ] && rm -rf ${TMP_DIR}
    mkdir -p ${TMP_DIR}
    rsync -avPzt -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ${SSH_OPTS}" \
        ${SSH_USER}${BUILD_HOST}:${PKG_PATH}/ ${TMP_DIR}/ || error "Can't download packages"
    [ $(ls -1 ${TMP_DIR}/ | wc -l) -eq 0 ] && error "Can't download packages"

    ## Prepare repository
    local URL_PREFIX=''
    if [ "${GERRIT_CHANGE_STATUS}" == "NEW" ] ; then
        REPO_BASE_PATH=${REPO_BASE_PATH}/${REPO_REQUEST_PATH_PREFIX}
        URL_PREFIX=${REPO_REQUEST_PATH_PREFIX}
        if [ -n "${LP_BUG}" ] ; then
            REPO_BASE_PATH=${REPO_BASE_PATH}${LP_BUG}
            URL_PREFIX=${URL_PREFIX}${LP_BUG}/
        else
            REPO_BASE_PATH=${REPO_BASE_PATH}${REQUEST_NUM}
            URL_PREFIX=${URL_PREFIX}${REQUEST_NUM}/
        fi
    fi

    # Create all repositories
    for repo_path in ${RPM_OS_REPO_PATH} ${RPM_PROPOSED_REPO_PATH} ${RPM_UPDATES_REPO_PATH} ${RPM_SECURITY_REPO_PATH} ${RPM_HOLDBACK_REPO_PATH} ; do
        local LOCAL_REPO_PATH=${REPO_BASE_PATH}/${repo_path}
        if [ ! -d "${LOCAL_REPO_PATH}" ] ; then
            mkdir -p ${LOCAL_REPO_PATH}/{x86_64/Packages,Source/SPackages,x86_64/repodata}
            job_lock ${LOCAL_REPO_PATH}.lock wait 3600
            createrepo --pretty --database --update -o ${LOCAL_REPO_PATH}/x86_64/ ${LOCAL_REPO_PATH}/x86_64
            createrepo --pretty --database --update -o ${LOCAL_REPO_PATH}/Source/ ${LOCAL_REPO_PATH}/Source
            job_lock ${LOCAL_REPO_PATH}.lock unset
        fi
    done

    # Fall back to the main OS repo when a dedicated repo path is not configured
    [ -z "${RPM_UPDATES_REPO_PATH}" ] && RPM_UPDATES_REPO_PATH=${RPM_OS_REPO_PATH}
    [ -z "${RPM_PROPOSED_REPO_PATH}" ] && RPM_PROPOSED_REPO_PATH=${RPM_OS_REPO_PATH}
    [ -z "${RPM_SECURITY_REPO_PATH}" ] && RPM_SECURITY_REPO_PATH=${RPM_OS_REPO_PATH}
    [ -z "${RPM_HOLDBACK_REPO_PATH}" ] && RPM_HOLDBACK_REPO_PATH=${RPM_OS_REPO_PATH}
    RPM_REPO_PATH=${RPM_OS_REPO_PATH}
    [ "${IS_UPDATES}" == 'true' ] && RPM_REPO_PATH=${RPM_PROPOSED_REPO_PATH}
    [ "${IS_HOLDBACK}" == 'true' ] && RPM_REPO_PATH=${RPM_HOLDBACK_REPO_PATH}
    [ "${IS_SECURITY}" == 'true' ] && RPM_REPO_PATH=${RPM_SECURITY_REPO_PATH}
    local LOCAL_REPO_PATH=${REPO_BASE_PATH}/${RPM_REPO_PATH}

    # Parse binary list
    local BINRPMLIST=""
    local BINSRCLIST=""
    local BINSRCNAMES=""
    local BINRPMNAMES=""
    for binary in ${TMP_DIR}/* ; do
        if [ "${binary:(-7)}" == "src.rpm" ] ; then
            BINSRCLIST="${binary}"
            BINSRCNAMES="${binary##*/}"
        elif [ "${binary##*.}" == "rpm" ]; then
            BINRPMLIST="${BINRPMLIST} ${binary}"
            BINRPMNAMES="${BINRPMNAMES} ${binary##*/}"
        fi
    done
    BINNAMES="${BINSRCNAMES} ${BINRPMNAMES}"
    local PACKAGENAMES=""

    # Get the existing srpm filename
    local SRPM_NAME=$(rpm -qp --queryformat "%{NAME}" ${BINSRCLIST})
    local _repoid_source=$(mktemp -u XXXXXXXX)
    local repoquery_opts="--repofrompath=${_repoid_source},file://${LOCAL_REPO_PATH}/Source/ --repoid=${_repoid_source}"
    local EXIST_SRPM_FILE=$(repoquery ${repoquery_opts} --archlist=src --location ${SRPM_NAME})
    local EXIST_SRPM_FILE=${EXIST_SRPM_FILE##*/}
    # Get the existing rpm files
    local repoquerysrpm_py="$(dirname $(readlink -e $0))/repoquerysrpm.py"
    local EXIST_RPM_FILES=$(python ${repoquerysrpm_py} --srpm=${EXIST_SRPM_FILE} --path=${LOCAL_REPO_PATH}/x86_64/ | awk -F'/' '{print $NF}')
    # Clean up `repoquery` data
    find /var/tmp/yum-${USER}-* -type d -name $_repoid_source -exec rm -rf {} \; 2>/dev/null || :

    job_lock ${LOCAL_REPO_PATH}.lock wait 3600

    # Sign and publish binaries
    for binary in ${BINRPMLIST} ${BINSRCLIST} ; do
        local PACKAGEFOLDER=x86_64/Packages
        [ "${binary:(-7)}" == "src.rpm" ] && PACKAGEFOLDER=Source/SPackages

        # Get package info
        local NEWBINDATA=$(rpm -qp --queryformat "%{EPOCH} %{NAME} %{VERSION} %{RELEASE} %{SHA1HEADER}\n" ${binary} 2>/dev/null)
        local NEWBINEPOCH=$(echo ${NEWBINDATA} | cut -d' ' -f1)
        [ "${NEWBINEPOCH}" == "(none)" ] && NEWBINEPOCH='0'
        local BINNAME=$(echo ${NEWBINDATA} | cut -d' ' -f2)
        [ "${binary:(-7)}" != "src.rpm" ] && local PACKAGENAMES="${PACKAGENAMES} ${BINNAME}"
        local NEWBINVERSION=$(echo ${NEWBINDATA} | cut -d' ' -f3)
        local NEWBINRELEASE=$(echo ${NEWBINDATA} | cut -d' ' -f4)
        local NEWBINSHA=$(echo ${NEWBINDATA} | cut -d' ' -f5)

        # EXISTBINDATA format: pkg-name-epoch:version-release.arch (NEVRA)
        local _repoid_os=$(mktemp -u XXXXXXXX)
        local _repoid_updates=$(mktemp -u XXXXXXXX)
        local _repoid_proposed=$(mktemp -u XXXXXXXX)
        local _repoid_holdback=$(mktemp -u XXXXXXXX)
        local _repoid_security=$(mktemp -u XXXXXXXX)
        local repoquery_cmd="repoquery --repofrompath=${_repoid_os},file://${REPO_BASE_PATH}/${RPM_OS_REPO_PATH}/${PACKAGEFOLDER%/*} --repoid=${_repoid_os}"
        local repoquery_cmd="${repoquery_cmd} --repofrompath=${_repoid_updates},file://${REPO_BASE_PATH}/${RPM_UPDATES_REPO_PATH}/${PACKAGEFOLDER%/*} --repoid=${_repoid_updates}"
        local repoquery_cmd="${repoquery_cmd} --repofrompath=${_repoid_proposed},file://${REPO_BASE_PATH}/${RPM_PROPOSED_REPO_PATH}/${PACKAGEFOLDER%/*} --repoid=${_repoid_proposed}"
        local repoquery_cmd="${repoquery_cmd} --repofrompath=${_repoid_holdback},file://${REPO_BASE_PATH}/${RPM_HOLDBACK_REPO_PATH}/${PACKAGEFOLDER%/*} --repoid=${_repoid_holdback}"
        local repoquery_cmd="${repoquery_cmd} --repofrompath=${_repoid_security},file://${REPO_BASE_PATH}/${RPM_SECURITY_REPO_PATH}/${PACKAGEFOLDER%/*} --repoid=${_repoid_security}"
        [ "${binary:(-7)}" == "src.rpm" ] && repoquery_cmd="${repoquery_cmd} --archlist=src"
        local EXISTBINDATA=$(${repoquery_cmd} ${BINNAME} 2>/dev/null)
        # Clean up `repoquery` data
        for _repoid in $_repoid_os $_repoid_updates $_repoid_proposed $_repoid_holdback $_repoid_security ; do
            find /var/tmp/yum-${USER}-* -type d -name $_repoid -exec rm -rf {} \; 2>/dev/null || :
        done

        # Get arch
        local EXISTBINARCH=${EXISTBINDATA##*.}
        # Skip arch
        local EXISTBINDATA=${EXISTBINDATA%.*}
        # Get epoch
        local EXISTBINEPOCH=$(echo ${EXISTBINDATA} | cut -d':' -f1 | awk -F'-' '{print $NF}')
        # Skip "pkg-name-epoch:"
        local EXISTBINDATA=${EXISTBINDATA#*:}
        # Get version
        local EXISTBINVERSION=${EXISTBINDATA%%-*}
        # Get release
        local EXISTBINRELEASE=${EXISTBINDATA#*-}

        ## FIXME: Improve package removal
        # Remove existing packages from the repo (for new change requests and downgrades)
        if [ "${GERRIT_CHANGE_STATUS}" == "NEW" -o "$IS_DOWNGRADE" == "true" ] ; then
            find ${LOCAL_REPO_PATH} -name "${BINNAME}-${EXISTBINVERSION}-${EXISTBINRELEASE}.${EXISTBINARCH}*" \
                -exec rm -f {} \;
            unset EXISTBINVERSION
        fi

        # Compare versions of the new and existing packages
        local SKIPPACKAGE=0
        if [ ! -z "${EXISTBINVERSION}" ] ; then
            ############################################################
            ## Compare versions before including the package in the repo
            ##
            CMPVER=$(python <(cat <<-HERE
from rpmUtils import miscutils
print miscutils.compareEVR(("${EXISTBINEPOCH}", "${EXISTBINVERSION}", "${EXISTBINRELEASE}"),
    ("${NEWBINEPOCH}", "${NEWBINVERSION}", "${NEWBINRELEASE}"))
HERE
            ))
            # Results:
            #  1 - EXISTBIN is newer than NEWBIN
            #  0 - EXISTBIN and NEWBIN have the same version
            # -1 - EXISTBIN is older than NEWBIN
            case ${CMPVER} in
                1) error "Can't publish ${binary#*/}. Existing ${BINNAME}-${EXISTBINEPOCH}:${EXISTBINVERSION}-${EXISTBINRELEASE} has a newer version" ;;
                0) # Check sha for identical package names
                    EXISTRPMFILE=$(${repoquery_cmd} --location ${BINNAME})
                    EXISTBINSHA=$(rpm -qp --queryformat "%{SHA1HEADER}" ${EXISTRPMFILE})
                    if [ "${NEWBINSHA}" == "${EXISTBINSHA}" ]; then
                        SKIPPACKAGE=1
                        echo "Skipping inclusion of ${binary}. Existing ${BINNAME}-${EXISTBINEPOCH}:${EXISTBINVERSION}-${EXISTBINRELEASE} has the same version and checksum"
                    else
                        error "Can't publish ${binary#*/}. Existing ${BINNAME}-${EXISTBINEPOCH}:${EXISTBINVERSION}-${EXISTBINRELEASE} has the same version but a different checksum"
                    fi
                    ;;
                *) : ;;
            esac
            ##
            ############################################################
        fi

        ############
        ## Signing
        ##
        if [ -n "${SIGKEYID}" ] ; then
            # rpmsign requires a pass phrase; use `expect` to feed it a dummy one
            LANG=C expect <<EOL
spawn rpmsign --define "%__gpg_check_password_cmd /bin/true" --define "%_signature gpg" --define "%_gpg_name ${SIGKEYID}" --resign ${binary}
expect -exact "Enter pass phrase:"
send -- "Doesn't matter\r"
expect eof
lassign [wait] pid spawnid os_error_flag value
puts "exit status: \$value"
exit \$value
EOL
            [ $? -ne 0 ] && error "Something went wrong. Can't sign package ${binary#*/}"
        fi
        ##
        ###########
        [ "${SKIPPACKAGE}" == "0" ] && cp ${binary} ${LOCAL_REPO_PATH}/${PACKAGEFOLDER}
    done

    # Remove old packages
    for file in ${EXIST_SRPM_FILE} ${EXIST_RPM_FILES} ; do
        [ "${BINNAMES}" == "${BINNAMES/$file/}" ] \
            && find ${LOCAL_REPO_PATH} -type f -name ${file} -exec rm {} \; 2>/dev/null
    done
    rm -f $(repomanage --keep=1 --old ${LOCAL_REPO_PATH}/x86_64)
    rm -f $(repomanage --keep=1 --old ${LOCAL_REPO_PATH}/Source)

    # Update and sign repository metadata
    [ ! -e ${LOCAL_REPO_PATH}/comps.xml ] && wget ${DEFAULTCOMPSXML} -O ${LOCAL_REPO_PATH}/comps.xml
    createrepo --pretty --database --update -g ${LOCAL_REPO_PATH}/comps.xml -o ${LOCAL_REPO_PATH}/x86_64/ ${LOCAL_REPO_PATH}/x86_64
    createrepo --pretty --database --update -o ${LOCAL_REPO_PATH}/Source/ ${LOCAL_REPO_PATH}/Source
    if [ -n "${SIGKEYID}" ] ; then
        rm -f ${LOCAL_REPO_PATH}/x86_64/repodata/repomd.xml.asc
        rm -f ${LOCAL_REPO_PATH}/Source/repodata/repomd.xml.asc
        gpg --armor --local-user ${SIGKEYID} --detach-sign ${LOCAL_REPO_PATH}/x86_64/repodata/repomd.xml
        gpg --armor --local-user ${SIGKEYID} --detach-sign ${LOCAL_REPO_PATH}/Source/repodata/repomd.xml
        [ -f "RPM-GPG-KEY" ] && cp RPM-GPG-KEY ${LOCAL_REPO_PATH}/RPM-GPG-KEY-${PROJECT_NAME}${PROJECT_VERSION}
    fi

    # Sync the repo to the remote host
    sync-repo ${LOCAL_REPO_PATH}/ ${RPM_REPO_PATH} ${REPO_REQUEST_PATH_PREFIX} ${REQUEST_NUM} ${LP_BUG}
    job_lock ${LOCAL_REPO_PATH}.lock unset

    rm -f ${WRK_DIR}/rpm.publish.setenvfile
    cat > ${WRK_DIR}/rpm.publish.setenvfile <<-EOF
RPM_PUBLISH_SUCCEEDED=true
RPM_DISTRO=${DIST}
RPM_VERSION=${NEWBINEPOCH}:${NEWBINVERSION}-${NEWBINRELEASE}
RPM_REPO_URL=http://${REMOTE_REPO_HOST}/${URL_PREFIX}${RPM_REPO_PATH}/x86_64
RPM_BINARIES=$(echo ${PACKAGENAMES} | sed 's|^ ||; s| |,|g')
RPM_CHANGE_REVISION=${GERRIT_PATCHSET_REVISION}
LP_BUG=${LP_BUG}
EOF
}

main "$@"

exit 0
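The publishing loop above rejects downgrades and duplicate uploads based on the result of `rpmUtils.miscutils.compareEVR`. The following is a rough, pure-Python sketch of that epoch/version/release comparison for illustration only: it is simplified (no tilde/caret handling and other `rpmvercmp` corner cases), and the names `compare_evr` and `_vercmp` are made up for this sketch, not part of the script or of rpmUtils.

```python
import re


def _vercmp(a, b):
    # Split into numeric/alphabetic segments, roughly as rpmvercmp does.
    sa = re.findall(r'\d+|[a-zA-Z]+', a)
    sb = re.findall(r'\d+|[a-zA-Z]+', b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            # A numeric segment sorts higher than an alphabetic one.
            return 1 if x.isdigit() else -1
        if x != y:
            return 1 if x > y else -1
    # All common segments equal: the longer version wins.
    return (len(sa) > len(sb)) - (len(sa) < len(sb))


def compare_evr(evr1, evr2):
    """Return 1, 0 or -1 if evr1 is newer than, equal to or older than evr2."""
    for a, b in zip(evr1, evr2):
        result = _vercmp(str(a), str(b))
        if result != 0:
            return result
    return 0


print(compare_evr(('0', '1.2.3', '1'), ('0', '1.2.10', '1')))  # -1
```

As in the script, a result of 1 (existing package newer) aborts publishing, 0 triggers the checksum comparison, and -1 lets the new package in.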


@ -1,66 +0,0 @@
#!/usr/bin/python

from __future__ import print_function

import argparse
import gzip
import os

from lxml import etree as ET


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '-s', '--srpm', dest='srpm', action='store', type=str,
        help='srpm', required=True, default='none'
    )
    parser.add_argument(
        '-p', '--path', dest='path', action='store', type=str,
        help='path', required=True, default='.'
    )
    params, other_params = parser.parse_known_args()

    # Resolve the location of each metadata file from repomd.xml
    repomdpath = os.path.join(params.path, 'repodata', 'repomd.xml')
    tree = ET.parse(repomdpath)
    repomd = tree.getroot()
    xmlpath = {}
    for data in repomd.findall(ET.QName(repomd.nsmap[None], 'data')):
        filetype = data.attrib['type']
        xmlpath[filetype] = data.find(
            ET.QName(repomd.nsmap[None], 'location')).attrib['href']

    primaryfile = os.path.join(params.path, xmlpath['primary'])
    with gzip.open(primaryfile, 'rb') as f:
        primary_content = f.read()
    primary = ET.fromstring(primary_content)

    # Select every package built from the given source rpm
    filtered = primary.xpath('//rpm:sourcerpm[text()="' + params.srpm + '"]',
                             namespaces={'rpm': primary.nsmap['rpm']})
    ns = primary.nsmap[None]
    for item in filtered:
        # Walk up from <rpm:sourcerpm> through <format> to <package>
        pkg = item.getparent().getparent()
        name = pkg.find(ET.QName(ns, 'name')).text
        arch = pkg.find(ET.QName(ns, 'arch')).text
        version = pkg.find(ET.QName(ns, 'version'))
        epoch = version.attrib['epoch']
        ver = version.attrib['ver']
        rel = version.attrib['rel']
        location = pkg.find(ET.QName(ns, 'location')).attrib['href']
        print('{name} {epoch} {ver} {rel} {arch} {location}'.format(
            name=name, epoch=epoch, ver=ver, rel=rel,
            arch=arch, location=location))


if __name__ == "__main__":
    main()
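The helper above first resolves the primary metadata location from `repomd.xml`. A minimal, self-contained illustration of that lookup against a stub document (the namespace URI matches real repodata, but the `REPOMD` string and its `href` values here are fabricated sample data); it uses the standard-library `xml.etree` instead of lxml, which suffices for this step:

```python
import xml.etree.ElementTree as ET

REPOMD = """<repomd xmlns="http://linux.duke.edu/metadata/repo">
  <data type="primary">
    <location href="repodata/primary.xml.gz"/>
  </data>
  <data type="filelists">
    <location href="repodata/filelists.xml.gz"/>
  </data>
</repomd>"""

NS = '{http://linux.duke.edu/metadata/repo}'
root = ET.fromstring(REPOMD)

# Map each metadata type to its relative location, as the helper does.
xmlpath = {}
for data in root.findall(NS + 'data'):
    xmlpath[data.attrib['type']] = data.find(NS + 'location').attrib['href']

print(xmlpath['primary'])  # repodata/primary.xml.gz
```

The real script then opens the gzipped primary file at that href and searches it for packages whose `<rpm:sourcerpm>` matches the requested srpm.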


@ -1,13 +1,13 @@
[metadata]
name = packetary
version = 8.0.0
summary = The chain of tools to manage a package's lifecycle.
version = 0.1.0
summary = Allows one to build and clone deb and rpm repositories
description-file =
README.rst
author = Mirantis Inc.
author_email = product@mirantis.com
url = http://mirantis.com
home-page = http://mirantis.com
url = https://github.com/openstack/packetary
home-page = https://github.com/openstack/packetary
classifier =
Development Status :: 4 - Beta
Environment :: OpenStack


@ -1,96 +0,0 @@
%define name fuel-mirror
%{!?version: %define version 8.0.0}
%{!?release: %define release 1}
Name: %{name}
Version: %{version}
Release: %{release}
Source0: %{name}-%{version}.tar.gz
Summary: Utility to create RPM and DEB mirror
URL: http://mirantis.com
License: GPLv2
Group: Utilities
BuildRoot: %{_tmppath}/%{name}-%{version}-buildroot
Prefix: %{_prefix}
BuildRequires: git
BuildRequires: python-setuptools
BuildRequires: python-pbr
BuildArch: noarch
Requires: python
Requires: python-babel >= 1.3
Requires: python-cliff >= 1.7.0
Requires: python-fuelclient >= 7.0.0
Requires: python-packetary == %{version}
Requires: python-pbr >= 0.8
Requires: python-six >= 1.5.2
Requires: PyYAML >= 3.10
# Workaround for babel bug
Requires: pytz
Obsoletes: fuel-createmirror
%description
Provides the fuel-mirror and fuel-createmirror commands.
The latter exists for backward compatibility with the
previous generation of the utility. These commands can be
used to create local copies of MOS and upstream deb and rpm
repositories.
%package -n python-packetary
Summary: Library for building and cloning deb and rpm repositories
Group: Development/Libraries
Requires: createrepo
Requires: python
Requires: python-babel >= 1.3
Requires: python-bintrees >= 2.0.2
Requires: python-chardet >= 2.0.1
Requires: python-cliff >= 1.7.0
Requires: python-debian >= 0.1.21
Requires: python-eventlet >= 0.15
Requires: python-lxml >= 1.1.23
Requires: python-pbr >= 0.8
Requires: python-six >= 1.5.2
Requires: python-stevedore >= 1.1.0
# Workaround for babel bug
Requires: pytz
%description -n python-packetary
Provides an object model and API for dealing with deb
and rpm repositories. One can use this framework to
implement operations such as building a repository
from a set of packages, cloning a repository, finding
package dependencies, mixing repositories, and pulling
out a subset of packages into a separate repository.
%prep
%setup -cq -n %{name}-%{version}
%build
cd %{_builddir}/%{name}-%{version} && python setup.py build
cd %{_builddir}/%{name}-%{version}/contrib/fuel_mirror && python setup.py build
%install
cd %{_builddir}/%{name}-%{version} && python setup.py install --single-version-externally-managed -O1 --root=$RPM_BUILD_ROOT --record=%{_builddir}/%{name}-%{version}/INSTALLED_FILES
cd %{_builddir}/%{name}-%{version}/contrib/fuel_mirror && python setup.py install --single-version-externally-managed -O1 --root=$RPM_BUILD_ROOT --record=%{_builddir}/%{name}-%{version}/contrib/fuel_mirror/INSTALLED_FILES
mkdir -p %{buildroot}/etc/%{name}
mkdir -p %{buildroot}/usr/bin
mkdir -p %{buildroot}/usr/share/%{name}
install -m 755 %{_builddir}/%{name}-%{version}/contrib/fuel_mirror/scripts/fuel-createmirror %{buildroot}/usr/bin/fuel-createmirror
install -m 755 %{_builddir}/%{name}-%{version}/contrib/fuel_mirror/etc/config.yaml %{buildroot}/etc/%{name}/config.yaml
%clean
rm -rf $RPM_BUILD_ROOT
%files -f %{_builddir}/%{name}-%{version}/contrib/fuel_mirror/INSTALLED_FILES
%defattr(0755,root,root)
/usr/bin/fuel-createmirror
%attr(0644,root,root) /etc/%{name}/config.yaml
%files -n python-packetary -f %{_builddir}/%{name}-%{version}/INSTALLED_FILES
%defattr(-,root,root)