Retire repository

All Fuel repositories in the openstack namespace are already retired; retire
the remaining fuel repos in the x namespace since they are unused now.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011675.html

A related change is https://review.opendev.org/699752.

Change-Id: I690b3e5db601e52d24776215aec45746405fa03b
Andreas Jaeger 2019-12-18 19:32:25 +01:00
parent b1832986ff
commit 8cbbdaa892
133 changed files with 8 additions and 8139 deletions

.gitmodules

@ -1,3 +0,0 @@
[submodule "plugin-test-examples/plugin_test/fuel-qa"]
path = plugin-test-examples/plugin_test/fuel-qa
url = https://github.com/stackforge/fuel-qa/


@ -1,247 +1,10 @@
Fuel Plugin CI
==============
This project is no longer maintained.
Overview
--------
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
Components, concept
.. image:: pics/Fuel-plugin-CI.png
Jenkins and web server for logs installation
--------------------------------------------
First, install the Puppet master and run the manifests.
All nodes are described in the ``manifests/site.pp`` file.
The CI needs the following nodes:
* one node for jenkins master
* one more for jenkins slave
* one for log publication.
These nodes should be described in ``manifests/site.pp`` with the necessary classes:
::
class { '::fuel_project::jenkins::slave':}
class { '::fuel_project::jenkins::master':}
class { '::fuel_project::web':}
Run the install script ``sudo puppet-manifests/bin/install_puppet_master.sh`` on every node.
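For example, a minimal way to run it on all three nodes over SSH could look like this (the host names are placeholders for your own machines)::
for node in jenkins-master.example.org jenkins-slave.example.org logs.example.org; do
ssh "$node" 'sudo puppet-manifests/bin/install_puppet_master.sh'
done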
Gerrit Integration overview
+++++++++++++++++++++++++++
In general, the installation should meet the following
requirements:
* Anonymous users can read all projects.
* All registered users can perform informational code review (+/-1) on any project.
* Jenkins can perform verification (blocking or approving: +/-1).
* All registered users can create changes.
* Members of core group can perform full code review (blocking or approving: +/- 2)
and submit changes to be merged.
* Make sure you have a Gerrit account on review.openstack.org (`see this <http://docs.openstack.org/infra/system-config/gerrit.html>`_ for the reference)::
ssh -p 29418 review.openstack.org "gerrit create-account \
--group 'Third-Party CI' \
--full-name 'Some CI Bot' \
--email ci-bot@third-party.org \
--ssh-key 'ssh-rsa AAAAB3Nz...zaUCse1P ci-bot@third-party.org' \
some-ci-bot"
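Once the account exists, a quick sanity check is to make sure the bot can reach Gerrit over SSH (assuming the bot's private key is available locally; ``stream-events`` additionally requires the Stream Events capability)::
ssh -p 29418 some-ci-bot@review.openstack.org gerrit version
ssh -p 29418 some-ci-bot@review.openstack.org gerrit stream-events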
Jenkins gerrit plugin configuration
+++++++++++++++++++++++++++++++++++
#. The settings look as follows:
.. image:: pics/settings.png
#. This is the main Gerrit configuration window. You should add a Gerrit server.
.. image:: pics/settings-full.png
#. Vote configuration.
#. Log publication
The results of a job are artifacts: logs and packages.
Logs should be published on dedicated web servers, where they can be accessed via Gerrit.
The web server is deployed with the Puppet class ``fuel_project::web``.
Logs are copied via SSH by the ``fuel-plugins.publish_logs`` job. You should add a new user with an RSA key installed and write access to the necessary path (such as ``/var/www/logs``).
The ``REPORTED_JOB_URL`` variable determines the URL of the logs reported in Gerrit.
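A minimal sketch of preparing such a user on the log web server (the user name and the public key file are placeholders)::
sudo useradd -m logs-publisher
sudo mkdir -p /var/www/logs
sudo chown logs-publisher:logs-publisher /var/www/logs
sudo install -d -m 700 -o logs-publisher -g logs-publisher ~logs-publisher/.ssh
sudo tee -a ~logs-publisher/.ssh/authorized_keys < jenkins_id_rsa.pub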
Jenkins plugins installation
-----------------------------
We recommend installing these plugins for Jenkins.
Some of them are necessary for the CI, while others are simply useful and make your Jenkins experience easier:
* `AnsiColor <https://wiki.jenkins-ci.org/display/JENKINS/AnsiColor+Plugin>`_
* `Ant Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Ant+Plugin>`_
* `build timeout plugin <https://wiki.jenkins-ci.org/display/JENKINS/Build-timeout+Plugin>`_
* `conditional buildstep <https://wiki.jenkins-ci.org/display/JENKINS/Conditional+BuildStep+Plugin>`_
* `Copy Artifact Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Copy+Artifact+Plugin>`_
* `Credentials Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Credentials+Plugin>`_
* `CVS Plug-in <https://wiki.jenkins-ci.org/display/JENKINS/CVS+Plugin>`_
* `description setter plugin <https://wiki.jenkins-ci.org/display/JENKINS/Description+Setter+Plugin>`_
* `Email Extension Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin>`_
* `Environment Injector Plugin <https://wiki.jenkins-ci.org/display/JENKINS/EnvInject+Plugin>`_
* `External Monitor Job Type Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Monitoring+external+jobs>`_
* `Gerrit Trigger <https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger>`_
* `GIT client plugin <https://wiki.jenkins-ci.org/display/JENKINS/Git+Client+Plugin>`_
* `GIT plugin <https://wiki.jenkins-ci.org/display/JENKINS/Git+Plugin>`_
* `Groovy <https://wiki.jenkins-ci.org/display/JENKINS/Groovy+plugin>`_
* `Heavy Job Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Heavy+Job+Plugin>`_
* `HTML Publisher plugin <https://wiki.jenkins-ci.org/display/JENKINS/HTML+Publisher+Plugin>`_
* `Javadoc Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Javadoc+Plugin>`_
* `Job Configuration History Plugin <https://wiki.jenkins-ci.org/display/JENKINS/JobConfigHistory+Plugin>`_
* `JUnit Plugin <https://wiki.jenkins-ci.org/display/JENKINS/JUnit+Plugin>`_
* `LDAP Plugin <https://wiki.jenkins-ci.org/display/JENKINS/LDAP+Plugin>`_
* `Locale plugin <https://wiki.jenkins-ci.org/display/JENKINS/Locale+Plugin>`_
* `Mailer Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Mailer>`_
* `MapDB API Plugin <https://wiki.jenkins-ci.org/display/JENKINS/MapDB+API+Plugin>`_
* `Matrix Authorization Strategy Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Authorization+Strategy+Plugin>`_
* `Matrix Project Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Project+Plugin>`_
* `Maven Integration plugin <https://wiki.jenkins-ci.org/display/JENKINS/Maven+Project+Plugin>`_
* `Multijob plugin <https://wiki.jenkins-ci.org/display/JENKINS/Multijob+Plugin>`_
* `Multiple SCMs plugin <https://wiki.jenkins-ci.org/display/JENKINS/Multiple+SCMs+Plugin>`_
* `OWASP Markup Formatter Plugin <https://wiki.jenkins-ci.org/display/JENKINS/OWASP+Markup+Formatter+Plugin>`_
* `PAM Authentication plugin <https://wiki.jenkins-ci.org/display/JENKINS/PAM+Authentication+Plugin>`_
* `Parameterized Trigger plugin <https://wiki.jenkins-ci.org/display/JENKINS/Parameterized+Trigger+Plugin>`_
* `Publish Over SSH <https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin>`_
* `PWauth Security Realm <http://wiki.hudson-ci.org/display/HUDSON/pwauth>`_
* `Run Condition Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Run+Condition+Plugin>`_
* `SCM API Plugin <https://wiki.jenkins-ci.org/display/JENKINS/SCM+API+Plugin>`_
* `Script Security Plugin <https://wiki.jenkins-ci.org/display/JENKINS/Script+Security+Plugin>`_
* `Self-Organizing Swarm Plug-in Modules <https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin>`_
* `Simple Theme Plugin <http://wiki.jenkins-ci.org/display/JENKINS/Simple+Theme+Plugin>`_
* `SSH Agent Plugin <https://wiki.jenkins-ci.org/display/JENKINS/SSH+Agent+Plugin>`_
* `SSH Credentials Plugin <https://wiki.jenkins-ci.org/display/JENKINS/SSH+Credentials+Plugin>`_
* `SSH Slaves plugin <http://wiki.jenkins-ci.org/display/JENKINS/SSH+Slaves+plugin>`_
* `Subversion Plug-in <http://wiki.jenkins-ci.org/display/JENKINS/Subversion+Plugin>`_
* `Throttle Concurrent Builds Plug-in <http://wiki.jenkins-ci.org/display/JENKINS/Throttle+Concurrent+Builds+Plugin>`_
* `Timestamper <https://wiki.jenkins-ci.org/display/JENKINS/Timestamper>`_
* `Token Macro Plugin <http://wiki.jenkins-ci.org/display/JENKINS/Token+Macro+Plugin>`_
* `Translation Assistance plugin <http://wiki.jenkins-ci.org/display/JENKINS/Translation+Assistance+Plugin>`_
* `Windows Slaves Plugin <http://wiki.jenkins-ci.org/display/JENKINS/Windows+Slaves+Plugin>`_
* `Workflow: Step API <https://wiki.jenkins-ci.org/display/JENKINS/Workflow+Plugin>`_
Jenkins jobs installation
-------------------------
`Jenkins Job Builder <http://docs.openstack.org/infra/jenkins-job-builder/>`_ takes simple descriptions of `Jenkins <http://jenkins-ci.org/>`_
jobs in `YAML <http://www.yaml.org/>`_ or `JSON <http://json.org/>`_
format and uses them to configure Jenkins.
To install JJB, run the following commands::
git clone https://git.openstack.org/openstack-infra/jenkins-job-builder
cd jenkins-job-builder && sudo python setup.py install
Before running JJB you need to prepare a config file with the following info (fill it with your own values)::
[jenkins]
user=jenkins
password=1234567890abcdef1234567890abcdef
url=https://jenkins.example.com
Then update the Jenkins job configuration using the config file from the previous step::
jenkins-jobs --conf yourconf.ini update path_to_repo/jjb
You may find some examples in this repo. They're depersonalized copies of real
jobs, so don't install them without reworking them. Replace the necessary paths and variables to make them work again.
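If you want to see what would be generated before touching a live Jenkins, JJB can also render the job XML locally (the output directory is arbitrary)::
jenkins-jobs --conf yourconf.ini test path_to_repo/jjb -o /tmp/jjb-xml
ls /tmp/jjb-xml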
Plugin test templates
---------------------
Most of the necessary functions can be found in the `fuel-qa <https://github.com/openstack/fuel-qa>`_
framework.
All functional tests should be stored in the plugin's git repository in a special folder named ``plugin_test``.
The fuel-qa framework should be a submodule in the ``plugin_test`` folder. You can add the submodule with this command::
git submodule add https://github.com/openstack/fuel-qa
In the folder ``./plugin-test-examples/plugin_test`` you may find two simple tests.
The first one installs a test plugin, creates a cluster and enables the plugin for this cluster.
The second one deploys a cluster with the plugin enabled.
There are two subfolders here: ``helpers`` and ``tests``.
``helpers`` contains two files with important functions:
* prepare_test_plugin - installs the plugin on the master node
* activate_plugin - activates the plugin
* assign_net_provider - allows choosing the network type for the cluster
* deploy_cluster - deploys a cluster
The ``tests`` folder contains the tests.
In the example provided with this repo there is only one important file, named ``test_smoke_bvt.py``.
It describes a class ``TestPlugin`` and the two tests mentioned earlier.
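Once the submodule is in place, a single test group can be started through the same wrapper script the CI jobs use, for example (the workspace, ISO path and job name here are only illustrative)::
./plugin_test/utils/jenkins/system_tests.sh -t test -w "$(pwd)" \
-i /storage/downloads/fuel-7.0-301-2015-09-22_20-01-53.iso \
-j testplugin_smoke -o --group=install_testplugin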
Hardware test examples
----------------------
The main problem with hardware configuration is authorization.
SSH does not allow entering a password non-interactively in a script, so the ``expect`` utility may be used to work around the problem.
You should install the utility on the jenkins-slave first::
apt-get install expect
Here is an example of a script that uses ``expect`` to authenticate on a switch and show its configuration::
spawn ssh "root@$switch_ip"
set timeout 500
expect "yes/no" {
send "yes\r"
expect "*?assword" { send "$switch_pass\r" }
} "*?assword" { send "$switch_pass\r" }
expect "# " { send "show run\r" }
expect "# " { send "exit\r" }
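The standalone variant of this script reads the switch address and password from its arguments, so it can be invoked like this (the file name and credentials are placeholders)::
expect show_switch_config.exp 10.20.0.254 'switch-password'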
Fuel ISO updating
-----------------
There is a script ``fuel-plugin-ci/iso-updater/get_iso.sh``.
It should be added to cron and executed every 2-3 hours.
This script checks for a new community build of Fuel and, if a new version is available, downloads it.
You can run the script on a jenkins-slave node, or on any web server if you have many slave nodes.
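A possible crontab entry for this (the script location is a placeholder)::
0 */3 * * * /usr/local/bin/get_iso.sh >> /var/log/get_iso.log 2>&1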
Here is how the script works:
#. Check for the latest community ISO. The script checks the
``https://www.fuel-infra.org/release/status`` url using the ``w3m`` utility and chooses the right tab:
* the first tab is 8.0 now, so it needs the 2nd tab with Fuel 7.0.
* Then it parses the tab and gets a Fuel release string.
.. note:: if a new Fuel version becomes available, you should fix the
script and change the tab number. The output may also differ between
Linux distributions, so the last ``cut`` field may need to change.
#. Download the torrent file from `http://seed.fuel-infra.org/fuelweb-iso/` via the ``aria2`` console torrent client.
#. Check for errors and delete the folder if an error occurred.
#. Sync the downloaded ISO to a jenkins slave. You should have the necessary users with RSA keys set up.
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.


@ -1,11 +0,0 @@
set switch_ip [lindex $argv 0]
set switch_pass [lindex $argv 1]
spawn ssh "root@$switch_ip"
set timeout 500
expect "yes/no" {
send "yes\r"
expect "*?assword" { send "$switch_pass\r" }
} "*?assword" { send "$switch_pass\r" }
expect "# " { send "show run\r" }
expect "# " { send "exit\r" }


@ -1,27 +0,0 @@
fuel_iso_path='/var/lib/iso'
jenkins_slave='jenkins-slave.test-company.org'
fuel_remote_iso_path='/var/lib/iso'
[ -d $fuel_iso_path ] || mkdir $fuel_iso_path
last_rel=$(w3m -dump -cols 400 https://www.fuel-infra.org/release/status#tab_2 | grep -v community-8 | grep "ok ok ok ok" | head -1 | cut -d' ' -f 8)
rel=$last_rel
if [ ! -f "$fuel_iso_path/$rel.iso" ]; then
touch "$fuel_iso_path/$rel.iso.progress"
aria2c -x10 http://seed.fuel-infra.org/fuelweb-iso/$rel.iso -d $fuel_iso_path -l $fuel_iso_path/$rel.iso.progress
echo "http://seed.fuel-infra.org/fuelweb-iso/$rel.iso -b -o $fuel_iso_path$rel.iso.progress -P $fuel_iso_path"
fi
# make sure the previous download finished successfully; if not, delete the directory that was created for it
grep -i "error" $fuel_iso_path/$rel.iso.progress
res=$(echo $?)
if [ "$res" -eq 0 ]; then
# this means we had an error, so delete the created folder and exit with an error
echo "an error was detected while downloading this build; check the progress file above."
rm -rf $fuel_iso_path
exit 1
fi
pathToIso="$fuel_iso_path/$rel.iso"
fi
rsync -av --progress --delete $fuel_iso_path $jenkins_slave:$fuel_remote_iso_path
exit 0


@ -1,10 +0,0 @@
- project:
name: build
email_to: ''
tag: 'refs/tags/1.0.0'
plugin_name:
- testplugin:
plugin_repo: fuel-plugin-testplugin
jobs:
- 'fuel-plugin.{plugin_name}.build'


@ -1,19 +0,0 @@
#!/bin/bash
set -ex
find . -name '*.erb' -print0 | xargs -0 -P1 -L1 -I '%' erb -P -x -T '-' % | ruby -c
find . -name '*.pp' -print0 | xargs -0 -P1 -L1 puppet parser validate --verbose
find . -name '*.pp' -print0 | xargs -0 -r -P1 -L1 puppet-lint \
--fail-on-warnings \
--with-context \
--with-filename \
--no-80chars-check \
--no-variable_scope-check \
--no-nested_classes_or_defines-check \
--no-autoloader_layout-check \
--no-class_inherits_from_params_class-check \
--no-documentation-check \
--no-arrow_alignment-check
fpb --check ./
fpb --build ./


@ -1,11 +0,0 @@
#!/bin/bash
set -ex
LOGS="${WORKSPACE}/logs/"
rm -rf "${LOGS}"
mkdir -p "${LOGS}"
wget --no-check-certificate "${REPORTED_JOB_URL}/consoleText" -O "${LOGS}/consoleText.txt"


@ -1,113 +0,0 @@
#!/bin/bash -e
# activate bash xtrace for script
[[ "${DEBUG}" == "true" ]] && set -x || set +x
# for manual runs of this job
[ -z $ISO_FILE ] && export ISO_FILE=${ISO_FILE}
#remove old logs and test data
rm -f nosetests.xml
rm -rf logs/*
export ISO_VERSION=$(cut -d'-' -f3-3<<< $ISO_FILE)
echo iso build number is $ISO_VERSION
export REQUIRED_FREE_SPACE=200
export ISO_PATH="${ISO_STORAGE}/${ISO_FILE}"
export FUEL_RELEASE=$(cut -d'-' -f2-2 <<< $ISO_FILE | tr -d '.')
export VENV_PATH="${HOME}/${FUEL_RELEASE}-venv"
echo iso-version: $ISO_VERSION
echo fuel-release: $FUEL_RELEASE
echo virtual-env: $VENV_PATH
## For plugins we should get a valid version of the requirements for python-venv.
## These requirements can be obtained from the GitHub repo,
## but for each branch of a plugin we should map a specific branch of the fuel-qa repo;
## the fuel-qa branch is determined by the Fuel ISO name.
case "${FUEL_RELEASE}" in
*70* ) export REQS_BRANCH="stable/7.0" ;;
*61* ) export REQS_BRANCH="stable/6.1" ;;
* ) export REQS_BRANCH="master"
esac
REQS_PATH="https://raw.githubusercontent.com/openstack/fuel-qa/${REQS_BRANCH}/fuelweb_test/requirements.txt"
###############################################################################
## We have limited disk resources, so before a run of the system tests a lab
## may have many deployed and running envs, which may cause errors during tests
function delete_envs {
[ -z $VIRTUAL_ENV ] && exit 1
dos.py sync
env_list=$(dos.py list | tail -n +3)
if [[ ! -z "${env_list}" ]]; then
for env in $env_list; do dos.py erase $env; done
fi
}
## We have limited cpu resources, because we use two hypervisors with heavy VMs, so
## we should power off all unused envs if they exist.
function destroy_envs {
[ -z $VIRTUAL_ENV ] && exit 1
dos.py sync
env_list=$(dos.py list | tail -n +3)
if [[ ! -z "${env_list}" ]]; then
for env in $env_list; do dos.py destroy $env; done
fi
}
## Delete all systest envs except the env with the same version of a fuel-build
## if it exists. This behaviour is needed to use restoring from snapshots.
function delete_systest_envs {
[ -z $VIRTUAL_ENV ] && exit 1
dos.py sync
for env in $(dos.py list | tail -n +3 | grep $ENV_PREFIX); do
[[ $env == *"$ENV_NAME"* ]] && continue || dos.py erase $env
done
}
function prepare_venv {
#rm -rf "${VENV_PATH}"
[ ! -d $VENV_PATH ] && virtualenv "${VENV_PATH}" || echo "${VENV_PATH} already exists"
source "${VENV_PATH}/bin/activate"
pip --version
[ $? -ne 0 ] && easy_install -U pip
pip install -r "${REQS_PATH}" --upgrade > /dev/null 2>/dev/null
django-admin.py syncdb --settings=devops.settings --noinput
django-admin.py migrate devops --settings=devops.settings --noinput
deactivate
}
function fix_logger {
config_path="${HOME}/.devops/log.yaml"
echo devops config path $config_path
sed -i '/disable_existing_loggers.*/d' $config_path
echo disable_existing_loggers: False >> $config_path
}
####################################################################################
prepare_venv
fix_logger
# determine free space before run the cleaner
free_space_exist=false
free_space=$(df -h | grep '/$' | awk '{print $4}' | tr -d G)
(( $free_space > $REQUIRED_FREE_SPACE )) && export free_space_exist=true
# activate a python virtual env
source "$VENV_PATH/bin/activate"
# free space
[ "$free_space_exist" = "true" ] && delete_systest_envs || delete_envs
# poweroff all envs
destroy_envs


@ -1,34 +0,0 @@
#!/bin/bash -e
# activate bash xtrace for script
[[ "${DEBUG}" == "true" ]] && set -x || set +x
[ -z $PLUGIN_VERSION ] && exit 1 || echo testplugin version is $PLUGIN_VERSION
export ISO_PATH="${ISO_STORAGE}/${ISO_FILE}"
export ISO_VERSION=$(cut -d'-' -f3-3 <<< $ISO_FILE)
export FUEL_RELEASE=$(cut -d'-' -f2-2 <<< $ISO_FILE | tr -d '.')
export ENV_NAME="${ENV_PREFIX}.${ISO_VERSION}"
export VENV_PATH="${HOME}/${FUEL_RELEASE}-venv"
[[ -z ${PLUGIN_PATH} ]] && export PLUGIN_PATH=$(ls ${WORKSPACE}/testplugin*.rpm) \
|| echo PLUGIN_PATH=$PLUGIN_PATH
source $VENV_PATH/bin/activate
systest_parameters=''
[[ $USE_SNAPSHOTS == "true" ]] && systest_parameters+=' -k' || echo new env will be created
[[ $ERASE_AFTER == "true" ]] && echo the env will be erased after test || systest_parameters+=' -K'
echo test-group: $TEST_GROUP
echo env-name: $ENV_NAME
echo use-snapshots: $USE_SNAPSHOTS
echo fuel-release: $FUEL_RELEASE
echo venv-path: $VENV_PATH
echo env-name: $ENV_NAME
echo iso-path: $ISO_PATH
echo plugin-path: $PLUGIN_PATH
echo plugin-checksum: $(md5sum -b $PLUGIN_PATH)
./plugin_test/utils/jenkins/system_tests.sh -t test ${systest_parameters} -i ${ISO_PATH} -j ${JOB_NAME} -o --group=${TEST_GROUP}


@ -1,46 +0,0 @@
- job:
name: 'fuel-plugins.publish_logs'
concurrent: true
description: Publish jobs artifacts to external host
logrotate:
artifactDaysToKeep: 30
node: plugins-ci
properties:
- heavy-job:
weight: '1'
- throttle:
max-per-node: 1
option: project
parameters:
- string:
name: REPORTED_JOB_URL
- string:
name: REPORTED_JOB_NAME
- string:
name: REPORTED_BUILD_ID
builders:
- shell:
!include-raw builders/publish_build_to_external.sh
- copyartifact:
project: $REPORTED_JOB_NAME
target: logs/
which-build: upstream-build
flatten: true
optional: true
wrappers:
- ansicolor:
colormap: xterm
- timeout:
fail: true
timeout: 10
write-description: true
publishers:
- ssh:
site: 'ci-logs.testcompany.org'
target: '$REPORTED_JOB_NAME/$REPORTED_BUILD_ID'
source: 'logs/*'
flatten: true
- email:
notify-every-unstable-build: false
recipients: devops@testcompany.org
send-to-individuals: false


@ -1,480 +0,0 @@
## common git settings to get sources
- common-scm: &common-scm
name: 'common-scm'
scm:
- git:
name: ''
url: 'https://review.openstack.org/openstack/{gerrit-repo}'
refspec: $GERRIT_REFSPEC
branches:
- $GERRIT_BRANCH
choosing-strategy: gerrit
submodule:
disable: false
tracking: true
recursive: true
clean:
# we don't clean and re-initialize the repo
before: false
# we don't clean a workspace, so we need to remove rpms manually
wipe-workspace: false
## list of gerrit events to trigger build job
- build-gerrit-events: &build-gerrit-events
name: 'build-gerrit-events'
trigger-on:
- change-merged-event
- draft-published-event
- patchset-created-event:
exclude-trivial-rebase: true
exclude-no-code-change: true
## configuration of gerrit event for the smoke multijob
## smoke multijob should run on patchset, draft events
- smoke-gerrit-events: &smoke-gerrit-events
name: 'smoke-gerrit-events'
trigger-on:
- draft-published-event
- patchset-created-event:
exclude-trivial-rebase: true
exclude-no-code-change: true
## configuration of gerrit event for the bvt multijob
## bvt multijob should run only on merge event
- bvt-gerrit-events: &bvt-gerrit-events
name: 'bvt-gerrit-events'
trigger-on:
- change-merged-event
## the main part of gerrit section as yaml anchor
- generic-gerrit-projects: &generic-gerrit-projects
name: 'generic-gerrit-projects'
projects:
- project-compare-type: 'PLAIN'
project-pattern: 'openstack/{gerrit-repo}'
branches:
- branch-compare-type: 'ANT'
branch-pattern: '{gerrit-branch}'
forbidden-file-paths:
- compare-type: 'ANT'
pattern: 'docs/**'
- compare-type: 'ANT'
pattern: 'specs/**'
silent: false
override-votes: true
server-name: 'review.openstack.org'
custom-url: '* $JOB_NAME $BUILD_URL'
escape-quotes: true
readable-message: true
skip-vote:
successful: false
failed: false
unstable: true
notbuilt: true
## properties for smoke, bvt and nightly multijobs
- runner-properties: &runner-properties
name: 'runner-properties'
properties:
- heavy-job:
weight: 1
- build-blocker:
use-build-blocker: true
blocking-jobs:
- '{build-name}'
block-level: 'GLOBAL'
queue-scanning: 'BUILDABLE'
## properties for test jobs
- test-properties: &test-properties
name: 'test-properties'
properties:
- heavy-job:
weight: 1
- throttle:
max-per-node: 0
max-total: 0
categories:
- testplugin
option: category
## parameters for smoke, bvt and nightly multijobs
- runner-parameters: &runner-parameters
name: 'runner-parameters'
parameters:
- bool:
name: DEBUG
default: true
description: "Set -x (xtrace) for jobs' bash scripts"
- bool:
name: UPDATE_MASTER
default: true
description: 'turns on updating the fuel master node to the maintenance update'
- string:
name: MIRROR
default: 'http://mirror.seed-cz1.fuel-infra.org'
description: 'mirror for package repositories. this mirror is optimized for CZ'
- string:
name: UPDATE_FUEL_MIRROR
default: "${{MIRROR}}/mos-repos/centos/mos7.0-centos6-fuel/security/x86_64/ ${{MIRROR}}/mos-repos/centos/mos7.0-centos6-fuel/updates/x86_64/"
description: 'repositories to update fuel master node'
- string:
name: MIRROR_UBUNTU
default: 'deb ${{MIRROR}}/pkgs/ubuntu/ trusty main universe multiverse|deb ${{MIRROR}}/pkgs/ubuntu/ trusty-updates main universe multiverse'
description: 'proposed repositories to update ubuntu cluster'
- string:
name: DEB_UPDATES
default: "mos-updates,deb ${{MIRROR}}/mos/ubuntu/dists/mos7.0-updates main restricted"
description: 'ubuntu-updates repositories for master ui'
- string:
name: DEB_SECURITY
default: "mos-security,deb ${{MIRROR}}/mos/ubuntu/dists/mos7.0-security main restricted"
description: 'ubuntu-security repositories for master ui'
- string:
name: PLUGIN_VERSION
default: '1.0'
description: 'The version of the plugin stored in common storage'
- string:
name: PLUGIN_PATH
default: ''
description: 'The path to the plugin package on storage'
- string:
name: GERRIT_REFSPEC
default: 'refs/heads/{gerrit-branch}'
description: 'Refspecs for commits in fuel-qa gerrit separated with spaces'
- string:
name: GERRIT_BRANCH
default: 'origin/{gerrit-branch}'
description: 'The branch for fuel-qa gerrit'
- string:
name: ENV_PREFIX
default: '{fuel-release}.{plugin-name}'
description: 'The name of the devops env. Needed for the existing mode of devops to work properly'
- bool:
name: BONDING
default: false
- bool:
name: USE_SNAPSHOTS
default: false
description: 'An existing environment will be used'
## parameters for jobs created per test-group and custom test job
- test-parameters: &test-parameters
name: 'test-parameters'
parameters:
- bool:
name: DEBUG
default: true
description: "Set -x (xtrace) for jobs' bash scripts"
- string:
name: TEST_GROUP
default: '{testgroup}'
- string:
name: ISO_FILE
default: '{iso-file}'
description: 'ISO file name that is on the tpi-s1 in /storage/downloads'
- string:
name: ISO_STORAGE
default: '/storage/downloads'
description: 'Storage for iso files'
- string:
name: ISO_VERSION
description: 'Container for storing an iso build number to output it as the job name'
- string:
name: PLUGIN_DISTRIBUTION
default: ''
- string:
name: PLUGIN_VERSION
default: '{plugin-version}'
description: 'The version of the plugin packages stored in common storage'
- string:
name: PLUGIN_PATH
default: ''
- bool:
name: UPDATE_MASTER
default: true
description: 'turns on updating the fuel master node to the maintenance update'
- string:
name: MIRROR
default: 'http://mirror.seed-cz1.fuel-infra.org'
description: 'mirror for package repositories. this mirror is optimized for CZ'
- string:
name: UPDATE_FUEL_MIRROR
default: "${{MIRROR}}/mos-repos/centos/mos7.0-centos6-fuel/security/x86_64/ ${{MIRROR}}/mos-repos/centos/mos7.0-centos6-fuel/updates/x86_64/"
description: 'repositories to update fuel master node'
- string:
name: MIRROR_UBUNTU
default: 'deb ${{MIRROR}}/pkgs/ubuntu/ trusty main universe multiverse|deb ${{MIRROR}}/pkgs/ubuntu/ trusty-updates main universe multiverse'
description: 'proposed repositories to update ubuntu cluster'
- string:
name: DEB_UPDATES
default: "mos-updates,deb ${{MIRROR}}/mos/ubuntu/dists/mos7.0-updates main restricted"
description: 'ubuntu-updates repositories for master ui'
- string:
name: DEB_SECURITY
default: "mos-security,deb ${{MIRROR}}/mos/ubuntu/dists/mos7.0-security main restricted"
description: 'ubuntu-security repositories for master ui'
- string:
name: OPENSTACK_RELEASE
default: 'Ubuntu'
description: 'Openstack release (CentOS, Ubuntu)'
- string:
name: GERRIT_REFSPEC
default: 'refs/heads/{gerrit-branch}'
description: 'Refspecs for commits in fuel-qa gerrit separated with spaces'
- string:
name: GERRIT_BRANCH
default: 'origin/{gerrit-branch}'
description: 'The branch for fuel-qa gerrit'
- string:
name: NODE_VOLUME_SIZE
default: '512'
- string:
name: NODES_COUNT
default: '10'
description: 'Amount of nodes in the test lab'
- string:
name: ADMIN_NODE_MEMORY
default: '4096'
description: 'Amount of virtual RAM for the admin node'
- string:
name: SLAVE_NODE_MEMORY
default: '4096'
description: 'Amount of virtual RAM per slave node'
- string:
name: ADMIN_NODE_CPU
default: '4'
description: 'Amount of virtual CPUs for the admin node'
- string:
name: SLAVE_NODE_CPU
default: '4'
description: 'Amount of virtual CPUs per slave node'
- string:
name: ENV_PREFIX
default: '{fuel-release}.{plugin-name}'
description: 'The name of the devops env. Needed for the existing mode of devops to work properly'
- bool:
name: BONDING
default: false
- bool:
name: USE_SNAPSHOTS
default: false
description: 'An existing environment will be used'
- project:
name: 'predefined_parameters'
fuel-release: '7.0'
plugin-version: '1.0'
plugin-name: 'testplugin'
build-name: '{fuel-release}.{plugin-name}.{plugin-version}.build'
smoke-name: '{fuel-release}.{plugin-name}.{plugin-version}.smoke'
bvt-name: '{fuel-release}.{plugin-name}.{plugin-version}.bvt'
regression-name: '{fuel-release}.{plugin-name}.{plugin-version}.regression'
nightly-name: '{fuel-release}.{plugin-name}.{plugin-version}.nightly'
custom-name: '{fuel-release}.{plugin-name}.{plugin-version}.custom'
nightly-timer: 'H 21 * * *'
regression-timer: 'H 19 * * *'
iso-file: 'fuel-7.0-301-2015-09-22_20-01-53.iso'
gerrit-repo: 'fuel-plugin-testplugin'
gerrit-branch: 'master'
email-to: 'devops@testcompany.com'
released-plugin-path: '/storage/testplugin/released/testplugin-1.0.noarch.rpm'
testgroup:
- testplugin_bvt
- testplugin_smoke
- install_testplugin
jobs:
- '{build-name}'
- '{custom-name}'
- '{smoke-name}'
- '{bvt-name}'
- defaults:
name: global
disabled: false
node: 'testplugin'
logrotate:
daysToKeep: 7
numToKeep: 10
artifactDaysToKeep: 7
artifactNumToKeep: 10
<<: *test-properties
<<: *test-parameters
<<: *common-scm
builders:
- copyartifact:
project: '{build-name}'
which-build: last-successful
- shell:
!include-raw-escape builders/testplugin.prepare.sh
- shell:
!include-raw-escape builders/testplugin.test.sh
wrappers:
- ansicolor:
colormap: xterm
- timeout:
fail: true
timeout: 240
publishers:
- postbuildscript:
builders:
- shell: env > properties
# need to delete packages, because we don't wipe workspace,
# but packages could be duplicated
- shell: rm -f .*rpm.*
script-only-if-succeeded: False
- archive:
artifacts: 'build.properties'
allow-empty: false
- archive:
artifacts: 'properties'
allow-empty: false
- archive:
artifacts: '**/nosetests.xml'
allow-empty: true
fingerprint: true
- archive:
artifacts: 'logs/*'
allow-empty: true
fingerprint: true
- xunit:
types:
- junit:
pattern: '**/nosetests.xml'
skip-if-no-test-files: true
- email:
recipients: '{email-to}'
# job for building plugin package
- job-template:
name: '{build-name}'
node: 'runner'
concurrent: true
disabled: false
description: |
'<a href=https://github.com/openstack/{gerrit-repo}>
Build {plugin-name} plugin from fuel-plugins project</a>'
<<: *common-scm
triggers:
- gerrit:
<<: *build-gerrit-events
<<: *generic-gerrit-projects
parameters:
- string:
name: 'GERRIT_REFSPEC'
default: 'refs/heads/{gerrit-branch}'
properties:
- heavy-job:
weight: 1
builders:
- shell:
!include-raw-escape './builders/build-plugin.sh'
- shell:
!include-raw-escape './builders/rpm-check.sh'
publishers:
- postbuildscript:
builders:
- shell: env > build.properties
script-only-if-succeeded: False
- archive:
artifacts: '*.rpm'
allow-empty: false
- archive:
artifacts: 'build.properties'
allow-empty: false
- email:
recipients: '{email-to}'
# jobs for system tests
- job-template:
name: '{custom-name}'
description: 'The custom test for {fuel-release}.{plugin-name}.{plugin-version}'
concurrent: true
- job-template:
name: '{smoke-name}'
disabled: false
description: 'The Smoke test for {fuel-release}.{plugin-name}.{plugin-version}'
concurrent: true
project-type: multijob
node: runner
<<: *runner-parameters
<<: *runner-properties
scm: []
triggers:
- gerrit:
<<: *smoke-gerrit-events
<<: *generic-gerrit-projects
builders:
- copyartifact:
project: '{build-name}'
which-build: last-successful
stable: true
- multijob:
name: 'Smoke tests for testplugin'
condition: SUCCESSFUL
projects:
- name: '{fuel-release}.{plugin-name}.{plugin-version}.testplugin_smoke'
current-parameters: true
kill-phase-on: NEVER
publishers:
- postbuildscript:
script-only-if-succeeded: False
builders:
- shell: env > smoke.properties
- archive:
artifacts: 'build.properties'
allow-empty: false
- archive:
artifacts: 'smoke.properties'
allow-empty: false
- email:
recipients: '{email-to}'
- job-template:
name: '{bvt-name}'
description: 'The BVT test for {fuel-release}.{plugin-name}.{plugin-version}'
disabled: false
concurrent: true
project-type: multijob
node: runner
<<: *runner-parameters
<<: *runner-properties
scm: []
triggers:
- gerrit:
<<: *bvt-gerrit-events
<<: *generic-gerrit-projects
builders:
- copyartifact:
project: '{build-name}'
which-build: last-successful
stable: true
- multijob:
name: 'Smoke tests for testplugin'
condition: SUCCESSFUL
projects:
- name: '{fuel-release}.{plugin-name}.{plugin-version}.testplugin'
current-parameters: true
kill-phase-on: NEVER
publishers:
- postbuildscript:
script-only-if-succeeded: False
builders:
- shell: env > bvt.properties
- archive:
artifacts: '*.rpm'
allow-empty: false
- archive:
artifacts: 'build.properties'
allow-empty: false
- archive:
artifacts: 'bvt.properties'
allow-empty: false
- email:
recipients: '{email-to}'

Binary file not shown (image removed, 163 KiB).

Binary file not shown (image removed, 34 KiB).

Binary file not shown (image removed, 27 KiB).


@ -1,13 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

@ -1 +0,0 @@
Subproject commit 381e3848da092b7e143be8ab3bb689af62f442bc


@ -1,13 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1,52 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import time
from fuelweb_test import logger
from fuelweb_test.settings import DEPLOYMENT_MODE
from fuelweb_test.helpers.checkers import check_repo_managment
def assign_net_provider(obj, pub_all_nodes=False, ceph_value=False):
"""Assign neutron with tunneling segmentation"""
segment_type = 'tun'
obj.cluster_id = obj.fuel_web.create_cluster(
name=obj.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings={
"net_provider": 'neutron',
"net_segment_type": segment_type,
"assign_to_all_nodes": pub_all_nodes,
"images_ceph": ceph_value
}
)
return obj.cluster_id
def deploy_cluster(obj):
"""
Deploy cluster with additional time for waiting on node's availability
"""
try:
obj.fuel_web.deploy_cluster_wait(
obj.cluster_id, check_services=False)
except:
nailgun_nodes = obj.env.fuel_web.client.list_cluster_nodes(
obj.env.fuel_web.get_last_created_cluster())
time.sleep(420)
for n in nailgun_nodes:
check_repo_managment(
obj.env.d_env.get_ssh_to_remote(n['ip']))
logger.info('ip is {0}, name is {1}'.format(n['ip'], n['name']))


@ -1,53 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import time
from fuelweb_test import logger
from fuelweb_test.helpers import checkers
from fuelweb_test.settings import PLUGIN_PATH
from proboscis.asserts import assert_true
import openstack
def prepare_test_plugin(
obj, slaves=None, pub_all_nodes=False, ceph_value=False):
"""Copy necessary packages to the master node and install them"""
obj.env.revert_snapshot("ready_with_%d_slaves" % slaves)
# copy plugin to the master node
checkers.upload_tarball(
obj.env.d_env.get_admin_remote(),
PLUGIN_PATH, '/var')
# install plugin
checkers.install_plugin_check_code(
obj.env.d_env.get_admin_remote(),
plugin=os.path.basename(PLUGIN_PATH))
# prepare fuel
openstack.assign_net_provider(obj, pub_all_nodes, ceph_value)
def activate_plugin(obj):
"""Enable plugin in settings"""
plugin_name = 'testplugin'
msg = "Plugin couldn't be enabled. Check plugin version. Test aborted"
assert_true(
obj.fuel_web.check_plugin_exists(obj.cluster_id, plugin_name),
msg)
logger.debug('we have plugin element')
option = {'metadata/enabled': True, }
obj.fuel_web.update_plugin_data(obj.cluster_id, plugin_name, option)


@ -1,67 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sys
import os
import re
from nose.plugins import Plugin
from paramiko.transport import _join_lingering_threads
class CloseSSHConnectionsPlugin(Plugin):
"""Closes all paramiko's ssh connections after each test case
Plugin fixes proboscis disability to run cleanup of any kind.
'afterTest' calls _join_lingering_threads function from paramiko,
which stops all threads (set the state to inactive and joins for 10s)
"""
name = 'closesshconnections'
def options(self, parser, env=os.environ):
super(CloseSSHConnectionsPlugin, self).options(parser, env=env)
def configure(self, options, conf):
super(CloseSSHConnectionsPlugin, self).configure(options, conf)
self.enabled = True
def afterTest(self, *args, **kwargs):
_join_lingering_threads()
def import_tests():
from tests import test_smoke_bvt
from tests import integration_tests
def run_tests():
from proboscis import TestProgram # noqa
import_tests()
# Run Proboscis and exit.
TestProgram(
addplugins=[CloseSSHConnectionsPlugin()]
).run_and_exit()
if __name__ == '__main__':
sys.path.append(sys.path[0]+"/fuel-qa")
import_tests()
from fuelweb_test.helpers.patching import map_test
if any(re.search(r'--group=patching_master_tests', arg)
for arg in sys.argv):
map_test('master')
elif any(re.search(r'--group=patching.*', arg) for arg in sys.argv):
map_test('environment')
run_tests()


@ -1,13 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1,13 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


@ -1,89 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
import os.path
from proboscis import test
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test import logger
from fuelweb_test.tests.base_test_case import SetupEnvironment
from fuelweb_test.tests.base_test_case import TestBasic
from helpers import plugin
from helpers import openstack
@test(groups=["plugins"])
class TestPlugin(TestBasic):
ostf_msg = 'OSTF tests passed successfully.'
cluster_id = ''
@test(depends_on=[SetupEnvironment.prepare_slaves_2],
groups=["install_testplugin"])
@log_snapshot_after_test
def install_testplugin(self):
"""Install Plugin and create cluster
Scenario:
1. Revert snapshot "ready_with_2_slaves"
2. Upload a plugin to the master node
3. Install a plugin
4. Create a cluster
5. Enable the plugin in the cluster's settings
Duration 20 min
"""
plugin.prepare_test_plugin(self, slaves=2)
@test(depends_on=[SetupEnvironment.prepare_slaves_2],
groups=["plugin_smoke"])
@log_snapshot_after_test
def plugin_smoke(self):
"""Deploy a cluster with a plugin
Scenario:
1. Revert snapshot "ready_with_2_slaves"
2. Create a cluster
3. Add a node with controller role
4. Add a node with compute role
5. Enable the plugin
6. Deploy the cluster with the plugin enabled
Duration 90 min
"""
plugin.prepare_test_plugin(self, slaves=2)
# enable plugin in settings
plugin.activate_plugin(self)
self.fuel_web.update_nodes(
self.cluster_id,
{
'slave-01': ['controller'],
'slave-02': ['compute'],
})
# deploy cluster
openstack.deploy_cluster(self)
self.fuel_web.run_ostf(
cluster_id=self.cluster_id,
should_fail=2,
failed_test_name=[('Check network connectivity from instance via floating IP'),('Launch instance with file injection')]
)


@ -1,496 +0,0 @@
#!/bin/sh
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# functions
INVALIDOPTS_ERR=100
NOJOBNAME_ERR=101
NOISOPATH_ERR=102
NOTASKNAME_ERR=103
NOWORKSPACE_ERR=104
DEEPCLEAN_ERR=105
MAKEISO_ERR=106
NOISOFOUND_ERR=107
COPYISO_ERR=108
SYMLINKISO_ERR=109
CDWORKSPACE_ERR=110
ISODOWNLOAD_ERR=111
INVALIDTASK_ERR=112
# Defaults
export REBOOT_TIMEOUT=${REBOOT_TIMEOUT:-5000}
export ALWAYS_CREATE_DIAGNOSTIC_SNAPSHOT=${ALWAYS_CREATE_DIAGNOSTIC_SNAPSHOT:-true}
# Export specified settings
if [ -z $NODE_VOLUME_SIZE ]; then export NODE_VOLUME_SIZE=350; fi
if [ -z $OPENSTACK_RELEASE ]; then export OPENSTACK_RELEASE=Ubuntu; fi
if [ -z $ENV_NAME ]; then export ENV_NAME="contrail"; fi
if [ -z $ADMIN_NODE_MEMORY ]; then export ADMIN_NODE_MEMORY=4096; fi
if [ -z $ADMIN_NODE_CPU ]; then export ADMIN_NODE_CPU=4; fi
if [ -z $SLAVE_NODE_MEMORY ]; then export SLAVE_NODE_MEMORY=4096; fi
if [ -z $SLAVE_NODE_CPU ]; then export SLAVE_NODE_CPU=4; fi
# Init and update submodule
git submodule init && git submodule update
sudo /sbin/iptables -F
sudo /sbin/iptables -t nat -F
sudo /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
ShowHelp() {
cat << EOF
System Tests Script
It can perform several actions depending on the Jenkins JOB_NAME it is run from,
or it can take names from exported environment variables or command line options
if you do need to override them.
-w (dir) - Path to workspace where fuelweb git repository was checked out.
Uses Jenkins' WORKSPACE if not set
-e (name) - Directly specify environment name used in tests
Uses the ENV_NAME variable if set.
-j (name) - Name of this job. Determines ISO name, Task name and used by tests.
Uses Jenkins' JOB_NAME if not set
-v - Do not use virtual environment
-V (dir) - Path to python virtual environment
-i (file) - Full path to ISO file to build or use for tests.
Made from iso dir and name if not set.
-t (name) - Name of task this script should perform. Should be one of defined ones.
Taken from Jenkins' job's suffix if not set.
-o (str) - Allows you any extra command line option to run test job if you
want to use some parameters.
-a (str) - Allows you to pass NOSE_ATTR to the test job if you want
to use some parameters.
-A (str) - Allows you to pass NOSE_EVAL_ATTR if you want to enter attributes
as python expressions.
-m (name) - Use this mirror to build ISO from.
Uses 'srt' if not set.
-U - ISO URL for tests.
Null by default.
-r (yes/no) - Should the built ISO file be placed with a build number tag and
symlinked to the last build, or just copied over the last file.
-b (num) - Allows you to override Jenkins' build number if you need to.
-l (dir) - Path to logs directory. Can be set by the LOGS_DIR environment variable.
Uses WORKSPACE/logs if not set.
-d - Dry run mode. Only show what would be done and do nothing.
Useful for debugging.
-k - Keep previously created test environment before tests run
-K - Keep test environment after tests are finished
-h - Show this help page
Most variables are guessed from Jenkins' job name but can be overridden
by an exported variable before the script is run or by one of the command line options.
You can override following variables using export VARNAME="value" before running this script
WORKSPACE - path to directory where Fuelweb repository was checked out by Jenkins or manually
JOB_NAME - name of Jenkins job that determines which task should be done and ISO file name.
If task name is "iso" it will make iso file
Other defined names will run Nose tests using previously built ISO file.
ISO file name is taken from job name prefix
Task name is taken from job name suffix
Separator is one dot '.'
For example if JOB_NAME is:
mytest.somestring.iso
ISO name: mytest.iso
Task name: iso
If run with such a JOB_NAME, an ISO file with the name mytest.iso will be created
If JOB_NAME is:
mytest.somestring.node
ISO name: mytest.iso
Task name: node
If the script is run with this JOB_NAME, node tests will use the ISO file mytest.iso.
First you should run the mytest.somestring.iso job to create mytest.iso.
Then you can run the mytest.somestring.node job to start tests using mytest.iso.
EOF
}
GlobalVariables() {
# where built iso's should be placed
# use hardcoded default if not set before by export
ISO_DIR="${ISO_DIR:=/var/www/fuelweb-iso}"
# name of iso file
# taken from jenkins job prefix
# if not set before by variable export
if [ -z "${ISO_NAME}" ]; then
ISO_NAME="${JOB_NAME%.*}.iso"
fi
# full path where iso file should be placed
# make from iso name and path to iso shared directory
# if it was not overridden by options or export
if [ -z "${ISO_PATH}" ]; then
ISO_PATH="${ISO_DIR}/${ISO_NAME}"
fi
# what task should be ran
# it's taken from jenkins job name suffix if not set by options
if [ -z "${TASK_NAME}" ]; then
TASK_NAME="${JOB_NAME##*.}"
fi
# do we want to keep iso's for each build or just copy over single file
ROTATE_ISO="${ROTATE_ISO:=yes}"
# choose mirror to build iso from. Default is 'srt' for Saratov's mirror
# you can change mirror by exporting USE_MIRROR variable before running this script
USE_MIRROR="${USE_MIRROR:=srt}"
# only show what commands would be executed but do nothing
# this feature is useful if you want to debug this script's behaviour
DRY_RUN="${DRY_RUN:=no}"
VENV="${VENV:=yes}"
}
GetoptsVariables() {
while getopts ":w:j:i:t:o:a:A:m:U:r:b:V:l:dkKe:v:h" opt; do
case $opt in
w)
WORKSPACE="${OPTARG}"
;;
j)
JOB_NAME="${OPTARG}"
;;
i)
ISO_PATH="${OPTARG}"
;;
t)
TASK_NAME="${OPTARG}"
;;
o)
TEST_OPTIONS="${TEST_OPTIONS} ${OPTARG}"
;;
a)
NOSE_ATTR="${OPTARG}"
;;
A)
NOSE_EVAL_ATTR="${OPTARG}"
;;
m)
USE_MIRROR="${OPTARG}"
;;
U)
ISO_URL="${OPTARG}"
;;
r)
ROTATE_ISO="${OPTARG}"
;;
b)
BUILD_NUMBER="${OPTARG}"
;;
V)
VENV_PATH="${OPTARG}"
;;
l)
LOGS_DIR="${OPTARG}"
;;
k)
KEEP_BEFORE="yes"
;;
K)
KEEP_AFTER="yes"
;;
e)
ENV_NAME="${OPTARG}"
;;
d)
DRY_RUN="yes"
;;
v)
VENV="no"
;;
h)
ShowHelp
exit 0
;;
\?)
echo "Invalid option: -$OPTARG"
ShowHelp
exit $INVALIDOPTS_ERR
;;
:)
echo "Option -$OPTARG requires an argument."
ShowHelp
exit $INVALIDOPTS_ERR
;;
esac
done
}
CheckVariables() {
if [ -z "${JOB_NAME}" ]; then
echo "Error! JOB_NAME is not set!"
exit $NOJOBNAME_ERR
fi
if [ -z "${ISO_PATH}" ]; then
echo "Error! ISO_PATH is not set!"
exit $NOISOPATH_ERR
fi
if [ -z "${TASK_NAME}" ]; then
echo "Error! TASK_NAME is not set!"
exit $NOTASKNAME_ERR
fi
if [ -z "${WORKSPACE}" ]; then
echo "Error! WORKSPACE is not set!"
exit $NOWORKSPACE_ERR
fi
}
MakeISO() {
# Create iso file to be used in tests
# clean previous garbage
if [ "${DRY_RUN}" = "yes" ]; then
echo make deep_clean
else
make deep_clean
fi
ec="${?}"
if [ "${ec}" -gt "0" ]; then
echo "Error! Deep clean failed!"
exit $DEEPCLEAN_ERR
fi
# create ISO file
export USE_MIRROR
if [ "${DRY_RUN}" = "yes" ]; then
echo make iso
else
make iso
fi
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error making ISO!"
exit $MAKEISO_ERR
fi
if [ "${DRY_RUN}" = "yes" ]; then
ISO="${WORKSPACE}/build/iso/fuel.iso"
else
ISO="`ls ${WORKSPACE}/build/iso/*.iso | head -n 1`"
# check that ISO file exists
if [ ! -f "${ISO}" ]; then
echo "Error! ISO file not found!"
exit $NOISOFOUND_ERR
fi
fi
# copy ISO file to storage dir
# if rotation is enabled and build number is available
# save iso to tagged file and symlink to the last build
# if rotation is not enabled just copy iso to iso_dir
if [ "${ROTATE_ISO}" = "yes" -a "${BUILD_NUMBER}" != "" ]; then
# copy iso file to shared dir with revision tagged name
NEW_BUILD_ISO_PATH="${ISO_PATH%.iso}_${BUILD_NUMBER}.iso"
if [ "${DRY_RUN}" = "yes" ]; then
echo cp "${ISO}" "${NEW_BUILD_ISO_PATH}"
else
cp "${ISO}" "${NEW_BUILD_ISO_PATH}"
fi
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Copy ${ISO} to ${NEW_BUILD_ISO_PATH} failed!"
exit $COPYISO_ERR
fi
# create symlink to the last built ISO file
if [ "${DRY_RUN}" = "yes" ]; then
echo ln -sf "${NEW_BUILD_ISO_PATH}" "${ISO_PATH}"
else
ln -sf "${NEW_BUILD_ISO_PATH}" "${ISO_PATH}"
fi
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Create symlink from ${NEW_BUILD_ISO_PATH} to ${ISO_PATH} failed!"
exit $SYMLINKISO_ERR
fi
else
# just copy file to shared dir
if [ "${DRY_RUN}" = "yes" ]; then
echo cp "${ISO}" "${ISO_PATH}"
else
cp "${ISO}" "${ISO_PATH}"
fi
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Copy ${ISO} to ${ISO_PATH} failed!"
exit $COPYISO_ERR
fi
fi
if [ "${ec}" -gt "0" ]; then
echo "Error! Copy ISO from ${ISO} to ${ISO_PATH} failed!"
exit $COPYISO_ERR
fi
echo "Finished building ISO: ${ISO_PATH}"
exit 0
}
CdWorkSpace() {
# chdir into workspace or fail if could not
if [ "${DRY_RUN}" != "yes" ]; then
cd "${WORKSPACE}"
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Cannot cd to WORKSPACE!"
exit $CDWORKSPACE_ERR
fi
else
echo cd "${WORKSPACE}"
fi
}
RunTest() {
# Run test selected by task name
# check if iso file exists
if [ ! -f "${ISO_PATH}" ]; then
if [ -z "${ISO_URL}" -a "${DRY_RUN}" != "yes" ]; then
echo "Error! File ${ISO_PATH} not found and no ISO_URL (-U key) for downloading!"
exit $NOISOFOUND_ERR
else
if [ "${DRY_RUN}" = "yes" ]; then
echo wget -c ${ISO_URL} -O ${ISO_PATH}
else
echo "No ${ISO_PATH} found. Trying to download file."
wget -c ${ISO_URL} -O ${ISO_PATH}
rc=$?
if [ $rc -ne 0 ]; then
echo "Failed to fetch ISO from ${ISO_URL}"
exit $ISODOWNLOAD_ERR
fi
fi
fi
fi
if [ -z "${VENV_PATH}" ]; then
VENV_PATH="/home/jenkins/venv-nailgun-tests"
fi
# run python virtualenv
if [ "${VENV}" = "yes" ]; then
if [ "${DRY_RUN}" = "yes" ]; then
echo . $VENV_PATH/bin/activate
else
. $VENV_PATH/bin/activate
fi
fi
if [ "${ENV_NAME}" = "" ]; then
ENV_NAME="${JOB_NAME}_system_test"
fi
if [ "${LOGS_DIR}" = "" ]; then
LOGS_DIR="${WORKSPACE}/logs"
fi
if [ ! -f "$LOGS_DIR" ]; then
mkdir -p $LOGS_DIR
fi
export ENV_NAME
export LOGS_DIR
export ISO_PATH
if [ "${KEEP_BEFORE}" != "yes" ]; then
# remove previous environment
if [ "${DRY_RUN}" = "yes" ]; then
echo dos.py erase "${ENV_NAME}"
else
if [ $(dos.py list | grep "^${ENV_NAME}\$") ]; then
dos.py erase "${ENV_NAME}"
fi
fi
fi
# gather additional option for this nose test run
OPTS=""
if [ -n "${NOSE_ATTR}" ]; then
OPTS="${OPTS} -a ${NOSE_ATTR}"
fi
if [ -n "${NOSE_EVAL_ATTR}" ]; then
OPTS="${OPTS} -A ${NOSE_EVAL_ATTR}"
fi
if [ -n "${TEST_OPTIONS}" ]; then
OPTS="${OPTS} ${TEST_OPTIONS}"
fi
# run python test set to create environments, deploy and test product
if [ "${DRY_RUN}" = "yes" ]; then
echo export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}${WORKSPACE}"
echo python plugin_test/run_tests.py -q --nologcapture --with-xunit ${OPTS}
else
export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}${WORKSPACE}"
echo ${PYTHONPATH}
python plugin_test/run_tests.py -q --nologcapture --with-xunit ${OPTS}
fi
ec=$?
if [ "${KEEP_AFTER}" != "yes" ]; then
# remove environment after tests
if [ "${DRY_RUN}" = "yes" ]; then
echo dos.py destroy "${ENV_NAME}"
else
dos.py destroy "${ENV_NAME}"
fi
fi
exit "${ec}"
}
RouteTasks() {
# this selector defines task names that are recognised by this script
# and runs corresponding jobs for them
# running any jobs should exit this script
case "${TASK_NAME}" in
test)
RunTest
;;
iso)
MakeISO
;;
*)
echo "Unknown task: ${TASK_NAME}!"
exit $INVALIDTASK_ERR
;;
esac
exit 0
}
# MAIN
# first we want to get variable from command line options
GetoptsVariables ${@}
# then we define global variables and there defaults when needed
GlobalVariables
# check do we have all critical variables set
CheckVariables
# first we chdir into our working directory unless we dry run
CdWorkSpace
# finally we can choose what to do according to TASK_NAME
RouteTasks


@ -1,139 +0,0 @@
#!/bin/bash
# Copyright 2014 OpenStack Foundation.
# Copyright 2014 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
MODULE_PATH=/usr/share/puppet/modules
function remove_module {
local SHORT_MODULE_NAME=$1
if [ -n "$SHORT_MODULE_NAME" ]; then
rm -Rf "$MODULE_PATH/$SHORT_MODULE_NAME"
else
echo "ERROR: remove_module requires a SHORT_MODULE_NAME."
fi
}
# Array of modules to be installed key:value is module:version.
declare -A MODULES
# Array of modules to be installed from source and without dependency resolution.
# key:value is source location, revision to checkout
declare -A SOURCE_MODULES
#NOTE: if we previously installed kickstandproject-ntp we nuke it here
# since puppetlabs-ntp and kickstandproject-ntp install to the same dir
if grep kickstandproject-ntp /etc/puppet/modules/ntp/Modulefile &> /dev/null; then
remove_module "ntp"
fi
# freenode #puppet 2012-09-25:
# 18:25 < jeblair> i would like to use some code that someone wrote,
# but it's important that i understand how the author wants me to use
# it...
# 18:25 < jeblair> in the case of the vcsrepo module, there is
# ambiguity, and so we are trying to determine what the author(s)
# intent is
# 18:30 < jamesturnbull> jeblair: since we - being PL - are the author
# - our intent was not to limit it's use and it should be Apache
# licensed
MODULES["puppetlabs-vcsrepo"]="1.2.0"
MODULES["puppetlabs-apt"]="1.6.0"
MODULES["puppetlabs-firewall"]="1.1.3"
MODULES["puppetlabs-concat"]="1.1.0"
MODULES["puppetlabs-mysql"]="2.3.1"
MODULES["puppetlabs-ntp"]="3.1.2"
MODULES["puppetlabs-postgresql"]="3.4.2"
MODULES["puppetlabs-rsync"]="0.3.1"
MODULES["puppetlabs-stdlib"]="4.5.1"
MODULES["puppetlabs-java_ks"]="1.2.6"
MODULES["puppetlabs-nodejs"]="0.7.1"
MODULES["puppetlabs-apache"]="1.4.1"
MODULES["maestrodev-rvm"]="1.11.0"
MODULES["thias-sysctl"]="1.0.0"
MODULES["thias-php"]="1.1.0"
MODULES["darin-zypprepo"]="1.0.1"
MODULES["elasticsearch/elasticsearch"]="0.4.0"
MODULES["ripienaar-module_data"]="0.0.3"
MODULES["rodjek-logrotate"]="1.1.1"
MODULES["saz-sudo"]="3.0.9"
MODULES["golja-gnupg"]="1.2.1"
MODULES["gnubilafrance-atop"]="0.0.4"
SOURCE_MODULES["https://github.com/iberezovskiy/puppet-mongodb"]="0.1"
SOURCE_MODULES["https://github.com/monester/puppet-bacula"]="v0.4.0.1"
SOURCE_MODULES["https://github.com/monester/puppet-libvirt"]="0.3.2-3"
SOURCE_MODULES["https://github.com/SergK/puppet-display"]="0.5.0"
SOURCE_MODULES["https://github.com/SergK/puppet-glusterfs"]="0.0.4"
SOURCE_MODULES["https://github.com/SergK/puppet-sshuserconfig"]="0.0.1"
SOURCE_MODULES["https://github.com/SergK/puppet-znc"]="0.0.9"
SOURCE_MODULES["https://github.com/teran/puppet-bind"]="0.5.1-hiera-debian-keys-controls-support"
SOURCE_MODULES["https://github.com/teran/puppet-mailman"]="0.1.4+user-fix"
SOURCE_MODULES["https://github.com/teran/puppet-nginx"]="0.1.1+ssl_ciphers(renew)"
MODULE_LIST=`puppet module list`
# Install all the modules
for MOD in ${!MODULES[*]} ; do
# If the module at the current version does not exist, upgrade or install it.
if ! echo $MODULE_LIST | grep "$MOD ([^v]*v${MODULES[$MOD]}" >/dev/null 2>&1
then
# Attempt a module upgrade. If that fails, try installing the module.
if ! puppet module upgrade $MOD --version ${MODULES[$MOD]} >/dev/null 2>&1
then
# This will get run in cron, so silence non-error output
echo "Installing ${MOD} ..."
puppet module install --target-dir $MODULE_PATH $MOD --version ${MODULES[$MOD]} >/dev/null
fi
fi
done
MODULE_LIST=`puppet module list`
# Make a second pass, just installing modules from source
for MOD in ${!SOURCE_MODULES[*]} ; do
# get the name of the module directory
if [ `echo $MOD | awk -F. '{print $NF}'` = 'git' ]; then
echo "Remote repos of the form repo.git are not supported: ${MOD}"
exit 1
fi
MODULE_NAME=`echo $MOD | awk -F- '{print $NF}'`
# set up git base command to use the correct path
GIT_CMD_BASE="git --git-dir=${MODULE_PATH}/${MODULE_NAME}/.git --work-tree ${MODULE_PATH}/${MODULE_NAME}"
# treat any occurrence of the module as a match
if ! echo $MODULE_LIST | grep "${MODULE_NAME}" >/dev/null 2>&1; then
# clone modules that are not installed
git clone $MOD "${MODULE_PATH}/${MODULE_NAME}"
else
if [ ! -d ${MODULE_PATH}/${MODULE_NAME}/.git ]; then
echo "Found directory ${MODULE_PATH}/${MODULE_NAME} that is not a git repo, deleting it and reinstalling from source"
remove_module $MODULE_NAME
echo "Cloning ${MODULE_PATH}/${MODULE_NAME} ..."
git clone $MOD "${MODULE_PATH}/${MODULE_NAME}"
elif [ `${GIT_CMD_BASE} remote show origin | grep 'Fetch URL' | awk -F'URL: ' '{print $2}'` != $MOD ]; then
echo "Found remote in ${MODULE_PATH}/${MODULE_NAME} that does not match desired remote ${MOD}, deleting dir and re-cloning"
remove_module $MODULE_NAME
git clone $MOD "${MODULE_PATH}/${MODULE_NAME}"
fi
fi
# fetch the latest refs from the repo
$GIT_CMD_BASE fetch
# make sure the correct revision is installed; use rev-list because rev-parse does not work with tags
if [ `${GIT_CMD_BASE} rev-list HEAD --max-count=1` != `${GIT_CMD_BASE} rev-list ${SOURCE_MODULES[$MOD]} --max-count=1` ]; then
# check out the correct revision
$GIT_CMD_BASE checkout ${SOURCE_MODULES[$MOD]}
fi
done

View File

@ -1,36 +0,0 @@
#!/bin/sh
set -xe
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get upgrade -y
apt-get install -y git puppet apt-transport-https tar
if [ -z "${PUPPET_MODULES_ARCHIVE}" ]; then
/etc/puppet/bin/install_modules.sh
else
MODULEPATH=$(puppet config print | awk -F':' '/^modulepath/{print $NF}')
if [ -f "${PUPPET_MODULES_ARCHIVE}" ]; then
tar xvf "${PUPPET_MODULES_ARCHIVE}" --strip-components=1 -C "${MODULEPATH}"
else
echo "${PUPPET_MODULES_ARCHIVE} is not a file. Quitting!"
exit 2
fi
fi
expect_hiera=$(puppet apply -vd --genconfig | awk '/ hiera_config / {print $3}')
if [ ! -f "${expect_hiera}" ]; then
echo "File ${expect_hiera} not found!"
if [ ! -f /etc/hiera.yaml ]; then
ln -s /etc/puppet/hiera/hiera-stub.yaml "${expect_hiera}"
else
echo "Found default /etc/hiera.yaml"
ln -s /etc/hiera.yaml "${expect_hiera}"
fi
fi
FACTER_PUPPET_APPLY=true FACTER_ROLE=puppetmaster puppet apply -vd /etc/puppet/manifests/site.pp
puppet agent --enable
puppet agent -vd --no-daemonize --onetime

View File

@ -1,20 +0,0 @@
Deployment in isolated environment
==================================
Requirements
------------
#) An already prepared tar.bz2 archive containing the Puppet modules, with the following structure::

    modules
      module1
      module2
      moduleN

Usage
-----
Call ``install_puppet_master.sh`` with ``PUPPET_MODULES_ARCHIVE`` set to the path of the archive::

    PUPPET_MODULES_ARCHIVE="/home/test/archive.tar.bz2" ./install_puppet_master.sh

It installs the modules from the archive and then runs the regular scripts used for environment deployment.
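
For reference, a minimal sketch of building such an archive (the paths below are illustrative and not taken from this repository); run it from the parent directory of ``modules``::

    tar cjvf /home/test/archive.tar.bz2 modules

The archive keeps ``modules`` as its single top-level directory, which matches the ``--strip-components=1`` extraction performed by ``install_puppet_master.sh``.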

View File

@ -1,291 +0,0 @@
---
apt::always_apt_update: true
apt::disable_keys: false
apt::purge_sources_list: true
apt::purge_sources_list_d: true
apt::purge_preferences_d: true
apt::update_timeout: 300
apt::sources:
mirror:
location: 'http://archive.ubuntu.com/ubuntu/'
release: "%{::lsbdistcodename}"
key: 'C0B21F32'
key_server: 'keyserver.ubuntu.com'
repos: 'main restricted universe multiverse'
include_src: false
include_deb: true
mirror_updates:
location: 'http://archive.ubuntu.com/ubuntu/'
release: "%{::lsbdistcodename}-updates"
key: 'C0B21F32'
key_server: 'keyserver.ubuntu.com'
repos: 'main restricted universe multiverse'
include_src: false
include_deb: true
devops:
location: 'http://mirror.fuel-infra.org/devops/ubuntu/'
release: '/'
key: '62BF6A9C1D2B45A2'
key_server: 'keyserver.ubuntu.com'
repos: ''
include_src: false
include_deb: true
docker:
location: 'https://get.docker.io/ubuntu'
release: 'docker'
key: 'A88D21E9'
key_server: 'keyserver.ubuntu.com'
repos: 'main'
include_src: false
include_deb: true
jenkins:
location: 'http://pkg.jenkins-ci.org/debian-stable/'
release: 'binary/'
key: 'D50582E6'
key_server: 'keyserver.ubuntu.com'
repos: ''
include_src: false
include_deb: true
elasticsearch:
location: 'http://packages.elasticsearch.org/elasticsearch/1.3/debian'
release: 'stable'
repos: 'main'
key: 'D88E42B4'
key_server: 'keyserver.ubuntu.com'
include_src: false
include_deb: true
atop::service: true
atop::interval: 60
yum::default:
'enabled': true
yum::purge: true
yum::repos:
'base':
'descr': 'CentOS-$releasever - Base'
'baseurl': 'http://mirror.centos.org/centos/$releasever/os/$basearch/'
'gpgcheck': true
'centosplus':
'descr': 'CentOS-$releasever - Plus'
'baseurl': 'http://mirror.centos.org/centos/$releasever/centosplus/$basearch/'
'gpgcheck': true
'contrib':
'descr': 'CentOS-$releasever - Contrib'
'baseurl': 'http://mirror.centos.org/centos/$releasever/contrib/$basearch/'
'gpgcheck': true
'epel':
'descr': 'epel $releasever'
'mirrorlist': 'https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch'
'gpgcheck': true
'gpgkey': 'https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6'
'extras':
'descr': 'CentOS-$releasever - Extras'
'baseurl': 'http://mirror.centos.org/centos/$releasever/extras/$basearch/'
'gpgcheck': true
'jpackage':
'descr': 'JPackage'
'mirrorlist': 'http://www.jpackage.org/mirrorlist.php?dist=generic&type=free&release=5.0'
'gpgcheck': true
'gpgkey': 'http://www.jpackage.org/jpackage.asc'
'updates':
'descr': 'CentOS-$releasever - Updates'
'baseurl': 'http://mirror.centos.org/centos/$releasever/updates/$basearch/'
'gpgcheck': true
'zabbix':
'descr': 'Zabbix Official Repository - $basearch'
'baseurl': 'http://repo.zabbix.com/zabbix/2.2/rhel/6/$basearch/'
'gpgcheck': true
'gpgkey': 'http://repo.zabbix.com/RPM-GPG-KEY-ZABBIX'
firewall:
known_networks:
- 10.108.0.0/16
external_hosts:
- 10.0.0.0/16
internal_networks:
- 172.18.0.0/16
local_networks:
- 192.168.1.0/24
mysql:
root_password: 'peNTZ7GA2Zr90y'
system::root_email: 'root@example.com'
system::mta_local_only: true
system::timezone: 'UTC'
system::root_password: '$6$rqlo82B/$nKaHJ2oNy08spMfByg1Pk.U/fnJvhOdWAMe2MS53zW8yw3ZIGGMoiqz98s/DDeeOzKrc2iR7WWoOfN5RoVnd9/'
system::install_tools: true
fuel_project::jenkins::slave::nailgun_db:
- 'nailgun'
- 'nailgun0'
- 'nailgun1'
- 'nailgun2'
- 'nailgun3'
- 'nailgun4'
- 'nailgun5'
- 'nailgun6'
- 'nailgun7'
fuel_project::jenkins::slave::seed_cleanup_dirs:
-
dir: '/var/www/fuelweb-iso'
ttl: 10
pattern: 'fuel-*'
-
dir: '/srv/downloads'
ttl: 1
pattern: 'fuel-*'
fuel_project::jenkins::slave::docker_package: 'lxc-docker-1.5.0'
fuel_project::jenkins::slave::jenkins_swarm_slave: true
fuel_project::jenkins::slave::ruby_version: '2.1.5'
jenkins::slave::authorized_keys:
'jenkins@mytestserver':
type: ssh-rsa
key: 'AAAAB3NzaC1yc2EAAAADAQABAAABAQDNWgMf6IisSY0HK0mpHkgVhRxHsDom81PJ6W3jAgcSBWY1Kz/2vL98SK91ppgYmnDa2uLbchY2Xk9ciefMpm7Qq5EO6oSPKJJhADyCYAX/7YomZIy4Xu7HxEh0Z6VCLt0DymwN4tBS9JuTISvEm17BLgtis/AemA2eRIl0JAdPf9rmQps4KP5AhG60ucdtTKD0y8TFK95ateplgcq9JLRInhrdg/vnJLbKnV7lP1g5dfY1rm6bum7P+Jwf2tdTOa0b5ucK/+iWVbyPO4Z2afPpblh4Vynfe2wMzzpGAp3n5MwtH2EZmSXm/B6/CkgOFROsmWH8MzQEvNBGHhw+ONR9'
jenkins::swarm_slave::master: 'https://jenkins.test-company.org/'
jenkins::swarm_slave::user: 'jenkins-robotson'
jenkins::swarm_slave::password: 'BTRfeHyibQlM2M'
jenkins::swarm_slave::labels: '14_04'
fuel_project::jenkins::slave::known_hosts:
'review.openstack.org':
host: 'review.openstack.org'
port: 29418
mysql::client::package_name: 'percona-server-client-5.6'
mysql::server::package_name: 'percona-server-server-5.6'
mysql::server::root_password: 'WpUrXaC92cZQ4XHMLpfraTRsl16ZtoTu'
puppet::master::autosign: true
puppet::master::firewall_allow_sources:
'1000 - puppet master connections from 10.0.0.0/8':
source: '10.0.0.0/8'
'1000 - puppet master connections from 172.16.0.0/12':
source: '172.16.0.0/12'
'1000 - puppet master connections from 192.168.0.0/16':
source: '192.168.0.0/16'
sysctl::base::values:
net.ipv4.ip_forward:
value: '0'
net.ipv4.tcp_syncookies:
value: 1
net.ipv4.tcp_window_scaling:
value: 1
net.ipv4.tcp_congestion_control:
value: cubic
net.ipv4.tcp_no_metrics_save:
value: 1
net.ipv4.tcp_moderate_rcvbuf:
value: 1
fs.inotify.max_user_instances:
value: 256
#passed to nginx::package class
nginx::package_name: nginx-full
nginx::package_source: nginx
nginx::package_ensure: present
nginx::manage_repo: false
#passed to nginx::service class
nginx::configtest_enable: true
nginx::service_ensure: running
nginx::service_restart: 'nginx -t && /etc/init.d/nginx restart'
nginx::config::temp_dir: /tmp
nginx::config::run_dir: /var/nginx
nginx::config::conf_template: fuel_project/nginx/nginx.conf.erb
nginx::config::proxy_conf_template: nginx/conf.d/proxy.conf.erb
nginx::config::confd_purge: true
nginx::config::vhost_purge: true
nginx::config::worker_processes: "%{processorcount}"
nginx::config::worker_connections: 1024
nginx::config::worker_rlimit_nofile: 1024
nginx::config::types_hash_max_size: 1024
nginx::config::types_hash_bucket_size: 512
nginx::config::names_hash_bucket_size: 64
nginx::config::names_hash_max_size: 512
nginx::config::multi_accept: 'off'
nginx::config::events_use: false
nginx::config::sendfile: 'on'
nginx::config::keepalive_timeout: 65
nginx::config::http_tcp_nodelay: 'on'
nginx::config::http_tcp_nopush: 'off'
nginx::config::gzip: 'on'
nginx::config::server_tokens: 'off'
nginx::config::spdy: 'off'
nginx::config::ssl_stapling: 'off'
nginx::config::proxy_redirect: 'off'
nginx::config::proxy_set_header:
- 'Host $host'
- 'X-Real-IP $remote_addr'
- 'X-Forwarded-For $proxy_add_x_forwarded_for'
nginx::config::proxy_cache_path: '/var/lib/nginx/cache'
nginx::config::proxy_cache_levels: '2'
nginx::config::proxy_cache_keys_zone: 'static:500m'
nginx::config::proxy_cache_max_size: 500m
nginx::config::proxy_cache_inactive: 20m
nginx::config::fastcgi_cache_path: false
nginx::config::fastcgi_cache_levels: '1'
nginx::config::fastcgi_cache_keys_zone: 'd3:100m'
nginx::config::fastcgi_cache_max_size: 500m
nginx::config::fastcgi_cache_inactive: 20m
nginx::config::fastcgi_cache_key: false
nginx::config::fastcgi_cache_use_stale: false
nginx::config::client_body_temp_path: /var/nginx/client_body_temp
nginx::config::client_body_buffer_size: 128k
nginx::config::client_max_body_size: 10m
nginx::config::proxy_temp_path: /var/nginx/proxy_temp
nginx::config::proxy_connect_timeout: '90'
nginx::config::proxy_send_timeout: '90'
nginx::config::proxy_read_timeout: '90'
nginx::config::proxy_buffers: '32 4k'
nginx::config::proxy_http_version: '1.0'
nginx::config::proxy_buffer_size: 8k
nginx::config::proxy_headers_hash_bucket_size: '256'
nginx::config::logdir: /var/log/nginx
nginx::config::mail: false
# Used to set conn_limit
nginx::config::http_cfg_append:
'limit_conn_zone': '$binary_remote_addr zone=addr:10m'
nginx::config::nginx_error_log: /var/log/nginx/error.log
nginx::config::http_access_log: /var/log/nginx/access.log
nginx::config::root_group: root
# Specific owner for sites-available directory
nginx::config::sites_available_owner: root
nginx::config::sites_available_group: root
nginx::config::sites_available_mode: '0644'
# Owner for all other files
nginx::config::global_owner: root
nginx::config::global_group: root
nginx::config::global_mode: '0644'
nginx::config::pid: /var/run/nginx.pid
nginx::config::conf_dir: /etc/nginx
nginx::config::super_user: true
nginx::config::daemon_user: www-data
logrotate::rules:
'upstart':
path: '/var/log/upstart/*.log'
rotate_every: 'day'
rotate: '7'
missingok: true
compress: true
ifempty: false
create: false
delaycompress: true

View File

@ -1,15 +0,0 @@
---
:backends:
- yaml
:yaml:
:datadir: /var/lib/hiera
:json:
:datadir: /var/lib/hiera
:hierarchy:
- nodes/%{::clientcert}
- roles/%{::role}
- locations/%{::location}
- common
:logger: console
:merge_behavior: deeper

View File

@ -1,50 +0,0 @@
---
apt::sources:
mirror:
location: 'http://mirrors.kha.mirantis.net/ubuntu/'
release: "%{::lsbdistcodename}"
key: 'C0B21F32'
key_server: 'keyserver.ubuntu.com'
repos: 'main restricted universe multiverse'
include_src: false
include_deb: true
mirror_updates:
location: 'http://mirrors.kha.mirantis.net/ubuntu/'
release: "%{::lsbdistcodename}-updates"
key: 'C0B21F32'
key_server: 'keyserver.ubuntu.com'
repos: 'main restricted universe multiverse'
include_src: false
include_deb: true
devops:
location: 'http://osci-mirror-kha.kha.mirantis.net/devops/ubuntu/'
release: '/'
key: '62BF6A9C1D2B45A2'
key_server: 'keyserver.ubuntu.com'
repos: ''
include_src: false
include_deb: true
docker:
location: 'https://get.docker.io/ubuntu'
release: 'docker'
key: 'A88D21E9'
key_server: 'keyserver.ubuntu.com'
repos: 'main'
include_src: false
include_deb: true
jenkins:
location: 'http://pkg.jenkins-ci.org/debian-stable/'
release: 'binary/'
key: 'D50582E6'
key_server: 'keyserver.ubuntu.com'
repos: ''
include_src: false
include_deb: true
elasticsearch:
location: 'http://packages.elasticsearch.org/elasticsearch/1.3/debian'
release: 'stable'
repos: 'main'
key: 'D88E42B4'
key_server: 'keyserver.ubuntu.com'
include_src: false
include_deb: true

View File

@ -1,50 +0,0 @@
---
apt::sources:
mirror:
location: 'http://mirrors.msk.mirantis.net/ubuntu/'
release: "%{::lsbdistcodename}"
key: 'C0B21F32'
key_server: 'keyserver.ubuntu.com'
repos: 'main restricted universe multiverse'
include_src: false
include_deb: true
mirror_updates:
location: 'http://mirrors.msk.mirantis.net/ubuntu/'
release: "%{::lsbdistcodename}-updates"
key: 'C0B21F32'
key_server: 'keyserver.ubuntu.com'
repos: 'main restricted universe multiverse'
include_src: false
include_deb: true
devops:
location: 'http://osci-mirror-msk.msk.mirantis.net/devops/ubuntu/'
release: '/'
key: '62BF6A9C1D2B45A2'
key_server: 'keyserver.ubuntu.com'
repos: ''
include_src: false
include_deb: true
docker:
location: 'https://get.docker.io/ubuntu'
release: 'docker'
key: 'A88D21E9'
key_server: 'keyserver.ubuntu.com'
repos: 'main'
include_src: false
include_deb: true
jenkins:
location: 'http://pkg.jenkins-ci.org/debian-stable/'
release: 'binary/'
key: 'D50582E6'
key_server: 'keyserver.ubuntu.com'
repos: ''
include_src: false
include_deb: true
elasticsearch:
location: 'http://packages.elasticsearch.org/elasticsearch/1.3/debian'
release: 'stable'
repos: 'main'
key: 'D88E42B4'
key_server: 'keyserver.ubuntu.com'
include_src: false
include_deb: true

View File

@ -1,93 +0,0 @@
---
classes:
- '::fuel_project::jenkins::master'
fuel_project::jenkins::master::install_label_dumper: true
fuel_project::jenkins::master::install_plugins: true
fuel_project::jenkins::master::service_fqdn: 'jenkins.test-company.org'
jenkins::master::install_groovy: true
jenkins::master::jenkins_cli_file: '/var/cache/jenkins/war/WEB-INF/jenkins-cli.jar'
jenkins::master::jenkins_cli_tries: '6'
jenkins::master::jenkins_cli_try_sleep: '30'
jenkins::master::jenkins_libdir: '/var/lib/jenkins'
jenkins::master::jenkins_management_email: 'jenkin@example.com'
jenkins::master::jenkins_management_login: 'jenkins-manager'
jenkins::master::jenkins_management_name: 'Jenkins Master'
jenkins::master::jenkins_management_password: 'jenkins_password'
jenkins::master::jenkins_s2m_acl: true
jenkins::master::security_model: 'ldap'
jenkins::master::security_opt_params: ''
jenkins::master::service_fqdn: 'jenkins-product.test.local'
jenkins::master::ssl_key_file: '/etc/ssl/jenkins.key'
jenkins::master::ssl_key_file_contents: |
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC2OB+nAmxkHPht
j9CBXr1LU/n7nh37WUDGahYN775RLcR3NUZZHz6hoc7hyvEPO1PI5Mm2y0L8yREJ
meDFRl1yknP1Pe/vSlP+1+73l9UlpfV8uNwJ3DfAPUgxwYjOO0zMMu6Nih4zuZ2N
H2LHM3laJAWeeBCTCp4SxCW1XeMlKqfdT4/T3eXp5WdJ1+EtP6rya9Zivx+HHh6X
dIfKTypGiZiPiCewQnd0a2MM0X0IjtcvalldV4M9llAojkVze1idIBRu9c7t914C
fZAsbSSe2Q8s8YYAmymvxWrchz+CVs8GPoGx1iPSM4zBZFikJXaWT8IVk3TcFTHo
k9AzFtYdAgMBAAECggEBAJdwr4W6suDFXwaXhp9uYH4xbcpbz+ksdXQxiODORmrr
UaQNR8kb+Y6Vjv4DDzMsiGanFqnv5l12sc078R2jbFijNPI2JqnGKWbciYOG0aO3
eP3OGTmspz0C8XRAt3VGvX3cOnyxtIYilrlJw2tw8UMkOvNIL+Y05ckM8ZX5UKV6
lVJ30HO2jR6T5yM/Gc3s3gL/X5bHcaQDLWjhqZP411zULQPsWP8+bbXv8f+jZqcg
jg4oK1mC2MtGjy83DU5CqxZKPiISXm48RDDe8HAUrnkEMQAnHPdQymMv+d9kDv2y
6sp1ov3BQCfZm0mHkSW+wdnzwjNnPHZZ2FdvRz7V3GECgYEA5eFYdA9vuTbFhtod
foxHzmqZXBQM6ToXYEKFgdNYvHDISXNdsB4DyUT4V46bxpTMLynMqvM59/u16jaP
lo4DkkRLG/GxvGeFM/0odPMnoGTL0HBMJiYr7U3tgtEu2t2RqmVc2tpDPzQ0Mwaj
dqHPFId1p1AHeeX1MxeuTLPkA5UCgYEAyuxvfgoBfsDad5E3EbFrilrRJRbb3yxQ
hgilaISaSDn0MWZ3zE+pTCwuA9HYmjwr4GCeO8kSCpnhI4BKASMa4p0SLsTr0i/9
OUulLi3ZieWA2mqekqUo/CaccMhMfGr4AVQ3WeK3cjKXj/j/WnKTfHSB5uL2bvFg
XoqfXcOUZmkCgYBjkkdBBkqrXBkU/zcVUGft9eh1pM2u3BWyAT5Y7JWcEfH/NrRX
C7kyHei/7Cp3So5iw2U+itoKGwJB794kJWFQorox4W/OHrzotvgmKAh7Bg3uPCYP
xCr0v/Nn3XnBHYXx27Prq+zC3Lbbfz2grhfHWaFRlm2WlE+wEMrTuHvEPQKBgEM9
XSRShHRPyxRblffS5mON/Edh77Ffqb8AFm8voT/VlEjaP1AABYUsDoNNgYx568AJ
w+Tjl4rTunpdBCikTUBR87hzoAChzjKyEiXfI3pCBhRZx/mnqJEE6kmk1VNUzqEC
GuU57rd0dCxMwbBizuQqZvDuu+G/McOiA3S6Xe4hAoGAYs01BdHeksEyZK+q0TIR
cHJOyX0ae4ClfXyJ6moQbPr9uoDs0g+3p8IZtiEwVatpmQB2DIoE6jF81rsKBK68
tHQtn8ywdYDgJbqhx2Y4XP+9CeNhsRAya8SxFmQMirdtWNltMNvTXHFEoVWbf9Yz
Sb2NcH2bS0mjAlLmBCPYqsA=
-----END PRIVATE KEY-----
jenkins::master::jenkins_ssh_private_key_contents: |
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAzVoDH+iIrEmNBytJqR5IFYUcR7A6JvNTyelt4wIHEgVmNSs/
9ry/fEivdaaYGJpw2tri23IWNl5PXInnzKZu0KuRDuqEjyiSYQA8gmAF/+2KJmSM
uF7ux8RIdGelQi7dA8psDeLQUvSbkyErxJtewS4LYrPwHpgNnkSJdCQHT3/a5kKb
OCj+QIRutLnHbUyg9MvExSveWrXqZYHKvSS0SJ4a3YP75yS2yp1e5T9YOXX2Na5u
m7puz/icH9rXUzmtG+bnCv/ollW8jzuGdmnz6W5YeFcp33tsDM86RgKd5+TMLR9h
GZkl5vwevwpIDhUTrJlh/DM0BLzQRh4cPjjUfQIDAQABAoIBAGQO0OjyR+4S5Imy
uPCTlbIOqunvX1ZtR81hVS7AZSuNv/B2Q3N5IqBvVjcwVnneftDUyKb+nv4c0/SW
KYEZM3OvtT2cXbzXmwNytwkburCqUJ9GbR7E+voRlPBLNEXcScq4DhByDOnu0ANP
rWDeB7x/MAMHBCAUHMaaRJN3nqxIEvvzKK0B3GpRsVgGLDTQ4wX9ojmPQ7H8QQVV
ZnfiJxhXoXbcQUudwn2etMOQpnOzq+fUSj2U6U+pxnkQBcdb2TUqLVOdKqzV4Xwc
u/mqmtMRb6cjRpH+J1ajZqgbn6yw756TmP/LT5Jb0l/tI4b/HrPlXuXSJHtLFvQE
D00tK+ECgYEA+Gk447CteVDmkKU/kvDh9PVbZRsuF24w+LK6VLLxSp94gGIlHyNN
WdamBZviBIOnyz8x3WPd8u2LnkBla7L4iJgh/v5XgAK4I5ES94VGiEnEWJDXVKOY
JW9mRH7CElmhRbhVuMQoEDonhiLNLnRwwwjF79dSlANpJxioMCVOMkUCgYEA06AH
sx5gzdCt1OAgR2XPANMLdOgufjWsQiZtCIlLQTxzEjmF3uwsWy+ugVQIFZnwIUxw
5O41uDji1lwE/ond15oBMFB97unzsgFW3uHSV7yWJv1SVP7LSXZnBIRhwqsozYNL
3py9k/EvuZ4P+EoR8F3COC5gg62qxO5L2P3O2NkCgYAJ+e/W9RmCbcVUuc470IDC
nbf174mCV2KQGl1xWV5naNAmF8r13S0WFpDEWOZS2Ba9CuStx3z6bJ/W0y8/jAh/
M9zpqL1K3tEWXJUua6PRhWTlSavcMlXB6x9oUM7qfb8EVcrbiMUzIaLEuFEVNIfy
zT9lynf+icSHVW4rwNPLIQKBgCJ0VYyWD5Cyvvp/mwHE05UAx0a7XoZx2p/SfcH8
CGKQovN+pgsLTJV0B+dKdR5/N5dUSLUdC2X47QWVacK/U3z8t+DT2g0BzglXKnuT
LJnYPGIQsEziRtqpClCz9O6qyzPagom13y+s/uYrk9IKzSzjNvHKqzAFIF57paGo
gPrRAoGAClmcMYF4m48mnMAj5htFQg1UlE8abKygoWRZO/+0uh9BrZeQ3jsWnUWW
3TWXEjB/RazdPB0PWfc3kjruz8IhDsLKQYPX+h8JuLO8ZL20Mxo7o3bs/GQnDrw1
g/PCKBJscu0RQxsa16tt5aX/IM82cJR6At3tTUyUpiwqNsVClJs=
-----END RSA PRIVATE KEY-----
jenkins::master::jenkins_ssh_public_key_contents: 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDNWgMf6IisSY0HK0mpHkgVhRxHsDom81PJ6W3jAgcSBWY1Kz/2vL98SK91ppgYmnDa2uLbchY2Xk9ciefMpm7Qq5EO6oSPKJJhADyCYAX/7YomZIy4Xu7HxEh0Z6VCLt0DymwN4tBS9JuTISvEm17BLgtis/AemA2eRIl0JAdPf9rmQps4KP5AhG60ucdtTKD0y8TFK95ateplgcq9JLRInhrdg/vnJLbKnV7lP1g5dfY1rm6bum7P+Jwf2tdTOa0b5ucK/+iWVbyPO4Z2afPpblh4Vynfe2wMzzpGAp3n5MwtH2EZmSXm/B6/CkgOFROsmWH8MzQEvNBGHhw+ONR9'
jenkins::master::jenkins_address: '127.0.0.1'
jenkins::master::jenkins_proto: 'http'
jenkins::master::jenkins_port: '8080'
jenkins::master::jenkins_java_args: '-Xmx1500m -Xms1024m -Dorg.apache.commons.jelly.tags.fmt.timeZone=Europe/Moscow'
jenkins::master::jjb_username: 'jjb_user'
jenkins::master::jjb_password: 'jjb_pass'
jenkins::master::firewall_allow_sources:
'1000 - jenkins connections from 0.0.0.0/0':
source: '0.0.0.0/0'
#jenkins::master::nginx_log_format: 'proxy'

View File

@ -1,13 +0,0 @@
---
classes:
- '::fuel_project::jenkins::slave'
- '::sudo'
# keep current sudo configuration
sudo::purge: false
sudo::config_file_replace: false
# https://bugs.launchpad.net/fuel/+bug/1458842
sudo::configs:
'tcpdump':
'content': '%sudo ALL=(ALL) NOPASSWD: /usr/sbin/tcpdump'

View File

@ -1,3 +0,0 @@
---
classes:
- '::fuel_project::puppet::master'

View File

@ -1,62 +0,0 @@
# Defaults
Exec {
path => '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
provider => 'shell',
}
File {
replace => true,
}
if($::osfamily == 'Debian') {
Exec['apt_update'] -> Package <| |>
}
stage { 'pre' :
before => Stage['main'],
}
$gitrevision = '$Id$'
notify { "Revision : ${gitrevision}" :}
file { '/var/lib/puppet' :
ensure => 'directory',
owner => 'puppet',
group => 'puppet',
mode => '0755',
}
file { '/var/lib/puppet/gitrevision.txt' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0444',
content => $gitrevision,
require => File['/var/lib/puppet'],
}
# Node definitions
node /jenkins-slave\.test-company\.org/ {
class { '::fuel_project::jenkins::slave' :
external_host => true,
}
}
node /jenkins\.test-company\.org/ {
class { '::fuel_project::jenkins::master' :}
}
# Default
node default {
$classes = hiera('classes', '')
if ($classes) {
validate_array($classes)
hiera_include('classes')
} else {
notify { 'Default node invocation' :}
}
}

View File

@ -1,9 +0,0 @@
# Class: firewall_defaults::post
#
class firewall_defaults::post {
firewall { '9999 drop all':
proto => 'all',
action => 'drop',
before => undef,
}
}

View File

@ -1,39 +0,0 @@
# Class: firewall_defaults::pre
#
class firewall_defaults::pre {
include firewall_defaults::post
case $::osfamily {
'Debian': {
package { 'iptables-persistent' :
ensure => 'present',
before => Resources['firewall']
}
}
default: { }
}
resources { 'firewall' :
purge => true,
}
Firewall {
before => Class['firewall_defaults::post'],
}
firewall { '000 accept all icmp':
proto => 'icmp',
action => 'accept',
require => undef,
}->
firewall { '001 accept all to lo interface':
proto => 'all',
iniface => 'lo',
action => 'accept',
}->
firewall { '002 accept related established rules':
proto => 'all',
ctstate => ['RELATED', 'ESTABLISHED'],
action => 'accept',
}
}

View File

@ -1,52 +0,0 @@
# Class: fuel_project::apps::firewall
#
class fuel_project::apps::firewall {
$rules = hiera_hash('fuel_project::apps::firewall::rules', undef)
if ($rules) {
case $::osfamily {
'Debian': {
package { 'iptables-persistent' :
ensure => 'present',
before => Resources['firewall']
}
}
default: { }
}
resources { 'firewall' :
purge => true,
}
firewall { '0000 - accept all icmp' :
proto => 'icmp',
action => 'accept',
require => undef,
}->
firewall { '0001 - accept all to lo interface' :
proto => 'all',
iniface => 'lo',
action => 'accept',
}->
firewall { '0002 - accept related established rules' :
proto => 'all',
ctstate => ['RELATED', 'ESTABLISHED'],
action => 'accept',
}
create_resources(firewall, $rules, {
before => Firewall['9999 - drop all'],
require => [
Firewall['0000 - accept all icmp'],
Firewall['0001 - accept all to lo interface'],
Firewall['0002 - accept related established rules'],
]
})
firewall { '9999 - drop all' :
proto => 'all',
action => 'drop',
before => undef,
}
}
}

View File

@ -1,105 +0,0 @@
# Class: fuel_project::apps::lodgeit
#
class fuel_project::apps::lodgeit (
$ssl_certificate_contents,
$ssl_key_contents,
$ssl_certificate_file = '/etc/ssl/certs/paste.crt',
$ssl_key_file = '/etc/ssl/private/paste.key',
$service_fqdn = [$::fqdn],
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = 'proxy',
$paste_header_contents = '<h1>Lodge It</h1>',
) {
if (! defined(Class['::nginx'])) {
class { '::fuel_project::nginx' :}
}
class { '::lodgeit::web' :}
file { $ssl_certificate_file :
ensure => 'present',
mode => '0700',
owner => 'root',
group => 'root',
content => $ssl_certificate_contents,
}
file { $ssl_key_file :
ensure => 'present',
mode => '0700',
owner => 'root',
group => 'root',
content => $ssl_key_contents,
}
file { '/usr/share/lodgeit/lodgeit/views/header.html' :
ensure => 'present',
content => $paste_header_contents,
require => Class['::lodgeit::web'],
}
::nginx::resource::vhost { 'paste' :
ensure => 'present',
server_name => $service_fqdn,
listen_port => 80,
www_root => '/var/www',
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
location_cfg_append => {
return => "301 https://${service_fqdn}\$request_uri",
},
}
::nginx::resource::vhost { 'paste-ssl' :
ensure => 'present',
listen_port => 443,
ssl_port => 443,
server_name => $service_fqdn,
ssl => true,
ssl_cert => $ssl_certificate_file,
ssl_key => $ssl_key_file,
ssl_cache => 'shared:SSL:10m',
ssl_session_timeout => '10m',
ssl_stapling => true,
ssl_stapling_verify => true,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
uwsgi => '127.0.0.1:4634',
location_cfg_append => {
uwsgi_intercept_errors => 'on',
'error_page 403' => '/fuel-infra/403.html',
'error_page 404' => '/fuel-infra/404.html',
'error_page 500 502 504' => '/fuel-infra/5xx.html',
},
require => [
File[$ssl_certificate_file],
File[$ssl_key_file],
],
}
::nginx::resource::location { 'paste-ssl-static' :
ensure => 'present',
vhost => 'paste-ssl',
ssl => true,
ssl_only => true,
location => '/static/',
www_root => '/usr/share/lodgeit/lodgeit',
location_cfg_append => {
'error_page 403' => '/fuel-infra/403.html',
'error_page 404' => '/fuel-infra/404.html',
'error_page 500 502 504' => '/fuel-infra/5xx.html',
},
}
::nginx::resource::location { 'paste-error-pages' :
ensure => 'present',
vhost => 'paste-ssl',
location => '~ ^\/(mirantis|fuel-infra)\/(403|404|5xx)\.html$',
ssl => true,
ssl_only => true,
www_root => '/usr/share/error_pages',
}
}

View File

@ -1,111 +0,0 @@
# Class: fuel_project::apps::mirror
#
class fuel_project::apps::mirror (
$autoindex = 'on',
$dir = '/var/www/mirror',
$dir_group = 'www-data',
$dir_owner = 'www-data',
$firewall_allow_sources = {},
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = 'proxy',
$port = 80,
$rsync_mirror_lockfile = '/var/run/rsync_mirror.lock',
$rsync_mirror_lockfile_rw = '/var/run/rsync_mirror_sync.lock',
$rsync_rw_share_comment = 'Fuel mirror sync',
$rsync_share_comment = 'Fuel mirror rsync share',
$rsync_writable_share = true,
$service_aliases = [],
$service_fqdn = "mirror.${::fqdn}",
$sync_hosts_allow = [],
) {
if(!defined(Class['rsync'])) {
class { 'rsync' :
package_ensure => 'present',
}
}
ensure_resource('user', $dir_owner, {
ensure => 'present',
})
ensure_resource('group', $dir_group, {
ensure => 'present',
})
file { $dir :
ensure => 'directory',
owner => $dir_owner,
group => $dir_group,
mode => '0755',
require => [
Class['nginx'],
User[$dir_owner],
Group[$dir_group],
],
}
if (!defined(Class['::rsync::server'])) {
class { '::rsync::server' :
gid => 'root',
uid => 'root',
use_chroot => 'yes',
use_xinetd => false,
}
}
::rsync::server::module{ 'mirror':
comment => $rsync_share_comment,
uid => 'nobody',
gid => 'nogroup',
list => 'yes',
lock_file => $rsync_mirror_lockfile,
max_connections => 100,
path => $dir,
read_only => 'yes',
write_only => 'no',
require => File[$dir],
}
if ($rsync_writable_share) {
::rsync::server::module{ 'mirror-sync':
comment => $rsync_rw_share_comment,
uid => $dir_owner,
gid => $dir_group,
hosts_allow => $sync_hosts_allow,
hosts_deny => ['*'],
incoming_chmod => '0755',
outgoing_chmod => '0644',
list => 'yes',
lock_file => $rsync_mirror_lockfile_rw,
max_connections => 100,
path => $dir,
read_only => 'no',
write_only => 'no',
require => [
File[$dir],
User[$dir_owner],
Group[$dir_group],
],
}
}
if (!defined(Class['::fuel_project::nginx'])) {
class { '::fuel_project::nginx' :}
}
::nginx::resource::vhost { 'mirror' :
ensure => 'present',
www_root => $dir,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
server_name => [
$service_fqdn,
"mirror.${::fqdn}",
join($service_aliases, ' ')
],
location_cfg_append => {
autoindex => $autoindex,
},
}
}

View File

@ -1,135 +0,0 @@
# == Class: fuel_project::apps::mirror_npm
#
class fuel_project::apps::mirror_npm (
$cron_frequency = '*/5',
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = 'proxy',
$npm_dir = '/var/www/npm_mirror',
$parallelism = 10,
$recheck = false,
$service_fqdn = $::fqdn,
$upstream_mirror = 'http://registry.npmjs.org/',
) {
validate_bool(
$recheck,
)
$packages = [
'ruby',
'ruby-dev',
]
package { $packages :
ensure => installed,
}
package { 'npm-mirror' :
ensure => '0.0.1',
provider => gem,
require => Package[$packages],
}
ensure_resource('file', '/var/www', {
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
})
file { $npm_dir :
ensure => 'directory',
owner => 'npmuser',
group => 'www-data',
require => [
User['npmuser'],
File['/var/www'],
]
}
user { 'npmuser' :
ensure => 'present',
home => '/var/lib/npm',
comment => 'Service used to run npm mirror synchronization',
managehome => true,
system => true,
}
file { '/etc/npm_mirror/' :
ensure => 'directory',
owner => 'npmuser',
group => 'npmuser',
require => User['npmuser'],
}
file { '/etc/npm_mirror/config.yml' :
ensure => 'present',
owner => 'npmuser',
group => 'npmuser',
mode => '0644',
content => template('fuel_project/apps/npm_mirror.erb'),
replace => true,
require => [
User['npmuser'],
File['/etc/npm_mirror/'],
],
}
::nginx::resource::vhost { 'npm_mirror' :
ensure => 'present',
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
www_root => $npm_dir,
server_name => [$service_fqdn],
index_files => ['index.json'],
use_default_location => false,
}
::nginx::resource::location { 'etag' :
ensure => present,
location => '~ \.etag$',
vhost => 'npm_mirror',
location_custom_cfg => {
return => '404',
},
}
::nginx::resource::location { 'json' :
ensure => present,
location => '~ /index\.json$',
vhost => 'npm_mirror',
location_custom_cfg => {
default_type => 'application/json',
},
}
::nginx::resource::location { 'all' :
ensure => present,
location => '= /-/all/since',
vhost => 'npm_mirror',
location_custom_cfg => {
rewrite => '^ /-/all/',
},
}
file { '/var/run/npm' :
ensure => 'directory',
owner => 'npmuser',
group => 'root',
require => User['npmuser'],
}
cron { 'npm-mirror' :
minute => $cron_frequency,
command => 'flock -n /var/run/npm/mirror.lock timeout -k 2m 30m npm-mirror /etc/npm_mirror/config.yml 2>&1 | logger -t npm-mirror',
environment => 'PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin',
user => 'npmuser',
require => [
User['npmuser'],
File['/etc/npm_mirror/config.yml'],
],
}
}

View File

@ -1,113 +0,0 @@
# Class: fuel_project::apps::mirror_pypi
#
class fuel_project::apps::mirror_pypi (
$cron_frequency = '*/5',
$mirror_delete_packages = true,
$mirror_dir = '/var/www/pypi_mirror',
$mirror_master = 'https://pypi.python.org',
$mirror_stop_on_error = true,
$mirror_timeout = 10,
$mirror_workers = 5,
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = 'proxy',
$service_fqdn = $::fqdn,
) {
validate_bool(
$mirror_delete_packages,
$mirror_stop_on_error,
)
$packages = [
'python-bandersnatch-wrapper',
'python-pip',
]
ensure_packages($packages)
package { 'bandersnatch' :
ensure => '1.8',
provider => pip,
require => Package[$packages],
}
ensure_resource('file', '/var/www', {
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
})
file { $mirror_dir :
ensure => 'directory',
owner => 'pypi',
group => 'www-data',
require => [
User['pypi'],
File['/var/www'],
]
}
user { 'pypi' :
ensure => 'present',
home => '/var/lib/pypi',
comment => 'Service used to run pypi mirror synchronization',
managehome => true,
system => true,
}
file { '/etc/bandersnatch.conf' :
ensure => 'present',
owner => 'pypi',
group => 'pypi',
mode => '0600',
content => template('fuel_project/apps/bandersnatch.conf.erb'),
require => [
User['pypi'],
Package[$packages],
]
}
# Configure webserver to serve the web/ sub-directory of the mirror.
::nginx::resource::vhost { $service_fqdn :
ensure => 'present',
autoindex => 'on',
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
www_root => "${mirror_dir}/web",
server_name => [$service_fqdn],
vhost_cfg_append => {
charset => 'utf-8',
}
}
::nginx::resource::location { 'pypi_mirror_root' :
ensure => 'present',
vhost => $service_fqdn,
www_root => "${mirror_dir}/web",
}
file { '/var/run/bandersnatch' :
ensure => 'directory',
owner => 'pypi',
group => 'root',
require => [
User['pypi'],
Package[$packages],
]
}
cron { 'pypi-mirror' :
minute => $cron_frequency,
command => 'flock -n /var/run/bandersnatch/mirror.lock timeout -k 2m 30m /usr/bin/run-bandersnatch 2>&1 | logger -t pypi-mirror',
user => 'pypi',
environment => 'PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin',
require => [
User['pypi'],
Package[$packages],
]
}
}

View File

@ -1,94 +0,0 @@
# == Class: fuel_project::apps::mirror_rubygems
#
class fuel_project::apps::mirror_rubygems (
$cron_frequency = '*/5',
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = 'proxy',
$parallelism = '10',
$rubygems_dir = '/var/www/rubygems_mirror',
$service_fqdn = $::fqdn,
$upstream_mirror = 'http://rubygems.org',
) {
package { 'rubygems-mirror' :
ensure => '1.0.1',
provider => gem,
}
ensure_resource('file', '/var/www', {
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
})
file { $rubygems_dir :
ensure => 'directory',
owner => 'rubygems',
group => 'www-data',
require => [
User['rubygems'],
File['/var/www'],
]
}
user { 'rubygems' :
ensure => 'present',
home => '/var/lib/rubygems',
comment => 'Service used to run rubygems mirror synchronization',
managehome => true,
system => true,
}
file { '/var/lib/rubygems/.gem' :
ensure => 'directory',
owner => 'rubygems',
group => 'rubygems',
require => User['rubygems'],
}
file { '/var/lib/rubygems/.gem/.mirrorrc' :
ensure => 'present',
owner => 'rubygems',
group => 'rubygems',
mode => '0600',
content => template('fuel_project/apps/rubygems_mirrorrc.erb'),
replace => true,
require => [
User['rubygems'],
File['/var/lib/rubygems/.gem'],
],
}
::nginx::resource::vhost { $service_fqdn :
ensure => 'present',
autoindex => 'on',
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
www_root => $rubygems_dir,
server_name => [$service_fqdn]
}
::nginx::resource::location { 'rubygems_mirror_root' :
ensure => present,
vhost => $service_fqdn,
www_root => $rubygems_dir,
}
file { '/var/run/rubygems' :
ensure => 'directory',
owner => 'rubygems',
group => 'root',
require => User['rubygems'],
}
cron { 'rubygems-mirror' :
minute => $cron_frequency,
command => 'flock -n /var/run/rubygems/mirror.lock timeout -k 2m 30m gem mirror 2>&1 | logger -t rubygems-mirror',
environment => 'PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin',
user => 'rubygems',
}
}

View File

@ -1,16 +0,0 @@
# Class: fuel_project::apps::monitoring::mysql::server
#
class fuel_project::apps::monitoring::mysql::server {
zabbix::item { 'mysql' :
content => 'puppet:///modules/fuel_project/apps/monitoring/mysql/mysql_items.conf',
}
file { '/var/lib/zabbix/.my.cnf' :
ensure => 'present',
source => '/root/.my.cnf',
require => Class['::mysql::server'],
owner => 'zabbix',
group => 'zabbix',
mode => '0600',
}
}

View File

@ -1,64 +0,0 @@
# Class: fuel_project::apps::partnerappliance
#
class fuel_project::apps::partnerappliance (
$authorized_keys,
$group = 'appliance',
$home_dir = '/var/www/appliance',
$data_dir = "${home_dir}/data",
$user = 'appliance',
$vhost = 'appliance',
$service_fqdn = "${vhost}.${::domain}",
) {
# manage user $HOME manually, since we don't need .bash* stuff
# but only ~/.ssh/
file { $home_dir :
ensure => 'directory',
owner => $user,
group => $group,
mode => '0755',
require => User[$user]
}
file { $data_dir :
ensure => 'directory',
owner => $user,
group => $group,
mode => '0755',
require => [
File[$home_dir],
]
}
user { $user :
ensure => 'present',
system => true,
managehome => false,
home => $home_dir,
shell => '/bin/sh',
}
$opts = [
"command=\"rsync --server -rlpt --delete . ${data_dir}\"",
'no-agent-forwarding',
'no-port-forwarding',
'no-user-rc',
'no-X11-forwarding',
'no-pty',
]
create_resources(ssh_authorized_key, $authorized_keys, {
ensure => 'present',
user => $user,
require => [
File[$home_dir],
User[$user],
],
options => $opts,
})
::nginx::resource::vhost { $vhost :
server_name => [ $service_fqdn ],
www_root => $data_dir,
}
}

View File

@ -1,70 +0,0 @@
# Class: fuel_project::apps::partnershare
#
class fuel_project::apps::partnershare (
$authorized_key,
$apply_firewall_rules = false,
$htpasswd_content = '',
) {
# used to download magnet links
ensure_packages(['python-seed-client'])
if (!defined(Class['::fuel_project::common'])) {
class { '::fuel_project::common':
external_host => $apply_firewall_rules,
}
}
if (!defined(Class['::fuel_project::nginx'])) {
class { '::fuel_project::nginx': }
}
user { 'partnershare':
ensure => 'present',
home => '/var/www/partnershare',
managehome => true,
system => true,
require => File['/var/www'],
}
ssh_authorized_key { 'partnershare':
user => 'partnershare',
type => 'ssh-rsa',
key => $authorized_key,
require => User['partnershare'],
}
file { '/etc/nginx/partners.htpasswd':
ensure => 'file',
owner => 'root',
group => 'www-data',
mode => '0640',
content => $htpasswd_content,
}
cron { 'cleaner':
command => 'find /var/www/partnershare -mtime +30 -delete > /dev/null 2>&1',
user => 'www-data',
hour => '*/1',
minute => '0',
}
::nginx::resource::vhost { 'partnershare' :
server_name => ['share.fuel-infra.org'],
www_root => '/var/www/partnershare',
vhost_cfg_append => {
'autoindex' => 'on',
'auth_basic' => '"Restricted access!"',
'auth_basic_user_file' => '/etc/nginx/partners.htpasswd',
}
}
::nginx::resource::location { 'partnershare_root':
ensure => present,
vhost => 'partnershare',
www_root => '/var/www/partnershare',
location => '~ /\.',
location_cfg_append => {
deny => 'all',
}
}
}

View File

@ -1,58 +0,0 @@
# Class: fuel_project::apps::plugins
#
class fuel_project::apps::plugins (
$apply_firewall_rules = false,
$firewall_allow_sources = {},
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = 'proxy',
$plugins_dir = '/var/www/plugins',
$service_fqdn = "plugins.${::fqdn}",
$sync_hosts_allow = [],
) {
if (!defined(Class['::fuel_project::nginx'])) {
class { '::fuel_project::nginx' :}
}
::nginx::resource::vhost { 'plugins' :
ensure => 'present',
autoindex => 'on',
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
www_root => $plugins_dir,
server_name => [$service_fqdn, "plugins.${::fqdn}"]
}
file { $plugins_dir :
ensure => 'directory',
owner => 'www-data',
group => 'www-data',
require => Class['::nginx'],
}
if (!defined(Class['::rsync::server'])) {
class { '::rsync::server' :
gid => 'root',
uid => 'root',
use_chroot => 'yes',
use_xinetd => false,
}
}
::rsync::server::module{ 'plugins':
comment => 'Fuel plugins sync',
uid => 'www-data',
gid => 'www-data',
hosts_allow => $sync_hosts_allow,
hosts_deny => ['*'],
incoming_chmod => '0755',
outgoing_chmod => '0644',
list => 'yes',
lock_file => '/var/run/rsync_plugins_sync.lock',
max_connections => 100,
path => $plugins_dir,
read_only => 'no',
write_only => 'no',
require => File[$plugins_dir],
}
}

View File

@ -1,64 +0,0 @@
# Class: fuel_project::apps::seed
#
class fuel_project::apps::seed (
$apply_firewall_rules = false,
$client_max_body_size = '5G',
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = 'proxy',
$seed_cleanup_dirs = undef,
$seed_dir = '/var/www/seed',
$seed_port = 17333,
$service_fqdn = "seed.${::fqdn}",
# FIXME: Make one list for hosts on L3 and L7
$vhost_acl_allow = [],
) {
if (!defined(Class['::fuel_project::nginx'])) {
class { '::fuel_project::nginx' :}
}
::nginx::resource::vhost { 'seed' :
ensure => 'present',
autoindex => 'off',
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
www_root => $seed_dir,
server_name => [$service_fqdn, $::fqdn]
}
::nginx::resource::vhost { 'seed-upload' :
ensure => 'present',
autoindex => 'off',
www_root => $seed_dir,
listen_port => $seed_port,
server_name => [$::fqdn],
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
location_cfg_append => {
dav_methods => 'PUT',
client_max_body_size => $client_max_body_size,
allow => $vhost_acl_allow,
deny => 'all',
}
}
ensure_resource('file', '/var/www', {
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
before => File[$seed_dir],
})
file { $seed_dir :
ensure => 'directory',
owner => 'www-data',
group => 'www-data',
require => Class['nginx'],
}
class {'::devopslib::downloads_cleaner' :
cleanup_dirs => $seed_cleanup_dirs,
}
}

View File

@ -1,60 +0,0 @@
# Class: fuel_project::apps::static
#
class fuel_project::apps::static (
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = undef,
$packages = ['javascript-bundle'],
$service_fqdn = $::fqdn,
$ssl_certificate = '/etc/ssl/certs/static.crt',
$ssl_certificate_content = '',
$ssl_key = '/etc/ssl/private/static.key',
$ssl_key_content = '',
$static_dir = '/usr/share/javascript',
) {
ensure_packages(['javascript-bundle'])
if($ssl_certificate and $ssl_certificate_content) {
file { $ssl_certificate :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0400',
content => $ssl_certificate_content,
}
}
if($ssl_key and $ssl_key_content) {
file { $ssl_key :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0400',
content => $ssl_key_content,
}
}
::nginx::resource::vhost { 'static' :
ensure => 'present',
autoindex => 'off',
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
ssl => true,
listen_port => 80,
ssl_port => 443,
ssl_cert => $ssl_certificate,
ssl_key => $ssl_key,
www_root => $static_dir,
server_name => [$service_fqdn, "static.${::fqdn}"],
gzip_types => 'text/css application/x-javascript',
vhost_cfg_append => {
'add_header' => "'Access-Control-Allow-Origin' '*'",
},
require => [
Package[$packages],
File[$ssl_certificate],
File[$ssl_key],
],
}
}

View File

@ -1,58 +0,0 @@
# Class: fuel_project::apps::updates
#
class fuel_project::apps::updates (
$apply_firewall_rules = false,
$firewall_allow_sources = {},
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = 'proxy',
$service_fqdn = "updates.${::fqdn}",
$sync_hosts_allow = [],
$updates_dir = '/var/www/updates',
) {
if (!defined(Class['::fuel_project::nginx'])) {
class { '::fuel_project::nginx' :}
}
::nginx::resource::vhost { 'updates' :
ensure => 'present',
autoindex => 'on',
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
www_root => $updates_dir,
server_name => [$service_fqdn, "updates.${::fqdn}"]
}
file { $updates_dir :
ensure => 'directory',
owner => 'www-data',
group => 'www-data',
require => Class['::nginx'],
}
if (!defined(Class['::rsync::server'])) {
class { '::rsync::server' :
gid => 'root',
uid => 'root',
use_chroot => 'yes',
use_xinetd => false,
}
}
::rsync::server::module{ 'updates':
comment => 'Fuel updates sync',
uid => 'www-data',
gid => 'www-data',
hosts_allow => $sync_hosts_allow,
hosts_deny => ['*'],
incoming_chmod => '0755',
outgoing_chmod => '0644',
list => 'yes',
lock_file => '/var/run/rsync_updates_sync.lock',
max_connections => 100,
path => $updates_dir,
read_only => 'no',
write_only => 'no',
require => File[$updates_dir],
}
}

View File

@ -1,70 +0,0 @@
# Class: fuel_project::apps::web_share
#
class fuel_project::apps::web_share (
$authorized_keys,
$group = 'jenkins',
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_autoindex = 'on',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = undef,
$nginx_server_name = $::fqdn,
$share_root = '/var/www/share_logs',
$user = 'jenkins',
) {
ensure_resource('file', '/var/www', {
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
})
file { $share_root :
ensure => 'directory',
owner => $user,
group => $group,
mode => '0755',
require => [
User[$user],
File['/var/www'],
],
}
::nginx::resource::vhost { 'share-http' :
ensure => 'present',
server_name => [$nginx_server_name],
listen_port => 80,
www_root => $share_root,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
autoindex => $nginx_autoindex,
require => File[$share_root],
}
# manage user $HOME manually, since we don't need .bash* stuff
# but only ~/.ssh/
file { "/var/lib/${user}" :
ensure => 'directory',
owner => $user,
group => $group,
mode => '0755',
}
user { $user :
ensure => 'present',
system => true,
managehome => false,
home => "/var/lib/${user}",
shell => '/usr/sbin/nologin',
}
create_resources(ssh_authorized_key, $authorized_keys, {
ensure => 'present',
user => $user,
require => [
User[$user],
],
})
}

View File

@ -1,92 +0,0 @@
# Class: fuel_project::common
#
class fuel_project::common (
$bind_policy = '',
$external_host = false,
$facts = {
'location' => $::location,
'role' => $::role,
},
$kernel_package = undef,
$ldap = false,
$ldap_base = '',
$ldap_ignore_users = '',
$ldap_uri = '',
$logrotate_rules = hiera_hash('logrotate::rules', {}),
$pam_filter = '',
$pam_password = '',
$root_password_hash = 'pa$$w0rd',
$root_shell = '/bin/bash',
$tls_cacertdir = '',
) {
class { '::atop' :}
class { '::ntp' :}
class { '::puppet::agent' :}
class { '::ssh::authorized_keys' :}
class { '::ssh::sshd' :
apply_firewall_rules => $external_host,
}
# TODO: remove ::system module
# ... by splitting its functions into separate modules
# or reusing publicly available ones
class { '::system' :}
::puppet::facter { 'facts' :
facts => $facts,
}
ensure_packages([
'apparmor',
'facter-facts',
'screen',
'tmux',
])
# install the exact version of the kernel package
# please note that a reboot must be done manually
if($kernel_package) {
ensure_packages($kernel_package)
}
case $::osfamily {
'Debian': {
class { '::apt' :}
}
'RedHat': {
class { '::yum' :}
}
default: { }
}
# Logrotate items
create_resources('::logrotate::rule', $logrotate_rules)
mount { '/' :
ensure => 'present',
options => 'defaults,errors=remount-ro,noatime,nodiratime,barrier=0',
}
file { '/etc/hostname' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0644',
content => "${::fqdn}\n",
notify => Exec['/bin/hostname -F /etc/hostname'],
}
file { '/etc/hosts' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0644',
content => template('fuel_project/common/hosts.erb'),
}
exec { '/bin/hostname -F /etc/hostname' :
subscribe => File['/etc/hostname'],
refreshonly => true,
require => File['/etc/hostname'],
}
}

View File

@ -1,52 +0,0 @@
# Class: fuel_project::devops_tools::lpbugmanage
#
class fuel_project::devops_tools::lpbugmanage (
$id = '',
$consumer_key = '',
$consumer_secret = '',
$access_token = '',
$access_secret = '',
$section = 'bugmanage',
$appname = 'lpbugmanage',
$credfile = '/etc/lpbugmanage/credentials.conf',
$cachedir = '/var/cache/launchpadlib/',
$logfile = 'lpbugmanage.log',
$env = 'staging',
$status = 'New, Confirmed, Triaged, In Progress, Incomplete',
$series = 'https://api.staging.launchpad.net/1.0/fuel',
$milestone = 'https://api.staging.launchpad.net/1.0/fuel/+milestone',
$distr = 'fuel',
$package_name = 'python-lpbugmanage',
) {
ensure_packages([$package_name])
file { '/etc/lpbugmanage/credentials.conf':
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0400',
content => template('fuel_project/devops_tools/credentials.erb'),
require => Package['python-lpbugmanage'],
}
file { '/etc/lpbugmanage/lpbugmanage.conf':
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0644',
content => template('fuel_project/devops_tools/lpbugmanage.erb'),
require => Package['python-lpbugmanage'],
}
cron { 'lpbugmanage':
user => 'root',
hour => '*/1',
command => '/usr/bin/flock -n -x /var/lock/lpbugmanage.lock /usr/bin/lpbugmanage.py test 2>&1 | logger -t lpbugmanage',
require => [
Package['python-lpbugmanage'],
File['/etc/lpbugmanage/credentials.conf'],
File['/etc/lpbugmanage/lpbugmanage.conf'],
],
}
}

View File

@ -1,68 +0,0 @@
# Class: fuel_project::devops_tools::lpupdatebug
#
class fuel_project::devops_tools::lpupdatebug (
$access_token = '',
$access_secret = '',
$appname = 'lpupdatebug',
$cachedir = '/var/tmp/launchpadlib/',
$consumer_key = '',
$consumer_secret = '',
$credfile = '/etc/lpupdatebug/credentials.conf',
$env = 'production',
$host = 'localhost',
$id = '1',
$logfile = '/var/log/lpupdatebug.log',
$package_name = 'python-lpupdatebug',
$port = '29418',
$projects = [],
$sshprivkey = '/etc/lpupdatebug/lpupdatebug.key',
$sshprivkey_contents = undef,
$update_status = 'yes',
$username = 'lpupdatebug',
) {
ensure_packages([$package_name])
if ($sshprivkey_contents)
{
file { $sshprivkey :
owner => 'root',
group => 'root',
mode => '0400',
content => $sshprivkey_contents,
}
}
file { '/etc/lpupdatebug/credentials.conf':
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0400',
content => template('fuel_project/devops_tools/credentials.erb'),
require => Package['python-lpupdatebug'],
}
file { '/etc/lpupdatebug/lpupdatebug.conf':
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0644',
content => template('fuel_project/devops_tools/lpupdatebug.erb'),
require => Package['python-lpupdatebug'],
}
service { 'python-lpupdatebug' :
ensure => running,
enable => true,
hasrestart => false,
require => Package[$package_name]
}
ensure_packages(['tailnew'])
zabbix::item { 'lpupdatebug-zabbix-check' :
content => 'puppet:///modules/fuel_project/devops_tools/userparams-lpupdatebug.conf',
notify => Service[$::zabbix::params::agent_service],
require => Package['tailnew']
}
}

View File

@ -1,73 +0,0 @@
# Define: fuel_project::gerrit::replication
#
# Replication path consists of:
# uri: 'user@host:path'
# More docs:
# https://gerrit.libreoffice.org/plugins/replication/Documentation/config.html
#
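# Usage (an illustrative sketch only; the host, user, path and hiera key names
# below are placeholders, not values taken from this repository):
#
#   ::fuel_project::gerrit::replication { 'example-mirror' :
#     host        => 'git.example.org',
#     user        => 'gerrit-replicator',
#     path        => '/var/lib/git',
#     mirror      => true,
#     private_key => hiera('gerrit_replication_private_key'),
#     public_key  => hiera('gerrit_replication_public_key'),
#   }
#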
define fuel_project::gerrit::replication (
$host,
$path,
$user,
$auth_group = undef,
$config_file_path = '/var/lib/gerrit/review_site/etc/replication.config',
$mirror = undef,
$private_key = undef,
$public_key = undef,
$replicate_permissions = undef,
$replication_delay = 0,
$threads = 3,
){
# define replication file
# Each resource must be unique, otherwise we will get a duplicate declaration error.
# As we are using the SAME configuration file for adding replica points, we must
# use ensure_resource, which only creates the resource if it does not already exist
# and thus helps us to avoid the duplicate declaration problem.
ensure_resource(
'concat',
$config_file_path,
{
ensure => present,
owner => 'gerrit',
group => 'gerrit',
mode => '0644',
order => 'numeric',
})
# add header with link to docs (to replication file)
# To avoid a duplicate declaration error (because we have a concat::fragment named
# replication_config_header) we have to use ensure_resource, which only creates
# the resource if it does not already exist
ensure_resource(
'concat::fragment',
'replication_config_header',
{
target => $config_file_path,
content => "# This file is managed by puppet.\n#https://gerrit.libreoffice.org/plugins/replication/Documentation/config.html\n",
order => '01'
})
# add host to known_hosts
ssh::known_host { "${host}-known-hosts" :
host => $host,
user => 'gerrit',
require => User['gerrit'],
}
# add ssh key pair for replication
sshuserconfig::remotehost { "${user}-${host}" :
unix_user => 'gerrit',
ssh_config_dir => '/var/lib/gerrit/.ssh',
remote_hostname => $host,
remote_username => $user,
private_key_content => $private_key,
public_key_content => $public_key,
}
# add replica configuration to gerrit replication.config
concat::fragment { "${user}-${host}-${path}":
target => $config_file_path,
content => template('fuel_project/gerrit/replication.config.erb'),
}
}

View File

@ -1,46 +0,0 @@
# Class fuel_project::gerrit::replication_slave
#
class fuel_project::gerrit::replication_slave (
$authorized_keys = {}
) {
if (!defined(User['gerrit-replicator'])) {
user { 'gerrit-replicator':
ensure => 'present',
name => 'gerrit-replicator',
shell => '/bin/bash',
home => '/var/lib/gerrit-replicator',
managehome => true,
comment => 'Gerrit Replicator User',
system => true,
}
}
file { '/var/lib/gerrit-replicator/.ssh/' :
ensure => 'directory',
owner => 'gerrit-replicator',
group => 'gerrit-replicator',
mode => '0700',
require => User['gerrit-replicator'],
}
file { '/var/lib/gerrit/review_site/git/' :
ensure => 'directory',
owner => 'gerrit-replicator',
group => 'gerrit-replicator',
recurse => true,
require => [
User['gerrit-replicator'],
Package['gerrit'],
],
}
create_resources(ssh_authorized_key, $authorized_keys, {
ensure => 'present',
user => 'gerrit-replicator',
require => [
User['gerrit-replicator'],
File['/var/lib/gerrit-replicator/.ssh/'],
],
})
}

View File

@ -1,25 +0,0 @@
# Class: fuel_project::jenkins::master
#
class fuel_project::jenkins::master (
$firewall_enable = false,
$install_label_dumper = false,
$install_plugins = false,
$install_zabbix_item = false,
$service_fqdn = $::fqdn,
) {
class { '::fuel_project::common':
external_host => $firewall_enable,
}
class { '::jenkins::master':
apply_firewall_rules => $firewall_enable,
install_zabbix_item => $install_zabbix_item,
install_label_dumper => $install_label_dumper,
service_fqdn => $service_fqdn,
}
if($install_plugins) {
package { 'jenkins-plugins' :
ensure => present,
require => Service['jenkins'],
}
}
}

View File

@ -1,854 +0,0 @@
# Class: fuel_project::jenkins::slave
#
class fuel_project::jenkins::slave (
$docker_package,
$ruby_version,
$bind_policy = '',
$build_fuel_iso = false,
$build_fuel_packages = false,
$build_fuel_npm_packages = ['grunt-cli', 'gulp'],
$build_fuel_plugins = false,
$check_tasks_graph = false,
$docker_service = '',
$external_host = false,
$fuel_web_selenium = false,
$http_share_iso = false,
$install_docker = false,
$jenkins_swarm_slave = false,
$known_hosts = {},
$known_hosts_overwrite = false,
$libvirt_default_network = false,
$ldap = false,
$ldap_base = '',
$ldap_ignore_users = '',
$ldap_sudo_group = undef,
$ldap_uri = '',
$local_ssh_private_key = undef,
$local_ssh_public_key = undef,
$nailgun_db = ['nailgun'],
$osc_apiurl = '',
$osc_pass_primary = '',
$osc_pass_secondary = '',
$osc_url_primary = '',
$osc_url_secondary = '',
$osc_user_primary = '',
$osc_user_secondary = '',
$osci_centos_image_name = 'centos6.4-x86_64-gold-master.img',
$osci_centos_job_dir = '/home/jenkins/vm-centos-test-rpm',
$osci_centos_remote_dir = 'vm-centos-test-rpm',
$osci_obs_jenkins_key = '',
$osci_obs_jenkins_key_contents = '',
$osci_rsync_source_server = '',
$osci_test = false,
$osci_trusty_image_name = 'trusty.qcow2',
$osci_trusty_job_dir = '/home/jenkins/vm-trusty-test-deb',
$osci_trusty_remote_dir = 'vm-trusty-test-deb',
$osci_ubuntu_image_name = 'ubuntu-deb-test.qcow2',
$osci_ubuntu_job_dir = '/home/jenkins/vm-ubuntu-test-deb',
$osci_ubuntu_remote_dir = 'vm-ubuntu-test-deb',
$osci_vm_centos_jenkins_key = '',
$osci_vm_centos_jenkins_key_contents = '',
$osci_vm_trusty_jenkins_key = '',
$osci_vm_trusty_jenkins_key_contents = '',
$osci_vm_ubuntu_jenkins_key = '',
$osci_vm_ubuntu_jenkins_key_contents = '',
$ostf_db = ['ostf'],
$pam_filter = '',
$pam_password = '',
$run_tests = false,
$seed_cleanup_dirs = [
{
'dir' => '/var/www/fuelweb-iso', # directory to poll
'ttl' => 10, # time to live in days
'pattern' => 'fuel-*', # pattern to filter files in directory
},
{
'dir' => '/srv/downloads',
'ttl' => 1,
'pattern' => 'fuel-*',
}
],
$simple_syntax_check = false,
$sudo_commands = ['/sbin/ebtables'],
$tls_cacertdir = '',
$verify_fuel_astute = false,
$verify_fuel_docs = false,
$verify_fuel_pkgs_requirements = false,
$verify_fuel_stats = false,
$verify_fuel_web = false,
$verify_fuel_web_npm_packages = ['casperjs','grunt-cli','gulp','phantomjs'],
$verify_jenkins_jobs = false,
$workspace = '/home/jenkins/workspace',
$x11_display_num = 99,
) {
if (!defined(Class['::fuel_project::common'])) {
class { '::fuel_project::common' :
external_host => $external_host,
ldap => $ldap,
ldap_uri => $ldap_uri,
ldap_base => $ldap_base,
tls_cacertdir => $tls_cacertdir,
pam_password => $pam_password,
pam_filter => $pam_filter,
bind_policy => $bind_policy,
ldap_ignore_users => $ldap_ignore_users,
}
}
class { 'transmission::daemon' :}
if ($jenkins_swarm_slave == true) {
class { '::jenkins::swarm_slave' :}
} else {
class { '::jenkins::slave' :}
}
# jenkins should be in www-data group by default
User <| title == 'jenkins' |> {
groups +> 'www-data',
}
class {'::devopslib::downloads_cleaner' :
cleanup_dirs => $seed_cleanup_dirs,
clean_seeds => true,
}
ensure_packages(['git', 'python-seed-client'])
# release status reports
if ($build_fuel_iso == true or $run_tests == true) {
class { '::landing_page::updater' :}
}
# FIXME: Legacy compatibility LP #1418927
cron { 'devops-env-cleanup' :
ensure => 'absent',
}
file { '/usr/local/bin/devops-env-cleanup.sh' :
ensure => 'absent',
}
file { '/etc/devops/local_settings.py' :
ensure => 'absent',
}
file { '/etc/devops' :
ensure => 'absent',
force => true,
require => File['/etc/devops/local_settings.py'],
}
package { 'python-devops' :
ensure => 'absent',
uninstall_options => ['purge']
}
# /FIXME
file { '/home/jenkins/.ssh' :
ensure => 'directory',
mode => '0700',
owner => 'jenkins',
group => 'jenkins',
require => User['jenkins'],
}
if ($local_ssh_private_key) {
file { '/home/jenkins/.ssh/id_rsa' :
ensure => 'present',
mode => '0600',
owner => 'jenkins',
group => 'jenkins',
content => $local_ssh_private_key,
require => [
User['jenkins'],
File['/home/jenkins/.ssh'],
]
}
}
if ($local_ssh_public_key) {
file { '/home/jenkins/.ssh/id_rsa.pub' :
ensure => 'present',
mode => '0600',
owner => 'jenkins',
group => 'jenkins',
content => $local_ssh_public_key,
require => [
User['jenkins'],
File['/home/jenkins/.ssh'],
]
}
}
# manage 'known_hosts' entries
if ($known_hosts) {
create_resources('ssh::known_host', $known_hosts, {
user => 'jenkins',
overwrite => $known_hosts_overwrite,
require => User['jenkins'],
})
}
# Run system tests
if ($run_tests == true) {
if ($libvirt_default_network == false) {
class { '::libvirt' :
listen_tls => false,
listen_tcp => true,
auth_tcp => 'none',
listen_addr => '127.0.0.1',
mdns_adv => false,
unix_sock_group => 'libvirtd',
unix_sock_rw_perms => '0777',
python => true,
qemu => true,
tcp_port => 16509,
deb_default => {
'libvirtd_opts' => '-d -l',
}
}
}
libvirt_pool { 'default' :
ensure => 'present',
type => 'dir',
autostart => true,
target => '/var/lib/libvirt/images',
require => Class['libvirt'],
}
# python-devops installation
if (!defined(Class['::postgresql::server'])) {
class { '::postgresql::server' : }
}
::postgresql::server::db { 'devops' :
user => 'devops',
password => 'devops',
}
::postgresql::server::db { 'fuel_devops' :
user => 'fuel_devops',
password => 'fuel_devops',
}
# /python-devops installation
$system_tests_packages = [
# dependencies
'libevent-dev',
'libffi-dev',
'libvirt-dev',
'python-dev',
'python-psycopg2',
'python-virtualenv',
'python-yaml',
'pkg-config',
'postgresql-server-dev-all',
# diagnostic utilities
'htop',
'sysstat',
'dstat',
'vncviewer',
'tcpdump',
# useful utilities
'screen',
# repo building utilities
'reprepro',
'createrepo',
]
ensure_packages($system_tests_packages)
file { $workspace :
ensure => 'directory',
owner => 'jenkins',
group => 'jenkins',
require => User['jenkins'],
}
ensure_resource('file', "${workspace}/iso", {
ensure => 'directory',
owner => 'jenkins',
group => 'jenkins',
mode => '0755',
require => [
User['jenkins'],
File[$workspace],
],
})
file { '/etc/sudoers.d/systest' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0440',
content => template('fuel_project/jenkins/slave/system_tests.sudoers.d.erb'),
}
# Working with bridging
# we need to load the module to be sure the /proc/sys/net/bridge branch is created
exec { 'load_bridge_module' :
command => '/sbin/modprobe bridge',
user => 'root',
logoutput => 'on_failure',
}
# ensure bridge module will be loaded on system start
augeas { 'sysctl-net.bridge.bridge-nf-call-iptables' :
context => '/files/etc/modules',
changes => 'clear bridge',
}
sysctl { 'net.bridge.bridge-nf-call-iptables' :
value => '0',
require => Exec['load_bridge_module'],
}
sysctl { 'vm.swappiness' :
value => '0',
}
}
# provide an environment for building packages, actually for "make sources"
# from fuel-main, and remove duplicate packages from the build ISO
if ($build_fuel_packages or $build_fuel_iso) {
$build_fuel_packages_list = [
'devscripts',
'libparse-debcontrol-perl',
'make',
'mock',
'nodejs',
'nodejs-legacy',
'npm',
'pigz',
'lzop',
'python-setuptools',
'python-rpm',
'python-pbr',
'reprepro',
'ruby',
'sbuild',
]
User <| title == 'jenkins' |> {
groups +> 'mock',
require => Package[$build_fuel_packages_list],
}
ensure_packages($build_fuel_packages_list)
if ($build_fuel_npm_packages) {
ensure_packages($build_fuel_npm_packages, {
provider => npm,
require => Package['npm'],
})
}
}
# Build ISO
if ($build_fuel_iso == true) {
$build_fuel_iso_packages = [
'bc',
'build-essential',
'createrepo',
'debmirror',
'debootstrap',
'dosfstools',
'extlinux',
'genisoimage',
'isomd5sum',
'kpartx',
'libconfig-auto-perl',
'libmysqlclient-dev',
'libparse-debian-packages-perl',
'libyaml-dev',
'lrzip',
'python-daemon',
'python-ipaddr',
'python-jinja2',
'python-nose',
'python-paramiko',
'python-pip',
'python-xmlbuilder',
'python-virtualenv',
'python-yaml',
'realpath',
'ruby-bundler',
'ruby-builder',
'ruby-dev',
'rubygems-integration',
'syslinux',
'time',
'unzip',
'xorriso',
'yum',
'yum-utils',
]
ensure_packages($build_fuel_iso_packages)
ensure_resource('file', '/var/www', {
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
})
ensure_resource('file', '/var/www/fwm', {
ensure => 'directory',
owner => 'jenkins',
group => 'jenkins',
mode => '0755',
require => [
User['jenkins'],
File['/var/www'],
],
})
if ($http_share_iso) {
class { '::fuel_project::nginx' :}
::nginx::resource::vhost { 'share':
server_name => ['_'],
autoindex => 'on',
www_root => '/var/www',
}
ensure_resource('file', '/var/www/fuelweb-iso', {
ensure => 'directory',
owner => 'jenkins',
group => 'jenkins',
mode => '0755',
require => [
User['jenkins'],
File['/var/www'],
],
})
}
if (!defined(Package['multistrap'])) {
package { 'multistrap' :
ensure => '2.1.6ubuntu3'
}
}
apt::pin { 'multistrap' :
packages => 'multistrap',
version => '2.1.6ubuntu3',
priority => 1000,
}
# LP: https://bugs.launchpad.net/ubuntu/+source/libxml2/+bug/1375637
if (!defined(Package['libxml2'])) {
package { 'libxml2' :
ensure => '2.9.1+dfsg1-ubuntu1',
}
}
if (!defined(Package['python-libxml2'])) {
package { 'python-libxml2' :
ensure => '2.9.1+dfsg1-ubuntu1',
}
}
apt::pin { 'libxml2' :
packages => 'libxml2 python-libxml2',
version => '2.9.1+dfsg1-ubuntu1',
priority => 1000,
}
# /LP
file { 'jenkins-sudo-for-build_iso' :
path => '/etc/sudoers.d/build_fuel_iso',
owner => 'root',
group => 'root',
mode => '0440',
content => template('fuel_project/jenkins/slave/build_iso.sudoers.d.erb')
}
}
# osci_tests - for deploying osci jenkins slaves
if ($osci_test == true) {
# osci needed packages
$osci_test_packages = [
'osc',
'yum-utils',
]
ensure_packages($osci_test_packages)
# sudo for user 'jenkins'
file { 'jenkins-sudo-for-osci-vm' :
path => '/etc/sudoers.d/jenkins_sudo',
owner => 'root',
group => 'root',
mode => '0440',
content => template('fuel_project/jenkins/slave/build_iso.sudoers.d.erb'),
require => User['jenkins'],
}
# obs client settings
file { 'oscrc' :
path => '/home/jenkins/.oscrc',
owner => 'jenkins',
group => 'jenkins',
mode => '0644',
content => template('fuel_project/jenkins/slave/oscrc.erb'),
require => [
Package[$osci_test_packages],
User['jenkins'],
],
}
# osci kvm settings
if (!defined(Class['::libvirt'])) {
class { '::libvirt' :
mdns_adv => false,
unix_sock_rw_perms => '0777',
qemu => true,
defaultnetwork => true,
}
}
# osci needed directories
file {
[
$osci_ubuntu_job_dir,
$osci_centos_job_dir,
$osci_trusty_job_dir
] :
ensure => 'directory',
owner => 'jenkins',
group => 'jenkins',
require => User['jenkins'],
}
# rsync of vm images from existing rsync share
class { 'rsync': package_ensure => 'present' }
rsync::get { $osci_ubuntu_image_name :
source => "rsync://${osci_rsync_source_server}/${osci_ubuntu_remote_dir}/${osci_ubuntu_image_name}",
path => $osci_ubuntu_job_dir,
timeout => 14400,
require => [
File[$osci_ubuntu_job_dir],
User['jenkins'],
],
}
rsync::get { $osci_centos_image_name :
source => "rsync://${osci_rsync_source_server}/${osci_centos_remote_dir}/${osci_centos_image_name}",
path => $osci_centos_job_dir,
timeout => 14400,
require => [
File[$osci_centos_job_dir],
User['jenkins'],
],
}
rsync::get { $osci_trusty_image_name :
source => "rsync://${osci_rsync_source_server}/${osci_trusty_remote_dir}/${osci_trusty_image_name}",
path => $osci_trusty_job_dir,
timeout => 14400,
require => [
File[$osci_trusty_job_dir],
User['jenkins'],
],
}
# osci needed ssh keys
file {
[
$osci_obs_jenkins_key,
$osci_vm_ubuntu_jenkins_key,
$osci_vm_centos_jenkins_key,
$osci_vm_trusty_jenkins_key
]:
owner => 'jenkins',
group => 'nogroup',
mode => '0600',
content => [
$osci_obs_jenkins_key_contents,
$osci_vm_ubuntu_jenkins_key_contents,
$osci_vm_centos_jenkins_key_contents,
$osci_vm_trusty_jenkins_key_contents
],
require => [
File[
'/home/jenkins/.ssh',
$osci_ubuntu_job_dir,
$osci_centos_job_dir,
$osci_trusty_job_dir
],
User['jenkins'],
],
}
}
# *** Custom tests ***
# anonymous statistics tests
if ($verify_fuel_stats) {
class { '::fuel_stats::tests' : }
}
# Web tests by verify-fuel-web, stackforge-verify-fuel-web, verify-fuel-ostf
if ($verify_fuel_web) {
$verify_fuel_web_packages = [
'inkscape',
'libxslt1-dev',
'nodejs-legacy',
'npm',
'postgresql-server-dev-all',
'python-all-dev',
'python-cloud-sptheme',
'python-sphinx',
'python-tox',
'python-virtualenv',
'python2.6',
'python2.6-dev',
'python3-dev',
'rst2pdf',
]
ensure_packages($verify_fuel_web_packages)
if ($verify_fuel_web_npm_packages) {
ensure_packages($verify_fuel_web_npm_packages, {
provider => npm,
require => Package['npm'],
})
}
if ($fuel_web_selenium) {
$selenium_packages = [
'chromium-browser',
'chromium-chromedriver',
'firefox',
'imagemagick',
'x11-apps',
'xfonts-100dpi',
'xfonts-75dpi',
'xfonts-cyrillic',
'xfonts-scalable',
]
ensure_packages($selenium_packages)
class { 'display' :
display => $x11_display_num,
width => 1366,
height => 768,
}
}
if (!defined(Class['postgresql::server'])) {
class { 'postgresql::server' : }
}
postgresql::server::db { $nailgun_db:
user => 'nailgun',
password => 'nailgun',
}
postgresql::server::db { $ostf_db:
user => 'ostf',
password => 'ostf',
}
file { '/var/log/nailgun' :
ensure => directory,
owner => 'jenkins',
require => User['jenkins'],
}
}
# For the roles below we need the rvm base class
if ($verify_fuel_astute or $simple_syntax_check or $build_fuel_plugins) {
class { 'rvm' : }
rvm::system_user { 'jenkins': }
rvm_system_ruby { "ruby-${ruby_version}" :
ensure => 'present',
default_use => true,
require => Class['rvm'],
}
}
# Astute tests require only rvm package
if ($verify_fuel_astute) {
rvm_gem { 'bundler' :
ensure => 'present',
ruby_version => "ruby-${ruby_version}",
require => Rvm_system_ruby["ruby-${ruby_version}"],
}
# FIXME: remove this hack, create package raemon?
$raemon_file = '/tmp/raemon-0.3.0.gem'
file { $raemon_file :
source => 'puppet:///modules/fuel_project/gems/raemon-0.3.0.gem',
}
rvm_gem { 'raemon' :
ensure => 'present',
ruby_version => "ruby-${ruby_version}",
source => $raemon_file,
require => [ Rvm_system_ruby["ruby-${ruby_version}"], File[$raemon_file] ],
}
}
# Simple syntax check by:
# - verify-fuel-devops
# - fuellib_review_syntax_check (puppet tests)
if ($simple_syntax_check) {
$syntax_check_packages = [
'libxslt1-dev',
'puppet-lint',
'python-flake8',
'python-tox',
]
ensure_packages($syntax_check_packages)
rvm_gem { 'puppet-lint' :
ensure => 'installed',
ruby_version => "ruby-${ruby_version}",
require => Rvm_system_ruby["ruby-${ruby_version}"],
}
}
# Check tasks graph
if ($check_tasks_graph){
$tasks_graph_check_packages = [
'python-pytest',
'python-jsonschema',
'python-networkx',
]
ensure_packages($tasks_graph_check_packages)
}
# Verify Fuel docs
if ($verify_fuel_docs) {
$verify_fuel_docs_packages = [
'inkscape',
'libjpeg-dev',
'make',
'plantuml',
'python-cloud-sptheme',
'python-sphinx',
'python-sphinxcontrib.plantuml',
'rst2pdf',
'texlive-font-utils', # provides epstopdf binary
]
ensure_packages($verify_fuel_docs_packages)
}
# Verify Jenkins jobs
if ($verify_jenkins_jobs) {
$verify_jenkins_jobs_packages = [
'bats',
'python-tox',
'shellcheck',
]
ensure_packages($verify_jenkins_jobs_packages)
}
# Verify and Build fuel-plugins project
if ($build_fuel_plugins) {
$build_fuel_plugins_packages = [
'rpm',
'createrepo',
'dpkg-dev',
'libyaml-dev',
'make',
'python-dev',
'ruby-dev',
'gcc',
'python2.6',
'python2.6-dev',
'python-tox',
'python-virtualenv',
]
ensure_packages($build_fuel_plugins_packages)
# we also need the fpm gem
rvm_gem { 'fpm' :
ensure => 'present',
ruby_version => "ruby-${ruby_version}",
require => [
Rvm_system_ruby["ruby-${ruby_version}"],
Package['make'],
],
}
}
# verify requirements-{deb|rpm}.txt files from fuel-main project
# test-requirements-{deb|rpm} jobs on fuel-ci
if ($verify_fuel_pkgs_requirements==true){
$verify_fuel_requirements_packages = [
'devscripts',
'yum-utils',
]
ensure_packages($verify_fuel_requirements_packages)
}
if ($install_docker or $build_fuel_iso or $build_fuel_packages) {
if (!$docker_package) {
fail('You must define docker package explicitly')
}
if (!defined(Package[$docker_package])) {
package { $docker_package :
ensure => 'present',
require => Package['lxc-docker'],
}
}
# Docker has an API, and in some cases the service will not be started and enabled automatically
if ($docker_service and (!defined(Service[$docker_service]))) {
service { $docker_service :
ensure => 'running',
enable => true,
hasstatus => true,
require => [
Package[$docker_package],
Group['docker'],
],
}
}
package { 'lxc-docker' :
ensure => 'absent',
}
group { 'docker' :
ensure => 'present',
require => Package[$docker_package],
}
User <| title == 'jenkins' |> {
groups +> 'docker',
require => Group['docker'],
}
if ($external_host) {
firewall { '010 accept all to docker0 interface':
proto => 'all',
iniface => 'docker0',
action => 'accept',
require => Package[$docker_package],
}
}
}
if($ldap_sudo_group) {
file { '/etc/sudoers.d/sandbox':
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0440',
content => template('fuel_project/jenkins/slave/sandbox.sudoers.d.erb'),
}
}
}

View File

@ -1,61 +0,0 @@
# Class: fuel_project::jenkins::slave::custom_scripts
class fuel_project::jenkins::slave::custom_scripts (
$docker_package,
$configs_path = '/etc/custom_scripts/',
$docker_user = 'jenkins',
$known_hosts = undef,
$packages = [
'git',
],
) {
$configs = hiera_hash('fuel_project::jenkins::slave::custom_scripts::configs', {})
if (!defined(Class['::fuel_project::common'])) {
class { '::fuel_project::common' : }
}
if (!defined(Class['::jenkins::slave'])) {
class { '::jenkins::slave' : }
}
# install required packages
ensure_packages($packages)
ensure_packages($docker_package)
# ensure $docker_user is in the docker group
# the docker group will be created by the docker package
User <| title == $docker_user |> {
groups +> 'docker',
require => Package[$docker_package],
}
if ($known_hosts) {
create_resources('ssh::known_host', $known_hosts, {
user => $docker_user,
overwrite => false,
require => User[$docker_user],
})
}
if ($configs) {
file { $configs_path:
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0700',
}
create_resources(file, $configs, {
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0600',
require => File[$configs_path],
})
}
}

View File

@ -1,9 +0,0 @@
# Class: fuel_project::nginx
#
class fuel_project::nginx {
if (!defined(Class['::nginx'])) {
class { '::nginx' :}
}
}

View File

@ -1,68 +0,0 @@
# Class: fuel_project::puppet::master
#
class fuel_project::puppet::master (
$apply_firewall_rules = false,
$enable_update_cronjob = true,
$external_host = false,
$firewall_allow_sources = {},
$hiera_backends = ['yaml'],
$hiera_config = '/etc/hiera.yaml',
$hiera_config_template = 'puppet/hiera.yaml.erb',
$hiera_hierarchy = ['nodes/%{::clientcert}', 'roles/%{::role}', 'locations/%{::location}', 'common'],
$hiera_json_datadir = '/var/lib/hiera',
$hiera_logger = 'console',
$hiera_merge_behavior = 'deeper',
$hiera_yaml_datadir = '/var/lib/hiera',
$manifests_binpath = '/etc/puppet/bin',
$manifests_branch = 'master',
$manifests_manifestspath = '/etc/puppet/manifests',
$manifests_modulespath = '/etc/puppet/modules',
$manifests_repo = 'ssh://puppet-master-tst@review.fuel-infra.org:29418/fuel-infra/puppet-manifests',
$manifests_tmpdir = '/tmp/puppet-manifests',
$puppet_config = '/etc/puppet/puppet.conf',
$puppet_environment = 'production',
$puppet_master_run_with = 'nginx+uwsgi',
$puppet_server = $::fqdn,
) {
class { '::fuel_project::common' :
external_host => $external_host,
}
class { '::fuel_project::nginx' :
require => Class['::fuel_project::common'],
}
class { '::puppet::master' :
apply_firewall_rules => $apply_firewall_rules,
firewall_allow_sources => $firewall_allow_sources,
hiera_backends => $hiera_backends,
hiera_config => $hiera_config,
hiera_config_template => $hiera_config_template,
hiera_hierarchy => $hiera_hierarchy,
hiera_json_datadir => $hiera_json_datadir,
hiera_logger => $hiera_logger,
hiera_merge_behavior => $hiera_merge_behavior,
hiera_yaml_datadir => $hiera_yaml_datadir,
config => $puppet_config,
environment => $puppet_environment,
server => $puppet_server,
puppet_master_run_with => $puppet_master_run_with,
require => [
Class['::fuel_project::common'],
Class['::fuel_project::nginx'],
],
}
file { '/usr/local/bin/puppet-manifests-update.sh' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0755',
content => template('fuel_project/puppet/master/puppet-manifests-update.sh.erb')
}
if ($enable_update_cronjob) {
cron { 'puppet-manifests-update' :
command => '/usr/bin/timeout -k80 60 /usr/local/bin/puppet-manifests-update.sh 2>&1 | logger -t puppet-manifests-update',
user => 'root',
minute => '*/5',
require => File['/usr/local/bin/puppet-manifests-update.sh'],
}
}
}

View File

@ -1,278 +0,0 @@
# Class: fuel_project::roles::docs
#
class fuel_project::roles::docs (
$community_hostname = 'docs.fuel-infra.org',
$community_ssl_cert_content = '',
$community_ssl_cert_filename = '/etc/ssl/community-docs.crt',
$community_ssl_key_content = '',
$community_ssl_key_filename = '/etc/ssl/community-docs.key',
$docs_user = 'docs',
$fuel_version = '6.0',
$hostname = 'docs.mirantis.com',
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$nginx_log_format = undef,
$redirect_root_to = 'http://www.mirantis.com/openstack-documentation/',
$specs_hostname = 'specs.fuel-infra.org',
$specs_www_root = '/var/www/specs',
$ssh_auth_key = undef,
$ssl_cert_content = '',
$ssl_cert_filename = '/etc/ssl/docs.crt',
$ssl_key_content = '',
$ssl_key_filename = '/etc/ssl/docs.key',
$www_root = '/var/www',
) {
if ( ! defined(Class['::fuel_project::nginx']) ) {
class { '::fuel_project::nginx' : }
}
user { $docs_user :
ensure => 'present',
shell => '/bin/bash',
managehome => true,
}
ensure_packages('error-pages')
if ($ssl_cert_content and $ssl_key_content) {
file { $ssl_cert_filename :
ensure => 'present',
mode => '0600',
group => 'root',
owner => 'root',
content => $ssl_cert_content,
}
file { $ssl_key_filename :
ensure => 'present',
mode => '0600',
group => 'root',
owner => 'root',
content => $ssl_key_content,
}
Nginx::Resource::Vhost <| title == $hostname |> {
ssl => true,
ssl_cert => $ssl_cert_filename,
ssl_key => $ssl_key_filename,
listen_port => 443,
ssl_port => 443,
}
::nginx::resource::vhost { "${hostname}-redirect" :
ensure => 'present',
server_name => [$hostname],
listen_port => 80,
www_root => $www_root,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
location_cfg_append => {
return => "301 https://${hostname}\$request_uri",
},
}
$ssl = true
} else {
$ssl = false
}
if ($community_ssl_cert_content and $community_ssl_key_content) {
file { $community_ssl_cert_filename :
ensure => 'present',
mode => '0600',
group => 'root',
owner => 'root',
content => $community_ssl_cert_content,
}
file { $community_ssl_key_filename :
ensure => 'present',
mode => '0600',
group => 'root',
owner => 'root',
content => $community_ssl_key_content,
}
Nginx::Resource::Vhost <| title == $community_hostname |> {
ssl => true,
ssl_cert => $community_ssl_cert_filename,
ssl_key => $community_ssl_key_filename,
listen_port => 443,
ssl_port => 443,
}
::nginx::resource::vhost { "${community_hostname}-redirect" :
ensure => 'present',
server_name => [$community_hostname],
listen_port => 80,
www_root => $www_root,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
location_cfg_append => {
return => "301 https://${community_hostname}\$request_uri",
},
}
$community_ssl = true
} else {
$community_ssl = false
}
if ($ssh_auth_key) {
ssh_authorized_key { 'fuel_docs@jenkins' :
user => $docs_user,
type => 'ssh-rsa',
key => $ssh_auth_key,
require => User[$docs_user],
}
}
::nginx::resource::vhost { $community_hostname :
ensure => 'present',
server_name => [$community_hostname],
listen_port => 80,
www_root => $www_root,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
location_cfg_append => {
'rewrite' => {
'^/$' => '/fuel-dev',
'^/express/?$' => '/openstack/express/latest',
'^/(express/.+)' => '/openstack/$1',
'^/fuel/?$' => "/openstack/fuel/fuel-${fuel_version}",
'^/(fuel/.+)' => '/openstack/$1',
'^/openstack/fuel/$' => "/openstack/fuel/fuel-${fuel_version}",
},
},
vhost_cfg_append => {
'error_page 403' => '/fuel-infra/403.html',
'error_page 404' => '/fuel-infra/404.html',
'error_page 500 502 504' => '/fuel-infra/5xx.html',
}
}
# error pages for community
::nginx::resource::location { "${community_hostname}-error-pages" :
ensure => 'present',
vhost => $community_hostname,
location => '~ ^\/fuel-infra\/(403|404|5xx)\.html$',
ssl => true,
ssl_only => true,
www_root => '/usr/share/error_pages',
require => Package['error-pages'],
}
# Disable fuel-master docs on community site
::nginx::resource::location { "${community_hostname}/openstack/fuel/fuel-master" :
vhost => $community_hostname,
location => '~ \/openstack\/fuel\/fuel-master\/.*',
www_root => $www_root,
ssl => $community_ssl,
ssl_only => $community_ssl,
location_cfg_append => {
return => 404,
},
}
::nginx::resource::location { "${community_hostname}/fuel-dev" :
vhost => $community_hostname,
location => '/fuel-dev',
location_alias => "${www_root}/fuel-dev-docs/fuel-dev-master",
ssl => $community_ssl,
ssl_only => $community_ssl,
}
# Bug: https://bugs.launchpad.net/fuel/+bug/1473440
::nginx::resource::location { "${community_hostname}/fuel-qa" :
vhost => $community_hostname,
location => '/fuel-qa',
location_alias => "${www_root}/fuel-qa/fuel-master",
ssl => $community_ssl,
ssl_only => $community_ssl,
}
::nginx::resource::vhost { $hostname :
ensure => 'present',
server_name => [$hostname],
listen_port => 80,
www_root => $www_root,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
format_log => $nginx_log_format,
location_cfg_append => {
'rewrite' => {
'^/$' => $redirect_root_to,
'^/fuel-dev/?(.*)$' => "http://${community_hostname}/fuel-dev/\$1",
'^/express/?$' => '/openstack/express/latest',
'^/(express/.+)' => '/openstack/$1',
'^/fuel/?$' => "/openstack/fuel/fuel-${fuel_version}",
'^/(fuel/.+)' => '/openstack/$1',
'^/openstack/fuel/$' => "/openstack/fuel/fuel-${fuel_version}",
},
},
vhost_cfg_append => {
'error_page 403' => '/mirantis/403.html',
'error_page 404' => '/mirantis/404.html',
'error_page 500 502 504' => '/mirantis/5xx.html',
}
}
# error pages for primary docs
::nginx::resource::location { "${hostname}-error-pages" :
ensure => 'present',
vhost => $hostname,
location => '~ ^\/mirantis\/(403|404|5xx)\.html$',
ssl => true,
ssl_only => true,
www_root => '/usr/share/error_pages',
require => Package['error-pages'],
}
if (! defined(File[$www_root])) {
file { $www_root :
ensure => 'directory',
mode => '0755',
owner => $docs_user,
group => $docs_user,
require => User[$docs_user],
}
} else {
File <| title == $www_root |> {
owner => $docs_user,
group => $docs_user,
require => User[$docs_user],
}
}
file { "${www_root}/robots.txt" :
ensure => 'present',
mode => '0644',
owner => 'root',
group => 'root',
content => template('fuel_project/fuel_docs/robots.txt.erb'),
require => File[$www_root],
}
# fuel specs
file { $specs_www_root :
ensure => 'directory',
mode => '0755',
owner => $docs_user,
group => $docs_user,
require => [
File[$www_root],
User[$docs_user],
]
}
::nginx::resource::vhost { $specs_hostname :
server_name => [$specs_hostname],
www_root => $specs_www_root,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
location_cfg_append => {
'rewrite' => {
'^/$' => '/fuel-specs-master',
},
},
vhost_cfg_append => {
'error_page 403' => '/mirantis/403.html',
'error_page 404' => '/mirantis/404.html',
'error_page 500 502 504' => '/mirantis/5xx.html',
}
}
}

View File

@ -1,6 +0,0 @@
# Class: fuel_project::roles::errata
#
class fuel_project::roles::errata {
class { '::fuel_project::roles::errata::web' :}
class { '::fuel_project::roles::errata::database' :}
}

View File

@ -1,8 +0,0 @@
# Class: fuel_project::roles::errata::database
#
class fuel_project::roles::errata::database {
if (!defined(Class['::fuel_project::common'])) {
class { '::fuel_project::common' :}
}
class { '::errata::database' :}
}

View File

@ -1,9 +0,0 @@
# Class: fuel_project::roles::errata::web
#
class fuel_project::roles::errata::web {
if (!defined(Class['::fuel_project::common'])) {
class { '::fuel_project::common' :}
}
class { '::fuel_project::nginx' :}
class { '::errata::web' :}
}

View File

@ -1,16 +0,0 @@
# Class: fuel_project::roles::mailman
#
class fuel_project::roles::mailman {
class { '::fuel_project::common' :}
class { '::fuel_project::nginx' :}
class { '::mailman' :}
class { '::apache' :}
class { '::apache::mod::cgid' :}
class { '::apache::mod::mime' :}
::apache::vhost { $::fqdn :
docroot => '/var/www/lists',
aliases => hiera_array('fuel_project::roles::mailman::apache_aliases'),
directories => hiera_array('fuel_project::roles::mailman::apache_directories'),
}
}

View File

@ -1,104 +0,0 @@
# Class: fuel_project::roles::ns
#
class fuel_project::roles::ns (
$dns_repo,
$dns_branch = 'master',
$dns_checkout_private_key_content = undef,
$dns_tmpdir = '/tmp/ns-update',
$firewall_enable = false,
$firewall_rules = {},
$role = 'master',
$target_path = '/var/cache/bind',
) {
class { '::fuel_project::common' :
external_host => $firewall_enable,
}
class { '::bind' :}
::bind::server::conf { '/etc/bind/named.conf' :
require => Class['::bind'],
}
if ($role == 'master') {
ensure_packages(['git'])
file { '/usr/local/bin/ns-update.sh' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0755',
content => template('fuel_project/roles/ns/ns-update.sh.erb'),
require => [
Class['::bind'],
::Bind::Server::Conf['/etc/bind/named.conf'],
Package['git'],
],
}
cron { 'ns-update' :
command => '/usr/bin/timeout -k80 60 /usr/local/bin/ns-update.sh 2>&1 | logger -t ns-update',
user => 'root',
minute => '*/5',
require => File['/usr/local/bin/ns-update.sh'],
}
}
ensure_packages(['perl', 'perl-base'])
file { '/usr/local/bin/bind96-stats-parse.pl' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0755',
source => 'puppet:///modules/fuel_project/ns/bind96-stats-parse.pl',
require => [
Package['perl'],
Package['perl-base']
],
}
file { '/var/lib/bind/statistics.txt' :
ensure => 'present',
owner => 'bind',
group => 'bind',
}
cron { 'rndc-stats' :
command => '>/var/lib/bind/statistics.txt ; /usr/sbin/rndc stats',
user => 'root',
minute => '*/5',
require => [
File['/var/lib/bind/statistics.txt'],
File['/usr/local/bin/bind96-stats-parse.pl'],
],
}
::zabbix::item { 'bind' :
content => 'puppet:///modules/fuel_project/ns/zabbix_bind.conf',
}
if ($dns_checkout_private_key_content) {
file { '/root/.ssh' :
ensure => 'directory',
mode => '0500',
owner => 'root',
group => 'root',
}
file { '/root/.ssh/id_rsa' :
ensure => 'present',
content => $dns_checkout_private_key_content,
mode => '0400',
owner => 'root',
group => 'root',
require => File['/root/.ssh'],
}
}
if ($firewall_enable) {
include firewall_defaults::pre
create_resources(firewall, $firewall_rules, {
action => 'accept',
require => Class['firewall_defaults::pre'],
})
}
}

View File

@ -1,44 +0,0 @@
# Class: fuel_project::roles::perestroika::builder
#
# jenkins slave host for building packages
# see the hiera file for the list and parameters of the classes used
class fuel_project::roles::perestroika::builder (
$docker_package,
$builder_user = 'jenkins',
$known_hosts = undef,
$packages = [
'createrepo',
'devscripts',
'git',
'python-setuptools',
'reprepro',
'yum-utils',
],
){
# ensure build user exists
ensure_resource('user', $builder_user, {
'ensure' => 'present'
})
# install required packages
ensure_packages($packages)
ensure_packages($docker_package)
# ensure $builder_user is in the docker group
# the docker group will be created by the docker package
User <| title == $builder_user |> {
groups +> 'docker',
require => Package[$docker_package],
}
if ($known_hosts) {
create_resources('ssh::known_host', $known_hosts, {
user => $builder_user,
overwrite => false,
require => User[$builder_user],
})
}
}

View File

@ -1,55 +0,0 @@
# Class: fuel_project::roles::perestroika::publisher
#
# jenkins slave host for publishing packages
# see the hiera file for the list and parameters of the classes used
class fuel_project::roles::perestroika::publisher (
$gpg_content_priv,
$gpg_content_pub,
$gpg_id_priv,
$gpg_id_pub,
$gpg_pub_key_owner = 'jenkins',
$gpg_priv_key_owner = 'jenkins',
$packages = [
'createrepo',
'devscripts',
'expect',
'python-lxml',
'reprepro',
'rpm',
'yum-utils',
],
) {
ensure_packages($packages)
if( ! defined(Class['::fuel_project::jenkins::slave'])) {
class { '::fuel_project::jenkins::slave' : }
}
class { '::gnupg' : }
gnupg_key { 'perestroika_gpg_public':
ensure => 'present',
key_id => $gpg_id_pub,
user => $gpg_pub_key_owner,
key_content => $gpg_content_pub,
key_type => public,
require => [
User['jenkins'],
Class['::fuel_project::jenkins::slave'],
],
}
gnupg_key { 'perestroika_gpg_private':
ensure => 'present',
key_id => $gpg_id_priv,
user => $gpg_priv_key_owner,
key_content => $gpg_content_priv,
key_type => private,
require => [
User['jenkins'],
Class['::fuel_project::jenkins::slave'],
],
}
}

View File

@ -1,24 +0,0 @@
# Class: fuel_project::roles::storage
#
class fuel_project::roles::storage (
$iso_vault_fqdn = "iso.${::fqdn}",
) {
class { '::fuel_project::common' :}
class { '::fuel_project::apps::mirror' :}
if (!defined(Class['::fuel_project::nginx'])) {
class { '::fuel_project::nginx' :}
}
::nginx::resource::vhost { 'iso-vault' :
ensure => 'present',
www_root => '/var/www/iso-vault',
access_log => '/var/log/nginx/access.log',
error_log => '/var/log/nginx/error.log',
format_log => 'proxy',
server_name => [$iso_vault_fqdn, "iso.${::fqdn}"],
location_cfg_append => {
autoindex => 'on',
},
}
}

View File

@ -1,6 +0,0 @@
# Class: fuel_project::roles::tracker
#
class fuel_project::roles::tracker {
class { '::fuel_project::common' :}
class { '::opentracker' :}
}

View File

@ -1,6 +0,0 @@
# Class: fuel_project::roles::zabbix::proxy
#
class fuel_project::roles::zabbix::proxy {
class { '::fuel_project::common' :}
class { '::zabbix::proxy' :}
}

View File

@ -1,59 +0,0 @@
# Class: fuel_project::roles::zabbix::server
#
class fuel_project::roles::zabbix::server (
$mysql_replication_password = '',
$mysql_replication_user = 'repl',
$mysql_slave_host = undef,
$maintenance_script = '/usr/share/zabbix-server-mysql/maintenance.sh',
$maintenance_script_config = '/root/.my.cnf',
$server_role = 'master', # master || slave
$slack_emoji_ok = ':smile:',
$slack_emoji_problem = ':frowning:',
$slack_emoji_unknown = ':ghost:',
$slack_post_username = '',
$slack_web_hook_url = '',
) {
class { '::fuel_project::common' :}
class { '::zabbix::server' :}
::zabbix::server::alertscript { 'slack.sh' :
template => 'fuel_project/zabbix/slack.sh.erb',
require => Class['::zabbix::server'],
}
::zabbix::server::alertscript { 'zabbkit.sh' :
template => 'fuel_project/zabbix/zabbkit.sh.erb',
require => Class['::zabbix::server'],
}
if ($server_role == 'master' and $mysql_slave_host) {
mysql_user { "${mysql_replication_user}@${mysql_slave_host}" :
ensure => 'present',
password_hash => mysql_password($mysql_replication_password),
}
mysql_grant { "${mysql_replication_user}@${mysql_slave_host}/*.*" :
ensure => 'present',
options => ['GRANT'],
privileges => ['REPLICATION SLAVE'],
table => '*.*',
user => "${mysql_replication_user}@${mysql_slave_host}",
}
file { $maintenance_script :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0755',
content => template('fuel_project/roles/zabbix/server/maintenance.sh.erb'),
require => Class['::zabbix::server'],
}
cron { 'zabbix-maintenance' :
ensure => 'present',
command => "${maintenance_script} 2>&1 | logger -t zabbix-maintenance",
weekday => 'Wednesday',
hour => '15',
}
}
}

View File

@ -1,86 +0,0 @@
# Used for deployment of TPI lab
class fuel_project::tpi::lab (
$btsync_secret = $fuel_project::tpi::params::btsync_secret,
$sudo_commands = [ '/sbin/ebtables', '/sbin/iptables' ],
$local_home_basenames = [ 'jenkins' ],
) {
class { '::tpi::nfs_client' :
local_home_basenames => $local_home_basenames,
}
class { '::fuel_project::jenkins::slave' :
run_tests => true,
sudo_commands => $sudo_commands,
ldap => true,
build_fuel_plugins => true,
}
File<| title == 'jenkins-sudo-for-build_iso' |> {
content => template('fuel_project/tpi/jenkins-sudo-for-build_iso'),
}
class { '::tpi::vmware_lab' : }
# these packages will be installed from tpi apt repo defined in hiera
$tpi_packages = [
'linux-image-3.13.0-39-generic',
'linux-image-extra-3.13.0-39-generic',
'linux-headers-3.13.0-39',
'linux-headers-3.13.0-39-generic',
'btsync',
'sudo-ldap',
'zsh',
'most',
]
ensure_packages($tpi_packages)
service { 'btsync':
ensure => 'running',
enable => true,
require => Package['btsync'],
}
file { '/etc/default/btsync':
notify => Service['btsync'],
mode => '0600',
owner => 'btsync',
group => 'btsync',
content => template('fuel_project/tpi/btsync.erb'),
require => File['/etc/btsync/tpi.conf'],
}
file { '/etc/btsync/tpi.conf':
notify => Service['btsync'],
mode => '0600',
owner => 'btsync',
group => 'btsync',
content => template('fuel_project/tpi/tpi.conf.erb'),
require => Package['btsync'],
}
# transparent hugepage defragmentation leads to slowdowns
# in our environments (KVM + VMware Workstation), so disable it
file { '/etc/init.d/disable-hugepage-defrag':
mode => '0755',
owner => 'root',
group => 'root',
content => template('fuel_project/tpi/disable-hugepage-defrag.erb'),
}
service { 'disable-hugepage-defrag':
ensure => 'running',
enable => true,
require => File['/etc/init.d/disable-hugepage-defrag'],
}
file { '/etc/sudoers.d/tpi' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0600',
content => template('fuel_project/tpi/tpi.sudoers.d.erb'),
}
}

View File

@ -1,12 +0,0 @@
# Used for deployment of TPI puppet master
class fuel_project::tpi::puppetmaster (
$local_home_basenames= [],
) {
class { 'tpi::nfs_client' :
local_home_basenames => $local_home_basenames,
}
class { '::fuel_project::puppet::master' : }
}

View File

@ -1,34 +0,0 @@
# Used for deployment of TPI servers
class fuel_project::tpi::server (
) {
class { '::fuel_project::common' : }
class { '::jenkins::master' :}
class { 'rsync':
package_ensure => 'present',
}
if (!defined(Class['::rsync::server'])) {
class { '::rsync::server' :
gid => 'root',
uid => 'root',
use_chroot => 'yes',
use_xinetd => false,
}
}
::rsync::server::module{ 'storage':
comment => 'TPI main rsync share',
uid => 'nobody',
gid => 'nogroup',
list => 'yes',
lock_file => '/var/run/rsync_storage.lock',
max_connections => 100,
path => '/storage',
read_only => 'yes',
write_only => 'no',
incoming_chmod => false,
outgoing_chmod => false,
}
}

View File

@ -1,11 +0,0 @@
# Class: fuel_project::web
#
class fuel_project::web (
$fuel_landing_page = false,
$docs_landing_page = false,
) {
class { '::fuel_project::nginx' :}
class { '::fuel_project::common' :}
}

View File

@ -1,39 +0,0 @@
[mirror]
; The directory where the mirror data will be stored.
directory = <%= @mirror_dir %>
; The PyPI server which will be mirrored.
; master = https://testpypi.python.org
; scheme for PyPI server MUST be https
master = <%= @mirror_master %>
; The network socket timeout to use for all connections. This is set to a
; somewhat aggressively low value: it is better to fail quickly and re-run
; the client soon than to have a process hang indefinitely while TCP never
; catches up.
timeout = <%= @mirror_timeout %>
; Number of worker threads to use for parallel downloads.
; Recommendations for worker thread setting:
; - leave the default of 3 to avoid overloading the pypi master
; - official servers located in data centers could run 20 workers
; - anything beyond 50 is probably unreasonable and avoided by bandersnatch
workers = <%= @mirror_workers %>
; Whether to stop a sync quickly after an error is found or whether to continue
; syncing but not marking the sync as successful. Value should be "true" or
; "false".
stop-on-error = <%= @mirror_stop_on_error %>
; Whether or not files that have been deleted on the master should be deleted
; on the mirror, too.
; IMPORTANT: if you are running an official mirror then you *need* to leave
; this on.
delete-packages = <%= @mirror_delete_packages %>
[statistics]
; A glob pattern matching all access log files that should be processed to
; generate daily access statistics that will be aggregated on the master PyPI.
access-log-pattern = <%= @nginx_access_log %>*
; vim: set ft=cfg:

View File

@ -1,5 +0,0 @@
- from: <%= @upstream_mirror %>
to: <%= @npm_dir %>
server: http://<%= @service_fqdn %>
parallelism: <%= @parallelism %>
recheck: <%= @recheck ? 'true' : 'false' %>

View File

@ -1,4 +0,0 @@
---
- from: <%= @upstream_mirror %>
to: <%= @rubygems_dir %>
parallelism: <%= @parallelism %>

View File

@ -1,6 +0,0 @@
127.0.0.1 localhost
127.0.1.1 <%= @fqdn %> <%= @hostname %>
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

View File

@ -1,48 +0,0 @@
[remote "<%= @title %>"]
url = <%= @user %>@<%= @host %>:<%= @path %>${name}.git
<% if @admin_url != nil -%>
adminUrl = <%= @admin_url %>
<% end -%>
<% if @auth_group != nil -%>
authGroup = <%= @auth_group %>
<% end -%>
<% if @create_missing_repositories != nil -%>
createMissingRepositories = <%= @create_missing_repositories %>
<% end -%>
<% if @mirror != nil -%>
mirror = <%= @mirror %>
<% end -%>
<% if @projects != nil -%>
projects = <%= @projects %>
<% end -%>
<% if @push != nil -%>
push = <%= @push %>
<% end -%>
<% if @receivepack != nil -%>
receivepack = <%= @receivepack %>
<% end -%>
<% if @remote_name_style != nil -%>
remoteNameStyle = <%= @remote_name_style %>
<% end -%>
<% if @replicate_permissions != nil -%>
replicatePermissions = <%= @replicate_permissions %>
<% end -%>
<% if @replicate_project_deletions != nil -%>
replicateProjectDeletions = <%= @replicate_project_deletions %>
<% end -%>
<% if @replication_delay != nil -%>
replicationDelay = <%= @replication_delay %>
<% end -%>
<% if @replication_retry != nil -%>
replicationRetry = <%= @replication_retry %>
<% end -%>
<% if @timeout != nil -%>
timeout = <%= @timeout %>
<% end -%>
<% if @threads != nil -%>
threads = <%= @threads %>
<% end -%>
<% if @uploadpack != nil -%>
uploadpack = <%= @uploadpack %>
<% end -%>

View File

@ -1,3 +0,0 @@
# FIXME: https://bugs.launchpad.net/fuel/+bug/1348599
jenkins ALL=(ALL) NOPASSWD: ALL
# /FIXME

View File

@ -1,10 +0,0 @@
[general]
apiurl = <%= @osc_apiurl %>
[<%= @osc_url_primary %>]
user = <%= @osc_user_primary %>
pass = <%= @osc_pass_primary %>
[<%= @osc_url_secondary %>]
user = <%= @osc_user_secondary %>
pass = <%= @osc_pass_secondary %>

View File

@ -1,3 +0,0 @@
<% if @ldap_sudo_group and not @ldap_sudo_group.empty? -%>
<%= @ldap_sudo_group %> ALL=(ALL) NOPASSWD: ALL
<% end -%>

View File

@ -1,3 +0,0 @@
<% @sudo_commands.each {|command| -%>
jenkins ALL=(ALL) NOPASSWD: <%= command %>
<% } -%>

View File

@ -1,84 +0,0 @@
<% if @super_user -%>
user <%= @daemon_user %>;
<% end -%>
worker_processes <%= @worker_processes %>;
<% if @worker_rlimit_nofile -%>
worker_rlimit_nofile <%= @worker_rlimit_nofile %>;
<% end -%>
<% if @pid -%>
pid <%= @pid %>;
<% end -%>
error_log <%= @nginx_error_log %>;
events {
worker_connections <%= @worker_connections -%>;
<%- if @multi_accept == 'on' -%>
multi_accept on;
<%- end -%>
<%- if @events_use -%>
use <%= @events_use %>;
<%- end -%>
}
http {
include <%= @conf_dir %>/mime.types;
default_type application/octet-stream;
log_format proxy '[$time_local] $host $remote_addr "$request" $status "$http_referer" "$http_user_agent" "$http_cookie" "$http_x_forwarded_for" [proxy ($upstream_cache_status) : $upstream_addr $upstream_response_time $upstream_status ] $request_length $bytes_sent $request_time';
access_log <%= @http_access_log %>;
<% if @sendfile == 'on' -%>
sendfile on;
<%- if @http_tcp_nopush == 'on' -%>
tcp_nopush on;
<%- end -%>
<% end -%>
server_tokens <%= @server_tokens %>;
types_hash_max_size <%= @types_hash_max_size %>;
types_hash_bucket_size <%= @types_hash_bucket_size %>;
server_names_hash_bucket_size <%= @names_hash_bucket_size %>;
server_names_hash_max_size <%= @names_hash_max_size %>;
keepalive_timeout <%= @keepalive_timeout %>;
tcp_nodelay <%= @http_tcp_nodelay %>;
<% if @gzip == 'on' -%>
gzip on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_types application/json application/x-javascript application/yaml text/css;
<% end -%>
<% if @proxy_cache_path -%>
proxy_cache_path <%= @proxy_cache_path %> levels=<%= @proxy_cache_levels %> keys_zone=<%= @proxy_cache_keys_zone %> max_size=<%= @proxy_cache_max_size %> inactive=<%= @proxy_cache_inactive %>;
<% end -%>
<% if @fastcgi_cache_path -%>
fastcgi_cache_path <%= @fastcgi_cache_path %> levels=<%= @fastcgi_cache_levels %> keys_zone=<%= @fastcgi_cache_keys_zone %> max_size=<%= @fastcgi_cache_max_size %> inactive=<%= @fastcgi_cache_inactive %>;
<% end -%>
<% if @fastcgi_cache_key -%>
fastcgi_cache_key <%= @fastcgi_cache_key %>;
<% end -%>
<% if @fastcgi_cache_use_stale -%>
fastcgi_cache_use_stale <%= @fastcgi_cache_use_stale %>;
<% end -%>
<% if @http_cfg_append -%>
<%- field_width = @http_cfg_append.inject(0) { |l,(k,v)| k.size > l ? k.size : l } -%>
<%- @http_cfg_append.sort_by{|k,v| k}.each do |key,value| -%>
<%= sprintf("%-*s", field_width, key) %> <%= value %>;
<%- end -%>
<% end -%>
include <%= @conf_dir %>/conf.d/*.conf;
include <%= @conf_dir %>/sites-enabled/*;
}
<% if @mail -%>
mail {
include <%= @conf_dir %>/conf.mail.d/*.conf;
}
<% end -%>

View File

@ -1,47 +0,0 @@
#!/bin/bash
set -e
export BRANCH=${BRANCH:-<%= @manifests_branch %>}
export TMPDIR=${TMPDIR:-<%= @manifests_tmpdir %>}
export REPO=${REPO:-<%= @manifests_repo %>}
export BINPATH=${BINPATH:-<%= @manifests_binpath %>}
export MANIFESTSPATH=${MANIFESTSPATH:-<%= @manifests_manifestspath %>}
export MODULESPATH=${MODULESPATH:-<%= @manifests_modulespath %>}
(
flock -n 9 || exit 1
echo "Clean up..."
rm -rf "${TMPDIR}"
echo "Cloning..."
git clone "${REPO}" "${TMPDIR}"
cd "${TMPDIR}"
git checkout "${BRANCH}"
REVISION=`git log -1 HEAD | fgrep commit | awk '{print $NF}'`
PREV_REVISION=`cat /tmp/puppet-manifests-revision.txt 2>/dev/null || echo -n none`
echo -n "${REVISION}" > /tmp/puppet-manifests-revision.txt
echo "Revision: \$Id: ${REVISION} \$"
echo "Previous revision: ${PREV_REVISION}"
if [[ "${REVISION}" == "${PREV_REVISION}" ]]; then
echo "No updates found."
exit 0
fi
sed -i 's~\$Id\$~\$Id: '${REVISION}' \$~' "${TMPDIR}/manifests/site.pp"
echo "Linking..."
rm -rf "${BINPATH}"
rm -rf "${MANIFESTSPATH}"
rm -rf "${MODULESPATH}"
mv "${TMPDIR}/bin" "${BINPATH}"
mv "${TMPDIR}/manifests" "${MANIFESTSPATH}"
mv "${TMPDIR}/modules" "${MODULESPATH}"
echo "Running modules update"
${BINPATH}/install_modules.sh
) 9>/var/lock/puppet-manifests-update.lock

View File

@ -1,30 +0,0 @@
# This is the configuration file for /etc/init.d/btsync
#
# Start only these btsync instances automatically via
# init script.
# Allowed values are "all", "none" or a space separated list of
# names of the instances. If empty, "all" is assumed.
#
# The instance name refers to the btsync configuration file name.
# i.e. "general" would be /etc/btsync/general.conf
#
#AUTOSTART="all"
#AUTOSTART="none"
#AUTOSTART="general special"
AUTOSTART="tpi"
#
# Optional arguments to btsync's command line. Be careful!
# You should only add things here if you know EXACTLY what
# you are doing!
DAEMON_ARGS=""
#
# Optional bind address for all daemons. This setting can
# be overridden for each instance by specifying the
# parameter as a comment in the configuration file.
# NOTICE: this will only work if bind-shim is installed
#DAEMON_BIND=10.20.30.40
#
#
# Enable this to see more output during the init script
# execution
#DAEMON_INIT_DEBUG=1

View File

@ -1,26 +0,0 @@
#! /bin/sh
. /lib/lsb/init-functions
set -e
case "$1" in
start)
echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag
;;
stop)
echo 'always' > /sys/kernel/mm/transparent_hugepage/defrag
;;
status)
grep -q "\[never\]" /sys/kernel/mm/transparent_hugepage/defrag || ( echo "hugepage defrag is ON" && exit 1 )
log_success_msg "hugepage defrag is OFF"
exit 0
;;
*)
log_action_msg "Usage: $0 {start|stop|status}"
exit 1
;;
esac
exit 0

View File

@ -1,2 +0,0 @@
Defaults!/btsync/build_iso.sh setenv
jenkins ALL=(ALL) NOPASSWD: /btsync/build_iso.sh

View File

@ -1,17 +0,0 @@
{
"storage_path" : "/var/lib/btsync/",
"check_for_updates" : false,
"shared_folders" :
[
{
"secret" : "<%= @btsync_secret %>",
"dir" : "/btsync/",
"use_relay_server" : true,
"use_tracker" : true,
"use_dht" : false,
"search_lan" : true,
"use_sync_trash" : false,
"overwrite_changes" : false
}
]
}

View File

@ -1 +0,0 @@
%partner-ecosystem-all ALL=(ALL) NOPASSWD: <%= @sudo_commands.join(',') %>

View File

@ -1,362 +0,0 @@
import hudson.model.Item
import hudson.model.Computer
import hudson.model.Hudson
import hudson.model.Run
import hudson.model.View
import hudson.security.GlobalMatrixAuthorizationStrategy
import hudson.security.AuthorizationStrategy
import hudson.security.Permission
import hudson.tasks.Shell
import jenkins.model.Jenkins
import jenkins.model.JenkinsLocationConfiguration
import jenkins.security.s2m.AdminWhitelistRule
import com.cloudbees.plugins.credentials.CredentialsMatchers
import com.cloudbees.plugins.credentials.CredentialsProvider
import com.cloudbees.plugins.credentials.common.StandardUsernameCredentials
import com.cloudbees.plugins.credentials.domains.SchemeRequirement
import com.cloudbees.jenkins.plugins.sshcredentials.impl.BasicSSHUserPrivateKey
import com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl
import com.cloudbees.plugins.credentials.domains.Domain
import com.cloudbees.plugins.credentials.CredentialsScope
class InvalidAuthenticationStrategy extends Exception{}
class InvalidUserCredentials extends Exception{}
class InvalidUser extends Exception{}
class Actions {
Actions(out) { this.out = out }
def out
///////////////////////////////////////////////////////////////////////////////
// this is -> setup_shell
///////////////////////////////////////////////////////////////////////////////
//
// Allows setting a specific shell on the Jenkins master instance
//
void setup_shell(String shell) {
def shl = new Shell.DescriptorImpl()
shl.setShell(shell)
shl.save()
}
///////////////////////////////////////////////////////////////////////////////
// this is -> setup_email_adm
///////////////////////////////////////////////////////////////////////////////
//
// Allows setting the admin email address on the Jenkins master instance
//
void setup_email_adm(String email) {
def loc = JenkinsLocationConfiguration.get()
loc.setAdminAddress(email)
loc.save()
}
///////////////////////////////////////////////////////////////////////////////
// this is -> enable_slave_to_master_acl
///////////////////////////////////////////////////////////////////////////////
//
// Allows enabling or disabling the slave-to-master access control feature
//
void enable_slave_to_master_acl(String act) {
def s2m = new AdminWhitelistRule()
if(act == "true") {
// for 'enabled' state we need to pass 'false'
s2m.setMasterKillSwitch(false)
}
if(act == "false") {
s2m.setMasterKillSwitch(true)
}
// requires Jenkins restart
Hudson.instance.safeRestart()
}
///////////////////////////////////////////////////////////////////////////////
// this is -> cred_for_user
///////////////////////////////////////////////////////////////////////////////
//
// Helper for retrieving a user's credentials
//
private cred_for_user(String user) {
def user_match = CredentialsMatchers.withUsername(user)
def available_cred = CredentialsProvider.lookupCredentials(
StandardUsernameCredentials.class,
Jenkins.getInstance(),
hudson.security.ACL.SYSTEM,
new SchemeRequirement("ssh")
)
return CredentialsMatchers.firstOrNull(available_cred, user_match)
}
///////////////////////////////////////////////////////////////////////////////
// this is -> user_info
///////////////////////////////////////////////////////////////////////////////
//
// Prints everything it can about the user
//
void user_info(String user) {
def get_user = hudson.model.User.get(user, false)
if(get_user == null) {
throw new InvalidUser()
}
def user_id = get_user.getId()
def name = get_user.getFullName()
def email_addr = null
def email_property = get_user.getProperty(hudson.tasks.Mailer.UserProperty)
if(email_property != null) {
email_addr = email_property.getAddress()
}
def ssh_keys = null
def ssh_keys_property = get_user.getProperty(org.jenkinsci.main.modules.cli.auth.ssh.UserPropertyImpl)
if(ssh_keys_property != null) {
ssh_keys = ssh_keys_property.authorizedKeys.split('\\s+')
}
def token = null
def api_token_property = get_user.getProperty(jenkins.security.ApiTokenProperty.class)
if (api_token_property != null) {
token = api_token_property.getApiToken()
}
def joutput = new groovy.json.JsonBuilder()
joutput {
id user_id
full_name name
email email_addr
api_token token
public_keys ssh_keys
}
// outputs the user's details in JSON format
out.println(joutput)
}
///////////////////////////////////////////////////////////////////////////////
// this is -> create credentials
///////////////////////////////////////////////////////////////////////////////
//
// Sets up (or updates) credentials for a particular user
//
void create_update_cred(String user, String passwd, String descr=null, String priv_key=null) {
def global_domain = Domain.global()
def cred_store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()
def cred
if(priv_key==null) {
cred = new UsernamePasswordCredentialsImpl(CredentialsScope.GLOBAL, null, descr, user, passwd)
} else {
def key_src
if (priv_key.startsWith('-----BEGIN')) {
key_src = new BasicSSHUserPrivateKey.DirectEntryPrivateKeySource(priv_key)
} else {
key_src = new BasicSSHUserPrivateKey.FileOnMasterPrivateKeySource(priv_key)
}
cred = new BasicSSHUserPrivateKey(CredentialsScope.GLOBAL, null, user, key_src, passwd, descr)
}
def current_cred = cred_for_user(user)
if (current_cred != null) {
cred_store.updateCredentials(global_domain, current_cred, cred)
} else {
cred_store.addCredentials(global_domain, cred)
}
}
///////////////////////////////////////////////////////////////////////////////
// this -> is create_update_user
///////////////////////////////////////////////////////////////////////////////
//
// Creates or updates a user
//
void create_update_user(String user, String email, String passwd=null, String name=null, String pub_keys=null) {
def set_user = hudson.model.User.get(user)
set_user.setFullName(name)
def email_property = new hudson.tasks.Mailer.UserProperty(email)
set_user.addProperty(email_property)
def pw_details = hudson.security.HudsonPrivateSecurityRealm.Details.fromPlainPassword(passwd)
set_user.addProperty(pw_details)
if (pub_keys != null && pub_keys !="") {
def ssh_keys_property = new org.jenkinsci.main.modules.cli.auth.ssh.UserPropertyImpl(pub_keys)
set_user.addProperty(ssh_keys_property)
}
set_user.save()
}
///////////////////////////////////////////////////////////////////////////////
// this -> is del_user
///////////////////////////////////////////////////////////////////////////////
//
// Deletes a user
//
void del_user(String user) {
def rm_user = hudson.model.User.get(user, false)
if (rm_user != null) {
rm_user.delete()
}
}
///////////////////////////////////////////////////////////////////////////////
// this -> is del_credentials
///////////////////////////////////////////////////////////////////////////////
//
// Deletes the credentials for a particular user
//
void del_cred(String user) {
def current_cred = cred_for_user(user)
if(current_cred != null) {
def global_domain = com.cloudbees.plugins.credentials.domains.Domain.global()
def cred_store = Jenkins.instance.getExtensionList('com.cloudbees.plugins.credentials.SystemCredentialsProvider')[0].getStore()
cred_store.removeCredentials(global_domain, current_cred)
}
}
///////////////////////////////////////////////////////////////////////////////
// this is -> cred_info
///////////////////////////////////////////////////////////////////////////////
//
// Retrieves current credentials for a user
//
void cred_info(String user) {
def cred = cred_for_user(user)
if(cred == null) {
throw new InvalidUserCredentials()
}
def current_cred = [ id:cred.id, description:cred.description, username:cred.username ]
if ( cred.hasProperty('password') ) {
current_cred['password'] = cred.password.plainText
} else {
current_cred['private_key'] = cred.privateKey
current_cred['passphrase'] = cred.passphrase.plainText
}
def joutput = new groovy.json.JsonBuilder(current_cred)
// output in json format.
out.println(joutput)
}
///////////////////////////////////////////////////////////////////////////////
// this -> is set_security
///////////////////////////////////////////////////////////////////////////////
//
// Sets up security for the Jenkins Master instance.
//
void set_security_ldap(
String overwrite_permissions=null,
String item_perms=null,
String server=null,
String rootDN=null,
String userSearch=null,
String inhibitInferRootDN=null,
String userSearchBase=null,
String groupSearchBase=null,
String managerDN=null,
String managerPassword=null,
String ldapuser,
String email=null,
String password,
String name=null,
String pub_keys=null,
String s2m_acl=null
) {
if (inhibitInferRootDN==null) {
inhibitInferRootDN = false
}
def instance = Jenkins.getInstance()
def strategy
def realm
List users = item_perms.split(' ')
if (!(instance.getAuthorizationStrategy() instanceof hudson.security.GlobalMatrixAuthorizationStrategy)) {
overwrite_permissions = 'true'
}
create_update_user(ldapuser, email, password, name, pub_keys)
strategy = new hudson.security.GlobalMatrixAuthorizationStrategy()
for (String user : users) {
for (Permission p : Item.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
for (Permission p : Computer.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
for (Permission p : Hudson.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
for (Permission p : Run.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
for (Permission p : View.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
}
realm = new hudson.security.LDAPSecurityRealm(
server, rootDN, userSearchBase, userSearch, groupSearchBase, managerDN, managerPassword, inhibitInferRootDN.toBoolean()
)
// apply new strategy&realm
if (overwrite_permissions == 'true') {
instance.setAuthorizationStrategy(strategy)
}
instance.setSecurityRealm(realm)
// commit new settings permanently (in config.xml)
instance.save()
// now setup s2m if requested
if(s2m_acl != null) {
enable_slave_to_master_acl(s2m_acl)
}
}
///////////////////////////////////////////////////////////////////////////////
// this -> is set_unsecured
///////////////////////////////////////////////////////////////////////////////
//
// Removes all access control: any user, including anonymous, gets full access
//
void set_unsecured() {
def instance = Jenkins.getInstance()
def strategy
def realm
strategy = new hudson.security.AuthorizationStrategy.Unsecured()
realm = new hudson.security.HudsonPrivateSecurityRealm(false, false, null)
instance.setAuthorizationStrategy(strategy)
instance.setSecurityRealm(realm)
instance.save()
}
///////////////////////////////////////////////////////////////////////////////
// this -> is set_security_password
///////////////////////////////////////////////////////////////////////////////
//
// Sets up password-based security backed by the Jenkins internal user database
//
void set_security_password(String user, String email, String password, String name=null, String pub_keys=null, String s2m_acl=null) {
def instance = Jenkins.getInstance()
def overwrite_permissions
def strategy
def realm
strategy = new hudson.security.GlobalMatrixAuthorizationStrategy()
if (!(instance.getAuthorizationStrategy() instanceof hudson.security.GlobalMatrixAuthorizationStrategy)) {
overwrite_permissions = 'true'
}
create_update_user(user, email, password, name, pub_keys)
for (Permission p : Item.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
for (Permission p : Computer.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
for (Permission p : Hudson.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
for (Permission p : Run.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
for (Permission p : View.PERMISSIONS.getPermissions()) {
strategy.add(p,user)
}
realm = new hudson.security.HudsonPrivateSecurityRealm(false)
// apply new strategy&realm
if (overwrite_permissions == 'true') {
instance.setAuthorizationStrategy(strategy)
instance.setSecurityRealm(realm)
}
// commit new settings permanently (in config.xml)
instance.save()
// now setup s2m if requested
if(s2m_acl != null) {
enable_slave_to_master_acl(s2m_acl)
}
}
}
///////////////////////////////////////////////////////////////////////////////
// CLI Argument Processing
///////////////////////////////////////////////////////////////////////////////
actions = new Actions(out)
action = args[0]
if (args.length < 2) {
actions."$action"()
} else {
actions."$action"(*args[1..-1])
}


@ -1,312 +0,0 @@
# Class: jenkins::master
#
class jenkins::master (
$service_fqdn = $::fqdn,
# Firewall access
$apply_firewall_rules = false,
$firewall_allow_sources = [],
# Nginx parameters
# Jenkins user keys
$jenkins_ssh_private_key_contents = '',
$jenkins_ssh_public_key_contents = '',
$ssl_cert_file = $::jenkins::params::ssl_cert_file,
$ssl_cert_file_contents = $::jenkins::params::ssl_cert_file_contents,
$ssl_key_file = '/etc/ssl/jenkins.key',
$ssl_key_file_contents = '',
# Jenkins config parameters
$install_label_dumper = false,
$install_zabbix_item = false,
$jenkins_address = '0.0.0.0',
$jenkins_java_args = '',
$jenkins_port = '8080',
$jenkins_proto = 'http',
$label_dumper_nginx_location = '/labels',
$nginx_access_log = '/var/log/nginx/access.log',
$nginx_error_log = '/var/log/nginx/error.log',
$www_root = '/var/www',
# Jenkins auth
$install_groovy = 'yes',
$jenkins_cli_file = '/var/cache/jenkins/war/WEB-INF/jenkins-cli.jar',
$jenkins_cli_tries = '6',
$jenkins_cli_try_sleep = '30',
$jenkins_libdir = '/var/lib/jenkins',
$jenkins_management_email = '',
$jenkins_management_login = '',
$jenkins_management_name = '',
$jenkins_management_password = '',
$jenkins_s2m_acl = true,
$ldap_access_group = '',
$ldap_group_search_base = '',
$ldap_inhibit_root_dn = 'no',
$ldap_manager = '',
$ldap_manager_passwd = '',
$ldap_overwrite_permissions = '',
$ldap_root_dn = 'dc=company,dc=net',
$ldap_uri = 'ldap://ldap',
$ldap_user_search = 'uid={0}',
$ldap_user_search_base = '',
$security_model = 'unsecured',
) inherits ::jenkins::params{
# Install base packages
package { 'openjdk-7-jre-headless':
ensure => present,
}
package { 'openjdk-6-jre-headless':
ensure => purged,
require => Package['openjdk-7-jre-headless'],
}
if($install_groovy) {
package { 'groovy' :
ensure => present,
}
}
package { 'jenkins' :
ensure => present,
}
service { 'jenkins' :
ensure => 'running',
enable => true,
hasstatus => true,
hasrestart => false,
}
Package['openjdk-7-jre-headless'] ~>
Package['jenkins'] ~>
Service['jenkins']
file { '/etc/default/jenkins':
ensure => present,
mode => '0644',
content => template('jenkins/jenkins.erb'),
require => Package['jenkins'],
}
ensure_resource('user', 'jenkins', {
ensure => 'present',
home => $jenkins_libdir,
managehome => true,
})
file { "${jenkins_libdir}/.ssh/" :
ensure => directory,
owner => 'jenkins',
group => 'jenkins',
mode => '0700',
require => User['jenkins'],
}
file { "${jenkins_libdir}/.ssh/id_rsa" :
owner => 'jenkins',
group => 'jenkins',
mode => '0600',
content => $jenkins_ssh_private_key_contents,
replace => true,
require => File["${jenkins_libdir}/.ssh/"],
}
file { "${jenkins_libdir}/.ssh/id_rsa.pub" :
owner => 'jenkins',
group => 'jenkins',
mode => '0644',
content => "${jenkins_ssh_public_key_contents} jenkins@${::fqdn}",
replace => true,
require => File["${jenkins_libdir}/.ssh/"],
}
ensure_resource('file', $www_root, {'ensure' => 'directory' })
# Setup nginx
if (!defined(Class['::nginx'])) {
class { '::nginx' :}
}
::nginx::resource::vhost { 'jenkins-http' :
ensure => 'present',
listen_port => 80,
server_name => [$service_fqdn, $::fqdn],
www_root => $www_root,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
location_cfg_append => {
return => "301 https://${service_fqdn}\$request_uri",
},
}
::nginx::resource::vhost { 'jenkins' :
ensure => 'present',
listen_port => 443,
server_name => [$service_fqdn, $::fqdn],
ssl => true,
ssl_cert => $ssl_cert_file,
ssl_key => $ssl_key_file,
ssl_cache => 'shared:SSL:10m',
ssl_session_timeout => '10m',
ssl_stapling => true,
ssl_stapling_verify => true,
proxy => 'http://127.0.0.1:8080',
proxy_read_timeout => 120,
access_log => $nginx_access_log,
error_log => $nginx_error_log,
location_cfg_append => {
client_max_body_size => '8G',
proxy_redirect => 'off',
proxy_set_header => {
'X-Forwarded-For' => '$remote_addr',
'X-Forwarded-Proto' => 'https',
'X-Real-IP' => '$remote_addr',
'Host' => '$host',
},
},
}
if $ssl_cert_file_contents != '' {
file { $ssl_cert_file:
owner => 'root',
group => 'root',
mode => '0400',
content => $ssl_cert_file_contents,
before => Nginx::Resource::Vhost['jenkins'],
}
}
if $ssl_key_file_contents != '' {
file { $ssl_key_file:
owner => 'root',
group => 'root',
mode => '0400',
content => $ssl_key_file_contents,
before => Nginx::Resource::Vhost['jenkins'],
}
}
if($install_zabbix_item) {
file { '/usr/local/bin/jenkins_items.py' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0755',
content => template('jenkins/jenkins_items.py.erb'),
}
::zabbix::item { 'jenkins' :
template => 'jenkins/zabbix_item.conf.erb',
require => File['/usr/local/bin/jenkins_items.py'],
}
}
if($install_label_dumper) {
file { '/usr/local/bin/labeldump.py' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0700',
content => template('jenkins/labeldump.py.erb'),
}
cron { 'labeldump-cronjob' :
command => '/usr/bin/test -f /tmp/jenkins.is.fine && /usr/local/bin/labeldump.py 2>&1 | logger -t labeldump',
user => 'root',
hour => '*',
minute => '*/30',
require => File['/usr/local/bin/labeldump.py'],
}
file { "${www_root}${label_dumper_nginx_location}" :
ensure => 'directory',
owner => 'root',
group => 'root',
mode => '0755',
}
::nginx::resource::location { 'labels' :
ensure => 'present',
ssl => true,
ssl_only => true,
location => $label_dumper_nginx_location,
vhost => 'jenkins',
www_root => $www_root,
}
}
if $apply_firewall_rules {
include firewall_defaults::pre
create_resources(firewall, $firewall_allow_sources, {
dport => [80, 443],
action => 'accept',
require => Class['firewall_defaults::pre'],
})
}
# Prepare groovy script
file { "${jenkins_libdir}/jenkins_cli.groovy":
ensure => present,
source => ('puppet:///modules/jenkins/jenkins_cli.groovy'),
owner => 'root',
group => 'root',
mode => '0644',
require => Package['groovy'],
}
if $security_model == 'unsecured' {
$security_opt_params = 'set_unsecured'
}
if $security_model == 'ldap' {
$security_opt_params = join([
'set_security_ldap',
"'${ldap_overwrite_permissions}'",
"'${ldap_access_group}'",
"'${ldap_uri}'",
"'${ldap_root_dn}'",
"'${ldap_user_search}'",
"'${ldap_inhibit_root_dn}'",
"'${ldap_user_search_base}'",
"'${ldap_group_search_base}'",
"'${ldap_manager}'",
"'${ldap_manager_passwd}'",
"'${jenkins_management_login}'",
"'${jenkins_management_email}'",
"'${jenkins_management_password}'",
"'${jenkins_management_name}'",
"'${jenkins_ssh_public_key_contents}'",
"'${jenkins_s2m_acl}'",
], ' ')
}
if $security_model == 'password' {
$security_opt_params = join([
'set_security_password',
"'${jenkins_management_login}'",
"'${jenkins_management_email}'",
"'${jenkins_management_password}'",
"'${jenkins_management_name}'",
"'${jenkins_ssh_public_key_contents}'",
"'${jenkins_s2m_acl}'",
], ' ')
}
# Execute groovy script to setup auth
exec { 'jenkins_auth_config':
require => [
File["${jenkins_libdir}/jenkins_cli.groovy"],
Package['groovy'],
Service['jenkins'],
],
command => join([
'/usr/bin/java',
"-jar ${jenkins_cli_file}",
"-s ${jenkins_proto}://${jenkins_address}:${jenkins_port}",
"groovy ${jenkins_libdir}/jenkins_cli.groovy",
$security_opt_params,
], ' '),
tries => $jenkins_cli_tries,
try_sleep => $jenkins_cli_try_sleep,
user => 'jenkins',
}
}
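For reference, a minimal declaration of this class using the built-in password security model could look like the following; every value shown is a placeholder, and in the original CI these parameters were supplied from the site manifests rather than hard-coded::

    # Sketch only: all values below are placeholders.
    class { '::jenkins::master' :
      service_fqdn                => 'jenkins.example.com',
      security_model              => 'password',
      jenkins_management_login    => 'admin',
      jenkins_management_email    => 'admin@example.com',
      jenkins_management_name     => 'Jenkins Admin',
      jenkins_management_password => 'CHANGE_ME',
    }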


@ -1,35 +0,0 @@
# Class: jenkins::params
#
class jenkins::params {
$slave_authorized_keys = {}
$slave_java_package = 'openjdk-7-jre-headless'
$swarm_labels = ''
$swarm_master = ''
$swarm_user = ''
$swarm_password = ''
$swarm_package = 'jenkins-swarm-slave'
$swarm_service = 'jenkins-swarm-slave'
$ssl_cert_file = '/etc/ssl/jenkins.crt'
$ssl_cert_file_contents = '-----BEGIN CERTIFICATE-----
MIIDPjCCAiYCCQCiiKl1cghBuDANBgkqhkiG9w0BAQUFADBhMQswCQYDVQQGEwJB
VTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0
cyBQdHkgTHRkMRowGAYDVQQDDBFzZXJ2ZXIudGVzdC5sb2NhbDAeFw0xNDA4MTEy
MDAzNTVaFw0xNTA4MTEyMDAzNTVaMGExCzAJBgNVBAYTAkFVMRMwEQYDVQQIDApT
b21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQxGjAY
BgNVBAMMEXNlcnZlci50ZXN0LmxvY2FsMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAtjgfpwJsZBz4bY/QgV69S1P5+54d+1lAxmoWDe++US3EdzVGWR8+
oaHO4crxDztTyOTJtstC/MkRCZngxUZdcpJz9T3v70pT/tfu95fVJaX1fLjcCdw3
wD1IMcGIzjtMzDLujYoeM7mdjR9ixzN5WiQFnngQkwqeEsQltV3jJSqn3U+P093l
6eVnSdfhLT+q8mvWYr8fhx4el3SHyk8qRomYj4gnsEJ3dGtjDNF9CI7XL2pZXVeD
PZZQKI5Fc3tYnSAUbvXO7fdeAn2QLG0kntkPLPGGAJspr8Vq3Ic/glbPBj6BsdYj
0jOMwWRYpCV2lk/CFZN03BUx6JPQMxbWHQIDAQABMA0GCSqGSIb3DQEBBQUAA4IB
AQBlKH2m2AdkmASM1Q7J/LA0NnanqUvy4n+zhYb8NarOLEHG+OzLBLyW/y51X3cb
0IOzHHupA3cu38TuXnIGnoT/M3QsKKKz8smthHLvb7RPiVkJNYMLm8ZJlX2uCQSu
rN4ikYHut6bElAf2yZDOiLhDhhFhIwQTj1vm+4gmYFnexcHylLvRY3ulkN/MccXr
NyObrYJYR4jB5C+S9rCTN7gU7jX6fCD2NoY5DGdpBkSNvnSIWDPftRExLkMC4vvs
hrL5z+KEJjQEQJMMQFgdt1kDeLcnFmZl3sqhRFs0/2alyRmxTxkrUtLn3z68RsZy
gDKsvK5qpm7hWt35IVL3nZsZ
-----END CERTIFICATE-----'
}


@ -1,29 +0,0 @@
# Class: jenkins::slave
#
class jenkins::slave (
$java_package = $::jenkins::params::slave_java_package,
$authorized_keys = $::jenkins::params::slave_authorized_keys,
) inherits ::jenkins::params {
ensure_packages([$java_package])
if (!defined(User['jenkins'])) {
user { 'jenkins' :
ensure => 'present',
name => 'jenkins',
shell => '/bin/bash',
home => '/home/jenkins',
managehome => true,
system => true,
comment => 'Jenkins',
}
}
create_resources(ssh_authorized_key, $authorized_keys, {
ensure => 'present',
user => 'jenkins',
require => [
User['jenkins'],
Package[$java_package],
],
})
}
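The authorized_keys hash is passed straight to create_resources(ssh_authorized_key, ...), so each hash key becomes the title of an ssh_authorized_key resource and each value its attributes. A minimal sketch, with a placeholder key title and key material::

    # Sketch only: the key title and key material are placeholders.
    class { '::jenkins::slave' :
      authorized_keys => {
        'jenkins@master' => {
          'type' => 'ssh-rsa',
          'key'  => 'AAAAB3Nz...',
        },
      },
    }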


@ -1,104 +0,0 @@
# Class: jenkins::swarm_slave
#
class jenkins::swarm_slave (
$cache_labels = false,
$java_package = $::jenkins::params::slave_java_package,
$jenkins_fetchlabels_url = '',
$labels = $::jenkins::params::swarm_labels,
$master = $::jenkins::params::swarm_master,
$package = $::jenkins::params::swarm_package,
$password = $::jenkins::params::swarm_password,
$service = $::jenkins::params::swarm_service,
$ssl_cert_file = $::jenkins::params::ssl_cert_file,
$ssl_cert_file_contents = $::jenkins::params::ssl_cert_file_contents,
$swarm_service = $::jenkins::params::swarm_service,
$user = $::jenkins::params::swarm_user,
) inherits ::jenkins::params{
if (!defined(User['jenkins'])) {
user { 'jenkins' :
ensure => 'present',
name => 'jenkins',
shell => '/bin/bash',
home => '/home/jenkins',
managehome => true,
system => true,
comment => 'Jenkins',
}
}
if($cache_labels == true) {
file { '/usr/local/bin/fetchlabels.sh' :
ensure => 'present',
owner => 'root',
group => 'root',
mode => '0755',
content => template('jenkins/fetchlabels.sh.erb'),
}
cron { 'fetchlabels' :
command => '/usr/local/bin/fetchlabels.sh 2>&1 | logger -t fetchlabels',
user => 'root',
hour => '*',
minute => '*/30',
require => File['/usr/local/bin/fetchlabels.sh'],
}
}
if (!defined(Package[$package])) {
package { $package :
ensure => 'present',
require => User['jenkins'],
}
}
if (!defined(Package[$java_package])) {
package { $java_package :
ensure => 'present',
}
}
file { '/etc/default/jenkins-swarm-slave' :
ensure => 'present',
owner => 'jenkins',
group => 'root',
mode => '0440',
content => template('jenkins/swarm_slave.conf.erb'),
require => [
Package[$package],
User['jenkins'],
],
notify => Service[$service],
}
service { $service :
ensure => 'running',
enable => true,
hasstatus => true,
hasrestart => false,
}
if $ssl_cert_file_contents != '' {
file { $ssl_cert_file:
owner => 'root',
group => 'root',
mode => '0400',
content => $ssl_cert_file_contents,
}
java_ks { 'jenkins-cert:/etc/ssl/certs/java/cacerts':
ensure => latest,
certificate => $ssl_cert_file,
password => 'changeit',
trustcacerts => true,
require => [
File[$ssl_cert_file],
Package[$java_package],
],
notify => Service[$swarm_service],
}
}
}
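A swarm slave only needs to know where the master is and which credentials and labels to register with; a minimal declaration might look like this (all values are placeholders)::

    # Sketch only: master URL, credentials and labels are placeholders.
    class { '::jenkins::swarm_slave' :
      master   => 'https://jenkins.example.com/',
      user     => 'jenkins-swarm',
      password => 'CHANGE_ME',
      labels   => 'fuel-plugins ubuntu',
    }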
