Retire repository

Fuel (from openstack namespace) and fuel-ccp (in x namespace)
repositories are unused and ready to retire.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011647.html

Depends-On: https://review.opendev.org/699362
Change-Id: Icaff65f27554a379e9130bf9354b27ce0eb3b2a6
Andreas Jaeger 2019-12-18 09:47:51 +01:00
parent c0ca5834c2
commit 5e717e6568
102 changed files with 8 additions and 5096 deletions

.gitignore vendored

@ -1,67 +0,0 @@
*.py[cod]
# C extensions
*.so
# Artifacts
utils/packer/packer_cache/*
utils/packer/output*
*.box
*.qcow2
# Packages
*.egg*
*.egg-info
dist
build
eggs
parts
bin
var
sdist
develop-eggs
.installed.cfg
lib
lib64
# Installer logs
pip-log.txt
# Unit test / coverage reports
cover/
.coverage*
!.coveragerc
.tox
nosetests.xml
.testrepository
.venv
.vagrant/
vagrant-settings.yaml
# Translations
*.mo
# Mr Developer
.mr.developer.cfg
.project
.pydevproject
# Complexity
output/*.html
output/*/index.html
# Sphinx
doc/build
publish-doc
# pbr generates these
AUTHORS
ChangeLog
# Editors
*~
.*.swp
.*sw?
# Dev env deploy leftovers
VLAN_IPS


@ -1,39 +1,10 @@
Express Fuel CCP Kubernetes deployment using Kargo
--------------------------------------------------
This project is no longer maintained.
Deploy Kubernetes on pre-provisioned virtual or bare metal hosts
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
This project leverages [Kargo](https://github.com/kubespray/kargo) to deploy
Kubernetes with Calico networking plugin.
There are four ways to deploy:
* Preprovisioned list of hosts
* Precreated Ansible inventory
* Vagrant
* [fuel-devops](https://github.com/openstack/fuel-devops)
Preprovisioned list of hosts
----------------------------
See [Quickstart guide](doc/source/quickstart.rst)
Precreated Ansible inventory
----------------------------
See [Generating Ansible Inventory](doc/source/generate-inventory.rst)
Vagrant
-------
Vagrant support is limited at this time. Try it and report bugs if you see any!
Using VirtualBox
================
::
vagrant up --provider virtualbox
Using Libvirt
=============
See [Vagrant libvirt guide](doc/source/vagrant.rst)
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.


@ -1,40 +0,0 @@
Fuel CCP Installer release process
----------------------------------
In order to tag a release of MCP Installer, the following items must be set:
* Pinned Fuel-devops version
* Pinned Kargo commit
* Pinned Docker version
* Pinned Calico version
* Pinned Kubernetes hyperkube image
* Pinned etcd version
* Pinned Ansible version
List of items to check off before tagging release:
* CI passes (pep8, pytest, docs, and functional test)
* HA destructive test:
* disable first master, restart kubelet on all nodes, check kubectl get nodes,
restart first master, check kubectl get nodes
* QA signoff
* fuel-devops version
* test coverage is green
* CCP team signoff
* Fuel-ccp deployment succeeds and contains no blocker bugs
Estimated timeline: 3 days to propose release and obtain signoff.
Technical Details
^^^^^^^^^^^^^^^^^
Master branch of fuel-ccp-installer will principally point to “master” commit
of Kargo, except during release time.
Release manager will create a commit to fuel-ccp-installer, pinning
KARGO_COMMIT to either a tagged release of Kargo or a specific commit.
After all signoffs are obtained, the commit will be merged.
Release manager will create a tag in Gerrit for the new release.
Release manager creates another commit to un-pin KARGO_COMMIT back to “master”
gitref.

Vagrantfile vendored

@ -1,157 +0,0 @@
# -*- mode: ruby -*-
# # vi: set ft=ruby :
require 'yaml'
Vagrant.require_version ">= 1.8.0"
defaults_cfg = YAML.load_file('vagrant-settings.yaml_defaults')
if File.exist?('vagrant-settings.yaml')
custom_cfg = YAML.load_file('vagrant-settings.yaml')
cfg = defaults_cfg.merge(custom_cfg)
else
cfg = defaults_cfg
end
# Defaults for config options
$num_instances = (cfg['num_instances'] || 3).to_i
$kargo_node_index = (cfg['kargo_node_index'] || 1).to_i
$instance_name_prefix = cfg['instance_name_prefix'] || "node"
$vm_gui = cfg['vm_gui'] || false
$vm_memory = (cfg['vm_memory'] || 2048).to_i
$vm_cpus = (cfg['vm_cpus'] || 1).to_i
$vm_cpu_limit = (cfg['vm_cpu_limit'] || 1).to_i
$vm_linked_clone = cfg['vm_linked_clone'] || true
$forwarded_ports = cfg['forwarded_ports'] || {}
$subnet_prefix = cfg['subnet_prefix'] || "172.17"
$public_subnet = cfg['public_subnet'] || "#{$subnet_prefix}.0"
$private_subnet = cfg['private_subnet'] || "#{$subnet_prefix}.1"
$neutron_subnet = cfg['neutron_subnet'] || "#{$subnet_prefix}.3"
$mgmt_cidr = cfg['mgmt_cidr'] || "#{$subnet_prefix}.2.0/24"
$box = cfg['box']
$sync_type = cfg['sync_type'] || "rsync"
$kargo_repo = ENV['KARGO_REPO'] || cfg['kargo_repo']
$kargo_commit = ENV['KARGO_COMMIT'] || cfg['kargo_commit']
def with_env(filename, env=[])
"#{env.join ' '} /bin/bash -x -c #{filename}"
end
Vagrant.configure("2") do |config|
config.ssh.username = 'vagrant'
config.ssh.password = 'vagrant'
config.vm.box = $box
# plugin conflict
if Vagrant.has_plugin?("vagrant-vbguest") then
config.vbguest.auto_update = false
end
node_ips = []
(1 .. $num_instances).each { |i| node_ips << "#{$private_subnet}.#{i+10}" }
# Serialize nodes startup order for the virtualbox provider
$num_instances.downto(1).each do |i|
config.vm.define vm_name = "%s%d" % [$instance_name_prefix, i] do |config|
if i != $kargo_node_index
config.vm.provision "shell", inline: "/bin/true"
else
# Only execute once the Ansible provisioner on a kargo node,
# when all the machines are up and ready.
ip = "#{$private_subnet}.#{$kargo_node_index+10}"
vars = {
"KARGO_REPO" => $kargo_repo,
"KARGO_COMMIT" => $kargo_commit,
"SLAVE_IPS" => "\"#{node_ips.join(' ')}\"",
"ADMIN_IP" => "local",
"IMAGE_PATH" => $box.sub('/','_'),
}
env = []
vars.each { |k, v| env << "#{k}=#{v}" }
deploy = with_env("/vagrant/utils/jenkins/kargo_deploy.sh", env)
config.vm.provision "shell", inline: "#{deploy}", privileged: false
end
end
end
# Define nodes configuration
(1 .. $num_instances).each do |i|
config.vm.define vm_name = "%s%d" % [$instance_name_prefix, i] do |config|
config.vm.box = $box
config.vm.hostname = vm_name
# Networks and interfaces and ports
ip = "#{$private_subnet}.#{i+10}"
nets = [{:ip => ip, :name => "private",
:mode => "none", :dhcp => false, :intnet => true},
{:ip => "#{$public_subnet}.#{i+10}", :name => "public",
:mode => "nat", :dhcp => false, :intnet => false},
{:ip => "#{$neutron_subnet}.#{i+10}", :name => "neutron",
:mode => "none", :dhcp => false, :intnet => true}]
if $expose_docker_tcp
config.vm.network "forwarded_port", guest: 2375, host: ($expose_docker_tcp + i - 1), auto_correct: true
end
$forwarded_ports.each do |guest, host|
config.vm.network "forwarded_port", guest: guest, host: host, auto_correct: true
end
config.vm.provider :virtualbox do |vb|
vb.gui = $vm_gui
vb.memory = $vm_memory
vb.cpus = $vm_cpus
vb.linked_clone = $vm_linked_clone
vb.customize ["modifyvm", :id, "--cpuexecutioncap", "#{$vm_cpu_limit}"]
vb.check_guest_additions = false
vb.functional_vboxsf = false
end
config.vm.provider :libvirt do |domain|
domain.uri = "qemu+unix:///system"
domain.memory = $vm_memory
domain.cpus = $vm_cpus
domain.driver = "kvm"
domain.host = "localhost"
domain.connect_via_ssh = false
domain.username = $user
domain.storage_pool_name = "default"
domain.nic_model_type = "e1000"
domain.management_network_name = "#{$instance_name_prefix}-mgmt-net"
domain.management_network_address = $mgmt_cidr
domain.nested = true
domain.cpu_mode = "host-passthrough"
domain.volume_cache = "unsafe"
domain.disk_bus = "virtio"
domain.graphics_ip = "0.0.0.0"
end
if $sync_type == 'nfs'
config.vm.synced_folder ".", "/vagrant", type: "nfs"
end
if $sync_type == 'rsync'
config.vm.synced_folder ".", "/vagrant", type: "rsync",
rsync__args: ["--verbose", "--archive", "--delete", "-z"],
rsync__verbose: true, rsync__exclude: [".git/", ".tox/", "output*/",
"packer_cache/"]
end
# This works for all providers
nets.each do |net|
config.vm.network :private_network,
:ip => net[:ip],
:model_type => "e1000",
:libvirt__network_name => "#{$instance_name_prefix}-#{net[:name]}",
:libvirt__dhcp_enabled => net[:dhcp],
:libvirt__forward_mode => net[:mode],
:virtualbox__intnet => net[:intnet]
end
end
end
# Force-plug nat devices to ensure an IP is assigned via DHCP
config.vm.provider :virtualbox do |vb|
vb.customize "post-boot",["controlvm", :id, "setlinkstate1", "on"]
end
end


@ -1,67 +0,0 @@
# [Packer](https://www.packer.io) Templates
Most of the settings are specified as variables. This allows overriding them
with the `-var` key without template modification. A few environment variables
should be specified as a safety measure. See the post-processors section in
`debian.json` and `ubuntu.json` for all details about deploying the Vagrant
boxes to Atlas.
## Custom builds
### Ubuntu build
```sh
UBUNTU_MAJOR_VERSION=16.04 \
UBUNTU_MINOR_VERSION=.1 \
UBUNTU_TYPE=server \
ARCH=amd64 \
HEADLESS=true \
packer build -var 'cpus=2' ubuntu.json
```
### Debian build
```sh
DEBIAN_MAJOR_VERSION=8 \
DEBIAN_MINOR_VERSION=5 \
ARCH=amd64 \
HEADLESS=true \
packer build -var 'cpus=2' debian.json
```
## Login Credentials
(the root password is "vagrant" or is not set)
* Username: vagrant
* Password: vagrant
SSH_USER may be used to create a different user which may be used later to
access the environment.
## VM Specifications
* Vagrant Libvirt Provider
* Vagrant Virtualbox Provider
### qemu
* VirtIO dynamic Hard Disk (up to 10 GiB)
#### Customized installation
Debian configuration is based on
[jessie preseed](https://www.debian.org/releases/jessie/example-preseed.txt).
Ubuntu configuration is based on
[xenial preseed](https://help.ubuntu.com/lts/installation-guide/example-preseed.txt).
A few modifications have been made. Use `diff` for more details.
##### Debian/Ubuntu installation
* en_US.UTF-8
* keymap for standard US keyboard
* UTC timezone
* NTP enabled (default configuration)
* full-upgrade
* unattended-upgrades
* /dev/vda1 mounted on / using ext4 filesystem (all files in one partition)
* no swap


@ -1,43 +0,0 @@
.. _diag-info-tools:
Diagnostic info collection tools
================================
Configuring ansible logs and plugins
------------------------------------
Ansible logs and plugins are configured with the preinstall role and playbook
located in the ``utils/kargo`` directory.
In order to make changes to the logs configuration without running
``kargo_deploy.sh`` completely, run the following Ansible command from the
admin node:
.. code:: sh
export ws=~/workspace/
ansible-playbook --ssh-extra-args '-o\ StrictHostKeyChecking=no' \
-u vagrant -b --become-user=root -i ~/${ws}inventory/inventory.cfg \
-e @${ws}kargo/inventory/group_vars/all.yml \
-e @${ws}utils/kargo/roles/configure_logs/defaults/main.yml \
${ws}utils/kargo/preinstall.yml
Note that the ``ws`` var should point to the actual admin workspace directory.
Collecting diagnostic info
--------------------------
There is a diagnostic info helper script located at
``/usr/local/bin/collect_logs.sh``. It issues commands and collects
files given in the ``${ws}utils/kargo/roles/configure_logs/defaults/main.yml``
file, from all of the cluster nodes online. Results are aggregated to the
admin node in the ``logs.tar.gz`` tarball.
In order to re-build the tarball with fresh info, run from the admin node:
.. code:: sh
ADMIN_WORKSPACE=$ws /usr/local/bin/collect_logs.sh
You can also adjust the number of Ansible forks with ``FORKS=X`` and switch to
password-based Ansible auth with ``ADMIN_PASSWORD=foopassword``.


@ -1,75 +0,0 @@
# -*- coding: utf-8 -*-
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
sys.path.insert(0, os.path.abspath('../..'))
# -- General configuration ----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
# 'sphinx.ext.intersphinx',
'oslosphinx'
]
# autodoc generation is a bit aggressive and a nuisance when doing heavy
# text edit cycles.
# execute "export SPHINX_DEBUG=1" in your terminal to disable
# The suffix of source filenames.
source_suffix = '.rst'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'fuel-ccp-installer'
copyright = u'2013, OpenStack Foundation'
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
# html_theme_path = ["."]
# html_theme = '_theme'
# html_static_path = ['static']
# Output file base name for HTML help builder.
htmlhelp_basename = '%sdoc' % project
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index',
'%s.tex' % project,
u'%s Documentation' % project,
u'OpenStack Foundation', 'manual'),
]
# Example configuration for intersphinx: refer to the Python standard library.
# intersphinx_mapping = {'http://docs.python.org/': None}


@ -1,117 +0,0 @@
Configurable Parameters in Fuel CCP Installer
=============================================
Configurable parameters are divided into three sections:
* generic Ansible variables
* Fuel CCP Installer specific
* Kargo specific
These variables only relate to Ansible global variables. Variables can be
defined in facts gathered by Ansible automatically, facts set in tasks,
variables registered from task results, and then in external variable files.
This document covers custom.yaml, which is either defined by a shell
variable or in custom.yaml in your inventory repo. (See :doc:`inventory_repo`
for more details.)
Generic Ansible variables
-------------------------
You can view facts gathered by Ansible automatically
`here <http://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts>`_
Some variables of note include:
* **ansible_user**: user to connect to via SSH
* **ansible_default_ipv4.address**: IP address Ansible automatically chooses.
Generated based on the output from the command ``ip -4 route get 8.8.8.8``
Fuel CCP Installer specific variables
-------------------------------------
Fuel CCP Installer currently overrides several variables from Kargo.
Below is a list of variables and what they affect.
Common vars that are used in Kargo
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* **calico_version** - Specify version of Calico to use
* **calico_cni_version** - Specify version of Calico CNI plugin to use
* **docker_version** - Specify version of Docker to use (should be a quoted
string)
* **etcd_version** - Specify version of ETCD to use
* **ipip** - Enables Calico ipip encapsulation by default
* **hyperkube_image_repo** - Specify the Docker repository where Hyperkube
resides
* **hyperkube_image_tag** - Specify the Docker tag where Hyperkube resides
* **kube_network_plugin** - Changes k8s plugin to Calico
* **kube_proxy_mode** - Changes k8s proxy mode to iptables mode
* **kube_version** - Specify a given Kubernetes hyperkube version
* **searchdomains** - Array of DNS domains to search when looking up hostnames
* **nameservers** - Array of nameservers to use for DNS lookup
Fuel CCP Installer-only vars
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
* **e2e_conformance_image_repo** - Docker repository where e2e conformance
container resides
* **e2e_conformance_image_tag** - Docker tag where e2e conformance container
resides
Kargo variables
---------------
There are several variables used in deployment that are necessary for certain
situations. The first section is related to addressing.
Addressing variables
^^^^^^^^^^^^^^^^^^^^
* **ip** - IP to use for binding services (host var)
* **access_ip** - IP for other hosts to use to connect to. Useful when using
OpenStack and you have separate floating and private IPs (host var)
* **ansible_default_ipv4.address** - Not Kargo-specific, but it is used if ip
and access_ip are undefined
* **loadbalancer_apiserver** - If defined, all hosts will connect to this
address instead of localhost for kube-masters and kube-master[0] for
kube-nodes. See more details in the
`HA guide <https://github.com/kubernetes-incubator/kargo/blob/master/docs/ha-mode.md>`_.
* **loadbalancer_apiserver_localhost** - If enabled, all hosts will connect to
the apiserver internally load balanced endpoint. See more details in the
`HA guide <https://github.com/kubernetes-incubator/kargo/blob/master/docs/ha-mode.md>`_.
Cluster variables
^^^^^^^^^^^^^^^^^
Kubernetes needs some parameters in order to get deployed. These are the
following default cluster parameters:
* **cluster_name** - Name of cluster DNS domain (default is cluster.local)
* **kube_network_plugin** - Plugin to use for container networking
* **kube_service_addresses** - Subnet for cluster IPs (default is
10.233.0.0/18)
* **kube_pods_subnet** - Subnet for Pod IPs (default is 10.233.64.0/18)
* **dns_setup** - Enables dnsmasq
* **dns_server** - Cluster IP for dnsmasq (default is 10.233.0.2)
* **skydns_server** - Cluster IP for KubeDNS (default is 10.233.0.3)
* **cloud_provider** - Enable extra Kubelet option if operating inside GCE or
OpenStack (default is unset)
* **kube_hostpath_dynamic_provisioner** - Required for use of PetSets type in
Kubernetes
Other service variables
^^^^^^^^^^^^^^^^^^^^^^^
* **docker_options** - Commonly used to set
``--insecure-registry=myregistry.mydomain:5000``
* **http_proxy/https_proxy/no_proxy** - Proxy variables for deploying behind a
proxy
User accounts
^^^^^^^^^^^^^
Kargo sets up two Kubernetes accounts by default: ``root`` and ``kube``. Their
passwords default to ``changeme``. You can change this by setting ``kube_api_pwd``.
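As a minimal sketch (the variable names are taken from the lists above; the
values below are only illustrative assumptions, not recommendations), such
overrides in a ``custom.yaml`` could look like:
.. code:: yaml
# hypothetical custom.yaml -- example values only
docker_version: "1.12"            # quoted string, as noted above
kube_proxy_mode: "iptables"
nameservers: [8.8.8.8, 8.8.4.4]
searchdomains: [example.lan]      # assumed intranet search domain
kube_api_pwd: "s3cr3t"            # replaces the default "changeme" password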


@ -1,22 +0,0 @@
.. _CONTRIBUTING:
=================
How To Contribute
=================
General info
============
#. Bugs should be filed on CCP launchpad_, not GitHub.
#. Bugs related to Kubernetes installation process should be filed as
Kargo_ GitHub issues.
#. Please follow OpenStack `Gerrit Workflow`_ to contribute to CCP installer.
.. _launchpad: https://bugs.launchpad.net/fuel-ccp
.. _Kargo: https://github.com/kubernetes-incubator/kargo/issues
.. _Gerrit Workflow: http://docs.openstack.org/infra/manual/developers.html#development-workflow
Please always generate and attach a diagnostic info tarball to submitted
bugs! See :ref:`Diagnostic info tool<diag-info-tools>` and :ref:`Troubleshooting<tshoot>`.


@ -1,151 +0,0 @@
.. _external_ip_controller:
=================================
Installing External IP Controller
=================================
This document describes how to expose Kubernetes services using External IP
Controller.
Introduction
~~~~~~~~~~~~
One of the possible ways to expose k8s services on a bare metal deployment is
using External IPs. Each node runs a kube-proxy process which programs
iptables rules to trap access to External IPs and redirect them to the correct
backends.
So in order to access a k8s service from the outside we just need to route public
traffic to one of the k8s worker nodes which has kube-proxy running and thus has
needed iptables rules for External IPs.
Deployment scheme
~~~~~~~~~~~~~~~~~
Description
-----------
External IP controller is a k8s application which is deployed on top of a k8s
cluster and which configures External IPs on k8s worker node(s) to provide IP
connectivity.
For further details please read `External IP controller documentation
<https://github.com/Mirantis/k8s-externalipcontroller/blob/master/doc/>`_
Ansible Playbook
----------------
The playbook is ``utils/kargo/externalip.yaml`` and the ansible role is
``utils/kargo/roles/externalip``.
The nodes that have the ``externalip`` role assigned to them will run the External
IP controller application, which manages Kubernetes services' external IPs.
The playbook labels such nodes and then creates a DaemonSet with an appropriate
``nodeSelector``.
The External IP scheduler runs as a standard Deployment (ReplicaSet)
with the specified number of replicas.
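To illustrate the mechanism only (this is not the manifest shipped with the
controller; the label key and image name below are assumptions), a DaemonSet
pinned to labeled nodes via ``nodeSelector`` looks roughly like:
.. code:: yaml
# hypothetical sketch of a DaemonSet restricted to nodes labeled by the playbook
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: externalip-controller
spec:
  selector:
    matchLabels:
      app: externalip-controller
  template:
    metadata:
      labels:
        app: externalip-controller
    spec:
      nodeSelector:
        externalip: "true"        # assumed label key applied by the playbook
      containers:
      - name: externalip-controller
        image: mirantis/k8s-externalipcontroller   # image name is an assumption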
ECMP deployment
~~~~~~~~~~~~~~~
Deployment model
----------------
In this sample deployment we're going to deploy External IP controller on a set
of nodes and provide load balancing and high availability for External IPs
based on Equal-cost multi-path routing (ECMP).
For further details please read `Documentation about ECMP deployment
<https://github.com/Mirantis/k8s-externalipcontroller/blob/master/doc/ecmp-load-balancing.md>`_
Inventory
---------
You can take the inventory generated previously for the Kargo deployment. Using
the ``utils/kargo/externalip.yaml`` playbook with such an inventory will deploy
the External IP controller on all ``kube-node`` worker nodes.
Custom yaml
-----------
Custom Ansible yaml for ECMP deployment is stored here:
``utils/jenkins/extip_ecmp.yaml``. Here is the content:
::
# Type of deployment
extip_ctrl_app_kind: "DaemonSet"
# IP distribution model
extip_distribution: "all"
# Netmask for external IPs
extip_mask: 32
# Interface to bring IPs on, should be "lo" for ECMP
extip_iface: "lo"
Deployment
----------
Just run the following ansible-playbook command:
.. code:: sh
export ws=/home/workspace
ansible-playbook -e ansible_ssh_pass=vagrant -u vagrant -b \
--become-user=root -i ${ws}/inventory/inventory.cfg \
-e @${ws}/utils/jenkins/extip_ecmp.yaml \
${ws}/utils/kargo/externalip.yaml
This will deploy the application according to your inventory.
Routing
-------
This application only brings IPs up or down on a specified interface. We also
need to provide routing to those nodes with external IPs. For a Kubernetes
cluster with the Calico networking plugin we already have the ``calico-node``
container running on every k8s worker node. This container also includes a BGP
speaker which monitors local routing tables and announces changes via the BGP protocol.
So in order to include external IPs to BGP speaker export we need to add the
following custom export filter for Calico:
.. code:: sh
cat << EOF | etcdctl set /calico/bgp/v1/global/custom_filters/v4/lo_iface
if ( ifname = "lo" ) then {
if net != 127.0.0.0/8 then accept;
}
EOF
Please note that this will only configure BGP for ``calico-node``. In order to
announce routing to your network infrastructure you may want to peer Calico
with routers. Please check this URL for details:
`Kargo docs: Calico BGP Peering with border routers
<https://github.com/kubernetes-incubator/kargo/blob/master/docs/calico.md#optional--bgp-peering-with-border-routers>`_
Uninstalling and undoing customizations
---------------------------------------
Uninstall k8s applications by running the following commands on the first
kube-master node in your ansible inventory:
.. code:: sh
kubectl delete -f /etc/kubernetes/extip_scheduler.yml
kubectl delete -f /etc/kubernetes/extip_controller.yml
Remove custom Calico export filter:
.. code:: sh
etcdctl rm /calico/bgp/v1/global/custom_filters/v4/lo_iface
Also remove external IPs from the `lo` interface on the nodes with a command
like this:
.. code:: sh
ip ad del 10.0.0.7/32 dev lo
Where ``10.0.0.7/32`` is the external IP.


@ -1,68 +0,0 @@
Generating Ansible Inventory
============================
Ansible makes use of an inventory file in order to list hosts, host groups, and
specify individual host variables. This file can be in any of three formats:
inifile, JSON, or YAML. Fuel CCP Installer only makes use of inifile format.
For many users, it is possible to generate Ansible inventory with the help of
your bare metal provisioner, such as `Cobbler <http://cobbler.github.io>`_ or
`Foreman <http://theforman.org>`_. For further reading, refer to the
documentation on `Dynamic Inventory <http://docs.ansible.com/ansible/intro_dynamic_inventory.html>`_.
Fuel CCP Installer takes a different approach, due to its git-based workflow. You
can still use any tool you wish to generate Ansible inventory, but you need to
save this inventory file to the path `$ADMIN_WORKSPACE/inventory`.
Below you can find a few examples on how to generate Ansible inventory that can be
used for deployment.
Using Kargo's simple inventory generator
-------------------------------------------
If you run kargo_deploy.sh with a predefined list of nodes, it will generate
Ansible inventory for you automatically. Below is an example:
::
$ SLAVE_IPS="10.90.0.2 10.90.0.3 10.90.0.4" utils/jenkins/kargo_deploy.sh
This will generate the same inventory as the example
`inventory <https://github.com/openstack/fuel-ccp-installer/blob/master/inventory.cfg.sample>`_
file. Role distribution is as follows:
* The first 2 hosts have Kubernetes Master role
* The first 3 hosts have ETCD role
* All hosts have Kubernetes Node role
Using Kargo-cli
---------------
You can use the `Kargo-cli <https://github.com/kubernetes-incubator/kargo-cli>`_ tool to
generate Ansible inventory with some more complicated role distribution. Below
is an example you can use (indented for visual effect):
::
$ sudo apt-get install python-dev python-pip gcc libssl-dev libffi-dev
$ pip install kargo
$ kargo --noclone -i inventory.cfg prepare \
--nodes \
node1[ansible_host=10.90.0.2,ip=10.90.0.2] \
node2[ansible_host=10.90.0.3,ip=10.90.0.3] \
node3[ansible_host=10.90.0.4,ip=10.90.0.4] \
--etcds \
node4[ansible_host=10.90.0.5,ip=10.90.0.5] \
node5[ansible_host=10.90.0.6,ip=10.90.0.6] \
node6[ansible_host=10.90.0.7,ip=10.90.0.7] \
--masters \
node7[ansible_host=10.90.0.5,ip=10.90.0.8] \
node8[ansible_host=10.90.0.6,ip=10.90.0.9]
This allows more granular control over role distribution, but kargo-cli has
several dependencies because it provides several other functions.
Manual inventory creation
-------------------------
You can simply generate your inventory by hand using the example
`inventory <https://github.com/openstack/fuel-ccp-installer/blob/master/inventory.cfg.sample>`_
file and save it as inventory.cfg. Note that all groups are required and you
should only define host variables inside the [all] section.


@ -1,31 +0,0 @@
Fuel CCP installer
==================
Fuel CCP installer is a wrapper around the
`Kargo <https://github.com/kubernetes-incubator/kargo>`_ installer.
It uses the Ansible configuration management tool to install Kubernetes clusters.
It also provides some extra tools and playbooks.
Contents
~~~~~~~~
.. toctree::
:maxdepth: 1
contributing
quickstart
vagrant
troubleshooting
inventory_repo
generate_inventory
packer
collect_info
specify_hyperkube_image
configurable_params
external_ip_controller
Search in this guide
~~~~~~~~~~~~~~~~~~~~
* :ref:`search`


@ -1,37 +0,0 @@
.. _inventory-and-deployment-data-management:
Inventory and deployment data management
========================================
Deployment data and the Ansible inventory are represented as a git repository,
either remote or local. It is cloned and kept updated in the admin node's
``$ADMIN_WORKSPACE/inventory`` directory. ``$ADMIN_WORKSPACE`` takes the value
of the given ``$WORKSPACE`` env var (which defaults to the current directory on
the admin node), or the value ``workspace`` when ``$ADMIN_IP`` does not refer to
a ``local`` admin node, for example when it is a VM.
The installer passes that data and inventory to the
`Kargo <https://github.com/kubernetes-incubator/kargo>`_ Ansible installer.
A pre-prepared inventory repo should have the following content in
its root directory:
* ``inventory.cfg`` - a mandatory inventory file. It must be created manually
or generated based on ``$SLAVE_IPS`` provided with the
`helper script <https://github.com/kubernetes-incubator/kargo/blob/master/contrib/inventory_builder/inventory.py>`_.
* ``kargo_default_common.yaml`` - a mandatory vars file, overrides the kargo
defaults in the ``$ADMIN_WORKSPACE/kargo/inventory/group_vars/all.yml``
and defaults for roles.
* ``kargo_default_ubuntu.yaml`` - a mandatory vars file for Ubuntu nodes,
overrides the common file.
* ``custom.yaml`` - an optional vars file, overrides all vars.
Note that ``custom.yaml`` overrides all data vars defined elsewhere in
Kargo or in the defaults files. The data precedence is as follows:
kargo defaults, then common defaults, then ubuntu defaults, then custom YAML.
Final data decisions are made automatically by the installer:
* If the ADMIN_WORKSPACE/inventory directory has content, it gets used.
* If SLAVE_IPS or fuel-devops deploy mode is used, the inventory is overwritten.
* Installer defaults are copied into the inventory directory if they don't exist.


@ -1,68 +0,0 @@
`Packer <https://www.packer.io>`_ Templates
===========================================
Custom build examples
---------------------
Ubuntu build for libvirt
~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: sh
PACKER_LOG=1 \
UBUNTU_MAJOR_VERSION=16.04 \
UBUNTU_MINOR_VERSION=.1 \
UBUNTU_TYPE=server \
ARCH=amd64 \
HEADLESS=true \
packer build -var 'cpus=2' -var 'memory=2048' -only=qemu ubuntu.json
Note: in order to preserve manpages, sources and docs in the image, pass
``-var 'cleanup=false'``.
Debian build for virtualbox
~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: sh
DEBIAN_MAJOR_VERSION=8 \
DEBIAN_MINOR_VERSION=5 \
ARCH=amd64 \
HEADLESS=true \
packer build -only=virtualbox-iso debian.json
Login Credentials
-----------------
(the root password is "vagrant" or is not set)
- Username: vagrant
- Password: vagrant
Manual steps
------------
To add a local box into Vagrant, run from the repo root dir:
.. code:: sh
vagrant box add --name debian \
utils/packer/debian-8.5.0-amd64-libvirt.box
vagrant box add --name ubuntu \
utils/packer/ubuntu-16.04.1-server-amd64-libvirt.box
To upload a local box into `Atlas <https://atlas.hashicorp.com/>`_,
run from the `./utils/packer` dir:
.. code:: sh
VERSION=0.1.0 DEBIAN_MAJOR_VERSION=8 DEBIAN_MINOR_VERSION=5 ARCH=amd64 \
OSTYPE=debian TYPE=libvirt ATLAS_USER=john NAME=foobox ./deploy.sh
UBUNTU_MAJOR_VERSION=16.04 UBUNTU_MINOR_VERSION=.1 UBUNTU_TYPE=server \
ARCH=amd64 OSTYPE=ubuntu TYPE=virtualbox ATLAS_USER=doe ./deploy.sh
The first command creates a box named ``john/foobox`` with version
``0.1.0`` and the libvirt provider. The second one uses version autoincrement
and publishes the box as ``john/ubuntu-16.04.1-server-amd64`` with the virtualbox provider.


@ -1,100 +0,0 @@
===========================
Kubernetes deployment guide
===========================
This guide provides step-by-step instructions on how to deploy a k8s cluster on
bare metal or virtual machines.
K8s node requirements
=====================
The recommended deployment target requirements:
- At least 3 nodes running Ubuntu 16.04
- At least 8 GB of RAM per node
- At least 20 GB of disk space on each node.
Configure BM nodes (or the host running VMs) with the commands:
::
sysctl net.ipv4.ip_forward=1
sysctl net.bridge.bridge-nf-call-iptables=0
.. NOTE:: If you deploy on Ubuntu Trusty BM nodes, or the host running your VM
nodes is Trusty, make sure you have at least kernel version 3.19.0-15 installed
and the ``br_netfilter`` module loaded (``modprobe br_netfilter``). You are
lucky if your provisioning underlay took care of that; otherwise you may want to
`persist <http://manpages.ubuntu.com/manpages/xenial/en/man5/sysctl.d.5.html>`_
that configuration change manually.
Admin node requirements
=======================
This is the node where the installer runs. The admin node should be Debian/Ubuntu
based with the following packages installed:
* ansible (2.1.x)
* python-netaddr
* sshpass
* git
.. NOTE:: You can use one of the k8s nodes as the admin node. In this case the
node should meet both k8s and admin node requirements.
Node access requirements
========================
- Each node must have a user "vagrant" with a password "vagrant" created or
have access via ssh key.
- Each node must have passwordless sudo for "vagrant" user.
Deploy k8s cluster
==================
Clone fuel-ccp-installer repository:
::
git clone https://review.openstack.org/openstack/fuel-ccp-installer
Create deployment script:
::
cat > ./deploy-k8s.sh << EOF
#!/bin/bash
set -ex
# CHANGE ADMIN_IP AND SLAVE_IPS TO MATCH YOUR ENVIRONMENT
export ADMIN_IP="10.90.0.2"
export SLAVE_IPS="10.90.0.2 10.90.0.3 10.90.0.4"
export DEPLOY_METHOD="kargo"
export WORKSPACE="${HOME}/workspace"
mkdir -p $WORKSPACE
cd ./fuel-ccp-installer
bash -x "./utils/jenkins/run_k8s_deploy_test.sh"
EOF
- ``ADMIN_IP`` - IP of the node which will run ansible. When the `$ADMIN_IP`
refers to a remote node, like a VM, it should take an IP address.
Otherwise, it should take the `local` value.
- ``SLAVE_IPS`` - IPs of the k8s nodes.
.. NOTE:: You can also use ``./utils/jenkins/deploy_k8s_cluster_example.sh``
as a starting point.
.. NOTE:: If you deploy using libvirt with Ubuntu Trusty as a bare metal
hypervisor or deploy on AWS, GCE, or OpenStack, make sure to add
``export CUSTOM_YAML='ipip: true'`` to the ``./deploy-k8s.sh`` file.
Run script:
::
bash ~/deploy-k8s.sh
.. note::
See :ref:`specify-hyperkube-image` if you want to specify the location
and version of the ``hyperkube`` image to use.


@ -1,43 +0,0 @@
.. _specify-hyperkube-image:
=================================
Deploy a specific hyperkube image
=================================
By default ``fuel-ccp-installer`` uses a hyperkube image downloaded from the
``quay.io`` image repository. See the ``hyperkube_image_repo`` and
``hyperkube_image_tag`` variables in the `kargo_default_common.yaml`_ file.
To use a specific version of ``hyperkube`` the ``hyperkube_image_repo`` and
``hyperkube_image_tag`` variables can be set in the ``deploy-k8s.sh`` script.
This is done through the ``CUSTOM_YAML`` environment variable. Here is an
example:
::
#!/bin/bash
set -ex
# CHANGE ADMIN_IP AND SLAVE_IPS TO MATCH YOUR ENVIRONMENT
export ADMIN_IP="10.90.0.2"
export SLAVE_IPS="10.90.0.2 10.90.0.3 10.90.0.4"
export DEPLOY_METHOD="kargo"
export WORKSPACE="${HOME}/workspace"
export CUSTOM_YAML='hyperkube_image_repo: "gcr.io/google_containers/hyperkube-amd64"
hyperkube_image_tag: "v1.3.7"
'
mkdir -p $WORKSPACE
cd ./fuel-ccp-installer
bash -x "./utils/jenkins/run_k8s_deploy_test.sh"
In this example the ``CUSTOM_YAML`` variable includes the definitions of
the ``hyperkube_image_repo`` and ``hyperkube_image_tag`` variables, defining
what ``hyperkube`` image to use and what repository to get the image from.
.. note::
If you use an inventory Git repo, please refer to
:ref:`inventory-and-deployment-data-management` to learn how you can set
variables for the environment.
.. _kargo_default_common.yaml: https://github.com/openstack/fuel-ccp-installer/blob/master/utils/kargo/kargo_default_common.yaml


@ -1,61 +0,0 @@
.. _tshoot:
===============
Troubleshooting
===============
Calico related problems
=======================
If you use standalone bare metal servers, or if you experience issues with the
Calico bird daemon and networking for Kubernetes cluster VMs, ensure that
netfilter for bridge interfaces is disabled on your host node(s):
.. code:: sh
echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables
Otherwise, the bird daemon inside Calico won't function correctly because of
libvirt and NAT networks. More details can be found in this
`bug <https://bugzilla.redhat.com/show_bug.cgi?id=512206>`_.
When reporting issues, please also make sure to include details on the host
OS type and its kernel version.
DNS resolve issues
==================
See a `known configuration issue <https://bugs.launchpad.net/fuel-ccp/+bug/1627680>`_.
The workaround is as simple as described in the bug: always define custom
intranet DNS resolvers in the ``upstream_dns_servers`` var, listed in the first
place and followed by public internet resolvers, if any.
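A minimal sketch of such an override in ``custom.yaml`` (the resolver addresses
below are placeholders):
.. code:: yaml
upstream_dns_servers:
- 172.18.176.6     # intranet resolver listed first (placeholder address)
- 8.8.8.8          # public internet resolver after, if any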
Network check
=============
While a net check is part of the deployment process, you can run the basic DNS
check manually from a cluster node with ``bash /usr/local/bin/test_networking.sh``.
You can also run all network checks from the admin node:
.. code:: sh
export ws=/home/workspace/
ansible-playbook -e ansible_ssh_pass=vagrant -u vagrant -b \
--become-user=root -i ~${ws}inventory/inventory.cfg \
-e @${ws}kargo/inventory/group_vars/all.yml \
-e @${ws}inventory/kargo_default_common.yaml \
-e @${ws}inventory/kargo_default_ubuntu.yaml \
-e @${ws}inventory/custom.yaml \
${ws}utils/kargo/postinstall.yml -v --tags netcheck
There are also K8s netcheck server and agent applications running, if the
cluster was created with ``deploy_netchecker: true``.
In order to verify networking health and the status of the agents, which includes
timestamps of the last known healthy networking state, they may be queried
from any cluster node with:
.. code:: sh
curl -s -X GET 'http://localhost:31081/api/v1/agents/' | \
python -mjson.tool
curl -X GET 'http://localhost:31081/api/v1/connectivity_check'


@ -1,19 +0,0 @@
===========================
Vagrant for a quick dev env
===========================
Requirements:
* Vagrant >= 1.8.5
* Plugin vagrant-libvirt >= 0.0.35
* Plugin vagrant-vbguest >= 0.13.0
To start with defaults, just run ``vagrant up``. To tweak defaults, see the
``vagrant-settings.yaml_defaults`` file. You can copy the file to
``vagrant-settings.yaml`` and edit it to override defaults as well.
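For instance, a hypothetical ``vagrant-settings.yaml`` overriding a few of the
defaults (the keys match what the Vagrantfile reads; the values are examples
only) could look like:
.. code:: yaml
num_instances: 4      # one extra node on top of the default 3
vm_memory: 4096       # MiB per VM
vm_cpus: 2
sync_type: "nfs"      # use NFS instead of the default rsync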
.. note:: Make sure the default network choice doesn't conflict with existing
host networks!
.. note:: If you are running on Ubuntu Xenial, you may need to run the
following command: ``sudo sh -c 'echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables'``
or else container networking will be broken.


@ -1,16 +0,0 @@
# Copyright 2015-2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import pbr.version
version_info = pbr.version.VersionInfo('fuel-ccp-installer')


@ -1,23 +0,0 @@
[kube-master]
node1
node2
[all]
node1 ansible_host=10.90.0.2 ip=10.90.0.2
node2 ansible_host=10.90.0.3 ip=10.90.0.3
node3 ansible_host=10.90.0.4 ip=10.90.0.4
[k8s-cluster:children]
kube-node
kube-master
[kube-node]
node1
node2
node3
[etcd]
node1
node2
node3


@ -1,13 +0,0 @@
apiVersion: v1
clusters:
- cluster: {server: 'http://10.0.0.3:8080'}
  name: opensnek
contexts:
- context: {cluster: opensnek, namespace: '', user: user}
  name: osctx
current-context: osctx
kind: Config
preferences: {}
users:
- name: user
  user: {}


@ -1,16 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: registry
  labels:
    app: registry
spec:
  containers:
  - name: registry
    image: registry:2
    env:
    imagePullPolicy: Always
    ports:
    - containerPort: 5000
      hostPort: 5000


@ -1,15 +0,0 @@
kind: "Service"
apiVersion: "v1"
metadata:
name: "registry-service"
spec:
selector:
app: "registry"
ports:
-
protocol: "TCP"
port: 5000
targetPort: 5000
nodePort: 31500
type: "NodePort"


@ -1,3 +0,0 @@
-e git+https://github.com/openstack/fuel-devops.git@3.0.1#egg=fuel-devops
netaddr
configparser>=3.3.0


@ -1,22 +0,0 @@
[metadata]
name = fuel-ccp-installer
summary = Tools for configuring Kubernetes with Calico plugin
description-file =
README.rst
author = OpenStack
author-email = openstack-dev@lists.openstack.org
home-page = http://www.openstack.org/
classifier =
Environment :: OpenStack
Intended Audience :: Information Technology
Intended Audience :: System Administrators
License :: OSI Approved :: Apache Software License
Operating System :: POSIX :: Linux
[build_sphinx]
source-dir = doc/source
build-dir = doc/build
all_files = 1
[upload_sphinx]
upload-dir = doc/build/html


@ -1,29 +0,0 @@
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools
# In python < 2.7.4, a lazy loading of package `pbr` will break
# setuptools if some other modules registered functions in `atexit`.
# solution from: http://bugs.python.org/issue15881#msg170215
try:
    import multiprocessing  # noqa
except ImportError:
    pass

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)


@ -1,6 +0,0 @@
bashate>=0.2 # Apache-2.0
doc8
oslosphinx>=2.5.0,!=3.4.0 # Apache-2.0
sphinx>=1.2.1,!=1.3b1,<1.3 # BSD
hacking>=0.10.2

tox.ini

@ -1,49 +0,0 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = bashate, pep8
[testenv]
deps =
-r{toxinidir}/requirements.txt
-r{toxinidir}/test-requirements.txt
[testenv:doc8]
commands = doc8 doc
[testenv:docs]
whitelist_externals = /bin/rm
commands =
/bin/rm -rf doc/build
python setup.py build_sphinx
[doc8]
# Settings for doc8:
# Ignore target directories
ignore-path = doc/build*
# File extensions to use
extensions = .rst,.txt
# Maximal line length should be 79 but we have some overlong lines.
# Let's not get far more in.
max-line-length = 80
# Disable some doc8 checks:
# D000: Check RST validity (cannot handle lineos directive)
ignore = D000
[testenv:bashate]
whitelist_externals = bash
commands = bash -c "find {toxinidir} -type f -name '*.sh' -not -path '*/.tox/*' -print0 | xargs -0 bashate -v"
[testenv:pep8]
usedevelop = False
whitelist_externals = bash
commands =
bash -c "find {toxinidir}/* -type f -name '*.py' -print0 | xargs -0 flake8"
[testenv:venv]
commands = {posargs}
[flake8]
show-source = true
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,tools


@ -1,4 +0,0 @@
============
mcp-underlay
============
Builds image for MCP underlay provisioning


@ -1,5 +0,0 @@
ubuntu-minimal
openssh-server
devuser
cloud-init-datasources
modprobe-blacklist


@ -1,5 +0,0 @@
export DIB_DEV_USER_USERNAME="vagrant"
export DIB_DEV_USER_PASSWORD="vagrant"
export DIB_DEV_USER_PWDLESS_SUDO="yes"
export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive"
export DIB_MODPROBE_BLACKLIST="evbug"


@ -1,7 +0,0 @@
grub2-common:
grub-pc-bin:
cloud-init:
linux-image-generic:
net-tools:
isc-dhcp-client:
python:


@ -1,10 +0,0 @@
#!/bin/bash
if [ "${DIB_DEBUG_TRACE:-0}" -gt 0 ]; then
set -x
fi
set -eu
set -o pipefail
echo "127.0.0.1 localhost" >> /etc/hosts


@ -1,33 +0,0 @@
To enable ironic based inventory, add this to your deployment script:
export IRONIC_NODE_LIST=$(openstack --os-cloud=bifrost baremetal node list -f\
yaml --noindent --fields name instance_info)
The output should look like this:
- Instance Info:
configdrive: '******'
image_checksum: ee1eca47dc88f4879d8a229cc70a07c6
image_disk_format: qcow2
image_source: http://10.20.0.2:8080/deployment_image.qcow2
image_url: '******'
ipv4_address: 10.20.0.98
root_gb: 10
tags:
- kube-etcd
- kube-node
Name: host1
- Instance Info:
configdrive: '******'
image_checksum: ee1eca47dc88f4879d8a229cc70a07c6
image_disk_format: qcow2
image_source: http://10.20.0.2:8080/deployment_image.qcow2
image_url: '******'
ipv4_address: 10.20.0.66
root_gb: 10
tags:
- k8s_master
- k8s_etcd
Name: host2
Note: Replace --os-cloud with your environment.


@ -1,21 +0,0 @@
#!/usr/bin/python
# Converts Ironic YAML output to Ansible YAML inventory
# Example input:
# openstack --os-cloud=bifrost baremetal node list -f yaml --noindent \
#     --fields name instance_info | python nodelist_to_inventory.py
from __future__ import print_function

import sys

import yaml

nodes = yaml.load(sys.stdin.read())
groups = {}

for node in nodes:
    for tag in node['Instance Info']['tags']:
        if tag not in groups.keys():
            groups[tag] = {}
        ip = node['Instance Info']['ipv4_address']
        groups[tag][node['Name']] = {'ip': ip}

print(yaml.dump(groups, indent=2, default_flow_style=False))
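For the sample Ironic output shown in the README above, the converter would
print grouped inventory along these lines (a sketch of the expected output,
not captured from a real run):

# hypothetical output of nodelist_to_inventory.py for the sample input
k8s_etcd:
  host2:
    ip: 10.20.0.66
k8s_master:
  host2:
    ip: 10.20.0.66
kube-etcd:
  host1:
    ip: 10.20.0.98
kube-node:
  host1:
    ip: 10.20.0.98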


@ -1,16 +0,0 @@
Examples testing
================
To automatically test the examples, first install the fuel-devops framework. The installation process is described at https://github.com/openstack/fuel-devops. After installation, run the migration scripts:
```bash
export DJANGO_SETTINGS_MODULE=devops.settings
django-admin.py syncdb
django-admin.py migrate
```
To test the examples, run one of the available test scripts. You need to run it from the main dir, for example:
```
./utils/jenkins/run_hosts_example.sh
```


@ -1,103 +0,0 @@
---
rack-01-node-params:
vcpu: 6
memory: 8192
boot:
- hd
volumes:
- name: base
source_image:
format: qcow2
interfaces:
- label: ens3
l2_network_device: public
interface_model: virtio
- label: ens4
l2_network_device: private
interface_model: virtio
- label: ens5
l2_network_device: storage
interface_model: virtio
- label: ens6
l2_network_device: management
interface_model: virtio
network_config:
ens3:
networks:
- public
ens4:
networks:
- private
ens5:
networks:
- storage
ens6:
networks:
- management
env_name:
address_pools:
# Network pools used by the environment
public-pool01:
net: 10.10.0.0/16:24
params:
tag: 0
ip_reserved:
gateway: +1 # Provides variable gateway address for 3rd-party modules
l2_network_device: +1 # Reserved IP that will be assigned to the libvirt network device
ip_ranges:
rack-01: [+2, -2] # Provides address range for 'rack-01' for 3rd-party modules
dhcp: [+2, -2] # Reserved 'dhcp' range name for libvirt driver
private-pool01:
net: 10.11.0.0/16:24
params:
tag: 101
storage-pool01:
net: 10.12.0.0/16:24
params:
tag: 102
management-pool01:
net: 10.13.0.0/16:24
params:
tag: 103
groups:
- name: rack-01
driver:
name: devops.driver.libvirt
params:
connection_string: qemu:///system
storage_pool_name: default
stp: True
hpet: False
use_host_cpu: true
enable_acpi: True
network_pools: # Address pools for OpenStack networks.
# Actual names should be used for keys
# (the same as in Nailgun, for example)
public: public-pool01
private: private-pool01
storage: storage-pool01
management: management-pool01
l2_network_devices: # Libvirt bridges. It is *NOT* Nailgun networks
public:
address_pool: public-pool01
dhcp: 'true'
forward:
mode: nat
private:
address_pool: private-pool01
storage:
address_pool: storage-pool01
management:
address_pool: management-pool01
nodes:
- name: fuel-ccp
role: master


@ -1,177 +0,0 @@
aliases:
dynamic_address_pool:
- &pool_default !os_env POOL_DEFAULT, 10.90.0.0/16:24
default_interface_model:
- &interface_model !os_env INTERFACE_MODEL, e1000
template:
devops_settings:
env_name: !os_env ENV_NAME
address_pools:
# Network pools used by the environment
public-pool01:
net: *pool_default
params:
vlan_start: 1210
ip_reserved:
gateway: +1
l2_network_device: +1 # l2_network_device will get this IP address
ip_ranges:
dhcp: [+2, -2]
storage-pool01:
net: *pool_default
management-pool01:
net: *pool_default
private-pool01:
net: *pool_default
local-pool01:
net: *pool_default
groups:
- name: default
driver:
name: devops.driver.libvirt
params:
connection_string: !os_env CONNECTION_STRING, qemu:///system
storage_pool_name: !os_env STORAGE_POOL_NAME, default
stp: False
hpet: False
use_host_cpu: !os_env DRIVER_USE_HOST_CPU, true
enable_acpi: True
network_pools: # Address pools for OpenStack networks.
# Actual names should be used for keys
# (the same as in Nailgun, for example)
public: public-pool01
storage: storage-pool01
management: management-pool01
private: private-pool01
l2_network_devices: # Libvirt bridges. It is *NOT* Nailgun networks
public:
address_pool: public-pool01
dhcp: true
forward:
mode: nat
storage:
address_pool: storage-pool01
dhcp: false
management:
address_pool: management-pool01
dhcp: false
private:
address_pool: private-pool01
dhcp: false
local:
dhcp: false
forward:
mode: bridge
parent_iface:
phys_dev: !os_env VLAN_BRIDGE
nodes:
- name: fuel-ccp # Custom name of VM for Fuel admin node
role: fuel-ccp # Fixed role for Fuel master node properties
params:
vcpu: !os_env ADMIN_NODE_CPU, 2
memory: !os_env ADMIN_NODE_MEMORY, 8192
boot:
- hd
volumes:
- name: system
capacity: !os_env ADMIN_NODE_VOLUME_SIZE, 175
source_image: !os_env MASTER_IMAGE_PATH
format: qcow2
interfaces:
- label: iface0
l2_network_device: public
interface_model: *interface_model
- label: iface1
l2_network_device: private
interface_model: *interface_model
- label: iface2
l2_network_device: storage
interface_model: *interface_model
- label: iface3
l2_network_device: management
interface_model: *interface_model
- label: iface4
l2_network_device: local
interface_model: *interface_model
network_config:
iface0:
networks:
- public
iface1:
networks:
- private
iface2:
networks:
- storage
iface3:
networks:
- management
# Slave nodes
- name: slave-0
role: k8s-node
params: &rack-01-slave-node-params
vcpu: !os_env SLAVE_NODE_CPU, 2
memory: !os_env SLAVE_NODE_MEMORY, 8192
boot:
- network
- hd
volumes:
- name: system
capacity: !os_env NODE_VOLUME_SIZE, 150
source_image: !os_env IMAGE_PATH
format: qcow2
# List of node interfaces
interfaces:
- label: iface0
l2_network_device: public
interface_model: *interface_model
- label: iface1
l2_network_device: private
interface_model: *interface_model
- label: iface2
l2_network_device: storage
interface_model: *interface_model
- label: iface3
l2_network_device: management
interface_model: *interface_model
- label: iface4
l2_network_device: local
interface_model: *interface_model
network_config:
iface0:
networks:
- public
iface1:
networks:
- private
iface2:
networks:
- storage
iface3:
networks:
- management
- name: slave-1
role: k8s-node
params: *rack-01-slave-node-params
- name: slave-2
role: k8s-node
params: *rack-01-slave-node-params


@ -1,122 +0,0 @@
aliases:
dynamic_address_pool:
- &pool_default !os_env POOL_DEFAULT, 10.90.0.0/16:24
- &pool_neutron_ext !os_env POOL_NEUTRON_EXT, 10.95.0.0/16:24
default_interface_model:
- &interface_model !os_env INTERFACE_MODEL, e1000
template:
devops_settings:
env_name: !os_env ENV_NAME
address_pools:
# Network pools used by the environment
private-pool01:
net: *pool_default
params:
vlan_start: 1210
ip_reserved:
gateway: +1
l2_network_device: +1 # l2_network_device will get this IP address
ip_ranges:
dhcp: [+2, -2]
public-pool01:
net: *pool_default
neutron-pool01:
net: *pool_neutron_ext
params:
vlan_start: 1310
ip_reserved:
gateway: +1
l2_network_device: +1
ip_ranges:
dhcp: [+2, -2]
groups:
- name: default
driver:
name: devops.driver.libvirt
params:
connection_string: !os_env CONNECTION_STRING, qemu:///system
storage_pool_name: !os_env STORAGE_POOL_NAME, default
stp: False
hpet: False
use_host_cpu: !os_env DRIVER_USE_HOST_CPU, true
enable_acpi: True
network_pools: # Address pools for OpenStack networks.
# Actual names should be used for keys
# (the same as in Nailgun, for example)
private: private-pool01
public: public-pool01
neutron: neutron-pool01
l2_network_devices: # Libvirt bridges. It is *NOT* Nailgun networks
private:
address_pool: private-pool01
dhcp: true
forward:
mode: nat
public:
dhcp: false
forward:
mode: bridge
parent_iface:
phys_dev: !os_env VLAN_BRIDGE
neutron:
address_pool: neutron-pool01
dhcp: true
forward:
mode: nat
nodes:
- name: slave-0
role: k8s-node
params: &rack-01-slave-node-params
vcpu: !os_env SLAVE_NODE_CPU, 2
memory: !os_env SLAVE_NODE_MEMORY, 8192
boot:
- network
- hd
volumes:
- name: system
capacity: !os_env NODE_VOLUME_SIZE, 150
source_image: !os_env IMAGE_PATH
format: qcow2
# List of node interfaces
interfaces:
- label: iface0
l2_network_device: private
interface_model: *interface_model
- label: iface1
l2_network_device: public
interface_model: *interface_model
- label: iface2
l2_network_device: neutron
interface_model: *interface_model
network_config:
iface0:
networks:
- private
iface1:
networks:
- public
iface2:
networks:
- neutron
- name: slave-1
role: k8s-node
params: *rack-01-slave-node-params
- name: slave-2
role: k8s-node
params: *rack-01-slave-node-params


@ -1,115 +0,0 @@
aliases:
dynamic_address_pool:
- &pool_default !os_env POOL_DEFAULT, 10.90.0.0/16:24
default_interface_model:
- &interface_model !os_env INTERFACE_MODEL, e1000
template:
devops_settings:
env_name: !os_env ENV_NAME
address_pools:
# Network pools used by the environment
private-pool01:
net: *pool_default
params:
vlan_start: 1210
ip_reserved:
gateway: +1
l2_network_device: +1 # l2_network_device will get this IP address
ip_ranges:
dhcp: [+2, -2]
public-pool01:
net: *pool_default
neutron-pool01:
net: *pool_default
params:
vlan_start: 1310
ip_reserved:
gateway: +1
l2_network_device: +1
ip_ranges:
dhcp: [+2, -32]
groups:
- name: default
driver:
name: devops.driver.libvirt
params:
connection_string: !os_env CONNECTION_STRING, qemu:///system
storage_pool_name: !os_env STORAGE_POOL_NAME, default
stp: False
hpet: False
use_host_cpu: !os_env DRIVER_USE_HOST_CPU, true
enable_acpi: True
network_pools: # Address pools for OpenStack networks.
private: private-pool01
public: public-pool01
neutron: neutron-pool01
l2_network_devices: # Libvirt bridges. It is *NOT* Nailgun networks
private:
address_pool: private-pool01
dhcp: true
forward:
mode: nat
public:
address_pool: public-pool01
dhcp: false
neutron:
address_pool: neutron-pool01
dhcp: true
forward:
mode: nat
nodes:
- name: slave-0
role: k8s-node
params: &rack-01-slave-node-params
vcpu: !os_env SLAVE_NODE_CPU, 2
memory: !os_env SLAVE_NODE_MEMORY, 8192
boot:
- network
- hd
volumes:
- name: system
capacity: !os_env NODE_VOLUME_SIZE, 150
source_image: !os_env IMAGE_PATH
format: qcow2
# List of node interfaces
interfaces:
- label: iface0
l2_network_device: private
interface_model: *interface_model
- label: iface1
l2_network_device: public
interface_model: *interface_model
- label: iface2
l2_network_device: neutron
interface_model: *interface_model
network_config:
iface0:
networks:
- private
iface1:
networks:
- public
iface2:
networks:
- neutron
- name: slave-1
role: k8s-node
params: *rack-01-slave-node-params
- name: slave-2
role: k8s-node
params: *rack-01-slave-node-params


@ -1,158 +0,0 @@
aliases:
dynamic_address_pool:
- &pool_default !os_env POOL_DEFAULT, 10.90.0.0/16:24
default_interface_model:
- &interface_model !os_env INTERFACE_MODEL, e1000
template:
devops_settings:
env_name: !os_env ENV_NAME
address_pools:
# Network pools used by the environment
public-pool01:
net: *pool_default
params:
vlan_start: 1210
ip_reserved:
gateway: +1
l2_network_device: +1 # l2_network_device will get this IP address
ip_ranges:
dhcp: [+2, -2]
storage-pool01:
net: *pool_default
management-pool01:
net: *pool_default
private-pool01:
net: *pool_default
groups:
- name: default
driver:
name: devops.driver.libvirt
params:
connection_string: !os_env CONNECTION_STRING, qemu:///system
storage_pool_name: !os_env STORAGE_POOL_NAME, default
stp: False
hpet: False
use_host_cpu: !os_env DRIVER_USE_HOST_CPU, true
enable_acpi: True
network_pools: # Address pools for OpenStack networks.
public: public-pool01
storage: storage-pool01
management: management-pool01
private: private-pool01
l2_network_devices: # Libvirt bridges. It is *NOT* Nailgun networks
public:
address_pool: public-pool01
dhcp: true
forward:
mode: nat
storage:
address_pool: storage-pool01
dhcp: false
management:
address_pool: management-pool01
dhcp: false
private:
address_pool: private-pool01
dhcp: false
nodes:
- name: fuel-ccp # Custom name of VM for Fuel admin node
role: fuel-ccp # Fixed role for Fuel master node properties
params:
vcpu: !os_env ADMIN_NODE_CPU, 2
memory: !os_env ADMIN_NODE_MEMORY, 8192
boot:
- hd
volumes:
- name: system
capacity: !os_env ADMIN_NODE_VOLUME_SIZE, 175
source_image: !os_env MASTER_IMAGE_PATH
format: qcow2
interfaces:
- label: iface0
l2_network_device: public
interface_model: *interface_model
- label: iface1
l2_network_device: private
interface_model: *interface_model
- label: iface2
l2_network_device: storage
interface_model: *interface_model
- label: iface3
l2_network_device: management
interface_model: *interface_model
network_config:
iface0:
networks:
- public
iface1:
networks:
- private
iface2:
networks:
- storage
iface3:
networks:
- management
# Slave nodes
- name: slave-0
role: k8s-node
params: &rack-01-slave-node-params
vcpu: !os_env SLAVE_NODE_CPU, 2
memory: !os_env SLAVE_NODE_MEMORY, 8192
boot:
- network
- hd
volumes:
- name: system
capacity: !os_env NODE_VOLUME_SIZE, 150
source_image: !os_env IMAGE_PATH
format: qcow2
interfaces:
- label: iface0
l2_network_device: public
interface_model: *interface_model
- label: iface1
l2_network_device: private
interface_model: *interface_model
- label: iface2
l2_network_device: storage
interface_model: *interface_model
- label: iface3
l2_network_device: management
interface_model: *interface_model
network_config:
iface0:
networks:
- public
iface1:
networks:
- private
iface2:
networks:
- storage
iface3:
networks:
- management
- name: slave-1
role: k8s-node
params: *rack-01-slave-node-params
- name: slave-2
role: k8s-node
params: *rack-01-slave-node-params

View File

@ -1,13 +0,0 @@
#!/bin/bash
set -xe
export DONT_DESTROY_ON_SUCCESS=1
export SLAVES_COUNT=3
export DEPLOY_TIMEOUT=1200
export TEST_SCRIPT="/usr/bin/python mcpinstall.py deploy"
if [[ "$DEPLOY_METHOD" == "kargo" ]]; then
./utils/jenkins/kargo_deploy.sh
else
./utils/jenkins/run.sh
fi

View File

@ -1,25 +0,0 @@
#!/bin/bash -ex
#
# Sample script to deploy K8s cluster with Kargo.
# Please adjust to your needs.
# Configuration:
export ENV_NAME="k8s-kargo-test-env"
export IMAGE_PATH="/home/ubuntu/packer-ubuntu-1604-server.qcow2" # path to VM image file (e.g. built with packer)
export DONT_DESTROY_ON_SUCCESS=1
#export VLAN_BRIDGE="vlan456" # full name e.g. "vlan450"
export DEPLOY_METHOD="kargo"
export SLAVES_COUNT=3
export SLAVE_NODE_MEMORY=6144
export SLAVE_NODE_CPU=2
export WORKSPACE="/home/ubuntu/workspace"
CCP_INSTALLER_DIR="." # path to root fuel-ccp-installer/ directory
export CUSTOM_YAML='hyperkube_image_repo: "quay.io/coreos/hyperkube"
hyperkube_image_tag: "v1.4.0_coreos.1"
kube_version: "v1.4.0"'
mkdir -p ${WORKSPACE}
echo "Running on ${NODE_NAME}: ${ENV_NAME}"
bash -x "${CCP_INSTALLER_DIR}/utils/jenkins/run_k8s_deploy_test.sh"

View File

@ -1,95 +0,0 @@
# -*- coding: utf-8 -*-
from copy import deepcopy
import os
import sys
from devops.helpers.templates import yaml_template_load
from devops import models
def create_config():
env = os.environ
conf_path = env['CONF_PATH']
conf = yaml_template_load(conf_path)
slaves_count = int(env['SLAVES_COUNT'])
group = conf['template']['devops_settings']['groups'][0]
defined = filter(lambda x: x['role'] == 'k8s-node',
group['nodes'])
node_params = filter(lambda x: x['name'].endswith('slave-0'),
group['nodes'])[0]['params']
for i in range(len(defined), slaves_count):
group['nodes'].append(
{'name': 'slave-{}'.format(i),
'role': 'k8s-node',
'params': deepcopy(node_params)})
return conf
def _get_free_eth_interface(node):
taken = [i['label'] for i in node['params']['interfaces']]
iface = 'eth'
index = 0
while True:
new_iface = '{}{}'.format(iface, index)
if new_iface not in taken:
return new_iface
index += 1
def get_env():
env = os.environ
env_name = env['ENV_NAME']
return models.Environment.get(name=env_name)
def get_master_ip(env):
admin = env.get_node(name='fuel-ccp')
return admin.get_ip_address_by_network_name('private')
def get_slave_ips(env):
slaves = env.get_nodes(role='k8s-node')
ips = []
for slave in slaves:
ip = slave.get_ip_address_by_network_name('private').encode('utf-8')
ips.append(ip)
return ips
def get_bridged_iface_mac(env, ip):
for node in env.get_nodes():
ips = [iface.addresses[0].ip_address for iface in node.interfaces
if iface.addresses]
if ip in ips:
iface = node.get_interface_by_network_name('public')
return iface.mac_address
def define_from_config(conf):
env = models.Environment.create_environment(conf)
env.define()
env.start()
if __name__ == '__main__':
if len(sys.argv) < 2:
sys.exit(2)
cmd = sys.argv[1]
if cmd == 'create_env':
config = create_config()
define_from_config(config)
elif cmd == 'get_admin_ip':
sys.stdout.write(get_master_ip(get_env()))
elif cmd == 'get_slaves_ips':
sys.stdout.write(str(get_slave_ips(get_env())))
elif cmd == 'get_bridged_iface_mac':
if len(sys.argv) < 3:
sys.stdout.write('IP address required')
sys.exit(1)
ip = sys.argv[2]
sys.stdout.write(str(get_bridged_iface_mac(get_env(), ip)))

View File

@ -1,6 +0,0 @@
# External IP playbook settings to provide ECMP
extip_image_tag: "release-0.2.1"
extip_ctrl_app_kind: "DaemonSet"
extip_distribution: "all"
extip_mask: 32
extip_iface: "lo"

View File

@ -1 +0,0 @@
deb http://httpredir.debian.org/debian jessie-backports main

View File

@ -1,15 +0,0 @@
Package: ansible
Pin: release a=jessie-backports
Pin-Priority: 1001
Package: python-setuptools
Pin: release a=jessie-backports
Pin-Priority: 1001
Package: python-pkg-resources
Pin: release a=jessie-backports
Pin-Priority: 1001
Package: *
Pin: release a=jessie-backports
Pin-Priority: 100

View File

@ -1,410 +0,0 @@
#!/bin/bash
set -xe
# For now we assume that the master IP is 10.0.0.2 and the slave IPs are 10.0.0.{3,4,5,...}
ADMIN_PASSWORD=${ADMIN_PASSWORD:-vagrant}
ADMIN_USER=${ADMIN_USER:-vagrant}
WORKSPACE=${WORKSPACE:-.}
ENV_NAME=${ENV_NAME:-kargo-example}
SLAVES_COUNT=${SLAVES_COUNT:-0}
if [ "$VLAN_BRIDGE" ]; then
CONF_PATH=${CONF_PATH:-${BASH_SOURCE%/*}/default30-kargo-bridge.yaml}
else
CONF_PATH=${CONF_PATH:-${BASH_SOURCE%/*}/default30-kargo.yaml}
fi
IMAGE_PATH=${IMAGE_PATH:-$HOME/packer-ubuntu-16.04.1-server-amd64.qcow2}
# detect OS type from the image name, assume ubuntu by default
NODE_BASE_OS=$(basename ${IMAGE_PATH} | grep -io -e ubuntu -e debian || echo -n "ubuntu")
ADMIN_NODE_BASE_OS="${ADMIN_NODE_BASE_OS:-$NODE_BASE_OS}"
DEPLOY_TIMEOUT=${DEPLOY_TIMEOUT:-60}
SSH_OPTIONS="-A -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
SSH_OPTIONS_COPYID="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
VM_LABEL=${BUILD_TAG:-unknown}
KARGO_REPO=${KARGO_REPO:-https://github.com/kubernetes-incubator/kargo.git}
KARGO_COMMIT=${KARGO_COMMIT:-origin/master}
# Default deployment settings
COMMON_DEFAULTS_YAML="kargo_default_common.yaml"
COMMON_DEFAULTS_SRC="${BASH_SOURCE%/*}/../kargo/${COMMON_DEFAULTS_YAML}"
OS_SPECIFIC_DEFAULTS_YAML="kargo_default_${NODE_BASE_OS}.yaml"
OS_SPECIFIC_DEFAULTS_SRC="${BASH_SOURCE%/*}/../kargo/${OS_SPECIFIC_DEFAULTS_YAML}"
SCALE_DEFAULTS_YAML="scale_defaults.yaml"
SCALE_DEFAULTS_SRC="${BASH_SOURCE%/*}/../kargo/${SCALE_DEFAULTS_YAML}"
SCALE_MODE=${SCALE_MODE:-no}
LOG_LEVEL=${LOG_LEVEL:--v}
ANSIBLE_TIMEOUT=${ANSIBLE_TIMEOUT:-600}
ANSIBLE_FORKS=${ANSIBLE_FORKS:-50}
# Valid sources: pip, apt
ANSIBLE_INSTALL_SOURCE=pip
required_ansible_version="2.3.0"
function collect_info {
# Get diagnostic info and store it as logs.tar.gz on the admin node
admin_node_command FORKS=$ANSIBLE_FORKS ADMIN_USER=$ADMIN_USER \
ADMIN_WORKSPACE=$ADMIN_WORKSPACE collect_logs.sh > /dev/null
}
function exit_gracefully {
local exit_code=$?
set +e
# set exit code if it is a param
[[ -n "$1" ]] && exit_code=$1
if [[ "$ENV_TYPE" == "fuel-devops" && "$KEEP_ENV" != "0" ]]; then
if [[ "${exit_code}" -eq "0" && "${DONT_DESTROY_ON_SUCCESS}" != "1" ]]; then
dos.py erase ${ENV_NAME}
else
if [ "${exit_code}" -ne "0" ];then
dos.py suspend ${ENV_NAME}
dos.py snapshot ${ENV_NAME} ${ENV_NAME}.snapshot
dos.py destroy ${ENV_NAME}
echo "To revert snapshot please run: dos.py revert ${ENV_NAME} ${ENV_NAME}.snapshot"
fi
fi
fi
# Kill current ssh-agent
if [ -z "$INHERIT_SSH_AGENT" ]; then
eval $(ssh-agent -k)
fi
exit $exit_code
}
function with_retries {
local retries=3
set +e
set -o pipefail
for try in $(seq 1 $retries); do
${@}
[ $? -eq 0 ] && break
if [[ "$try" == "$retries" ]]; then
exit 1
fi
done
set +o pipefail
set -e
}
function admin_node_command {
# Accepts commands from args passed to the function, or multiple commands via stdin,
# one per line.
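# Illustrative usage (hypothetical commands, for clarity only):
#   admin_node_command mkdir -p /tmp/example            # single command from args
#   printf 'uname -a\nhostname\n' | admin_node_command   # multiple commands via stdin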
if [[ "$ADMIN_IP" == "local" ]]; then
if [ $# -gt 0 ];then
eval "$@"
else
cat | while read cmd; do
eval "$cmd"
done
fi
else
ssh $SSH_OPTIONS $ADMIN_USER@$ADMIN_IP "$@"
fi
}
function wait_for_nodes {
for IP in $@; do
elapsed_time=0
master_wait_time=30
while true; do
report=$(sshpass -p ${ADMIN_PASSWORD} ssh ${SSH_OPTIONS} -o PreferredAuthentications=password ${ADMIN_USER}@${IP} echo ok || echo not ready)
if [ "${report}" = "ok" ]; then
break
fi
if [ "${elapsed_time}" -gt "${master_wait_time}" ]; then
exit 2
fi
sleep 1
let elapsed_time+=1
done
done
}
function wait_for_apt_lock_release {
while admin_node_command 'sudo lslocks | egrep "apt|dpkg"'; do
echo 'Waiting for other software managers to release apt lock ...'
sleep 10
done
}
function with_ansible {
local tries=5
local retry_opt=""
playbook=$1
retryfile=${playbook/.yml/.retry}
until admin_node_command \
ANSIBLE_CONFIG=$ADMIN_WORKSPACE/utils/kargo/ansible.cfg \
ansible-playbook \
--ssh-extra-args "-A\ -o\ StrictHostKeyChecking=no\ -o\ ConnectionAttempts=20" \
-u ${ADMIN_USER} -b \
--become-user=root -i $ADMIN_WORKSPACE/inventory/inventory.cfg \
--forks=$ANSIBLE_FORKS --timeout $ANSIBLE_TIMEOUT $DEFAULT_OPTS \
-e ansible_ssh_user=${ADMIN_USER} \
$custom_opts $retry_opt $@; do
if [[ $tries -gt 1 ]]; then
tries=$((tries - 1))
echo "Deployment failed! Trying $tries more times..."
else
collect_info
exit_gracefully 1
fi
if admin_node_command test -e "$retryfile"; then
retry_opt="--limit @${retryfile}"
fi
done
rm -f "$retryfile" || true
}
mkdir -p tmp logs
# If SLAVE_IPS or IRONIC_NODE_LIST are specified or REAPPLY is set, then treat env as pre-provisioned
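# Illustrative invocation for a pre-provisioned environment (addresses are
# examples only, matching the assumption noted at the top of this script):
#   SLAVE_IPS="10.0.0.3 10.0.0.4 10.0.0.5" ADMIN_IP=10.0.0.2 ./utils/jenkins/kargo_deploy.sh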
if [[ -z "$REAPPLY" && -z "$SLAVE_IPS" && -z "$IRONIC_NODE_LIST" ]]; then
ENV_TYPE="fuel-devops"
echo "Trying to ensure bridge-nf-call-iptables is disabled..."
br_netfilter=$(cat /proc/sys/net/bridge/bridge-nf-call-iptables)
if [[ "$br_netfilter" == "1" ]]; then
sudo sh -c 'echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables'
fi
dos.py erase ${ENV_NAME} || true
rm -rf logs/*
ENV_NAME=${ENV_NAME} SLAVES_COUNT=${SLAVES_COUNT} IMAGE_PATH=${IMAGE_PATH} CONF_PATH=${CONF_PATH} python ${BASH_SOURCE%/*}/env.py create_env
SLAVE_IPS=($(ENV_NAME=${ENV_NAME} python ${BASH_SOURCE%/*}/env.py get_slaves_ips | tr -d "[],'"))
# Set ADMIN_IP=local to use current host to run ansible
ADMIN_IP=${ADMIN_IP:-${SLAVE_IPS[0]}}
wait_for_nodes ${SLAVE_IPS[0]}
else
ENV_TYPE=${ENV_TYPE:-other_or_reapply}
SLAVE_IPS=( $SLAVE_IPS )
fi
ADMIN_IP=${ADMIN_IP:-${SLAVE_IPS[0]}}
# Trap errors during env preparation stage
trap exit_gracefully ERR INT TERM
# FIXME(mattymo): Should be part of underlay
echo "Checking local SSH environment..."
if ssh-add -l &>/dev/null; then
echo "Local SSH agent detected with at least one identity."
INHERIT_SSH_AGENT="yes"
else
echo "No SSH agent available. Preparing SSH key..."
if ! [ -f $WORKSPACE/id_rsa ]; then
ssh-keygen -t rsa -f $WORKSPACE/id_rsa -N "" -q
chmod 600 ${WORKSPACE}/id_rsa*
test -f ~/.ssh/config && SSH_OPTIONS="${SSH_OPTIONS} -F /dev/null"
fi
eval $(ssh-agent)
ssh-add $WORKSPACE/id_rsa
fi
# Install missing packages on the host running this script
if ! type sshpass 2>&1 > /dev/null; then
sudo apt-get update && sudo apt-get install -y sshpass
fi
# Copy utils/kargo dir to WORKSPACE/utils/kargo so it works across both local
# and remote admin node deployment modes.
echo "Preparing admin node..."
if [[ "$ADMIN_IP" != "local" ]]; then
ADMIN_WORKSPACE="workspace"
sshpass -p $ADMIN_PASSWORD ssh-copy-id $SSH_OPTIONS_COPYID -o PreferredAuthentications=password $ADMIN_USER@${ADMIN_IP} -p 22
else
ADMIN_WORKSPACE="$WORKSPACE"
fi
if [[ -n "$ADMIN_NODE_CLEANUP" ]]; then
if [[ "$ADMIN_IP" != "local" ]]; then
admin_node_command rm -rf $ADMIN_WORKSPACE || true
else
for dir in inventory kargo utils; do
admin_node_command rm -rf ${ADMIN_WORKSPACE}/${dir} || true
done
fi
fi
admin_node_command mkdir -p "$ADMIN_WORKSPACE/utils/kargo" "$ADMIN_WORKSPACE/inventory"
tar cz ${BASH_SOURCE%/*}/../kargo | admin_node_command tar xzf - -C $ADMIN_WORKSPACE/utils/
echo "Setting up ansible and required dependencies..."
# Install mandatory packages on admin node
if ! admin_node_command type sshpass 2>&1 > /dev/null; then
admin_node_command "sh -c \"sudo apt-get update && sudo apt-get install -y sshpass\""
fi
if ! admin_node_command type git 2>&1 > /dev/null; then
admin_node_command "sh -c \"sudo apt-get update && sudo apt-get install -y git\""
fi
if ! admin_node_command type ansible 2>&1 > /dev/null; then
# Wait for apt lock in case it is updating from cron job
case $ADMIN_NODE_BASE_OS in
ubuntu)
wait_for_apt_lock_release
with_retries admin_node_command -- sudo apt-get update
wait_for_apt_lock_release
with_retries admin_node_command -- sudo apt-get install -y software-properties-common
wait_for_apt_lock_release
with_retries admin_node_command -- sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com --recv-keys 7BB9C367
wait_for_apt_lock_release
with_retries admin_node_command -- "sh -c \"sudo apt-add-repository -y 'deb http://ppa.launchpad.net/ansible/ansible/ubuntu xenial main'\""
wait_for_apt_lock_release
with_retries admin_node_command -- sudo apt-get update
wait_for_apt_lock_release
;;
debian)
cat ${BASH_SOURCE%/*}/files/debian_backports_repo.list | admin_node_command "sudo sh -c 'cat - > /etc/apt/sources.list.d/backports.list'"
cat ${BASH_SOURCE%/*}/files/debian_pinning | admin_node_command "sudo sh -c 'cat - > /etc/apt/preferences.d/backports'"
wait_for_apt_lock_release
with_retries admin_node_command -- sudo apt-get update
wait_for_apt_lock_release
with_retries admin_node_command -- sudo apt-get -y install --only-upgrade python-setuptools
;;
esac
wait_for_apt_lock_release
if [[ "$ANSIBLE_INSTALL_SOURCE" == "apt" ]]; then
with_retries admin_node_command -- sudo apt-get install -y ansible python-netaddr
elif [[ "$ANSIBLE_INSTALL_SOURCE" == "pip" ]]; then
admin_node_command -- sudo pip uninstall -y setuptools pip || true
with_retries admin_node_command -- sudo apt-get install -y --reinstall python-netaddr libssl-dev python-pip python-setuptools python-pkg-resources
with_retries admin_node_command -- sudo -H easy_install pyopenssl==16.2.0
with_retries admin_node_command -- sudo pip install --upgrade ansible==$required_ansible_version
else
echo "ERROR: Unknown Ansible install source: ${ANSIBLE_INSTALL_SOURCE}"
exit 1
fi
fi
echo "Checking out kargo playbook..."
admin_node_command git clone "$KARGO_REPO" "$ADMIN_WORKSPACE/kargo" || true
admin_node_command "sh -c 'cd $ADMIN_WORKSPACE/kargo && git fetch --all && git checkout $KARGO_COMMIT'"
echo "Uploading default settings and inventory..."
# Only copy default files if they are absent from inventory dir
if ! admin_node_command test -e "$ADMIN_WORKSPACE/inventory/${COMMON_DEFAULTS_YAML}"; then
cat $COMMON_DEFAULTS_SRC | admin_node_command "cat > $ADMIN_WORKSPACE/inventory/${COMMON_DEFAULTS_YAML}"
fi
if ! admin_node_command test -e "$ADMIN_WORKSPACE/inventory/${OS_SPECIFIC_DEFAULTS_YAML}"; then
cat $OS_SPECIFIC_DEFAULTS_SRC | admin_node_command "cat > $ADMIN_WORKSPACE/inventory/${OS_SPECIFIC_DEFAULTS_YAML}"
fi
if ! admin_node_command test -e "$ADMIN_WORKSPACE/inventory/${SCALE_DEFAULTS_YAML}"; then
cat $SCALE_DEFAULTS_SRC | admin_node_command "cat > $ADMIN_WORKSPACE/inventory/${SCALE_DEFAULTS_YAML}"
fi
if ! admin_node_command test -e "${ADMIN_WORKSPACE}/inventory/group_vars"; then
admin_node_command ln -rsf "${ADMIN_WORKSPACE}/kargo/inventory/group_vars" "${ADMIN_WORKSPACE}/inventory/group_vars"
fi
if [[ -n "${CUSTOM_YAML}" ]]; then
echo "Uploading custom YAML for deployment..."
echo -e "$CUSTOM_YAML" | admin_node_command "cat > $ADMIN_WORKSPACE/inventory/custom.yaml"
fi
if admin_node_command test -e "$ADMIN_WORKSPACE/inventory/custom.yaml"; then
custom_opts="-e @$ADMIN_WORKSPACE/inventory/custom.yaml"
fi
if [ -n "${SLAVE_IPS}" ]; then
admin_node_command CONFIG_FILE=$ADMIN_WORKSPACE/inventory/inventory.cfg python3 $ADMIN_WORKSPACE/kargo/contrib/inventory_builder/inventory.py ${SLAVE_IPS[@]}
elif [ -n "${IRONIC_NODE_LIST}" ]; then
inventory_formatted=$(echo -e "$IRONIC_NODE_LIST" | ${BASH_SOURCE%/*}/../ironic/nodelist_to_inventory.py)
admin_node_command CONFIG_FILE=$ADMIN_WORKSPACE/inventory/inventory.cfg python3 $ADMIN_WORKSPACE/kargo/contrib/inventory_builder/inventory.py load /dev/stdin <<< "$inventory_formatted"
fi
# Try to get IPs from inventory first
if [ -z "${SLAVE_IPS}" ]; then
if admin_node_command stat $ADMIN_WORKSPACE/inventory/inventory.cfg; then
SLAVE_IPS=($(admin_node_command CONFIG_FILE=$ADMIN_WORKSPACE/inventory/inventory.cfg python3 $ADMIN_WORKSPACE/kargo/contrib/inventory_builder/inventory.py print_ips))
else
echo "No slave nodes available. Unable to proceed!"
exit_gracefully 1
fi
fi
COMMON_DEFAULTS_OPT="-e @$ADMIN_WORKSPACE/inventory/${COMMON_DEFAULTS_YAML}"
OS_SPECIFIC_DEFAULTS_OPT="-e @$ADMIN_WORKSPACE/inventory/${OS_SPECIFIC_DEFAULTS_YAML}"
SCALE_DEFAULTS_OPT="-e @$ADMIN_WORKSPACE/inventory/${SCALE_DEFAULTS_YAML}"
if [[ "${#SLAVE_IPS[@]}" -lt 50 && "$SCALE_MODE" == "no" ]]; then
DEFAULT_OPTS="${COMMON_DEFAULTS_OPT} ${OS_SPECIFIC_DEFAULTS_OPT}"
else
DEFAULT_OPTS="${COMMON_DEFAULTS_OPT} ${OS_SPECIFIC_DEFAULTS_OPT} ${SCALE_DEFAULTS_OPT}"
fi
# Stop trapping pre-setup tasks
set +e
echo "Running pre-setup steps on nodes via ansible..."
with_ansible $ADMIN_WORKSPACE/utils/kargo/preinstall.yml -e "ansible_ssh_pass=${ADMIN_PASSWORD}"
echo "Deploying k8s masters/etcds first via ansible..."
with_ansible $ADMIN_WORKSPACE/kargo/cluster.yml --limit kube-master:etcd
# Only run non-master deployment if there are non-masters in inventory.
if admin_node_command ansible-playbook -i $ADMIN_WORKSPACE/inventory/inventory.cfg \
$ADMIN_WORKSPACE/kargo/cluster.yml --limit kube-node:!kube-master:!etcd \
--list-hosts &>/dev/null; then
echo "Deploying k8s non-masters via ansible..."
with_ansible $ADMIN_WORKSPACE/kargo/cluster.yml --limit kube-node:!kube-master:!etcd
fi
echo "Initial deploy succeeded. Proceeding with post-install tasks..."
with_ansible $ADMIN_WORKSPACE/utils/kargo/postinstall.yml
# FIXME(mattymo): Move this to underlay
# setup VLAN if everything is ok and env will not be deleted
if [ "$VLAN_BRIDGE" ] && [ "${DONT_DESTROY_ON_SUCCESS}" = "1" ];then
rm -f VLAN_IPS
for IP in ${SLAVE_IPS[@]}; do
bridged_iface_mac="`ENV_NAME=${ENV_NAME} python ${BASH_SOURCE%/*}/env.py get_bridged_iface_mac $IP`"
sshpass -p ${ADMIN_PASSWORD} ssh ${SSH_OPTIONS} ${ADMIN_USER}@${IP} bash -s <<EOF >>VLAN_IPS
bridged_iface=\$(/sbin/ifconfig -a|awk -v mac="$bridged_iface_mac" '\$0 ~ mac {print \$1}' 'RS=\n\n')
sudo ip route del default
sudo dhclient "\${bridged_iface}"
echo \$(ip addr list |grep ${bridged_iface_mac} -A 1 |grep 'inet ' |cut -d' ' -f6| cut -d/ -f1)
EOF
done
set +x
sed -i '/^\s*$/d' VLAN_IPS
echo "**************************************"
echo "**************************************"
echo "**************************************"
echo "Deployment is complete!"
echo "* VLANs IP addresses"
echo "* MASTER IP: `head -n1 VLAN_IPS`"
echo "* NODE IPS: `tail -n +2 VLAN_IPS | tr '\n' ' '`"
echo "* USERNAME: $ADMIN_USER"
echo "* PASSWORD: $ADMIN_PASSWORD"
echo "* K8s dashboard: https://kube:changeme@`head -n1 VLAN_IPS`/ui/"
echo "**************************************"
echo "**************************************"
echo "**************************************"
set -x
rm -f VLAN_IPS
else
echo "**************************************"
echo "**************************************"
echo "**************************************"
echo "Deployment is complete!"
echo "* Node network addresses:"
echo "* MASTER IP: $ADMIN_IP"
echo "* NODE IPS: $SLAVE_IPS"
echo "* USERNAME: $ADMIN_USER"
echo "* PASSWORD: $ADMIN_PASSWORD"
echo "* K8s dashboard: https://kube:changeme@${SLAVE_IPS[0]}/ui/"
echo "**************************************"
echo "**************************************"
echo "**************************************"
fi
# TODO(mattymo): Shift to FORCE_NEW instead of REAPPLY
echo "To reapply deployment, run env REAPPLY=yes ADMIN_IP=$ADMIN_IP $0"
exit_gracefully 0

View File

@ -1,4 +0,0 @@
#!/bin/bash
set -xe
dos.py erase ${ENV_NAME}

View File

@ -1,13 +0,0 @@
id: simple_riak_with_transports
resources:
#% for i in range(count|int) %#
#% set j = i +1 %#
- id: node#{j}#
from: k8s/node
input:
name: node#{j}#
ssh_user: 'vagrant'
ssh_key: ''
ssh_password: 'vagrant'
ip: '#{ips[i]}#'
#% endfor %#

View File

@ -1,14 +0,0 @@
#!/bin/bash
set -xe
export SLAVES_COUNT=${SLAVES_COUNT:-3}
export DEPLOY_TIMEOUT=1200
export BUILD_TAG=${BUILD_TAG:-unknown}
./utils/jenkins/kargo_deploy.sh
# Archive logs if they were generated
mkdir -p "${WORKSPACE}/_artifacts"
if [ -f "${WORKSPACE}/logs.tar.gz" ]; then
mv "${WORKSPACE}/logs.tar.gz" "${WORKSPACE}/_artifacts"
fi

View File

@ -1,179 +0,0 @@
#!/bin/bash -xe
ADMIN_USER=${ADMIN_USER:-vagrant}
WORKSPACE=${WORKSPACE:-.}
SSH_OPTIONS="-A -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
# Default deployment settings
COMMON_DEFAULTS_YAML="kargo_default_common.yaml"
COMMON_DEFAULTS_SRC="${BASH_SOURCE%/*}/../kargo/${COMMON_DEFAULTS_YAML}"
NODE_BASE_OS=${NODE_BASE_OS:-ubuntu}
OS_SPECIFIC_DEFAULTS_YAML="kargo_default_${NODE_BASE_OS}.yaml"
OS_SPECIFIC_DEFAULTS_SRC="${BASH_SOURCE%/*}/../kargo/${OS_SPECIFIC_DEFAULTS_YAML}"
SLAVE_IPS=( $SLAVE_IPS )
ADMIN_IP=${ADMIN_IP:-${SLAVE_IPS[0]}}
required_ansible_version="2.3.0"
function exit_gracefully {
local exit_code=$?
set +e
# set exit code if it is a param
[[ -n "$1" ]] && exit_code=$1
# Kill current ssh-agent
if [ -z "$INHERIT_SSH_AGENT" ]; then
eval $(ssh-agent -k)
fi
exit $exit_code
}
function with_retries {
set +e
local retries=3
for try in $(seq 1 $retries); do
${@}
[ $? -eq 0 ] && break
if [[ "$try" == "$retries" ]]; then
exit 1
fi
done
set -e
}
function admin_node_command {
if [[ "$ADMIN_IP" == "local" ]];then
eval "$@"
else
ssh $SSH_OPTIONS $ADMIN_USER@$ADMIN_IP "$@"
fi
}
function wait_for_nodes {
for IP in $@; do
elapsed_time=0
master_wait_time=30
while true; do
report=$(ssh ${SSH_OPTIONS} -o PasswordAuthentication=no ${ADMIN_USER}@${IP} echo ok || echo not ready)
if [ "${report}" = "ok" ]; then
break
fi
if [ "${elapsed_time}" -gt "${master_wait_time}" ]; then
exit 2
fi
sleep 1
let elapsed_time+=1
done
done
}
function with_ansible {
local tries=1
until admin_node_command ansible-playbook \
--ssh-extra-args "-A\ -o\ StrictHostKeyChecking=no" -u ${ADMIN_USER} -b \
--become-user=root -i $ADMIN_WORKSPACE/inventory/inventory.cfg \
$@ $KARGO_DEFAULTS_OPT $COMMON_DEFAULTS_OPT \
$OS_SPECIFIC_DEFAULTS_OPT $custom_opts; do
if [[ $tries > 1 ]]; then
(( tries-- ))
echo "Deployment failed! Trying $tries more times..."
else
exit_gracefully 1
fi
done
}
mkdir -p tmp logs
# Trap errors during env preparation stage
trap exit_gracefully ERR INT TERM
# FIXME(mattymo): Should be part of underlay
echo "Checking local SSH environment..."
if ssh-add -l &>/dev/null; then
echo "Local SSH agent detected with at least one identity."
INHERIT_SSH_AGENT="yes"
else
echo "No SSH agent available. Using precreated SSH key..."
if ! [ -f $WORKSPACE/id_rsa ]; then
echo "ERROR: This script expects an active SSH agent or a key already \
available at $WORKSPACE/id_rsa. Exiting."
exit_gracefully 1
fi
eval $(ssh-agent)
ssh-add $WORKSPACE/id_rsa
fi
# Install missing packages on the host running this script
if ! type sshpass > /dev/null; then
sudo apt-get update && sudo apt-get install -y sshpass
fi
echo "Preparing admin node..."
if [[ "$ADMIN_IP" != "local" ]]; then
ADMIN_WORKSPACE="workspace"
else
ADMIN_WORKSPACE="$WORKSPACE"
fi
admin_node_command mkdir -p $ADMIN_WORKSPACE/utils/kargo
tar cz ${BASH_SOURCE%/*}/../kargo | admin_node_command tar xzf - -C $ADMIN_WORKSPACE/utils/
echo "Setting up ansible and required dependencies..."
installed_ansible_version=$(admin_node_command dpkg-query -W -f='\${Version}\\n' ansible || echo "0.0")
if ! admin_node_command type ansible > /dev/null || \
dpkg --compare-versions "$installed_ansible_version" "lt" "$required_ansible_version"; then
# Wait for apt lock in case it is updating from cron job
while admin_node_command pgrep -a -f apt; do echo 'Waiting for apt lock...'; sleep 30; done
case $ADMIN_NODE_BASE_OS in
ubuntu)
with_retries admin_node_command -- sudo apt-get update
with_retries admin_node_command -- sudo apt-get install -y software-properties-common
with_retries admin_node_command -- sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com --recv-keys 7BB9C367
with_retries admin_node_command -- "sh -c \"sudo apt-add-repository -y 'deb http://ppa.launchpad.net/ansible/ansible/ubuntu xenial main'\""
with_retries admin_node_command -- sudo apt-get update
;;
debian)
cat ${BASH_SOURCE%/*}/files/debian_backports_repo.list | admin_node_command "sudo sh -c 'cat - > /etc/apt/sources.list.d/backports.list'"
cat ${BASH_SOURCE%/*}/files/debian_pinning | admin_node_command "sudo sh -c 'cat - > /etc/apt/preferences.d/backports'"
with_retries admin_node_command sudo apt-get update
with_retries admin_node_command sudo apt-get -y install --only-upgrade python-setuptools
;;
esac
admin_node_command sudo apt-get install -y ansible python-netaddr git
fi
# Ensure inventory exists
if ! admin_node_command test -f $ADMIN_WORKSPACE/inventory/inventory.cfg; then
echo "ERROR: $ADMIN_WORKSPACE/inventory/inventory.cfg does not exist. \
Cannot proceed."
exit_gracefully 1
fi
if [ -n "$SLAVE_IPS" ]; then
SLAVE_IPS=($(admin_node_command CONFIG_FILE=$ADMIN_WORKSPACE/inventory/inventory.cfg python3 $ADMIN_WORKSPACE/utils/kargo/inventory.py print_ips))
fi
COMMON_DEFAULTS_OPT="-e @$ADMIN_WORKSPACE/inventory/${COMMON_DEFAULTS_YAML}"
OS_SPECIFIC_DEFAULTS_OPT="-e @$ADMIN_WORKSPACE/inventory/${OS_SPECIFIC_DEFAULTS_YAML}"
# Kargo opts are not needed for this
KARGO_DEFAULTS_OPT=""
if admin_node_command test -f $ADMIN_WORKSPACE/inventory/custom.yaml; then
custom_opts="-e @$ADMIN_WORKSPACE/inventory/custom.yaml"
fi
echo "Waiting for all nodes to be reachable by SSH..."
wait_for_nodes ${SLAVE_IPS[@]}
# Stop trapping pre-setup tasks
set +e
echo "Running e2e conformance tests via ansible..."
with_ansible $ADMIN_WORKSPACE/utils/kargo/e2e_conformance.yml
exit_gracefully ${deploy_res}

View File

@ -1,10 +0,0 @@
[ssh_connection]
pipelining=True
[defaults]
host_key_checking=False
gathering=smart
fact_caching=jsonfile
fact_caching_connection=/tmp/kargocache
fact_caching_seconds=14400
stdout_callback = skippy
callback_whitelist = profile_tasks

View File

@ -1,5 +0,0 @@
---
- hosts: kube-master[0]
roles:
- { role: e2e_conformance, tags: e2e_conformance }

View File

@ -1,4 +0,0 @@
---
- hosts: kube-node
roles:
- { role: externalip }

View File

@ -1,60 +0,0 @@
# All values can be overridden in CUSTOM_YAML
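# Illustrative override (example values only), exported before running the
# deploy script, e.g.:
#   export CUSTOM_YAML='kube_version: "v1.5.3"
#   hyperkube_image_tag: "v1.5.3_coreos.0"'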
kube_network_plugin: "calico"
#Required for calico
kube_proxy_mode: "iptables"
kube_apiserver_insecure_port: "8080"
#Allow services to expose nodePorts in a wide range
kube_apiserver_node_port_range: "25000-50000"
#Configure calico to set --nat-outgoing, but not --ipip
ipip: false
nat_outgoing: true
calico_cni_version: "v1.5.6"
calico_cni_checksum: "9a6bd6da267c498a1833117777c069f44f720d23226d8459bada2a0b41cb8258"
calico_cni_ipam_checksum: "8d3574736df1ce10ea88fdec94d84dc58642081d3774d2d48249c6ee94ed316d"
#Raise number of concurrent DNS queries
dns_forward_max: 300
#Overridden component versions (different from Kargo upstream
#revision aaa3f1c4910ae23718bb6e3a0080b1af97575bac)
#Set kubernetes version
kube_version: "v1.5.3"
hyperkube_image_repo: "quay.io/coreos/hyperkube"
hyperkube_image_tag: "v1.5.3_coreos.0"
kubelet_deployment_type: host
etcd_version: "v3.1.0"
#Download once, then push to nodes in batches, if enabled
download_run_once: true
#Enable netchecker app
deploy_netchecker: true
calico_version: "v1.0.2"
#Force calico CNI binaries to overwrite the ones copied from
#hyperkube while leaving other cni plugins intact
#overwrite_hyperkube_cni: true
# Custom (additional) DNS settings
# These go to hosts' /etc/resolv.conf, example:
#searchdomains:
# - namespace.svc.cluster.local
nameservers:
- 8.8.8.8
# These go to dnsmasq upstream servers, or /etc/resolv.conf, if dnsmasq skipped
upstream_dns_servers:
- 8.8.4.4
# Configure how we tell pods how to find DNS. Upstream uses docker_dns, which
# breaks host k8s resolution. We are using host_resolvconf configuration which
# has some bugs when DHCP is enabled.
resolvconf_mode: host_resolvconf
# Continue deploying other hosts even if one failed
any_errors_fatal: false
# Tweak kubelet monitoring parameters to avoid node/endpoint flapping
kubelet_status_update_frequency: "20s"
kube_controller_node_monitor_grace_period: "2m"
kube_controller_node_monitor_period: "10s"

View File

@ -1,2 +0,0 @@
# All values can be overridden in CUSTOM_YAML
docker_version: '1.13'

View File

@ -1,2 +0,0 @@
# All values can be overridden in CUSTOM_YAML
docker_version: '1.13'

View File

@ -1,5 +0,0 @@
---
- hosts: k8s-cluster
roles:
- { role: postinstall, tags: postinstall }

View File

@ -1,10 +0,0 @@
---
- hosts: all
gather_facts: no
roles:
- { role: preinstall, tags: preinstall }
- hosts: localhost
gather_facts: no
roles:
- { role: configure_logs, tags: configure_logs }

View File

@ -1,55 +0,0 @@
bin_dir: /usr/local/bin
log_path: /var/log/ansible/
conf_file: /etc/ansible/ansible.cfg
# Define custom diag info to collect
commands:
- name: git_info
cmd: find . -type d -name .git -execdir sh -c 'gen-gitinfos.sh global|head -12' \;
- name: timedate_info
cmd: timedatectl status
- name: boots_info
cmd: journalctl --list-boots --utc --no-pager
- name: space_info
cmd: df -h
- name: kernel_info
cmd: uname -r
- name: distro_info
cmd: cat /etc/issue.net
- name: docker_info
cmd: docker info
- name: ip_info
cmd: ip -4 -o a
- name: route_info
cmd: ip ro
- name: proc_info
cmd: ps auxf | grep -v ]$
- name: systemctl_info
cmd: systemctl status
- name: systemctl_failed_info
cmd: systemctl --state=failed --no-pager
- name: k8s_resolve_info
cmd: host kubernetes
- name: k8s_info
cmd: kubectl get all --all-namespaces -o wide
- name: k8s_dump_info
cmd: kubectl get all --all-namespaces -o yaml
- name: errors_info
cmd: journalctl -p err --utc --no-pager
- name: etcd_info
cmd: etcdctl --debug cluster-health
- name: calico_info
cmd: calicoctl status
- name: sysctl_info
cmd: sysctl -a
logs:
- /var/log/ansible/ansible.log
- /var/log/syslog
- /var/log/daemon.log
- /var/log/kern.log
- /etc/resolv.conf
- "{{searchpath}}/kargo/cluster.yml"
- "{{searchpath}}/kargo/inventory/group_vars/all.yml"
- "{{searchpath}}/inventory/inventory.cfg"
- "{{searchpath}}/inventory/kargo_default_ubuntu.yaml"
- "{{searchpath}}/inventory/kargo_default_debian.yaml"
- "{{searchpath}}/inventory/kargo_default_common.yaml"

View File

@ -1,37 +0,0 @@
---
- name: Configure logs | ensure log path
file:
path: "{{log_path}}"
state: directory
owner: "{{ansible_ssh_user}}"
- name: Configure logs | ensure config dir
file:
path: "{{conf_file | dirname}}"
state: directory
owner: root
group: root
mode: 0755
recurse: yes
- name: Configure logs | ensure config file
get_url:
url: https://raw.githubusercontent.com/ansible/ansible/stable-2.3/examples/ansible.cfg
dest: "{{conf_file}}"
force: no
owner: root
group: root
mode: 0644
- name: Configure logs | config
lineinfile:
line: "log_path={{log_path}}/ansible.log"
regexp: "^#log_path|^log_path"
dest: "{{conf_file}}"
- name: Configure logs | Install script for collecting info
template:
src: collect_logs.sh.j2
dest: "{{ bin_dir }}/collect_logs.sh"
mode: a+rwx

View File

@ -1,18 +0,0 @@
#!/bin/bash
SSH_EXTRA_ARGS='-o\ StrictHostKeyChecking=no'
ADMIN_USER=${ADMIN_USER:-vagrant}
ADMIN_WORKSPACE=${ADMIN_WORKSPACE:-workspace/}
FORKS=${FORKS:-10}
if [ "${ADMIN_PASSWORD}" -a -z "${NO_SSH_PASSWORD}" ]; then
echo "Using password based ansible auth!"
SSH_WRAPPER="-e ansible_ssh_pass=${ADMIN_PASSWORD}"
fi
ansible-playbook ${LOG_LEVEL} \
--ssh-extra-args "$SSH_EXTRA_ARGS" -u ${ADMIN_USER} -b ${SSH_WRAPPER} \
--become-user=root -i $ADMIN_WORKSPACE/inventory/inventory.cfg \
-e searchpath=$ADMIN_WORKSPACE \
-e @$ADMIN_WORKSPACE/utils/kargo/roles/configure_logs/defaults/main.yml \
$ADMIN_WORKSPACE/kargo/scripts/collect-info.yaml

View File

@ -1,2 +0,0 @@
e2e_conformance_image_repo: mirantis/k8s-conformance
e2e_conformance_image_tag: latest

View File

@ -1,9 +0,0 @@
- name: Install e2e conformance test script
template:
src: run_e2e_conformance.j2
dest: "/usr/local/bin/run_e2e_conformance"
mode: 0755
- name: Run e2e conformance test
shell: "/usr/local/bin/run_e2e_conformance"
changed_when: false

View File

@ -1,5 +0,0 @@
#!/bin/bash -e
docker run --rm --net=host \
-e API_SERVER="http://127.0.0.1:{{ kube_apiserver_insecure_port | default("8080") }}" \
{{ e2e_conformance_image_repo }}:{{ e2e_conformance_image_tag }} 2>&1 | \
tee -a /var/log/e2e-conformance.log

View File

@ -1,54 +0,0 @@
---
# All nodes will be labeled, and that label will be used as the "nodeSelector"
# for the external IP controller application. You can specify the label name here.
extip_node_label: "externalip"
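# For illustration only: with the default value above, nodes are labeled via
#   kubectl label nodes <node-name> externalip=true
# and the controller pods select them with
#   nodeSelector:
#     externalip: "true"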
# Image params
extip_image_repo: "mirantis/k8s-externalipcontroller"
extip_image_tag: "release-0.2.1"
# If multiple external IP claim controllers are running, we need to define
# how to distribute IPs.
# Valid values are:
# "balance" - IPs are balanced across controllers
# "single" - only one controller will hold all IPs. Not yet supported in
# multiple controllers mode. If you need to run on one node please
# use extip_ctrl_app_kind=Deployment and extip_ctrl_replicas=1
# "all" - all controllers will bring up all IPs (for ECMP, for example)
extip_distribution: "balance"
# External IPs network mask
extip_mask: 24
# Interface to bring external IPs on
extip_iface: "eth0"
# Kubernetes namespace for the application
k8s_namespace: "default"
#####
# K8s controller app params
extip_ctrl_app: "claimcontroller"
# App kind, valid values are: "Deployment", "DaemonSet"
extip_ctrl_app_kind: "Deployment"
extip_ctrl_replicas: 1
extip_ctrl_label: "externalipcontroller"
extip_ctrl_image_pull_policy: "IfNotPresent"
# Verbosity
extip_ctrl_verbose: 5
# Heartbeat
extip_ctrl_hb: "500ms"
extip_ctrl_hostname: "{{ ansible_hostname }}"
#####
# K8s scheduler app params
extip_sched_app: "claimscheduler"
extip_sched_label: "claimscheduler"
extip_sched_image_pull_policy: "IfNotPresent"
extip_sched_replicas: 2
# Verbosity
extip_sched_verbose: 5
# Scheduler leader elect, string ("true" or "false")
extip_sched_leader_elect: "true"
# Monitor
extip_sched_monitor: "1s"

View File

@ -1,65 +0,0 @@
---
- name: ExtIP Controller | Get pods
shell: "kubectl get pods -o wide"
run_once: true
register: pods
delegate_to: "{{groups['kube-master'][0]}}"
- name: ExtIP Controller | Get list of nodes with labels
shell: "kubectl get node --show-labels"
run_once: true
register: k8s_nodes
delegate_to: "{{groups['kube-master'][0]}}"
- name: ExtIP Controller | Get list of nodes with IPs
shell: 'kubectl get nodes -o jsonpath=''{range .items[*]}{@.metadata.name}{" "}{range @.status.addresses[?(@.type == "InternalIP")]}{@.address}{"\n"}{end}{end}'''
register: k8s_name_cmd
run_once: true
delegate_to: "{{groups['kube-master'][0]}}"
- name: ExtIP Controller | Set fact with node-ip search pattern
set_fact:
k8s_ip_pattern: ".* {{ ip }}$"
- name: ExtIP Controller | Find k8s node name by IP address
set_fact:
k8s_name: "{{ (k8s_name_cmd.stdout_lines | select('match', k8s_ip_pattern) | join(',')).split(' ')[0] }}"
- name: ExtIP Controller | Print k8s node names
debug:
msg: "{{ k8s_name }}"
- name: ExtIP Controller | Set fact with node-label search pattern
set_fact:
k8s_label_pattern: "^{{ k8s_name }} .*[,\ ]{{ extip_node_label }}=true.*"
- name: ExtIP Controller | Find matches for node by label
set_fact:
matches: "{{ k8s_nodes.stdout_lines | select('match', k8s_label_pattern) | list }}"
- name: ExtIP Controller | Label node if needed
shell: "kubectl label nodes {{ k8s_name }} {{ extip_node_label }}=true"
when: "{{ (matches | length) < 1 }}"
delegate_to: "{{groups['kube-master'][0]}}"
- name: ExtIP Controller | Upload claimcontroller config
run_once: true
template: src=controller.yaml.j2 dest=/etc/kubernetes/extip_controller.yml
delegate_to: "{{groups['kube-master'][0]}}"
- name: ExtIP Controller | Upload claimscheduler config
run_once: true
template: src=scheduler.yaml.j2 dest=/etc/kubernetes/extip_scheduler.yml
delegate_to: "{{groups['kube-master'][0]}}"
- name: ExtIP Controller | Create claimcontroller
run_once: true
shell: "kubectl create -f /etc/kubernetes/extip_controller.yml"
when: pods.stdout.find("{{ extip_ctrl_app }}-") == -1
delegate_to: "{{groups['kube-master'][0]}}"
- name: ExtIP Controller | Create claimscheduler
run_once: true
shell: "kubectl create -f /etc/kubernetes/extip_scheduler.yml"
when: pods.stdout.find("{{ extip_sched_app }}-") == -1
delegate_to: "{{groups['kube-master'][0]}}"

View File

@ -1,31 +0,0 @@
apiVersion: extensions/v1beta1
kind: {{ extip_ctrl_app_kind }}
metadata:
name: {{ extip_ctrl_app }}
spec:
{% if extip_ctrl_app_kind != "DaemonSet" %}
replicas: {{ extip_ctrl_replicas }}
{% endif %}
template:
metadata:
labels:
app: {{ extip_ctrl_label }}
spec:
hostNetwork: true
nodeSelector:
{{ extip_node_label }}: "true"
containers:
- name: externalipcontroller
image: {{ extip_image_repo }}:{{ extip_image_tag }}
imagePullPolicy: {{ extip_ctrl_image_pull_policy }}
securityContext:
privileged: true
command:
- ipmanager
- claimcontroller
- --iface={{ extip_iface }}
- --logtostderr
- --v={{ extip_ctrl_verbose }}
- --hb={{ extip_ctrl_hb }}
- --hostname={% if extip_distribution == "all" %}ecmp{% else %}{{ extip_ctrl_hostname }}
{% endif %}

View File

@ -1,23 +0,0 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: {{ extip_sched_app }}
spec:
replicas: {{ extip_sched_replicas }}
template:
metadata:
labels:
app: {{ extip_sched_label }}
spec:
containers:
- name: externalipcontroller
image: {{ extip_image_repo }}:{{ extip_image_tag }}
imagePullPolicy: {{ extip_sched_image_pull_policy }}
command:
- ipmanager
- scheduler
- --mask={{ extip_mask }}
- --logtostderr
- --v={{ extip_sched_verbose }}
- --leader-elect={{ extip_sched_leader_elect }}
- --monitor={{ extip_sched_monitor }}

View File

@ -1,2 +0,0 @@
# Use only kubedns for testing k8s dns
skip_dnsmasq: true

View File

@ -1,69 +0,0 @@
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
app: kubernetes-dashboard
version: v1.4.0
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: kubernetes-dashboard
template:
metadata:
labels:
app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
imagePullPolicy: Always
ports:
- containerPort: 9090
protocol: TCP
args:
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
labels:
app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
ports:
- port: 80
targetPort: 9090
selector:
app: kubernetes-dashboard

View File

@ -1,107 +0,0 @@
#!/bin/bash
test_networking() {
SLAVE_IPS=${SLAVE_IPS:-changeme}
ADMIN_IP=${ADMIN_IP:-changeme}
ADMIN_USER=${ADMIN_USER:-vagrant}
#Uncomment and set if running manually
#SLAVE_IPS=(10.10.0.2 10.10.0.3 10.10.0.3)
#ADMIN_IP="10.90.2.4"
if [[ "$SLAVE_IPS" == "changeme" || "$ADMIN_IP" == "changeme" ]];then
SLAVE_IPS=($(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'))
ADMIN_IP=${SLAVE_IPS[0]}
if [ -z "$SLAVE_IPS" ]; then
echo "Unable to determine k8s nodes. Please set variables SLAVE_IPS and ADMIN_IP."
return 1
fi
fi
SSH_OPTIONS="-o StrictHostKeyChecking=no -o UserKnownhostsFile=/dev/null"
if [ -z "$KUBEDNS_IP" ]; then
if type kubectl; then
KUBEDNS_IP=$(kubectl get svc --namespace kube-system kubedns --template={{.spec.clusterIP}})
else
KUBEDNS_IP=$(ssh $SSH_OPTIONS $ADMIN_USER@$ADMIN_IP kubectl get svc --namespace kube-system kubedns --template={{.spec.clusterIP}})
fi
fi
if [ -z "$DNSMASQ_IP" ]; then
if type kubectl; then
DNSMASQ_IP=$(kubectl get svc --namespace kube-system dnsmasq --template={{.spec.clusterIP}})
else
DNSMASQ_IP=$(ssh $SSH_OPTIONS $ADMIN_USER@$ADMIN_IP kubectl get svc --namespace kube-system dnsmasq --template={{.spec.clusterIP}})
fi
fi
domain="cluster.local"
internal_test_domain="kubernetes.default.svc.${domain}"
external_test_domain="kubernetes.io"
declare -A node_ip_works
declare -A node_internal_dns_works
declare -A node_external_dns_works
declare -A container_dns_works
declare -A container_hostnet_dns_works
failures=0
acceptable_failures=0
for node in "${SLAVE_IPS[@]}"; do
# Check UDP 53 for kubedns
if ssh $SSH_OPTIONS $ADMIN_USER@$node nc -uzv $KUBEDNS_IP 53 >/dev/null; then
node_ip_works["${node}"]="PASSED"
else
node_ip_works["${node}"]="FAILED"
(( failures++ ))
fi
# Check internal lookup
if ssh $SSH_OPTIONS $ADMIN_USER@$node nslookup $internal_test_domain $KUBEDNS_IP >/dev/null; then
node_internal_dns_works["${node}"]="PASSED"
else
node_internal_dns_works["${node}"]="FAILED"
(( failures++ ))
fi
# Check external lookup
if ssh $SSH_OPTIONS $ADMIN_USER@$node nslookup $external_test_domain $DNSMASQ_IP >/dev/null; then
node_external_dns_works[$node]="PASSED"
else
node_external_dns_works[$node]="FAILED"
(( failures++ ))
fi
# Check external DNS lookup from inside a container (via dnsmasq)
if ssh $SSH_OPTIONS $ADMIN_USER@$node sudo docker run --rm busybox nslookup $external_test_domain $DNSMASQ_IP >/dev/null; then
container_dns_works[$node]="PASSED"
else
container_dns_works[$node]="FAILED"
(( failures++ ))
fi
# Check external DNS lookup from a container with host networking (via dnsmasq)
if ssh $SSH_OPTIONS $ADMIN_USER@$node sudo docker run --net=host --rm busybox nslookup $external_test_domain $DNSMASQ_IP >/dev/null; then
container_hostnet_dns_works[$node]="PASSED"
else
container_hostnet_dns_works[$node]="FAILED"
(( failures++ ))
fi
done
# Report results
echo "Found $failures failures."
for node in "${SLAVE_IPS[@]}"; do
echo
echo "Node $node status:"
echo " Node to container communication: ${node_ip_works[$node]}"
echo " Node internal DNS lookup (via kubedns): ${node_internal_dns_works[$node]}"
echo " Node external DNS lookup (via dnsmasq): ${node_external_dns_works[$node]}"
echo " Container internal DNS lookup (via kubedns): ${container_dns_works[$node]}"
echo " Container internal DNS lookup (via kubedns): ${container_hostnet_dns_works[$node]}"
done
if [[ $failures -gt $acceptable_failures ]]; then
return $failures
else
return 0
fi
}
#Run test_networking if not sourced
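# Illustrative manual run (IPs are examples only):
#   SLAVE_IPS="10.10.0.2 10.10.0.3" ADMIN_IP=10.10.0.2 bash test_networking.sh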
[[ "${BASH_SOURCE[0]}" != "${0}" ]] || test_networking $@

View File

@ -1,55 +0,0 @@
---
# NOTE(mattymo): This can be removed after release of Calico 2.1
- name: Get timestamp from oldest dhclient pid
shell: "pgrep -o dhclient | xargs --no-run-if-empty -I{} stat --printf='%Y' /proc/{}"
register: oldest_dhclient_time
- name: check if /etc/dhclient.conf exists
stat:
path: /etc/dhclient.conf
register: dhclient_find_stat
- name: target dhclient conf file for /etc/dhclient.conf
set_fact:
dhclientconffile: /etc/dhclient.conf
when: dhclient_find_stat.stat.exists
- name: target dhclient conf file for /etc/dhcp/dhclient.conf
set_fact:
dhclientconffile: /etc/dhcp/dhclient.conf
when: not dhclient_find_stat.stat.exists
- name: Gather info on dhclient.conf
stat:
path: "{{ dhclientconffile }}"
register: dhclient_stat
- name: See if oldest dhclient process is older than dhclient.conf
set_fact:
needs_network_reset: "{{ oldest_dhclient_time.stdout != '' and oldest_dhclient_time.stdout|int < dhclient_stat.stat.mtime }}"
- name: update ansible time
setup: filter=ansible_date_time
- name: Calculate number of seconds since dhclient.conf was modified
set_fact:
mtime_dhclientconf: "{{ ansible_date_time.epoch|int - dhclient_stat.stat.mtime|int }}"
- name: Set networking service name
set_fact:
networking_service_name: >-
{% if ansible_os_family == "RedHat" -%}
network
{%- elif ansible_os_family == "Debian" -%}
networking
{%- endif %}
- name: Restart networking, dhclient, and calico-felix
shell: "{{ item }}"
when: needs_network_reset|bool and inventory_hostname in groups['k8s-cluster']
failed_when: false # in case dhclient was stopped
with_items:
- killall -q dhclient --older-than {{ mtime_dhclientconf }}s
- systemctl restart {{ networking_service_name }}
- docker exec calico-node killall -HUP calico-felix

View File

@ -1,75 +0,0 @@
---
- include: dns_fix.yml
when: kube_network_plugin in ['canal', 'calico']
- name: pick dnsmasq cluster IP
set_fact:
dnsmasq_server: >-
{%- if skip_dnsmasq|bool -%}{{ skydns_server }}{%- else -%}{{ dns_server }}{%- endif -%}
- name: Wait for kubedns to be ready
shell: "nslookup kubernetes.default.svc.{{ dns_domain }} {{ dnsmasq_server }}"
register: kubernetes_resolvable
until: kubernetes_resolvable.rc == 0
delay: 5
retries: 5
changed_when: false
- name: Copy network test script
copy:
src: test_networking.sh
dest: "{{ bin_dir }}/test_networking.sh"
owner: root
group: root
mode: 0755
- name: Get current list of kube nodes
command: kubectl get nodes
register: kubectl_nodes
delegate_to: "{{groups['kube-master'][0]}}"
run_once: true
- name: Ensure kube-nodes are in list of nodes
fail:
msg: "{{inventory_hostname}} is not in kubectl get nodes"
when: inventory_hostname in groups['kube-node'] and
inventory_hostname not in kubectl_nodes.stdout
- name: Test networking connectivity
shell: "bash {{ bin_dir }}/test_networking.sh"
environment:
KUBEDNS_IP: "{{ skydns_server }}"
DNSMASQ_IP: "{{ dnsmasq_server }}"
ADMIN_USER: "{{ ansible_user }}"
ADMIN_IP: "{{ hostvars[groups['kube-master'][0]]['ip'] | default(hostvars[groups['kube-master'][0]]['ansible_default_ipv4']['address']) }}"
SLAVE_IPS: "{{ ip }}"
changed_when: false
become: no
- name: Check netchecker status
uri: url=http://localhost:31081/api/v1/connectivity_check
register: netchecker_status
until: netchecker_status.status == 200
retries: 6
delay: 20
delegate_to: "{{groups['kube-node'][0]}}"
run_once: true
become: no
when: deploy_netchecker|bool | default(false)
- name: Copy dashboard definition
copy:
src: kubernetes-dashboard.yml
dest: /etc/kubernetes/kubernetes-dashboard.yml
owner: root
group: root
mode: 0644
register: dashboard
delegate_to: "{{groups['kube-master'][0]}}"
run_once: true
- name: Create Kubernetes dashboard
command: "{{ bin_dir }}/kubectl create -f /etc/kubernetes/kubernetes-dashboard.yml"
when: dashboard.changed
delegate_to: "{{groups['kube-master'][0]}}"
run_once: true

View File

@ -1,14 +0,0 @@
# Install discovered self-signed certificates
trust_self_signed_certs: false
atop_interval: 60
# Certificates to be preinstalled for cluster nodes, if trusted
# NOTE: indentation is crucial!
#certificates:
# - name: foo
# pem: |
# -----BEGIN CERTIFICATE-----
# ... skipped ...
# -----END CERTIFICATE-----
# - name: bar
# pem...

View File

@ -1,4 +0,0 @@
---
- name: Preinstall | update certs
command: update-ca-certificates
become: true

View File

@ -1,76 +0,0 @@
---
- name: Ensure required ansible version
assert:
that: ansible_version.major == 2 and ansible_version.minor >= 1
run_once: true
- name: "Wait for all hosts"
local_action: wait_for host={{ ansible_host | default(access_ip | default(ip)) }} port=22 state=started timeout=30
become: false
# bug from bugs.debian.org/843783
- name: "Install latest pyopenssl as workaround"
raw: pip install -U pyopenssl netaddr
- name: Gather facts
setup:
- name: Gather local SSH pubkeys
local_action: shell ssh-add -L
become: false
register: ssh_pubkeys
changed_when: false
failed_when: false
- name: Add SSH pubkey to authorized_keys
authorized_key:
user: "{{ ansible_ssh_user }}"
key: "{{ item }}"
state: present
with_items: "{{ ssh_pubkeys.stdout_lines }}"
when: ssh_pubkeys.rc == 0
- include: process_certificates.yml
when: trust_self_signed_certs|bool
- name: Set correct /etc/hosts entry
register: updated_etc_hosts
lineinfile:
dest: /etc/hosts
regexp: "^(127.0.1.1|{{ ip }})\t.*"
line: "{{ ip }} {{ inventory_hostname }}"
insertafter: "^127.0.0.1 localhost"
state: present
- name: Set atop interval
copy:
dest: /etc/default/atop
content: "INTERVAL={{ atop_interval }}"
mode: 0755
owner: root
group: root
- name: Ensure required packages are installed
apt:
name: "{{ item }}"
state: latest
when: ansible_os_family == "Debian"
with_items:
- atop
- dnsutils
- ntp
- netcat
- dbus
- name: Set hostname
hostname:
name: "{{ inventory_hostname }}"
# FIXME(mattymo): Opts are set too late (https://github.com/kubespray/kargo/issues/487)
- name: Set Docker options early
template:
src: docker.j2
dest: /etc/default/docker
owner: root
group: root
mode: 0644

View File

@ -1,40 +0,0 @@
---
- name: Create temp cert files
local_action: copy content={{ item.pem }} dest=/tmp/{{ item.name }} mode="0400"
become: false
run_once: true
with_items: "{{ certificates|default([]) }}"
- name: Determine selfsigned certs
local_action: shell echo /tmp/"{{ item.name }}" ;
[ "$(openssl x509 -noout -subject -in /tmp/{{ item.name }}|awk '{print $2}')" = "$(openssl x509 -noout -issuer -in /tmp/{{ item.name }}|awk '{print $2}')" ]
become: false
run_once: true
register: certscheck
ignore_errors: yes
with_items: "{{ certificates|default([]) }}"
- name: Get self signed certs
local_action: slurp src={{ item.stdout }}
become: false
run_once: true
register: self_signed_certs
when: item.rc == 0
with_items: "{{ certscheck.results }}"
- name: Remove temp cert files
local_action: file dest=/tmp/{{ item.name }} state=absent
become: false
run_once: true
with_items: "{{ certificates|default([]) }}"
- name: Mark self signed certs as trusted for nodes
copy:
content: "{{ item.content|b64decode }}"
dest: "/usr/local/share/ca-certificates/{{ item.source|basename }}"
force: yes
mode: "0600"
owner: "root"
notify: Preinstall | update certs
become: true
with_items: "{{ self_signed_certs.results }}"

View File

@ -1,2 +0,0 @@
# Deployed by Ansible
DOCKER_OPTS="{% if docker_options is defined %}{{ docker_options }}{% endif %}"

View File

@ -1,20 +0,0 @@
etcd_memory_limit: 0
kube_apiserver_memory_limit: ""
kube_apiserver_cpu_limit: ""
kube_apiserver_memory_requests: ""
kube_apiserver_cpu_requests: ""
kube_scheduler_memory_limit: ""
kube_scheduler_cpu_limit: ""
kube_scheduler_memory_requests: ""
kube_scheduler_cpu_requests: ""
kube_controller_memory_limit: ""
kube_controller_cpu_limit: ""
kube_controller_memory_requests: ""
kube_controller_cpu_requests: ""
kube_controller_node_monitor_grace_period: 2m
kube_controller_node_monitor_period: 20s
kube_controller_pod_eviction_timeout: 5m0s

View File

@ -1,2 +0,0 @@
packer_cache
*.retry

View File

@ -1,6 +0,0 @@
Vagrant.configure("2") do |config|
config.vm.provider :libvirt do |domain|
domain.disk_bus = "virtio"
end
end

View File

@ -1,174 +0,0 @@
{
"_comment": "Build with `PACKER_LOG=1 DEBIAN_MAJOR_VERSION=8 DEBIAN_MINOR_VERSION=5 ARCH=amd64 HEADLESS=true packer build debian.json`",
"_comment": "Use `build -only=qemu` or `-only=virtualbox-iso`",
"_comment": "See checksums at {{ user `debian_mirror` }}/{{ user `debian_version` }}/amd64/iso-cd/SHA256SUMS",
"variables": {
"name": "debian-{{ env `DEBIAN_MAJOR_VERSION` }}.{{ env `DEBIAN_MINOR_VERSION` }}.0-{{ env `ARCH` }}",
"iso_name": "debian-{{ env `DEBIAN_MAJOR_VERSION` }}.{{ env `DEBIAN_MINOR_VERSION` }}.0-{{ env `ARCH` }}-netinst",
"debian_type": "{{ env `DEBIAN_TYPE` }}",
"debian_version": "{{ env `DEBIAN_MAJOR_VERSION` }}.{{ env `DEBIAN_MINOR_VERSION` }}.0",
"debian_mirror": "http://cdimage.debian.org/cdimage/release/",
"debian_archive": "http://cdimage.debian.org/mirror/cdimage/archive/",
"ssh_username": "vagrant",
"ssh_password": "vagrant",
"ssh_wait_timeout": "30m",
"preseed_file_name": "debian-{{ env `DEBIAN_MAJOR_VERSION`}}/preseed.cfg",
"accelerator": "kvm",
"cpus": "1",
"memory": "1024",
"disk_size": "102400",
"headless": "{{ env `HEADLESS` }}",
"boot_wait": "10s",
"install_vagrant_key": "true",
"update": "true",
"cleanup": "true",
"pull_images": "true"
},
"builders":
[
{
"type": "qemu",
"vm_name": "qemu-{{ user `name` }}",
"iso_checksum_type": "sha256",
"iso_checksum_url": "{{ user `debian_mirror` }}/{{ user `debian_version` }}/amd64/iso-cd/SHA256SUMS",
"iso_url": "{{ user `debian_mirror` }}/{{ user `debian_version` }}/amd64/iso-cd/{{ user `iso_name` }}.iso",
"shutdown_command": "echo '{{ user `ssh_password` }}' | sudo -S shutdown -P now",
"disk_size": "{{ user `disk_size` }}",
"headless": "{{ user `headless` }}",
"http_directory": "http",
"ssh_username": "{{ user `ssh_username` }}",
"ssh_password": "{{ user `ssh_password` }}",
"ssh_wait_timeout": "{{ user `ssh_wait_timeout` }}",
"accelerator": "{{ user `accelerator` }}",
"qemuargs": [
[ "-smp", "{{ user `cpus` }}" ],
[ "-m", "{{ user `memory` }}M" ]
],
"boot_wait": "{{ user `boot_wait` }}",
"boot_command":
[
"<esc><wait>",
"install ",
"preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/{{ user `preseed_file_name` }} <wait>",
"debian-installer=en_US ",
"auto=true ",
"locale=en_US ",
"kbd-chooser/method=us ",
"keyboard-configuration/xkb-keymap=us ",
"fb=false ",
"debconf/frontend=noninteractive ",
"console-setup/ask_detect=false ",
"console-keymaps-at/keymap=us ",
"domain=localhost ",
"hostname=localhost ",
"<enter><wait>"
]
},
{
"type": "virtualbox-iso",
"vm_name": "virtualbox-{{ user `name` }}",
"iso_checksum_type": "sha256",
"iso_checksum_url": "{{ user `debian_mirror` }}/{{ user `debian_version` }}/amd64/iso-cd/SHA256SUMS",
"iso_url": "{{ user `debian_mirror` }}/{{ user `debian_version` }}/amd64/iso-cd/{{ user `iso_name` }}.iso",
"shutdown_command": "echo 'vagrant' | sudo -S shutdown -P now",
"disk_size": "{{ user `disk_size` }}",
"headless": "{{ user `headless` }}",
"http_directory": "http",
"ssh_username": "{{ user `ssh_username` }}",
"ssh_password": "{{ user `ssh_password` }}",
"ssh_wait_timeout": "{{ user `ssh_wait_timeout` }}",
"guest_os_type": "Ubuntu_64",
"guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
"virtualbox_version_file": ".vbox_version",
"vboxmanage": [
[ "modifyvm", "{{.Name}}", "--cpus", "{{ user `cpus` }}" ],
[ "modifyvm", "{{.Name}}", "--memory", "{{ user `memory` }}" ]
],
"boot_wait": "{{ user `boot_wait` }}",
"boot_command":
[
"<esc><wait>",
"install ",
"preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/{{ user `preseed_file_name` }} <wait>",
"debian-installer=en_US ",
"auto=true ",
"locale=en_US ",
"kbd-chooser/method=us ",
"keyboard-configuration/xkb-keymap=us ",
"fb=false ",
"debconf/frontend=noninteractive ",
"console-setup/ask_detect=false ",
"console-keymaps-at/keymap=us ",
"domain=localhost ",
"hostname=localhost ",
"<enter><wait>"
]
}
],
"provisioners": [
{
"type": "shell",
"environment_vars": [
"INSTALL_VAGRANT_KEY={{ user `install_vagrant_key` }}",
"UPDATE={{ user `update` }}",
"PULL_IMAGES={{ user `pull_images` }}",
"DEBIAN_FRONTEND=noninteractive"
],
"execute_command": "echo '{{ user `ssh_password` }}' | {{.Vars}} sudo -S -E bash -x '{{.Path}}'",
"scripts": [
"scripts/debian/mirrors.sh",
"scripts/debian/update.sh",
"scripts/debian/debian-jessie/packages.sh",
"scripts/docker_images.sh",
"scripts/debian/console.sh",
"scripts/debian/setup.sh",
"scripts/vagrant.sh",
"scripts/sshd.sh",
"scripts/vmtool.sh",
"scripts/cloud-init.sh"
]
},
{
"type": "shell",
"environment_vars": [
"CLEANUP={{ user `cleanup` }}"
],
"execute_command": "echo '{{ user `ssh_password` }}' | {{.Vars}} sudo -S -E bash -x '{{.Path}}'",
"scripts": [
"scripts/debian/cleanup.sh",
"scripts/minimize.sh"
]
}
],
"post-processors": [
{
"type": "shell-local",
"only": [ "qemu" ],
"inline": [
"qemu-img convert -c -f qcow2 -O qcow2 -o cluster_size=2M ./output-qemu/qemu-{{user `name`}} {{user `name`}}.qcow2"
]
},
{
"type": "vagrant",
"only": [ "qemu" ],
"compression_level": 9,
"vagrantfile_template": "Vagrantfile-qemu.template",
"output": "{{ user `name` }}-{{.Provider}}.box"
},
{
"type": "vagrant",
"only": [ "virtualbox-iso" ],
"compression_level": 9,
"output": "{{ user `name` }}-{{.Provider}}.box"
}
]
}
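For reference, a template like the one above is normally exercised straight from the packer CLI; a minimal sketch, assuming the template is saved as debian.json (the real filename is not visible in this diff) and that only the qemu builder is wanted:

# Hedged sketch; filename and variable values are illustrative only.
packer validate debian.json
packer build -only=qemu -var "headless=true" -var "cpus=2" debian.json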


@ -1,74 +0,0 @@
#!/bin/bash -x
# VERSION=0.1.0 DEBIAN_MAJOR_VERSION=8 DEBIAN_MINOR_VERSION=5 ARCH=amd64 OSTYPE=debian TYPE=libvirt ATLAS_USER=john NAME=foobox ./deploy.sh
# UBUNTU_MAJOR_VERSION=16.04 UBUNTU_MINOR_VERSION=.1 UBUNTU_TYPE=server ARCH=amd64 OSTYPE=ubuntu TYPE=virtualbox ATLAS_USER=doe ./deploy.sh
USER=${ATLAS_USER:-mirantis}
DEBIAN_MAJOR_VERSION=${DEBIAN_MAJOR_VERSION:-8}
DEBIAN_MINOR_VERSION=${DEBIAN_MINOR_VERSION:-5}
UBUNTU_MAJOR_VERSION=${UBUNTU_MAJOR_VERSION:-16.04}
UBUNTU_MINOR_VERSION=${UBUNTU_MINOR_VERSION:-.1}
UBUNTU_TYPE=${UBUNTU_TYPE:-server}
ARCH=${ARCH:-amd64}
TYPE=${TYPE:-libvirt}
VERSION=${VERSION:-0.0.0}
case ${OSTYPE} in
ubuntu) NAME="ubuntu-${UBUNTU_MAJOR_VERSION}${UBUNTU_MINOR_VERSION}-${UBUNTU_TYPE}-${ARCH}" ;;
debian) NAME="debian-${DEBIAN_MAJOR_VERSION}.${DEBIAN_MINOR_VERSION}.0-${ARCH}" ;;
*) echo "Unsupported OSTYPE" >&2; exit 1;;
esac
BOXNAME="${BOXNAME:-${NAME}-${TYPE}.box}"
create_atlas_box() {
if curl -sSL https://atlas.hashicorp.com/api/v1/box/${USER}/${NAME} | grep -q "Resource not found"; then
# Create the box, because it doesn't exist yet
echo "*** Creating box: ${NAME}, Short Description: ${SHORT_DESCRIPTION}"
set +x
curl -s https://atlas.hashicorp.com/api/v1/boxes -X POST -d box[name]="${NAME}" -d box[short_description]="${SHORT_DESCRIPTION}" -d box[is_private]=false -d access_token="${ATLAS_TOKEN}"
set -x
fi
}
remove_atlas_box() {
echo "*** Removing box: ${USER}/${NAME}"
set +x
curl -sSL https://atlas.hashicorp.com/api/v1/box/${USER}/${NAME} -X DELETE -d access_token="${ATLAS_TOKEN}"
set -x
}
remove_atlas_box_version() {
echo "*** Removing previous version: https://atlas.hashicorp.com/api/v1/box/$USER/$NAME/version/$1"
set +x
curl -s https://atlas.hashicorp.com/api/v1/box/$USER/$NAME/version/$1 -X DELETE -d access_token="$ATLAS_TOKEN" > /dev/null
set -x
}
upload_boxfile_to_atlas() {
echo "*** Getting current version of the box (if exists)"
local VER
set +x
local CURRENT_VERSION=$(curl -sS -L https://atlas.hashicorp.com/api/v1/box/${USER}/${NAME} -X GET -d access_token="${ATLAS_TOKEN}" | jq 'if .current_version.version == null then "0" else .current_version.version end | tonumber')
set -x
if [ "${VERSION}" == "0.0.0" ]; then
VER=$(echo "${CURRENT_VERSION} + 0.1" | bc | sed 's/^\./0./')
else
VER=${VERSION}
fi
echo "*** Uploading a version: ${VER}"
set +x
curl -sSL https://atlas.hashicorp.com/api/v1/box/${USER}/${NAME}/versions -X POST -d version[version]="${VER}" -d access_token="${ATLAS_TOKEN}" > /dev/null
curl -sSL https://atlas.hashicorp.com/api/v1/box/${USER}/${NAME}/version/${VER} -X PUT -d version[description]="${DESCRIPTION}" -d access_token="${ATLAS_TOKEN}" > /dev/null
curl -sSL https://atlas.hashicorp.com/api/v1/box/${USER}/${NAME}/version/${VER}/providers -X POST -d provider[name]="${TYPE}" -d access_token="${ATLAS_TOKEN}" > /dev/null
UPLOAD_PATH=$(curl -sS https://atlas.hashicorp.com/api/v1/box/${USER}/${NAME}/version/${VER}/provider/${TYPE}/upload?access_token=${ATLAS_TOKEN} | jq -r '.upload_path')
set -x
echo "*** Uploading \"${BOXNAME}\" to ${UPLOAD_PATH}"
curl -sSL -X PUT --upload-file ${BOXNAME} ${UPLOAD_PATH}
set +x
curl -sSL https://atlas.hashicorp.com/api/v1/box/${USER}/${NAME}/version/${VER}/release -X PUT -d access_token="${ATLAS_TOKEN}" > /dev/null
set -x
}
export DESCRIPTION=$(cat ../../doc/PACKER.md)
export SHORT_DESCRIPTION="${NAME} for ${TYPE}"
create_atlas_box
upload_boxfile_to_atlas
#remove_atlas_box
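When VERSION is left at its 0.0.0 default, upload_boxfile_to_atlas derives the next box version by adding 0.1 to the current one with bc; a standalone check of that arithmetic (input values are illustrative):

echo "1.2 + 0.1" | bc | sed 's/^\./0./'   # prints 1.3
echo "0 + 0.1" | bc | sed 's/^\./0./'     # prints 0.1 (sed restores the leading zero that bc drops)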


@ -1,438 +0,0 @@
#### Contents of the preconfiguration file (for jessie)
### Localization
# Preseeding only locale sets language, country and locale.
d-i debian-installer/locale string en_US
# The values can also be preseeded individually for greater flexibility.
d-i debian-installer/language string en
d-i debian-installer/country string US
d-i debian-installer/locale string en_US.UTF-8
# Optionally specify additional locales to be generated.
#d-i localechooser/supported-locales multiselect en_US.UTF-8, nl_NL.UTF-8
# Keyboard selection.
d-i keyboard-configuration/xkb-keymap select us
# d-i keyboard-configuration/toggle select No toggling
### Network configuration
# Disable network configuration entirely. This is useful for cdrom
# installations on non-networked devices where the network questions,
# warning and long timeouts are a nuisance.
#d-i netcfg/enable boolean false
# netcfg will choose an interface that has link if possible. This makes it
# skip displaying a list if there is more than one interface.
d-i netcfg/choose_interface select auto
# To pick a particular interface instead:
#d-i netcfg/choose_interface select eth1
# To set a different link detection timeout (default is 3 seconds).
# Values are interpreted as seconds.
#d-i netcfg/link_wait_timeout string 10
# If you have a slow dhcp server and the installer times out waiting for
# it, this might be useful.
#d-i netcfg/dhcp_timeout string 60
#d-i netcfg/dhcpv6_timeout string 60
# If you prefer to configure the network manually, uncomment this line and
# the static network configuration below.
#d-i netcfg/disable_autoconfig boolean true
# If you want the preconfiguration file to work on systems both with and
# without a dhcp server, uncomment these lines and the static network
# configuration below.
#d-i netcfg/dhcp_failed note
#d-i netcfg/dhcp_options select Configure network manually
# Static network configuration.
#
# IPv4 example
#d-i netcfg/get_ipaddress string 192.168.1.42
#d-i netcfg/get_netmask string 255.255.255.0
#d-i netcfg/get_gateway string 192.168.1.1
#d-i netcfg/get_nameservers string 192.168.1.1
#d-i netcfg/confirm_static boolean true
#
# IPv6 example
#d-i netcfg/get_ipaddress string fc00::2
#d-i netcfg/get_netmask string ffff:ffff:ffff:ffff::
#d-i netcfg/get_gateway string fc00::1
#d-i netcfg/get_nameservers string fc00::1
#d-i netcfg/confirm_static boolean true
# Any hostname and domain names assigned from dhcp take precedence over
# values set here. However, setting the values still prevents the questions
# from being shown, even if values come from dhcp.
d-i netcfg/get_hostname string unassigned-hostname
d-i netcfg/get_domain string unassigned-domain
# If you want to force a hostname, regardless of what either the DHCP
# server returns or what the reverse DNS entry for the IP is, uncomment
# and adjust the following line.
#d-i netcfg/hostname string somehost
# Disable that annoying WEP key dialog.
d-i netcfg/wireless_wep string
# The wacky dhcp hostname that some ISPs use as a password of sorts.
#d-i netcfg/dhcp_hostname string radish
# If non-free firmware is needed for the network or other hardware, you can
# configure the installer to always try to load it, without prompting. Or
# change to false to disable asking.
#d-i hw-detect/load_firmware boolean true
### Network console
# Use the following settings if you wish to make use of the network-console
# component for remote installation over SSH. This only makes sense if you
# intend to perform the remainder of the installation manually.
#d-i anna/choose_modules string network-console
#d-i network-console/authorized_keys_url string http://10.0.0.1/openssh-key
#d-i network-console/password password r00tme
#d-i network-console/password-again password r00tme
### Mirror settings
# If you select ftp, the mirror/country string does not need to be set.
#d-i mirror/protocol string ftp
d-i mirror/country string manual
d-i mirror/http/hostname string http.us.debian.org
d-i mirror/http/directory string /debian
d-i mirror/http/proxy string
# Suite to install.
#d-i mirror/suite string testing
# Suite to use for loading installer components (optional).
#d-i mirror/udeb/suite string testing
### Account setup
# Skip creation of a root account (normal user account will be able to
# use sudo).
d-i passwd/root-login boolean false
# Alternatively, to skip creation of a normal user account.
#d-i passwd/make-user boolean false
# Root password, either in clear text
d-i passwd/root-password password vagrant
d-i passwd/root-password-again password vagrant
# or encrypted using an MD5 hash.
#d-i passwd/root-password-crypted password [MD5 hash]
# To create a normal user account.
d-i passwd/user-fullname string vagrant
d-i passwd/username string vagrant
# Normal user's password, either in clear text
d-i passwd/user-password password vagrant
d-i passwd/user-password-again password vagrant
# or encrypted using an MD5 hash.
#d-i passwd/user-password-crypted password [MD5 hash]
# Create the first user with the specified UID instead of the default.
#d-i passwd/user-uid string 1010
d-i user-setup/allow-password-weak boolean true
# The user account will be added to some standard initial groups. To
# override that, use this.
#d-i passwd/user-default-groups string audio cdrom video
### Clock and time zone setup
# Controls whether or not the hardware clock is set to UTC.
d-i clock-setup/utc boolean true
# You may set this to any valid setting for $TZ; see the contents of
# /usr/share/zoneinfo/ for valid values.
d-i time/zone string UTC
# Controls whether to use NTP to set the clock during the install
d-i clock-setup/ntp boolean true
# NTP server to use. The default is almost always fine here.
#d-i clock-setup/ntp-server string ntp.example.com
### Partitioning
## Partitioning example
# If the system has free space you can choose to only partition that space.
# This is only honoured if partman-auto/method (below) is not set.
#d-i partman-auto/init_automatically_partition select biggest_free
# Alternatively, you may specify a disk to partition. If the system has only
# one disk the installer will default to using that, but otherwise the device
# name must be given in traditional, non-devfs format (so e.g. /dev/sda
# and not e.g. /dev/discs/disc0/disc).
# For example, to use the first SCSI/SATA hard disk:
#d-i partman-auto/disk string /dev/sda
# In addition, you'll need to specify the method to use.
# The presently available methods are:
# - regular: use the usual partition types for your architecture
# - lvm: use LVM to partition the disk
# - crypto: use LVM within an encrypted partition
d-i partman-auto/method string regular
# If one of the disks that are going to be automatically partitioned
# contains an old LVM configuration, the user will normally receive a
# warning. This can be preseeded away...
d-i partman-lvm/device_remove_lvm boolean true
# The same applies to pre-existing software RAID array:
d-i partman-md/device_remove_md boolean true
# And the same goes for the confirmation to write the lvm partitions.
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
# You can choose one of the three predefined partitioning recipes:
# - atomic: all files in one partition
# - home: separate /home partition
# - multi: separate /home, /var, and /tmp partitions
d-i partman-auto/choose_recipe select atomic
# Or provide a recipe of your own...
# If you have a way to get a recipe file into the d-i environment, you can
# just point at it.
#d-i partman-auto/expert_recipe_file string /hd-media/recipe
# If not, you can put an entire recipe into the preconfiguration file in one
# (logical) line. This example creates a small /boot partition, suitable
# swap, and uses the rest of the space for the root partition:
#d-i partman-auto/expert_recipe string \
# boot-root :: \
# 40 50 100 ext3 \
# $primary{ } $bootable{ } \
# method{ format } format{ } \
# use_filesystem{ } filesystem{ ext3 } \
# mountpoint{ /boot } \
# . \
# 500 10000 1000000000 ext3 \
# method{ format } format{ } \
# use_filesystem{ } filesystem{ ext3 } \
# mountpoint{ / } \
# . \
# 64 512 300% linux-swap \
# method{ swap } format{ } \
# .
d-i partman-basicfilesystems/no_swap boolean false
d-i partman-auto/expert_recipe string \
boot-root :: \
10240 1 -1 ext4 \
$primary{ } $bootable{ } \
method{ format } format{ } \
use_filesystem{ } filesystem{ ext4 } \
label{ root } mountpoint{ / } \
.
# The full recipe format is documented in the file partman-auto-recipe.txt
# included in the 'debian-installer' package or available from D-I source
# repository. This also documents how to specify settings such as file
# system labels, volume group names and which physical devices to include
# in a volume group.
# This makes partman automatically partition without confirmation, provided
# that you told it what to do using one of the methods above.
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
## Partitioning using RAID
# The method should be set to "raid".
#d-i partman-auto/method string raid
# Specify the disks to be partitioned. They will all get the same layout,
# so this will only work if the disks are the same size.
#d-i partman-auto/disk string /dev/sda /dev/sdb
# Next you need to specify the physical partitions that will be used.
#d-i partman-auto/expert_recipe string \
# multiraid :: \
# 1000 5000 4000 raid \
# $primary{ } method{ raid } \
# . \
# 64 512 300% raid \
# method{ raid } \
# . \
# 500 10000 1000000000 raid \
# method{ raid } \
# .
# Last you need to specify how the previously defined partitions will be
# used in the RAID setup. Remember to use the correct partition numbers
# for logical partitions. RAID levels 0, 1, 5, 6 and 10 are supported;
# devices are separated using "#".
# Parameters are:
# <raidtype> <devcount> <sparecount> <fstype> <mountpoint> \
# <devices> <sparedevices>
#d-i partman-auto-raid/recipe string \
# 1 2 0 ext3 / \
# /dev/sda1#/dev/sdb1 \
# . \
# 1 2 0 swap - \
# /dev/sda5#/dev/sdb5 \
# . \
# 0 2 0 ext3 /home \
# /dev/sda6#/dev/sdb6 \
# .
# For additional information see the file partman-auto-raid-recipe.txt
# included in the 'debian-installer' package or available from D-I source
# repository.
# This makes partman automatically partition without confirmation.
d-i partman-md/confirm boolean true
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
## Controlling how partitions are mounted
# The default is to mount by UUID, but you can also choose "traditional" to
# use traditional device names, or "label" to try filesystem labels before
# falling back to UUIDs.
#d-i partman/mount_style select uuid
### Base system installation
# Configure APT to not install recommended packages by default. Use of this
# option can result in an incomplete system and should only be used by very
# experienced users.
#d-i base-installer/install-recommends boolean false
# The kernel image (meta) package to be installed; "none" can be used if no
# kernel is to be installed.
#d-i base-installer/kernel/image string linux-image-586
### Apt setup
# You can choose to install non-free and contrib software.
#d-i apt-setup/non-free boolean true
#d-i apt-setup/contrib boolean true
# Uncomment this if you don't want to use a network mirror.
#d-i apt-setup/use_mirror boolean false
# Select which update services to use; define the mirrors to be used.
# Values shown below are the normal defaults.
#d-i apt-setup/services-select multiselect security, updates
#d-i apt-setup/security_host string security.debian.org
# Additional repositories, local[0-9] available
#d-i apt-setup/local0/repository string \
# http://local.server/debian stable main
#d-i apt-setup/local0/comment string local server
# Enable deb-src lines
#d-i apt-setup/local0/source boolean true
# URL to the public key of the local repository; you must provide a key or
# apt will complain about the unauthenticated repository and so the
# sources.list line will be left commented out
#d-i apt-setup/local0/key string http://local.server/key
# By default the installer requires that repositories be authenticated
# using a known gpg key. This setting can be used to disable that
# authentication. Warning: Insecure, not recommended.
#d-i debian-installer/allow_unauthenticated boolean true
# Uncomment this to add multiarch configuration for i386
#d-i apt-setup/multiarch string i386
### Package selection
tasksel tasksel/first multiselect none
# Individual additional packages to install
d-i pkgsel/include string openssh-server sudo
# Whether to upgrade packages after debootstrap.
# Allowed values: none, safe-upgrade, full-upgrade
d-i pkgsel/upgrade select full-upgrade
d-i pkgsel/update-policy select unattended-upgrades
# Some versions of the installer can report back on what software you have
# installed, and what software you use. The default is not to report back,
# but sending reports helps the project determine what software is most
# popular and include it on CDs.
popularity-contest popularity-contest/participate boolean false
### Boot loader installation
# Grub is the default boot loader (for x86). If you want lilo installed
# instead, uncomment this:
#d-i grub-installer/skip boolean true
# To also skip installing lilo, and install no bootloader, uncomment this
# too:
#d-i lilo-installer/skip boolean true
# This is fairly safe to set, it makes grub install automatically to the MBR
# if no other operating system is detected on the machine.
d-i grub-installer/only_debian boolean true
# This one makes grub-installer install to the MBR if it also finds some other
# OS, which is less safe as it might not be able to boot that other OS.
d-i grub-installer/with_other_os boolean true
# Due notably to potential USB sticks, the location of the MBR can not be
# determined safely in general, so this needs to be specified:
#d-i grub-installer/bootdev string /dev/sda
# To install to the first device (assuming it is not a USB stick):
d-i grub-installer/bootdev string default
# Alternatively, if you want to install to a location other than the mbr,
# uncomment and edit these lines:
#d-i grub-installer/only_debian boolean false
#d-i grub-installer/with_other_os boolean false
#d-i grub-installer/bootdev string (hd0,1)
# To install grub to multiple disks:
#d-i grub-installer/bootdev string (hd0,1) (hd1,1) (hd2,1)
# Optional password for grub, either in clear text
#d-i grub-installer/password password r00tme
#d-i grub-installer/password-again password r00tme
# or encrypted using an MD5 hash, see grub-md5-crypt(8).
#d-i grub-installer/password-crypted password [MD5 hash]
# Use the following option to add additional boot parameters for the
# installed system (if supported by the bootloader installer).
# Note: options passed to the installer will be added automatically.
#d-i debian-installer/add-kernel-opts string net.ifnames=0
### Finishing up the installation
# During installations from serial console, the regular virtual consoles
# (VT1-VT6) are normally disabled in /etc/inittab. Uncomment the next
# line to prevent this.
#d-i finish-install/keep-consoles boolean true
# Avoid that last message about the install being complete.
d-i finish-install/reboot_in_progress note
# This will prevent the installer from ejecting the CD during the reboot,
# which is useful in some situations.
#d-i cdrom-detect/eject boolean false
# This is how to make the installer shutdown when finished, but not
# reboot into the installed system.
#d-i debian-installer/exit/halt boolean true
# This will power off the machine instead of just halting it.
#d-i debian-installer/exit/poweroff boolean true
### Preseeding other packages
# Depending on what software you choose to install, or if things go wrong
# during the installation process, it's possible that other questions may
# be asked. You can preseed those too, of course. To get a list of every
# possible question that could be asked during an install, do an
# installation, and then run these commands:
# debconf-get-selections --installer > file
# debconf-get-selections >> file
#### Advanced options
### Running custom commands during the installation
# d-i preseeding is inherently not secure. Nothing in the installer checks
# for attempts at buffer overflows or other exploits of the values of a
# preconfiguration file like this one. Only use preconfiguration files from
# trusted locations! To drive that home, and because it's generally useful,
# here's a way to run any shell command you'd like inside the installer,
# automatically.
# This first command is run as early as possible, just after
# preseeding is read.
#d-i preseed/early_command string anna-install some-udeb
# This command is run immediately before the partitioner starts. It may be
# useful to apply dynamic partitioner preseeding that depends on the state
# of the disks (which may not be visible when preseed/early_command runs).
#d-i partman/early_command \
# string debconf-set partman-auto/disk "$(list-devices disk | head -n1)"
# This command is run just before the install finishes, but when there is
# still a usable /target directory. You can chroot to /target and use it
# directly, or use the apt-install and in-target commands to easily install
# packages and run commands in the target system.
#d-i preseed/late_command string apt-install zsh; in-target chsh -s /bin/zsh
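A preseed file like the one above can be syntax-checked locally before it is served over HTTP to the installer; a minimal sketch, assuming debconf-utils is installed and the file is saved as preseed.cfg (the real filename is not visible in this diff):

debconf-set-selections --checkonly preseed.cfg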


@ -1,40 +0,0 @@
d-i debian-installer/locale string en_US
d-i debian-installer/language string en
d-i debian-installer/country string US
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i base-installer/kernel/override-image string linux-generic-hwe-16.04
d-i clock-setup/utc-auto boolean true
d-i clock-setup/utc boolean true
d-i debconf/frontend select noninteractive
d-i debian-installer/framebuffer boolean false
d-i finish-install/reboot_in_progress note
d-i grub-installer/only_debian boolean true
d-i grub-installer/with_other_os boolean true
d-i netcfg/choose_interface select auto
d-i netcfg/get_domain string unassigned-domain
d-i netcfg/get_hostname string unassigned-hostname
d-i partman-auto/choose_recipe select atomic
d-i partman-auto/method string regular
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i partman/confirm_write_new_label boolean true
d-i passwd/user-fullname string vagrant
d-i passwd/username string vagrant
d-i passwd/user-password-again password vagrant
d-i passwd/user-password password vagrant
d-i pkgsel/include string openssh-server
d-i pkgsel/install-language-support boolean false
d-i pkgsel/update-policy select none
d-i pkgsel/upgrade select none
d-i time/zone string UTC
d-i user-setup/allow-password-weak boolean true
d-i user-setup/encrypt-home boolean false
tasksel tasksel/first multiselect standard, ubuntu-server
cloud-init cloud-init/datasources multiselect NoCloud, None
d-i mirror/country string manual
d-i mirror/http/hostname string archive.ubuntu.com
d-i mirror/http/mirror select archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
d-i mirror/http/proxy string


@ -1,5 +0,0 @@
#!/bin/bash -eux
### Install packages
echo "==> Installing cloud-init"
apt-get install -y cloud-init cloud-guest-utils cloud-initramfs-growroot cloud-initramfs-copymods


@ -1,37 +0,0 @@
#!/bin/bash -eux
# Clean up the apt cache
apt-get -y autoremove --purge
apt-get -y autoclean
echo "==> Cleaning up udev rules"
rm -rf /dev/.udev/ /lib/udev/rules.d/75-persistent-net-generator.rules
echo "==> Cleaning up leftover dhcp leases"
if [ -d "/var/lib/dhcp" ]; then
rm /var/lib/dhcp/*
fi
if [ "${CLEANUP}" = "true" ] ; then
echo "==> Removing man pages"
rm -rf /usr/share/man/*
echo "==> Removing anything in /usr/src"
rm -rf /usr/src/*
echo "==> Removing any docs"
rm -rf /usr/share/doc/*
fi
echo "==> Removing caches"
find /var/cache -type f -exec rm -rf {} \;
echo "==> Cleaning up log files"
find /var/log -type f -exec sh -c 'echo -n > {}' \;
echo "==> Cleaning up tmp"
rm -rf /tmp/*
echo "==> Clearing last login information"
> /var/log/lastlog
> /var/log/wtmp
> /var/log/btmp
echo "==> Removing bash history"
unset HISTFILE
rm -f /root/.bash_history
rm -f /home/vagrant/.bash_history


@ -1,10 +0,0 @@
#!/bin/bash -eux
echo "==> Configuring serial console"
cat >> /etc/default/grub <<EOF
GRUB_TERMINAL=serial
GRUB_CMDLINE_LINUX='0 console=ttyS0,19200n8 cgroup_enable=memory swapaccount=1'
GRUB_SERIAL_COMMAND="serial --speed=19200 --unit=0 --word=8 --parity=no --stop=1"
EOF
update-grub2
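After update-grub2 runs, the serial-console settings appended above can be verified on the built image; a hedged check using standard Debian paths:

grep -E '^GRUB_(TERMINAL|SERIAL_COMMAND|CMDLINE_LINUX)' /etc/default/grub
grep -o 'console=ttyS0[^ ]*' /boot/grub/grub.cfg | sort -u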


@ -1,55 +0,0 @@
#!/bin/bash -eux
apt-get update
apt-get -y dist-upgrade
PACKAGES="
curl
ethtool
htop
isc-dhcp-client
nfs-common
vim
git-review
python-tox
screen
sshpass
tmux
python-dev
python3-dev
python-netaddr
software-properties-common
python-setuptools
dnsutils
gcc
"
echo "==> Installing packages"
apt-get -y install $PACKAGES
# Install pip
curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py
python /tmp/get-pip.py
rm /tmp/get-pip.py
pip install --upgrade pip
# Preinstall Docker version required by Kargo:
apt-get -y purge "lxc-docker*"
apt-get -y purge "docker.io*"
KARGO_DEFAULT_DEBIAN_URL="https://raw.githubusercontent.com/openstack/fuel-ccp-installer/master/utils/kargo/kargo_default_debian.yaml"
DOCKER_REQUIRED_VERSION=`wget --output-document=- ${KARGO_DEFAULT_DEBIAN_URL}\
| awk '/docker_version/ {gsub(/"/, "", $NF); print $NF}'`
apt-get -y install apt-transport-https ca-certificates
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo debian-jessie main" \
> /etc/apt/sources.list.d/docker.list
apt-get update
DOCKER_APT_PACKAGE_VERSION=`apt-cache policy docker-engine \
| grep ${DOCKER_REQUIRED_VERSION} | head -1 \
| grep -o "${DOCKER_REQUIRED_VERSION}.*\ "`
DOCKER_APT_PACKAGE_FULL_NAME="docker-engine"
# FIXME(mzawadzki): this is a workaround for a non-matching Docker version
[ -n "${DOCKER_APT_PACKAGE_VERSION}" ] \
&& DOCKER_APT_PACKAGE_FULL_NAME+="="${DOCKER_APT_PACKAGE_VERSION}
apt-get -y install ${DOCKER_APT_PACKAGE_FULL_NAME}
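The Docker pin above depends on the awk filter extracting docker_version from the Kargo defaults YAML; a standalone check of that filter with a made-up value:

echo 'docker_version: "1.12"' | awk '/docker_version/ {gsub(/"/, "", $NF); print $NF}'
# prints 1.12 (the version string is illustrative only)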


@ -1,8 +0,0 @@
#!/bin/bash -eux
# Prefer http://httpredir.debian.org/ over cdn.debian.net or fixed mirrors
cat > /etc/apt/sources.list << EOF
deb http://httpredir.debian.org/debian jessie main
deb http://httpredir.debian.org/debian jessie-updates main
deb http://httpredir.debian.org/debian jessie-backports main
deb http://security.debian.org jessie/updates main
EOF


@ -1,15 +0,0 @@
#!/bin/bash -eux
echo "==> Setting up sudo"
sed -i "s/^.*requiretty/#Defaults requiretty/" /etc/sudoers
echo "==> Configuring logging"
touch /var/log/daemon.log
chmod 666 /var/log/daemon.log
echo "daemon.* /var/log/daemon.log" >> /etc/rsyslog.d/50-default.conf
echo "==> Setting vim as a default editor"
update-alternatives --set editor /usr/bin/vim.basic
echo "==> Setting default locale to en_US.UTF-8"
echo "LC_ALL=en_US.UTF-8" >> /etc/environment


@ -1,53 +0,0 @@
#!/bin/bash -eux
apt-get update
apt-get -y dist-upgrade
PACKAGES="
curl
ethtool
htop
isc-dhcp-client
nfs-common
vim
git-review
python-tox
screen
sshpass
tmux
python-dev
python3-dev
python-netaddr
software-properties-common
python-setuptools
gcc
ntp
"
echo "==> Installing packages"
apt-get -y install $PACKAGES
# Install pip
curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py
python /tmp/get-pip.py
rm /tmp/get-pip.py
pip install --upgrade pip
# Preinstall Docker version required by Kargo:
KARGO_DEFAULT_UBUNTU_URL="https://raw.githubusercontent.com/openstack/fuel-ccp-installer/master/utils/kargo/kargo_default_ubuntu.yaml"
DOCKER_REQUIRED_VERSION=`wget --output-document=- ${KARGO_DEFAULT_UBUNTU_URL}\
| awk '/docker_version/ {gsub(/"/, "", $NF); print $NF}'`
apt-get -y install apt-transport-https ca-certificates
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 \
--recv-keys 58118E89F3A912897C070ADBF76221572C52609D
echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" \
> /etc/apt/sources.list.d/docker.list
apt-get update
DOCKER_APT_PACKAGE_VERSION=`apt-cache policy docker-engine \
| grep ${DOCKER_REQUIRED_VERSION} | head -1 \
| grep -o "${DOCKER_REQUIRED_VERSION}.*\ "`
DOCKER_APT_PACKAGE_FULL_NAME="docker-engine"
# FIXME(mzawadzki): this is a workaround for a non-matching Docker version
[ -n "${DOCKER_APT_PACKAGE_VERSION}" ] \
&& DOCKER_APT_PACKAGE_FULL_NAME+="="${DOCKER_APT_PACKAGE_VERSION}
apt-get -y install ${DOCKER_APT_PACKAGE_FULL_NAME}


@ -1,10 +0,0 @@
#!/bin/bash -eux
if [[ $UPDATE =~ true || $UPDATE =~ 1 || $UPDATE =~ yes ]]; then
echo "==> Updating list of repositories"
# apt-get update does not actually upgrade anything; it only refreshes the package index
apt-get -y update
echo "==> Performing dist-upgrade (all packages and kernel)"
apt-get -y dist-upgrade --force-yes
fi


@ -1,27 +0,0 @@
#!/bin/bash -eux
[ "${PULL_IMAGES}" = "true" ] || exit 0
# Pull K8s hyperkube image version required by Kargo:
KARGO_DEFAULT_COMMON_URL="https://raw.githubusercontent.com/openstack/fuel-ccp-installer/master/utils/kargo/kargo_default_common.yaml"
TMP_FILE=output_${RANDOM}.yaml
wget --output-document ${TMP_FILE} ${KARGO_DEFAULT_COMMON_URL}
HYPERKUBE_IMAGE_NAME=`awk '/hyperkube_image_repo/ {gsub(/"/, "", $NF); \
print $NF}' ${TMP_FILE}`
HYPERKUBE_IMAGE_TAG=`awk '/hyperkube_image_tag/ {gsub(/"/, "", $NF); \
print $NF}' ${TMP_FILE}`
docker pull ${HYPERKUBE_IMAGE_NAME}:${HYPERKUBE_IMAGE_TAG}
# Pull K8s calico images version required by Kargo:
CALICO_IMAGE_TAG=`awk '/calico_version/ {gsub(/"/, "", $NF); \
print $NF}' ${TMP_FILE}`
docker pull calico/ctl:${CALICO_IMAGE_TAG}
docker pull calico/node:${CALICO_IMAGE_TAG}
# Pull K8s etcd image version required by Kargo:
ETCD_IMAGE_TAG=`awk '/etcd_version/ {gsub(/"/, "", $NF); \
print $NF}' ${TMP_FILE}`
docker pull quay.io/coreos/etcd:${ETCD_IMAGE_TAG}
# Pull required images w/o version preferences:
docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
rm ${TMP_FILE}
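Once this script has run, the pre-pulled images can be listed to confirm they are baked into the box; a minimal sketch (the exact repositories and tags depend on the Kargo defaults in effect at build time):

docker images --format '{{.Repository}}:{{.Tag}}' | grep -E 'hyperkube|calico|etcd|kubernetes-dashboard'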


@ -1,5 +0,0 @@
#!/bin/bash -eux
# Make sure we wait until all the data is written to disk, otherwise
# Packer might quit too early before the large files are deleted
sync


@ -1,3 +0,0 @@
#!/bin/bash -eux
echo "UseDNS no" >> /etc/ssh/sshd_config


@ -1,71 +0,0 @@
#!/bin/bash -eux
date > /etc/vagrant_box_build_time
SSH_USER=${SSH_USER:-vagrant}
SSH_USER_HOME=${SSH_USER_HOME:-/home/${SSH_USER}}
VAGRANT_INSECURE_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
VAGRANT_SECURE_KEY="-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzI
w+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoP
kcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2
hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NO
Td0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcW
yLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQIBIwKCAQEA4iqWPJXtzZA68mKd
ELs4jJsdyky+ewdZeNds5tjcnHU5zUYE25K+ffJED9qUWICcLZDc81TGWjHyAqD1
Bw7XpgUwFgeUJwUlzQurAv+/ySnxiwuaGJfhFM1CaQHzfXphgVml+fZUvnJUTvzf
TK2Lg6EdbUE9TarUlBf/xPfuEhMSlIE5keb/Zz3/LUlRg8yDqz5w+QWVJ4utnKnK
iqwZN0mwpwU7YSyJhlT4YV1F3n4YjLswM5wJs2oqm0jssQu/BT0tyEXNDYBLEF4A
sClaWuSJ2kjq7KhrrYXzagqhnSei9ODYFShJu8UWVec3Ihb5ZXlzO6vdNQ1J9Xsf
4m+2ywKBgQD6qFxx/Rv9CNN96l/4rb14HKirC2o/orApiHmHDsURs5rUKDx0f9iP
cXN7S1uePXuJRK/5hsubaOCx3Owd2u9gD6Oq0CsMkE4CUSiJcYrMANtx54cGH7Rk
EjFZxK8xAv1ldELEyxrFqkbE4BKd8QOt414qjvTGyAK+OLD3M2QdCQKBgQDtx8pN
CAxR7yhHbIWT1AH66+XWN8bXq7l3RO/ukeaci98JfkbkxURZhtxV/HHuvUhnPLdX
3TwygPBYZFNo4pzVEhzWoTtnEtrFueKxyc3+LjZpuo+mBlQ6ORtfgkr9gBVphXZG
YEzkCD3lVdl8L4cw9BVpKrJCs1c5taGjDgdInQKBgHm/fVvv96bJxc9x1tffXAcj
3OVdUN0UgXNCSaf/3A/phbeBQe9xS+3mpc4r6qvx+iy69mNBeNZ0xOitIjpjBo2+
dBEjSBwLk5q5tJqHmy/jKMJL4n9ROlx93XS+njxgibTvU6Fp9w+NOFD/HvxB3Tcz
6+jJF85D5BNAG3DBMKBjAoGBAOAxZvgsKN+JuENXsST7F89Tck2iTcQIT8g5rwWC
P9Vt74yboe2kDT531w8+egz7nAmRBKNM751U/95P9t88EDacDI/Z2OwnuFQHCPDF
llYOUI+SpLJ6/vURRbHSnnn8a/XG+nzedGH5JGqEJNQsz+xT2axM0/W/CRknmGaJ
kda/AoGANWrLCz708y7VYgAtW2Uf1DPOIYMdvo6fxIB5i9ZfISgcJ/bbCUkFrhoH
+vq/5CIWxCPp0f85R4qxxQ5ihxJ0YDQT9Jpx4TMss4PSavPaBH3RXow5Ohe+bYoQ
NE5OgEXk2wVfZczCZpigBKbKZHNYcelXtTt/nP3rsCuGcM4h53s=
-----END RSA PRIVATE KEY-----"
# Packer passes boolean user variables through as '1', but this might change in
# the future, so also check for 'true'.
if [ "${INSTALL_VAGRANT_KEY}" = "true" ] || [ "${INSTALL_VAGRANT_KEY}" = "1" ]; then
# Create Vagrant user (if not already present)
if ! id -u ${SSH_USER} >/dev/null 2>&1; then
echo "==> Creating ${SSH_USER} user"
/usr/sbin/groupadd ${SSH_USER}
/usr/sbin/useradd ${SSH_USER} -g ${SSH_USER} -G sudo -d ${SSH_USER_HOME} --create-home
echo "${SSH_USER}:${SSH_USER}" | chpasswd
fi
# Set up sudo
echo "==> Giving ${SSH_USER} sudo powers"
echo "${SSH_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers.d/vagrant
echo "==> Installing vagrant keys"
mkdir ${SSH_USER_HOME}/.ssh
chmod 700 ${SSH_USER_HOME}/.ssh
pushd ${SSH_USER_HOME}/.ssh
# https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub
echo "${VAGRANT_INSECURE_KEY}" > ${SSH_USER_HOME}/.ssh/authorized_keys
chmod 600 ${SSH_USER_HOME}/.ssh/authorized_keys
echo "${VAGRANT_SECURE_KEY}" > ${SSH_USER_HOME}/.ssh/id_rsa
chmod 600 ${SSH_USER_HOME}/.ssh/id_rsa
chown -R ${SSH_USER}:${SSH_USER} ${SSH_USER_HOME}/.ssh
popd
# add default user to necessary groups:
# workaround for Docker not being installed yet:
groupadd -f docker
usermod -aG docker vagrant
fi
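Once a box built with this key in place boots, a manual login check is possible with the matching private key that Vagrant ships; a hedged sketch (the guest IP is illustrative, and ~/.vagrant.d/insecure_private_key is Vagrant's copy of the key):

ssh -o IdentitiesOnly=yes -i ~/.vagrant.d/insecure_private_key vagrant@192.168.121.10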


@ -1,47 +0,0 @@
#!/bin/bash -eux
if [[ $PACKER_BUILDER_TYPE =~ vmware ]]; then
echo "==> Installing VMware Tools"
# Assuming the following packages are installed
# apt-get install -y linux-headers-$(uname -r) build-essential perl
cd /tmp
mkdir -p /mnt/cdrom
mount -o loop /home/vagrant/linux.iso /mnt/cdrom
tar zxf /mnt/cdrom/VMwareTools-*.tar.gz -C /tmp/
/tmp/vmware-tools-distrib/vmware-install.pl -d
rm /home/vagrant/linux.iso
umount /mnt/cdrom
rmdir /mnt/cdrom
rm -rf /tmp/VMwareTools-*
fi
if [[ $PACKER_BUILDER_TYPE =~ virtualbox ]]; then
echo "==> Installing VirtualBox guest additions"
# Assuming the following packages are installed
# apt-get install -y linux-headers-$(uname -r) build-essential perl
# apt-get install -y dkms
VBOX_VERSION=$(cat /home/vagrant/.vbox_version)
mount -o loop /home/vagrant/VBoxGuestAdditions_$VBOX_VERSION.iso /mnt
sh /mnt/VBoxLinuxAdditions.run
umount /mnt
rm /home/vagrant/VBoxGuestAdditions_$VBOX_VERSION.iso
rm /home/vagrant/.vbox_version
if [[ $VBOX_VERSION = "4.3.10" ]]; then
ln -s /opt/VBoxGuestAdditions-4.3.10/lib/VBoxGuestAdditions /usr/lib/VBoxGuestAdditions
fi
fi
if [[ $PACKER_BUILDER_TYPE =~ parallels ]]; then
echo "==> Installing Parallels tools"
mount -o loop /home/vagrant/prl-tools-lin.iso /mnt
/mnt/install --install-unattended-with-deps
umount /mnt
rm -rf /home/vagrant/prl-tools-lin.iso
rm -f /home/vagrant/.prlctl_version
fi

Some files were not shown because too many files have changed in this diff.