Retire project

Change-Id: I290fe34314c62807ab48be73b6a0442bb4c827cd
Filip Pytloun 2017-01-25 18:22:19 +01:00
parent 989d6db4a7
commit e4a00028a9
89 changed files with 7 additions and 4130 deletions

.gitignore vendored
View File

@ -1,4 +0,0 @@
tests/build/
*.swp
*.pyc
.ropeproject

View File

@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/salt-formula-kubernetes.git

View File

@ -1,34 +0,0 @@
kubernetes formula
==================
2017.1.2 (2017-01-19)
- fix cni copy order
2017.1.1 (2017-01-18)
- move basic k8s setup to common
- copy cni from hyperkube
- configurable calico node image
- use calico/cni image for obtaining cnis
- use calico/ctl image for obtaining calicoctl binary
- add cross requirement for k8s services and hyperkube
- update metadata for new pillar model
- update manifests to use hyperkube from common
2016.8.3 (2016-08-12)
- remove obsolete kube-addons scripts
2016.8.2 (2016-08-10)
- minor fixes
2016.8.1 (2016-08-05)
- second release
0.0.1 (2016-06-13)
- Initial formula setup

LICENSE
View File

@ -1,175 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

View File

@ -1,26 +0,0 @@
DESTDIR=/
SALTENVDIR=/usr/share/salt-formulas/env
RECLASSDIR=/usr/share/salt-formulas/reclass
FORMULANAME=$(shell grep name: metadata.yml|head -1|cut -d : -f 2|grep -Eo '[a-z0-9\-]*')
all:
@echo "make install - Install into DESTDIR"
@echo "make test - Run tests"
@echo "make clean - Cleanup after tests run"
install:
# Formula
[ -d $(DESTDIR)/$(SALTENVDIR) ] || mkdir -p $(DESTDIR)/$(SALTENVDIR)
cp -a $(FORMULANAME) $(DESTDIR)/$(SALTENVDIR)/
[ ! -d _modules ] || cp -a _modules $(DESTDIR)/$(SALTENVDIR)/
[ ! -d _states ] || cp -a _states $(DESTDIR)/$(SALTENVDIR)/ || true
# Metadata
[ -d $(DESTDIR)/$(RECLASSDIR)/service/$(FORMULANAME) ] || mkdir -p $(DESTDIR)/$(RECLASSDIR)/service/$(FORMULANAME)
cp -a metadata/service/* $(DESTDIR)/$(RECLASSDIR)/service/$(FORMULANAME)
test:
[ ! -d tests ] || (cd tests; ./run_tests.sh)
clean:
[ ! -d tests/build ] || rm -rf tests/build
[ ! -d build ] || rm -rf build

View File

@ -1,752 +1,9 @@
Project moved
=============

This repository, as a part of the openstack-salt project, was moved to join the rest of the
salt-formulas ecosystem.

==================
Kubernetes Formula
==================

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
This formula deploys production-ready Kubernetes and generates the Kubernetes manifests as well.

Based on the official Kubernetes Salt base:
https://github.com/kubernetes/kubernetes/tree/master/cluster/saltbase

Extended with the Contrail contribution: https://github.com/Juniper/kubernetes/blob/opencontrail-integration/docs/getting-started-guides/opencontrail.md
Sample pillars
==============
**REQUIRED:** Define the images to use for hyperkube, the CNIs, and calicoctl:
.. code-block:: yaml
parameters:
kubernetes:
common:
hyperkube:
image: gcr.io/google_containers/hyperkube:v1.4.6
pool:
network:
calicoctl:
image: calico/ctl
cni:
image: calico/cni
Containers on pool definitions in ``pool.service.local``:
.. code-block:: yaml
parameters:
kubernetes:
pool:
service:
local:
enabled: False
service: libvirt
cluster: openstack-compute
namespace: default
role: ${linux:system:name}
type: LoadBalancer
kind: Deployment
apiVersion: extensions/v1beta1
replicas: 1
host_pid: True
nodeSelector:
- key: openstack
value: ${linux:system:name}
hostNetwork: True
container:
libvirt-compute:
privileged: True
image: ${_param:docker_repository}/libvirt-compute
tag: ${_param:openstack_container_tag}
Master definition
.. code-block:: yaml
kubernetes:
master:
addons:
dns:
domain: cluster.local
enabled: true
replicas: 1
server: 10.254.0.10
admin:
password: password
username: admin
apiserver:
address: 10.0.175.100
port: 8080
ca: kubernetes
enabled: true
etcd:
host: 127.0.0.1
members:
- host: 10.0.175.100
name: node040
name: node040
token: ca939ec9c2a17b0786f6d411fe019e9b
kubelet:
allow_privileged: true
network:
engine: calico
hash: fb5e30ebe6154911a66ec3fb5f1195b2
private_ip_range: 10.150.0.0/16
version: v0.19.0
service_addresses: 10.254.0.0/16
storage:
engine: glusterfs
members:
- host: 10.0.175.101
port: 24007
- host: 10.0.175.102
port: 24007
- host: 10.0.175.103
port: 24007
port: 24007
token:
admin: DFvQ8GJ9JD4fKNfuyEddw3rjnFTkUKsv
controller_manager: EreGh6AnWf8DxH8cYavB2zS029PUi7vx
dns: RAFeVSE4UvsCz4gk3KYReuOI5jsZ1Xt3
kube_proxy: DFvQ8GelB7afH3wClC9romaMPhquyyEe
kubelet: 7bN5hJ9JD4fKjnFTkUKsvVNfuyEddw3r
logging: MJkXKdbgqRmTHSa2ykTaOaMykgO6KcEf
monitoring: hnsj0XqABgrSww7Nqo7UVTSZLJUt2XRd
scheduler: HY1UUxEPpmjW4a1dDLGIANYQp1nZkLDk
version: v1.2.4
kubernetes:
pool:
address: 0.0.0.0
allow_privileged: true
ca: kubernetes
cluster_dns: 10.254.0.10
cluster_domain: cluster.local
enabled: true
kubelet:
allow_privileged: true
config: /etc/kubernetes/manifests
frequency: 5s
master:
apiserver:
members:
- host: 10.0.175.100
etcd:
members:
- host: 10.0.175.100
host: 10.0.175.100
network:
engine: calico
hash: fb5e30ebe6154911a66ec3fb5f1195b2
version: v0.19.0
token:
kube_proxy: DFvQ8GelB7afH3wClC9romaMPhquyyEe
kubelet: 7bN5hJ9JD4fKjnFTkUKsvVNfuyEddw3r
version: v1.2.4
Kubernetes with OpenContrail network plugin
------------------------------------------------
On Master:
.. code-block:: yaml
kubernetes:
master:
network:
engine: opencontrail
host: 10.0.170.70
port: 8082
default_domain: default-domain
default_project: default-domain:default-project
public_network: default-domain:default-project:Public
public_ip_range: 185.22.97.128/26
private_ip_range: 10.150.0.0/16
service_cluster_ip_range: 10.254.0.0/16
network_label: name
service_label: uses
cluster_service: kube-system/default
network_manager:
image: pupapaik/opencontrail-kube-network-manager
tag: release-1.1-jpa-final-1
On pools:
.. code-block:: yaml
kubernetes:
pool:
network:
engine: opencontrail
Kubernetes control plane running in systemd
-------------------------------------------
By default, kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy and etcd run in Docker containers through manifests. For a stable production environment these should run under systemd.
.. code-block:: yaml
kubernetes:
master:
container: false
kubernetes:
pool:
container: false
Because the k8s services run under the ``kube`` user without root privileges, the secure port for the apiserver needs to be changed.
.. code-block:: yaml
kubernetes:
master:
apiserver:
secure_port: 8081
Kubernetes with Flannel
-----------------------
On Master:
.. code-block:: yaml
kubernetes:
master:
network:
engine: flannel
# If you don't register master as node:
etcd:
members:
- host: 10.0.175.101
port: 4001
- host: 10.0.175.102
port: 4001
- host: 10.0.175.103
port: 4001
common:
network:
engine: flannel
On pools:
.. code-block:: yaml
kubernetes:
pool:
network:
engine: flannel
etcd:
members:
- host: 10.0.175.101
port: 4001
- host: 10.0.175.102
port: 4001
- host: 10.0.175.103
port: 4001
common:
network:
engine: flannel
Kubernetes with Calico
-----------------------
On Master:
.. code-block:: yaml
kubernetes:
master:
network:
engine: calico
# If you don't register master as node:
etcd:
members:
- host: 10.0.175.101
port: 4001
- host: 10.0.175.102
port: 4001
- host: 10.0.175.103
port: 4001
On pools:
.. code-block:: yaml
kubernetes:
pool:
network:
engine: calico
etcd:
members:
- host: 10.0.175.101
port: 4001
- host: 10.0.175.102
port: 4001
- host: 10.0.175.103
port: 4001
Post deployment configuration
.. code-block:: bash
# set ETCD
export ETCD_AUTHORITY=10.0.111.201:4001
# Set NAT for pods subnet
calicoctl pool add 192.168.0.0/16 --nat-outgoing
# Status commands
calicoctl status
calicoctl node show
Kubernetes with GlusterFS for storage
---------------------------------------------
.. code-block:: yaml
kubernetes:
master:
...
storage:
engine: glusterfs
port: 24007
members:
- host: 10.0.175.101
port: 24007
- host: 10.0.175.102
port: 24007
- host: 10.0.175.103
port: 24007
...
Kubernetes namespaces
---------------------
Create namespace:
.. code-block:: yaml
kubernetes:
master:
...
namespace:
kube-system:
enabled: True
namespace2:
enabled: True
namespace3:
enabled: False
...
Kubernetes labels
-----------------
Create labels:
.. code-block:: yaml
kubernetes:
pool:
...
host:
label:
key01:
value: value01
enable: True
key02:
value: value02
enable: False
name: ${linux:system:name}
...
Pull images from private registries
-----------------------------------
.. code-block:: yaml
kubernetes:
master:
...
registry:
secret:
registry01:
enabled: True
key: (get from `cat /root/.docker/config.json | base64`)
namespace: default
...
control:
...
service:
service01:
...
image_pull_secretes: registry01
...
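The ``key`` value is the base64-encoded Docker client configuration taken from a host that has
already logged in to the registry. A minimal sketch of producing it (``registry01.example.com``
is a hypothetical registry host; assumes GNU ``base64`` for the ``-w0`` no-wrap option):

.. code-block:: bash

   # log in once so /root/.docker/config.json holds the registry credentials
   docker login registry01.example.com
   # emit the config as a single base64 line to paste into the pillar 'key' field
   cat /root/.docker/config.json | base64 -w0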
Kubernetes Service Definitions in pillars
==========================================
The following samples show how to generate Kubernetes manifests as well, providing a single tool for complete infrastructure management.
Deployment manifest
---------------------
.. code-block:: yaml
salt:
control:
enabled: True
hostNetwork: True
service:
memcached:
privileged: True
service: memcached
role: server
type: LoadBalancer
replicas: 3
kind: Deployment
apiVersion: extensions/v1beta1
ports:
- port: 8774
name: nova-api
- port: 8775
name: nova-metadata
volume:
volume_name:
type: hostPath
mount: /certs
path: /etc/certs
container:
memcached:
image: memcached
tag: 2
ports:
- port: 8774
name: nova-api
- port: 8775
name: nova-metadata
variables:
- name: HTTP_TLS_CERTIFICATE
value: /certs/domain.crt
- name: HTTP_TLS_KEY
value: /certs/domain.key
volumes:
- name: /etc/certs
type: hostPath
mount: /certs
path: /etc/certs
PetSet manifest
---------------------
.. code-block:: yaml
service:
memcached:
apiVersion: apps/v1alpha1
kind: PetSet
service_name: 'memcached'
container:
memcached:
...
Configmap
---------
You can create configmaps using the support layer between formulas.
It works simply: e.g. the nova formula contains the file ``meta/config.yml``, which
defines the config files used by that service and its roles.
The Kubernetes formula is able to generate these files using a custom pillar and
grains structure. This way you can run Docker images built in any way
while still re-using your configuration management.
Example pillar:
.. code-block:: yaml
kubernetes:
control:
config_type: default|kubernetes # Output is yaml k8s or default single files
configmap:
nova-control:
grains:
# Alternate grains as OS running in container may differ from
# salt minion OS. Needed only if grains matters for config
# generation.
os_family: Debian
pillar:
# Generic pillar for nova controller
nova:
controller:
enabled: true
version: liberty
...
To tell which services support config generation, ensure a pillar structure like this:
.. code-block:: yaml
nova:
_support:
config:
enabled: true
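For reference, the ``meta/config.yml`` fragment loaded by the formula maps each config file name
to a ``source`` template and a ``template`` engine — the same structure that ``files/configmap.yml``
consumes. A minimal, hypothetical sketch for the nova example above (paths are illustrative only):

.. code-block:: yaml

   # hypothetical nova/meta/config.yml fragment
   config:
     nova.conf:
       source: salt://nova/files/nova.conf
       template: jinja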
initContainers
--------------
Example pillar:
.. code-block:: yaml
kubernetes:
control:
service:
memcached:
init_containers:
- name: test-mysql
image: busybox
command:
- sleep
- 3600
volumes:
- name: config
mount: /test
- name: test-memcached
image: busybox
command:
- sleep
- 3600
volumes:
- name: config
mount: /test
Affinity
--------
podAffinity
===========
Example pillar:
.. code-block:: yaml
kubernetes:
control:
service:
memcached:
affinity:
pod_affinity:
name: podAffinity
expression:
label_selector:
name: labelSelector
selectors:
- key: app
value: memcached
topology_key: kubernetes.io/hostname
podAntiAffinity
===============
Example pillar:
.. code-block:: yaml
kubernetes:
control:
service:
memcached:
affinity:
anti_affinity:
name: podAntiAffinity
expression:
label_selector:
name: labelSelector
selectors:
- key: app
value: opencontrail-control
topology_key: kubernetes.io/hostname
nodeAffinity
===============
Example pillar:
.. code-block:: yaml
kubernetes:
control:
service:
memcached:
affinity:
node_affinity:
name: nodeAffinity
expression:
match_expressions:
name: matchExpressions
selectors:
- key: key
operator: In
values:
- value1
- value2
Volumes
-------
hostPath
==========
.. code-block:: yaml
service:
memcached:
container:
memcached:
volumes:
- name: volume1
mountPath: /volume
readOnly: True
...
volume:
volume1:
name: /etc/certs
type: hostPath
path: /etc/certs
emptyDir
========
.. code-block:: yaml
service:
memcached:
container:
memcached:
volumes:
- name: volume1
mountPath: /volume
readOnly: True
...
volume:
volume1:
name: /etc/certs
type: emptyDir
configMap
=========
.. code-block:: yaml
service:
memcached:
container:
memcached:
volumes:
- name: volume1
mountPath: /volume
readOnly: True
...
volume:
volume1:
type: config_map
item:
configMap1:
key: config.conf
path: config.conf
configMap2:
key: policy.json
path: policy.json
To mount a single configuration file instead of the whole directory:
.. code-block:: yaml
service:
memcached:
container:
memcached:
volumes:
- name: volume1
mountPath: /volume/config.conf
sub_path: config.conf
Generating Jobs
===============
Example pillar:
.. code-block:: yaml
kubernetes:
control:
job:
sleep:
job: sleep
restart_policy: Never
container:
sleep:
image: busybox
tag: latest
command:
- sleep
- "3600"
Volumes and variables can be used in the same way as during Deployment generation.
Custom params:
.. code-block:: yaml
kubernetes:
control:
job:
host_network: True
host_pid: True
container:
sleep:
privileged: True
node_selector:
key: node
value: one
image_pull_secretes: password
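Note that the ``kubernetes.control`` states only render the manifests under ``/srv/kubernetes``
(e.g. ``/srv/kubernetes/jobs/<name>-job.yml``); applying them to the cluster is left to the
operator. A minimal sketch, assuming the control node has ``kubectl`` configured against the
cluster:

.. code-block:: bash

   # render the job, deployment and service manifests from pillar data
   salt-call state.sls kubernetes.control
   # submit the generated job manifest to the cluster
   kubectl create -f /srv/kubernetes/jobs/sleep-job.yml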
Documentation and Bugs
======================
To learn how to deploy OpenStack Salt, consult the documentation available
online at:
https://wiki.openstack.org/wiki/OpenStackSalt
In the unfortunate event that bugs are discovered, they should be reported to
the appropriate bug tracker. If you obtained the software from a 3rd party
operating system vendor, it is often wise to use their own bug tracker for
reporting problems. In all other cases use the master OpenStack bug tracker,
available at:
http://bugs.launchpad.net/openstack-salt
Developers wishing to work on the OpenStack Salt project should always base
their work on the latest formulas code, available from the master GIT
repository at:
https://git.openstack.org/cgit/openstack/salt-formula-kubernetes
Developers should also join the discussion on the IRC list, at:
https://wiki.openstack.org/wiki/Meetings/openstack-salt
Copyright and authors
=====================
(c) 2016 tcp cloud a.s.
(c) 2016 OpenStack Foundation
Github: https://github.com/salt-formulas
Launchpad: https://launchpad.net/salt-formulas
IRC: #salt-formulas @ irc.freenode.net

View File

@ -1 +0,0 @@
2017.1.2

View File

@ -1 +0,0 @@
python-yaml

View File

@ -1,118 +0,0 @@
{% from "kubernetes/map.jinja" import common with context %}
kubernetes_pkgs:
pkg.installed:
- names: {{ common.pkgs }}
{%- if common.network.get('engine', 'none') == 'flannel' %}
flannel-tar:
archive:
- extracted
- user: root
- name: /usr/local/src
- makedirs: True
- source: https://storage.googleapis.com/kubernetes-release/flannel/flannel-0.5.5-linux-amd64.tar.gz
- tar_options: v
- source_hash: md5=972c717254775bef528f040af804f2cc
- archive_format: tar
- if_missing: /usr/local/src/flannel/flannel-0.5.5/
{%- endif %}
{%- if common.hyperkube %}
/tmp/hyperkube:
file.directory:
- user: root
- group: root
hyperkube-copy:
dockerng.running:
- image: {{ common.hyperkube.image }}
- command: cp -v /hyperkube /tmp/hyperkube
- binds:
- /tmp/hyperkube/:/tmp/hyperkube/
- force: True
- require:
- file: /tmp/hyperkube
/usr/bin/hyperkube:
file.managed:
- source: /tmp/hyperkube/hyperkube
- mode: 751
- makedirs: true
- user: root
- group: root
- require:
- dockerng: hyperkube-copy
/usr/bin/kubectl:
file.symlink:
- target: /usr/bin/hyperkube
- require:
- file: /usr/bin/hyperkube
/etc/systemd/system/kubelet.service:
file.managed:
- source: salt://kubernetes/files/systemd/kubelet.service
- template: jinja
- user: root
- group: root
- mode: 644
/etc/kubernetes/config:
file.absent
/etc/kubernetes/manifests:
file.directory:
- user: root
- group: root
- mode: 0751
{%- if not pillar.kubernetes.pool is defined %}
/etc/default/kubelet:
file.managed:
- source: salt://kubernetes/files/kubelet/default.master
- template: jinja
- user: root
- group: root
- mode: 644
{%- else %}
/etc/default/kubelet:
file.managed:
- source: salt://kubernetes/files/kubelet/default.pool
- template: jinja
- user: root
- group: root
- mode: 644
{%- endif %}
manifest_dir_create:
file.directory:
- name: /etc/kubernetes/manifests
- user: root
- group: root
- mode: 0751
/etc/kubernetes/kubelet.kubeconfig:
file.managed:
- source: salt://kubernetes/files/kubelet/kubelet.kubeconfig
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
kubelet_service:
service.running:
- name: kubelet
- enable: True
- watch:
- file: /etc/default/kubelet
- file: /usr/bin/hyperkube
- file: /etc/kubernetes/kubelet.kubeconfig
- file: manifest_dir_create
{% endif %}

View File

@ -1,153 +0,0 @@
{% from "kubernetes/map.jinja" import control with context %}
{%- if control.enabled %}
/srv/kubernetes:
file.directory:
- makedirs: true
{%- if control.job is defined %}
{%- for job_name, job in control.job.iteritems() %}
/srv/kubernetes/jobs/{{ job_name }}-job.yml:
file.managed:
- source: salt://kubernetes/files/job.yml
- user: root
- group: root
- template: jinja
- makedirs: true
- require:
- file: /srv/kubernetes
- defaults:
job: {{ job|yaml }}
{%- endfor %}
{%- endif %}
{%- for service_name, service in control.service.iteritems() %}
{%- if service.enabled %}
/srv/kubernetes/services/{{ service.cluster }}/{{ service_name }}-svc.yml:
file.managed:
- source: salt://kubernetes/files/svc.yml
- user: root
- group: root
- template: jinja
- makedirs: true
- require:
- file: /srv/kubernetes
- defaults:
service: {{ service|yaml }}
{%- endif %}
/srv/kubernetes/{{ service.kind|lower }}/{{ service_name }}-{{ service.kind }}.yml:
file.managed:
- source: salt://kubernetes/files/rc.yml
- user: root
- group: root
- template: jinja
- makedirs: true
- require:
- file: /srv/kubernetes
- defaults:
service: {{ service|yaml }}
{%- endfor %}
{%- for node_name, node_grains in salt['mine.get']('*', 'grains.items').iteritems() %}
{%- if node_grains.get('kubernetes', {}).service is defined %}
{%- set service = node_grains.get('kubernetes', {}).get('service', {}) %}
{%- if service.enabled %}
/srv/kubernetes/services/{{ node_name }}-svc.yml:
file.managed:
- source: salt://kubernetes/files/svc.yml
- user: root
- group: root
- template: jinja
- makedirs: true
- require:
- file: /srv/kubernetes
- defaults:
service: {{ service|yaml }}
{%- endif %}
/srv/kubernetes/{{ service.kind|lower }}/{{ node_name }}-{{ service.kind }}.yml:
file.managed:
- source: salt://kubernetes/files/rc.yml
- user: root
- group: root
- template: jinja
- makedirs: true
- require:
- file: /srv/kubernetes
- defaults:
service: {{ service|yaml }}
{%- endif %}
{%- endfor %}
{%- for configmap_name, configmap in control.get('configmap', {}).iteritems() %}
{%- if configmap.enabled|default(True) %}
{%- if configmap.pillar is defined %}
{%- if control.config_type == "default" %}
{%- for service_name in configmap.pillar.keys() %}
{%- if pillar.get(service_name, {}).get('_support', {}).get('config', {}).get('enabled', False) %}
{%- set support_fragment_file = service_name+'/meta/config.yml' %}
{% macro load_support_file(pillar, grains) %}{% include support_fragment_file %}{% endmacro %}
{%- set service_config_files = load_support_file(configmap.pillar, configmap.get('grains', {}))|load_yaml %}
{%- for service_config_name, service_config in service_config_files.config.iteritems() %}
/srv/kubernetes/configmap/{{ configmap_name }}/{{ service_config_name }}:
file.managed:
- source: {{ service_config.source }}
- user: root
- group: root
- template: {{ service_config.template }}
- makedirs: true
- require:
- file: /srv/kubernetes
- defaults:
pillar: {{ configmap.pillar|yaml }}
grains: {{ configmap.get('grains', {}) }}
{%- endfor %}
{%- endif %}
{%- endfor %}
{%- else %}
/srv/kubernetes/configmap/{{ configmap_name }}.yml:
file.managed:
- source: salt://kubernetes/files/configmap.yml
- user: root
- group: root
- template: jinja
- makedirs: true
- require:
- file: /srv/kubernetes
- defaults:
configmap_name: {{ configmap_name }}
configmap: {{ configmap|yaml }}
grains: {{ configmap.get('grains', {}) }}
{%- endif %}
{%- else %}
{# TODO: configmap not using support between formulas #}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- endif %}

View File

@ -1,3 +0,0 @@
include:
- kubernetes.control.cluster

View File

@ -1,2 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{{ master.admin.password }},{{ master.admin.username }},admin

View File

@ -1,46 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
[Unit]
Description=calico-node
After=docker.service
Requires=docker.service
[Service]
ExecStartPre=-/usr/bin/docker rm -f calico-node
ExecStart=/usr/bin/docker run --net=host --privileged \
--name=calico-node \
-e HOSTNAME={{ master.host.name }} \
-e IP={{ master.apiserver.address }} \
-e IP6={{ master.get('ipv6_address', '') }} \
{%- if master.network.calico_network_backend is defined %}
-e CALICO_NETWORKING_BACKEND="{{ master.network.calico_network_backend }}"
{%- endif %}
-e AS={{ master.network.get('as', '64512') }} \
-e NO_DEFAULT_POOLS={{ master.network.get('no_default_pools', false ) }} \
-e CALICO_LIBNETWORK_ENABLED={{ master.network.get('libnetwork_enabled', true ) }} \
-e ETCD_ENDPOINTS={% for member in master.network.etcd.members %}http://{{ member.host }}:{{ member.port }}{% if not loop.last %},{% endif %}{% endfor %} \
{%- if master.network.etcd.ssl is defined %}
##TO BE DONE
-e ETCD_CA_CERT_FILE= \
-e ETCD_CERT_FILE= \
-e ETCD_KEY_FILE= \
-v {{ calico_cert_dir }}:{{ calico_cert_dir }}:ro \
{{ calico_node_image_repo }}:{{ calico_node_image_tag }}
{%- endif %}
-v /var/log/calico:/var/log/calico \
-v /run/docker/plugins:/run/docker/plugins \
-v /lib/modules:/lib/modules \
-v /var/run/calico:/var/run/calico \
{%- if master.network.volumes is defined %}
{%- for volume in master.network.volumes %}
-v {{ volume }} \
{%- endfor %}
{%- endif %}
{{ master.network.get('image', 'calico/node') }}:{{ master.network.get('version', 'latest') }}
Restart=always
RestartSec=10s
ExecStop=-/usr/bin/docker stop calico-node
[Install]
WantedBy=multi-user.target

View File

@ -1,46 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
[Unit]
Description=calico-node
After=docker.service
Requires=docker.service
[Service]
ExecStartPre=-/usr/bin/docker rm -f calico-node
ExecStart=/usr/bin/docker run --net=host --privileged \
--name=calico-node \
-e HOSTNAME={{ pool.host.name }} \
-e IP={{ pool.address }} \
-e IP6={{ pool.get('ipv6_address', '') }} \
{%- if pool.network.calico_network_backend is defined %}
-e CALICO_NETWORKING_BACKEND="{{ pool.network.calico_network_backend }}"
{%- endif %}
-e AS={{ pool.network.get('as', '64512') }} \
-e NO_DEFAULT_POOLS={{ pool.network.get('no_default_pools', false ) }} \
-e CALICO_LIBNETWORK_ENABLED={{ pool.network.get('libnetwork_enabled', true ) }} \
-e ETCD_ENDPOINTS={% for member in pool.network.etcd.members %}http://{{ member.host }}:{{ member.port }}{% if not loop.last %},{% endif %}{% endfor %} \
{%- if pool.network.etcd.ssl is defined %}
##TO BE DONE
-e ETCD_CA_CERT_FILE= \
-e ETCD_CERT_FILE= \
-e ETCD_KEY_FILE= \
-v {{ calico_cert_dir }}:{{ calico_cert_dir }}:ro \
{{ calico_node_image_repo }}:{{ calico_node_image_tag }}
{%- endif %}
-v /var/log/calico:/var/log/calico \
-v /run/docker/plugins:/run/docker/plugins \
-v /lib/modules:/lib/modules \
-v /var/run/calico:/var/run/calico \
{%- if pool.network.volumes is defined %}
{%- for volume in pool.network.volumes %}
-v {{ volume }} \
{%- endfor %}
{%- endif %}
{{ pool.network.get('image', 'calico/node') }}
Restart=always
RestartSec=10s
ExecStop=-/usr/bin/docker stop calico-node
[Install]
WantedBy=multi-user.target

View File

@ -1,13 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
{
"name": "calico-k8s-network",
"type": "calico",
"etcd_endpoints": "{% for member in pool.network.etcd.members %}http://{{ member.host }}:{{ member.port }}{% if not loop.last %},{% endif %}{% endfor %}",
"log_level": "info",
"ipam": {
"type": "calico-ipam"
},
"kubernetes": {
"kubeconfig": "/etc/kubernetes/kubelet.kubeconfig"
}
}

View File

@ -1,7 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
# This host's IPv4 address (the source IP address used to reach other nodes
# in the Kubernetes cluster).
DEFAULT_IPV4={{ master.apiserver.address }}
# IP and port of etcd instance used by Calico
ETCD_ENDPOINTS={% for member in master.network.etcd.members %}http://{{ member.host }}:{{ member.port }}{% if not loop.last %},{% endif %}{% endfor %}

View File

@ -1,10 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
# This host's IPv4 address (the source IP address used to reach other nodes
# in the Kubernetes cluster).
DEFAULT_IPV4={{ pool.address }}
# The Kubernetes master IP
KUBERNETES_MASTER={{ pool.apiserver.host }}
# IP and port of etcd instance used by Calico
ETCD_ENDPOINTS={% for member in pool.network.etcd.members %}http://{{ member.host }}:{{ member.port }}{% if not loop.last %},{% endif %}{% endfor %}

View File

@ -1,18 +0,0 @@
{%- from "kubernetes/map.jinja" import control with context %}
{%- macro load_support_file(file, pillar, grains) %}{% include file %}{% endmacro %}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ configmap_name }}-{{ configmap.get('version', '1') }}
namespace: {{ configmap.get('namespace', 'default') }}
data:
{%- for service_name in configmap.pillar.keys() %}
{%- if pillar.get(service_name, {}).get('_support', {}).get('config', {}).get('enabled', False) %}
{%- set support_fragment_file = service_name+'/meta/config.yml' %}
{%- set service_config_files = load_support_file(support_fragment_file, configmap.pillar, configmap.get('grains', {}))|load_yaml %}
{%- for service_config_name, service_config in service_config_files.config.iteritems() %}
{{ service_config_name }}: |
{{ load_support_file(service_config.source|replace('salt://', ''), configmap.pillar, configmap.get('grains', {}))|indent(4) }}
{%- endfor %}
{%- endif %}
{%- endfor %}

View File

@ -1,3 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
DAEMON_ARGS="--etcd-endpoints={% for member in master.network.etcd.members %}http://{{ member.host }}:4001{% if not loop.last %},{% endif %}{% endfor %} --ip-masq --etcd-prefix=/kubernetes.io/network"

View File

@ -1,3 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
DAEMON_ARGS="--etcd-endpoints={% for member in pool.network.etcd.members %}http://{{ member.host }}:4001{% if not loop.last %},{% endif %}{% endfor %} --ip-masq --etcd-prefix=/kubernetes.io/network"

View File

@ -1,9 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{
"Network": "{{ master.network.private_ip_range }}",
"SubnetLen": 24,
"Backend": {
"Type": "vxlan",
"VNI": 1
}
}

View File

@ -1,12 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Endpoints
metadata:
name: glusterfs-cluster
subsets:
{%- for member in master.storage.members %}
- addresses:
- ip: {{ member.host }}
ports:
- port: {{ member.port }}
{%- endfor %}

View File

@ -1,8 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Service
metadata:
name: glusterfs-cluster
spec:
ports:
- port: {{ master.storage.port }}

View File

@ -1,89 +0,0 @@
{% from "kubernetes/map.jinja" import control with context %}
apiVersion: batch/v1
kind: Job
metadata:
name: {{ job.job }}-job
namespace: {{ job.get('namespace', 'default') }}
spec:
template:
metadata:
spec:
{%- if job.host_network is defined %}
hostNetwork: True
{%- endif %}
{%- if job.host_pid is defined %}
hostPID: True
{%- endif %}
containers:
{%- for container_name, container in job.container.iteritems() %}
- name: {{ container_name }}
image: {% if container.registry is defined %}{{ container.registry }}/{%- endif %}{{ container.image }}{%- if container.tag is defined %}:{{ container.tag }}{%- endif %}
imagePullPolicy: {{ container.get('image_pull_policy', 'IfNotPresent') }}
{%- if container.privileged is defined %}
securityContext:
privileged: True
{%- endif %}
{%- if container.variables is defined %}
env:
{%- for variable in container.variables %}
- name: {{ variable.name }}
{%- if variable.field_path is defined %}
valueFrom:
fieldRef:
fieldPath: {{ variable.field_path }}
{%- else %}
value: {{ variable.value }}
{%- endif %}
{%- endfor %}
{%- endif %}
{%- if container.command is defined %}
command:
{%- for command in container.command %}
- {{ command }}
{%- endfor %}
{%- endif %}
{%- if container.volumes is defined %}
volumeMounts:
{%- for volume in container.volumes %}
- name: {{ volume.name }}
mountPath: {{ volume.mount }}
readOnly: {{ volume.get('read_only', 'False') }}
{%- endfor %}
{%- endif %}
{%- endfor %}
{%- if job.volume is defined %}
volumes:
{%- for volume_name, volume in job.volume.iteritems() %}
- name: {{ volume_name }}
{%- if volume.type == 'empty_dir' %}
emptyDir: {}
{%- elif volume.type == 'host_path' %}
hostPath:
path: {{ volume.path }}
{%- elif volume.type == 'glusterfs' %}
glusterfs:
endpoints: {{ volume.endpoints }}
path: {{ volume.path }}
readOnly: {{ volume.get('read_only', 'False') }}
{%- elif volume.type == 'config_map' %}
configMap:
name: {{ volume_name }}
items:
{%- for name, item in volume.item.iteritems() %}
- key: {{ item.key }}
path: {{ item.path }}
{%- endfor %}
{%- endif %}
{%- endfor %}
{%- endif %}
restartPolicy: {{ job.restart_policy }}
{%- if job.node_selector is defined %}
nodeSelector:
{%- for selector in job.node_selector %}
{{ selector.key }}: {{ selector.value }}
{%- endfor %}
{%- endif %}
{%- if job.image_pull_secretes is defined %}
imagePullSecrets:
- name: {{ job.image_pull_secretes }}
{%- endif %}

View File

@ -1,13 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{{ master.token.admin }},admin,admin
{{ master.token.kubelet }},kubelet,kubelet
{{ master.token.kube_proxy }},kube_proxy,kube_proxy
{{ master.token.scheduler }},system:scheduler,system:scheduler
{{ master.token.controller_manager }},system:controller_manager,system:controller_manager
{%- if master.addons.logging is defined %}
{{ master.token.logging }},system:logging,system:logging
{%- endif %}
{%- if master.addons.monitoring is defined %}
{{ master.token.monitoring }},system:monitoring,system:monitoring
{%- endif %}
{{ master.token.dns }},system:dns,system:dns

View File

@ -1,17 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard-address
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
selector:
k8s-app: kubernetes-dashboard
deprecatedPublicIPs: ["{{ master.addons.ui.public_ip }}"]
type: LoadBalancer
ports:
- port: 80
targetPort: 9090

View File

@ -1,43 +0,0 @@
apiVersion: v1
kind: ReplicationController
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
name: dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
version: v1.4.0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 50Mi
requests:
cpu: 100m
memory: 50Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30

View File

@ -1,16 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Endpoints
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
subsets:
- addresses:
- ip: {{ master.addons.ui.public_ip }}
ports:
- port: 9090
protocol: TCP

View File

@ -1,18 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
spec:
{%- if master.network.engine != 'opencontrail' %}
selector:
k8s-app: kubernetes-dashboard
type: NodePort
{%- endif %}
ports:
- port: 80
targetPort: 9090

View File

@ -1,104 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: ReplicationController
metadata:
name: dns
namespace: kube-system
labels:
k8s-app: kube-dns
version: v20
kubernetes.io/cluster-service: "true"
spec:
replicas: {{ master.addons.dns.replicas }}
selector:
k8s-app: kube-dns
version: v20
template:
metadata:
labels:
k8s-app: kube-dns
version: v20
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/kubedns-amd64:1.8
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthz-kubedns
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
timeoutSeconds: 5
args:
# command = "/kube-dns"
- --domain={{ master.addons.dns.domain }}
- --dns-port=10053
- --kube-master-url=http://{{ master.apiserver.insecure_address }}:8080
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4
livenessProbe:
httpGet:
path: /healthz-dnsmasq
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
{%- if master.addons.dns.get('dnsmasq', {}) %}
{%- for option_name, option_value in master.addons.dns.dnsmasq.iteritems() %}
- --{{ option_name }}{% if option_value %}={{ option_value }}{% endif %}
{%- endfor %}
{%- endif %}
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: healthz
image: gcr.io/google_containers/exechealthz-amd64:1.2
resources:
limits:
memory: 50Mi
requests:
cpu: 10m
memory: 50Mi
args:
- --cmd=nslookup kubernetes.default.svc.{{ master.addons.dns.domain }} 127.0.0.1 >/dev/null
- --url=/healthz-dnsmasq
- --cmd=nslookup kubernetes.default.svc.{{ master.addons.dns.domain }} 127.0.0.1:10053 >/dev/null
- --url=/healthz-kubedns
- --port=8080
- --quiet
ports:
- containerPort: 8080
protocol: TCP
dnsPolicy: Default # Don't use cluster DNS.

View File

@ -1,21 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ master.addons.dns.server }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@ -1,18 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: heapster
kubernetes.io/cluster-service: 'true'
kubernetes.io/name: 'Heapster'
name: heapster-address
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 8082
selector:
k8s-app: heapster
deprecatedPublicIPs: ['{{ master.addons.heapster_influxdb.public_ip }}']
type: LoadBalancer

View File

@ -1,30 +0,0 @@
apiVersion: v1
kind: ReplicationController
metadata:
labels:
k8s-app: heapster
version: v6
name: heapster
namespace: kube-system
spec:
replicas: 1
selector:
k8s-app: heapster
version: v6
template:
metadata:
labels:
# name: heapster
uses: monitoring-influxdb
k8s-app: heapster
version: v6
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: heapster
image: kubernetes/heapster:canary
imagePullPolicy: Always
command:
- /heapster
- --source=kubernetes:https://kubernetes.default
- --sink=influxdb:http://monitoring-influxdb:8086

View File

@ -1,17 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Endpoints
metadata:
name: heapster
namespace: kube-system
labels:
k8s-app: heapster
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Heapster"
subsets:
- addresses:
- ip: {{ master.addons.heapster_influxdb.public_ip }}
ports:
- port: 8082
protocol: TCP

View File

@ -1,13 +0,0 @@
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: heapster
kubernetes.io/cluster-service: 'true'
kubernetes.io/name: 'Heapster'
name: heapster
namespace: kube-system
spec:
ports:
- port: 80
targetPort: 8082

View File

@ -1,25 +0,0 @@
apiVersion: v1
kind: ReplicationController
metadata:
labels:
name: influxGrafana
name: influxdb-grafana
namespace: kube-system
spec:
replicas: 1
selector:
name: influxGrafana
template:
metadata:
labels:
name: influxGrafana
spec:
containers:
- name: influxdb
image: kubernetes/heapster_influxdb:v0.6
volumeMounts:
- mountPath: /data
name: influxdb-storage
volumes:
- name: influxdb-storage
emptyDir: {}

View File

@ -1,17 +0,0 @@
apiVersion: v1
kind: Service
metadata:
labels:
name: monitoring-influxdb
name: monitoring-influxdb
namespace: kube-system
spec:
ports:
- name: http
port: 8083
targetPort: 8083
- name: api
port: 8086
targetPort: 8086
selector:
name: influxGrafana

View File

@ -1,59 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: ReplicationController
metadata:
name: registry
namespace: kube-system
labels:
k8s-app: kube-registry
version: v0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-registry
version: v0
template:
metadata:
labels:
k8s-app: kube-registry
version: v0
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: registry
image: registry:2.5.1
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 100Mi
env:
- name: REGISTRY_HTTP_ADDR
value: {{ master.addons.registry.bind.get('host', '0.0.0.0') }}:{{ master.addons.registry.bind.get('port', '5000') }}
- name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
value: /var/lib/registry
ports:
- containerPort: {{ master.addons.registry.bind.get('port', '5000') }}
name: registry
protocol: TCP
{%- if master.addons.registry.volume is defined %}
volumeMounts:
- name: image-store
mountPath: /var/lib/registry
volumes:
- name: image-store
{%- if master.addons.registry.volume.get('type', 'emptyDir') == 'emptyDir' %}
emptyDir: {}
{%- elif master.addons.registry.volume.type == 'hostPath' %}
hostPath:
path: {{ master.addons.registry.volume.path }}
{%- elif master.addons.registry.volume.type == 'glusterfs' %}
glusterfs:
endpoints: {{ master.addons.registry.volume.endpoints }}
path: {{ master.addons.registry.volume.path }}
readOnly: {{ master.addons.registry.volume.read_only }}
{%- endif %}
{%- endif %}

View File

@ -1,17 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
apiVersion: v1
kind: Service
metadata:
name: kube-registry
namespace: kube-system
labels:
k8s-app: kube-registry
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeRegistry"
spec:
selector:
k8s-app: kube-registry
ports:
- name: registry
port: {{ master.addons.registry.bind.get('port', '5000') }}
protocol: TCP

View File

@ -1,20 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
apiVersion: v1
kind: Config
current-context: proxy-to-cluster.local
preferences: {}
contexts:
- context:
cluster: cluster.local
user: kube_proxy
name: proxy-to-cluster.local
clusters:
- cluster:
certificate-authority: /etc/kubernetes/ssl/kubelet-client.crt
# server: https://{{ pool.apiserver.host }}:443
name: cluster.local
users:
- name: kube_proxy
user:
token: {{ pool.token.kube_proxy}}

View File

@ -1,4 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
# test_args has to be kept at the end, so they'll overwrite any prior configuration
DAEMON_ARGS="--config=/etc/kubernetes/manifests --allow-privileged={{ master.kubelet.allow_privileged }} --cluster_dns={{ master.addons.dns.server }} --register-node=false --cluster_domain={{ master.addons.dns.domain }} --v=2"

View File

@ -1,4 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
# test_args has to be kept at the end, so they'll overwrite any prior configuration
DAEMON_ARGS="--require-kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --allow-privileged={{ pool.kubelet.allow_privileged }} --cluster_dns={{ pool.cluster_dns }} --cluster_domain={{ pool.cluster_domain }} --v=2 {% if pool.network.engine == 'opencontrail' %}--network-plugin={{ pool.network.engine }}{% endif %} {% if pool.network.engine == 'calico' %}--network-plugin=cni --network-plugin-dir=/etc/cni/net.d{% endif %} --file-check-frequency={{ pool.kubelet.frequency }}"

View File

@ -1,24 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
apiVersion: v1
kind: Config
current-context: kubelet-to-cluster.local
preferences: {}
clusters:
- cluster:
certificate-authority: /etc/kubernetes/ssl/kubelet-client.crt
server: https://{{ pool.apiserver.host }}:443
name: cluster.local
- cluster:
certificate-authority: /etc/kubernetes/ssl/kubelet-client.crt
server: http://{{ pool.apiserver.host }}:8080
name: cluster-http.local
contexts:
- context:
cluster: cluster-http.local
user: kubelet
name: kubelet-to-cluster.local
users:
- name: kubelet
user:
token: {{ pool.token.kubelet }}

View File

@ -1,7 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context -%}
{%- if pool.get('service', {})|length > 0 %}
{%- set service_grains = {'kubernetes': {'service': pool.get('service', {}).get('local', {})}} -%}
{% else %}
{%- set service_grains = {'kubernetes': {}} -%}
{%- endif %}
{{ service_grains|yaml(False) }}

View File

@ -1,47 +0,0 @@
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name":"contrail-vrouter-agent",
"namespace": "kube-system"
},
"spec":{
"hostNetwork": true,
"containers":[
{
"name": "vrouter-agent",
"image": "opencontrail/vrouter-agent:2.20",
"securityContext": {
"Privileged": true
},
"resources": {
"limits": {
"cpu": "250m"
}
},
"command": [
"/usr/bin/contrail-vrouter-agent"
],
"volumeMounts": [
{"name": "contrail-configs",
"mountPath": "/etc/contrail",
"readOnly": false
},
{"name": "contrail-logs",
"mountPath": "/var/log/contrail",
"readOnly": false
}
]
}
],
"volumes":[
{ "name": "contrail-configs",
"hostPath": {
"path": "/etc/contrail"}
},
{ "name": "contrail-logs",
"hostPath": {
"path": "/var/log/contrail"}
}
]
}}

View File

@ -1,78 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "flannel-server",
"namespace": "kube-system",
"labels": {
"app": "flannel-server",
"version": "v0.1"
}
},
"spec": {
"volumes": [
{
"name": "varlog",
"hostPath": {
"path": "/var/log"
}
},
{
"name": "etcdstorage",
"emptyDir": {}
},
{
"name": "networkconfig",
"hostPath": {
"path": "/etc/kubernetes/network.json"
}
}
],
"containers": [
{
"name": "flannel-server-helper",
"image": "gcr.io/google_containers/flannel-server-helper:0.1",
"args": [
"--network-config=/etc/kubernetes/network.json",
"--etcd-prefix=/kubernetes.io/network",
"--etcd-server=http://127.0.0.1:4001"
],
"volumeMounts": [
{
"name": "networkconfig",
"mountPath": "/etc/kubernetes/network.json"
}
],
"imagePullPolicy": "Always"
},
{
"name": "flannel-container",
"image": "quay.io/coreos/flannel:0.5.5",
"command": [
"/bin/sh",
"-c",
"/opt/bin/flanneld -listen 0.0.0.0:10253 -etcd-endpoints {% for member in master.network.etcd.members %}http://{{ member.host }}:4001{% if not loop.last %},{% endif %}{% endfor %} -etcd-prefix /kubernetes.io/network 2>&1 | tee -a /var/log/flannel-server.log"
],
"ports": [
{
"hostPort": 10253,
"containerPort": 10253
}
],
"resources": {
"requests": {
"cpu": "100m"
}
},
"volumeMounts": [
{
"name": "varlog",
"mountPath": "/var/log"
}
]
}
],
"hostNetwork": true
}
}

View File

@ -1,84 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- from "kubernetes/map.jinja" import common with context %}
apiVersion: v1
kind: Pod
metadata:
name: kube-apiserver
namespace: kube-system
spec:
dnsPolicy: ClusterFirst
hostNetwork: true
restartPolicy: Always
terminationGracePeriodSeconds: 30
containers:
- name: kube-apiserver
image: {{ common.hyperkube.image }}
command:
- /hyperkube
- apiserver
--insecure-bind-address={{ master.apiserver.insecure_address }}
--etcd-servers={% for member in master.etcd.members %}http://{{ member.host }}:4001{% if not loop.last %},{% endif %}{% endfor %}
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
--service-cluster-ip-range={{ master.service_addresses }}
--client-ca-file=/etc/kubernetes/ssl/ca-{{ master.ca }}.crt
--basic-auth-file=/srv/kubernetes/basic_auth.csv
--tls-cert-file=/etc/kubernetes/ssl/kubernetes-server.crt
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-server.key
--secure-port={{ master.apiserver.get('secure_port', '443') }}
--bind-address={{ master.apiserver.address }}
--token-auth-file=/srv/kubernetes/known_tokens.csv
--etcd-quorum-read=true
--v=2
--allow-privileged=True
1>>/var/log/kube-apiserver.log 2>&1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
ports:
- containerPort: {{ master.apiserver.get('secure_port', '443') }}
hostPort: {{ master.apiserver.get('secure_port', '443') }}
name: https
protocol: TCP
- containerPort: 8080
hostPort: 8080
name: local
protocol: TCP
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /srv/kubernetes
name: srvkube
readOnly: true
- mountPath: /var/log/kube-apiserver.log
name: logfile
- mountPath: /etc/kubernetes/ssl
name: etcssl
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usrsharecacerts
readOnly: true
- mountPath: /srv/sshproxy
name: srvsshproxy
volumes:
- hostPath:
path: /srv/kubernetes
name: srvkube
- hostPath:
path: /var/log/kube-apiserver.log
name: logfile
- hostPath:
path: /etc/kubernetes/ssl
name: etcssl
- hostPath:
path: /usr/share/ca-certificates
name: usrsharecacerts
- hostPath:
path: /srv/sshproxy
name: srvsshproxy

View File

@ -1,64 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- from "kubernetes/map.jinja" import common with context %}
apiVersion: v1
kind: Pod
metadata:
name: kube-controller-manager
namespace: kube-system
spec:
dnsPolicy: ClusterFirst
hostNetwork: true
restartPolicy: Always
terminationGracePeriodSeconds: 30
containers:
- name: kube-controller-manager
image: {{ common.hyperkube.image }}
command:
- /hyperkube
- controller-manager
--master={{ master.apiserver.insecure_address }}:8080
--cluster-name=kubernetes
--service-account-private-key-file=/etc/kubernetes/ssl/kubernetes-server.key
--v=2
--root-ca-file=/etc/kubernetes/ssl/ca-{{ master.ca }}.crt
--leader-elect=true
1>>/var/log/kube-controller-manager.log 2>&1
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10252
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
resources:
limits:
cpu: 200m
requests:
cpu: 200m
volumeMounts:
- mountPath: /srv/kubernetes
name: srvkube
readOnly: true
- mountPath: /var/log/kube-controller-manager.log
name: logfile
- mountPath: /etc/kubernetes/ssl
name: etcssl
readOnly: true
- mountPath: /usr/share/ca-certificates
name: usrsharecacerts
readOnly: true
volumes:
- hostPath:
path: /srv/kubernetes
name: srvkube
- hostPath:
path: /var/log/kube-controller-manager.log
name: logfile
- hostPath:
path: /etc/kubernetes/ssl
name: etcssl
- hostPath:
path: /usr/share/ca-certificates
name: usrsharecacerts


@ -1,24 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"namespace": "opencontrail",
"name": "kube-network-manager"
},
"spec":{
"hostNetwork": true,
"containers":[{
"name": "kube-network-manager",
"image": "{{ master.network.network_manager.image }}:{{ master.network.network_manager.tag }}",
"volumeMounts": [{
"name": "config",
"mountPath": "/etc/kubernetes"
}]
}],
"volumes": [{
"name": "config",
"hostPath": {"path": "/etc/kubernetes"}
}]
}
}


@ -1,52 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
{%- from "kubernetes/map.jinja" import common with context %}
apiVersion: v1
kind: Pod
metadata:
name: kube-proxy
namespace: kube-system
spec:
hostNetwork: true
containers:
- name: kube-proxy
image: {{ common.hyperkube.image }}
resources:
requests:
cpu: 200m
command:
- /hyperkube
- proxy
--logtostderr=true
--v=2
--kubeconfig=/etc/kubernetes/proxy.kubeconfig
--master={%- if pool.apiserver.insecure.enabled %}http://{{ pool.apiserver.host }}:8080{%- else %}https://{{ pool.apiserver.host }}{%- endif %}
{%- if pool.network.engine == 'calico' %} --proxy-mode=iptables{% endif %}
1>>/var/log/kube-proxy.log 2>&1
securityContext:
privileged: true
volumeMounts:
- mountPath: /etc/kubernetes/ssl
name: ssl-certs-host
readOnly: true
- mountPath: /var/log
name: varlog
readOnly: false
- mountPath: /etc/kubernetes/proxy.kubeconfig
name: kubeconfig
readOnly: false
- mountPath: /var/run/dbus/system_bus_socket
name: dbus
readOnly: false
volumes:
- hostPath:
path: /etc/kubernetes/ssl
name: ssl-certs-host
- hostPath:
path: /etc/kubernetes/proxy.kubeconfig
name: kubeconfig
- hostPath:
path: /var/log
name: varlog
- hostPath:
path: /var/run/dbus/system_bus_socket
name: dbus
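
The --master flag in this manifest switches between plain HTTP on port 8080 and HTTPS according to pool.apiserver.insecure.enabled, and --proxy-mode=iptables is appended only for the calico engine. A hedged pillar sketch exercising both branches (values mirror the test pillar at the end of this change and are illustrative only):

kubernetes:
  common:
    hyperkube:
      image: hyperkube-amd64:v1.5.0-beta.3-1
  pool:
    network:
      engine: calico
    apiserver:
      host: 127.0.0.1
      insecure:
        enabled: True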


@ -1,42 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- from "kubernetes/map.jinja" import common with context %}
apiVersion: v1
kind: Pod
metadata:
name: kube-scheduler
namespace: kube-system
spec:
dnsPolicy: ClusterFirst
hostNetwork: true
nodeName: kubernetes-master
restartPolicy: Always
terminationGracePeriodSeconds: 30
containers:
- name: kube-scheduler
image: {{ common.hyperkube.image }}
imagePullPolicy: IfNotPresent
command:
- /hyperkube
- scheduler
--master={{ master.apiserver.insecure_address }}:8080
--v=2
--leader-elect=true
1>>/var/log/kube-scheduler.log 2>&1
livenessProbe:
httpGet:
host: 127.0.0.1
path: /healthz
port: 10251
scheme: HTTP
initialDelaySeconds: 15
timeoutSeconds: 15
resources:
requests:
cpu: 100m
volumeMounts:
- mountPath: /var/log/kube-scheduler.log
name: logfile
volumes:
- hostPath:
path: /var/log/kube-scheduler.log
name: logfile


@ -1,15 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
[DEFAULT]
service-cluster-ip-range = {{ master.network.service_cluster_ip_range }}
[opencontrail]
default-domain = {{ master.network.default_domain }}
public-ip-range = {{ master.network.public_ip_range }}
cluster-service = {{ master.network.cluster_service }}
api-server = {{ master.network.host }}
api-port = {{ master.network.port }}
default-project = {{ master.network.default_project }}
public-network = {{ master.network.public_network }}
private-ip-range = {{ master.network.private_ip_range }}
network-label = {{ master.network.network_label }}
service-label = {{ master.network.service_label }}
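
This config file and the kube-network-manager manifest above read the OpenContrail subtree of the master pillar. A hedged sketch of that subtree follows; every value here (addresses, domain and project names, image) is a placeholder, only the key names come from the templates:

kubernetes:
  master:
    network:
      engine: opencontrail
      host: 10.0.170.20
      port: 8082
      default_domain: default-domain
      default_project: default-domain:default-project
      public_network: default-domain:default-project:Public
      public_ip_range: 185.22.97.128/26
      private_ip_range: 10.150.0.0/16
      service_cluster_ip_range: 10.254.0.0/16
      cluster_service: kube-system/default
      network_label: name
      service_label: uses
      network_manager:
        image: example/kube-network-manager
        tag: latest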


@ -1,211 +0,0 @@
{% from "kubernetes/map.jinja" import control with context %}
apiVersion: {{ service.apiVersion }}
kind: {{ service.kind }}
metadata:
name: {{ service.service }}-{{ service.role }}
namespace: {{ service.namespace }}
labels:
app: {{ service.service }}-{{ service.role }}
spec:
replicas: {{ service.replicas }}
{%- if service.kind == 'PetSet' %}
serviceName: {{ service.service_name }}
{%- endif %}
template:
metadata:
labels:
app: {{ service.service }}-{{ service.role }}
annotations:
{%- if service.hostname is defined %}
pod.beta.kubernetes.io/hostname: {{ service.hostname }}
{%- endif %}
{%- if service.init_containers is defined %}
pod.alpha.kubernetes.io/init-containers: '[
{%- for container in service.init_containers %}
{
"name": "{{ container.name }}",
"image": "{% if container.registry is defined %}{{ container.registry }}/{%- endif %}{{ container.image }}{%- if container.tag is defined %}:{{ container.tag }}{%- endif %}",
"command": [{%- for command in container.command %}"{{ command }}"{% if not loop.last %},{% endif %}{%- endfor %}]
{%- if container.volumes is defined -%}
,
"volumeMounts": [
{%- for volume in container.volumes %}
{
"name": "{{ volume.name }}",
{%- if volume.sub_path is defined %}
"subPath": "{{ volume.sub_path }}",
{%- endif %}
"mountPath": "{{ volume.mount }}"
}
{%- if not loop.last %},{% endif %}{%- endfor %}
]
{%- endif %}
}
{%- if not loop.last %},{% endif %}{% endfor %}
]'
{%- endif %}
{%- if service.affinity is defined %}
scheduler.alpha.kubernetes.io/affinity: >
{
{%- for affinity_name, affinity in service.affinity.iteritems() %}
"{{ affinity.name }}": {
{%- for expression_name, expression in affinity.expression.iteritems() %}
{%- if expression.name == 'matchExpressions' %}
"{{ affinity.get('type','required') }}DuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{%- for selector in expression.selectors %}
{
"key": "{{ selector.key }}",
"operator": "{{ selector.operator }}",
"values": [{%- for value in selector['values'] %}"{{ value }}"{%- if not loop.last %},{% endif %}{%- endfor %}]
}{%- if not loop.last %},{% endif %}
{% endfor %}
]
}
]
}
{%- elif expression.name == 'labelSelector' %}
"{{ affinity.get('type','required') }}DuringSchedulingIgnoredDuringExecution": [
{
"labelSelector": {
"matchLabels": {
{%- for selector in expression.selectors %}
"{{ selector.key }}": "{{ selector.value }}"
{%- if not loop.last %},{% endif %}{%- endfor %}
}
},
{%- if affinity.name == 'podAntiAffinity' or affinity.name == 'podAffinity' %}
"topologyKey": "{{ affinity.topology_key }}"
{%- endif %}
}
]
{%- endif %}
{%- endfor %}
{%- if not loop.last %}},{% endif %}
{%- endfor %}
}
}
{%- endif %}
spec:
{%- if service.hostNetwork is defined %}
hostNetwork: True
{%- endif %}
{%- if service.host_pid is defined %}
hostPID: True
{%- endif %}
containers:
{%- for container_name, container in service.container.iteritems() %}
- name: {{ container_name }}
image: {% if container.registry is defined %}{{ container.registry }}/{%- endif %}{{ container.image }}{%- if container.tag is defined %}:{{ container.tag }}{%- endif %}
imagePullPolicy: {{ container.get('image_pull_policy','IfNotPresent') }}
{%- if container.privileged is defined %}
securityContext:
privileged: True
{%- endif %}
{%- if container.variables is defined %}
env:
{%- for variable in container.variables %}
- name: {{ variable.name }}
{%- if variable.fieldPath is defined %}
valueFrom:
fieldRef:
fieldPath: {{ variable.fieldPath }}
{%- else %}
value: {{ variable.value }}
{%- endif %}
{%- endfor %}
{%- endif %}
{%- if container.ports is defined %}
ports:
{%- for port in container.ports %}
- containerPort: {{ port.port }}
name: {{ port.name }}
{%- endfor %}
{%- endif %}
{%- if container.command is defined %}
command:
{%- for command in container.command %}
- {{ command }}
{%- endfor %}
{%- endif %}
{%- if container.volumes is defined %}
volumeMounts:
{%- for volume in container.volumes %}
- name: {{ volume.name }}
mountPath: {{ volume.mount }}
readOnly: {{ volume.get('read_only', 'False') }}
{%- if volume.sub_path is defined %}
subPath: {{ volume.sub_path }}
{%- endif %}
{%- endfor %}
{%- endif %}
{%- if container.liveness_probe is defined %}
livenessProbe:
{%- if container.liveness_probe.type == 'http' %}
httpGet:
path: {{ container.liveness_probe.path }}
port: {{ container.liveness_probe.port }}
{%- elif container.liveness_probe.type == 'exec' %}
exec:
command:
{%- for command in container.liveness_probe.command %}
- {{ command }}
{%- endfor %}
{%- endif %}
initialDelaySeconds: {{ container.liveness_probe.initial_delay }}
timeoutSeconds: {{ container.liveness_probe.timeout }}
{%- endif %}
{%- if container.readiness_probe is defined %}
readinessProbe:
{%- if container.readiness_probe.type == 'http' %}
httpGet:
path: {{ container.readiness_probe.path }}
port: {{ container.readiness_probe.port }}
{%- elif container.readiness_probe.type == 'exec' %}
exec:
command:
{%- for command in container.readiness_probe.command %}
- {{ command }}
{%- endfor %}
{%- endif %}
initialDelaySeconds: {{ container.readiness_probe.initial_delay }}
timeoutSeconds: {{ container.readiness_probe.timeout }}
{%- endif %}
{%- endfor %}
{%- if service.volume is defined %}
volumes:
{%- for volume_name, volume in service.volume.iteritems() %}
- name: {{ volume_name }}
{%- if volume.type == 'emptyDir' %}
emptyDir: {}
{%- elif volume.type == 'hostPath' %}
hostPath:
path: {{ volume.path }}
{%- elif volume.type == 'glusterfs' %}
glusterfs:
endpoints: {{ volume.endpoints }}
path: {{ volume.path }}
readOnly: {{ volume.read_only }}
{%- elif volume.type == 'config_map' %}
configMap:
name: {{ volume_name }}-{{ volume.get('version', '1') }}
items:
{%- for name, item in volume.item.iteritems() %}
- key: {{ item.key }}
path: {{ item.path }}
{%- endfor %}
{%- endif %}
{%- endfor %}
{%- endif %}
{%- if service.nodeSelector is defined %}
nodeSelector:
{%- for selector in service.nodeSelector %}
{{ selector.key }}: {{ selector.value }}
{%- endfor %}
{%- endif %}
{%- if service.image_pull_secretes is defined %}
imagePullSecrets:
- name: {{ service.image_pull_secretes }}
{%- endif %}


@ -1,25 +0,0 @@
{% from "kubernetes/map.jinja" import control with context %}
apiVersion: v1
kind: Service
metadata:
labels:
name: {{ service.service }}-{{ service.role }}
app: {{ service.service }}-{{ service.role }}
name: {{ service.service }}-{{ service.role }}
namespace: {{ service.namespace }}
spec:
ports:
{%- for port in service.ports %}
- port: {{ port.port }}
name: {{ port.name }}
{%- endfor %}
type: {{ service.type }}
selector:
app: {{ service.service }}-{{ service.role }}
{%- if service.cluster_ip is defined %}
clusterIP: {{ service.cluster_ip }}
{%- endif %}
{%- if service.external_ip is defined %}
externalIPs:
- "{{ service.external_ip }}"
{%- endif -%}
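
The two templates above (the Deployment/PetSet template and this Service template) render from the same kubernetes:control:service entry. A hedged example of such an entry; the application name, image, port and mount path are invented purely for illustration:

kubernetes:
  control:
    enabled: true
    service:
      example-app:
        service: example-app
        role: server
        kind: Deployment
        apiVersion: extensions/v1beta1
        namespace: default
        replicas: 2
        type: ClusterIP
        ports:
        - port: 8080
          name: http
        container:
          example-app:
            image: example/app
            tag: '1.0'
            ports:
            - port: 8080
              name: http
            liveness_probe:
              type: http
              path: /healthz
              port: 8080
              initial_delay: 15
              timeout: 5
            volumes:
            - name: config
              mount: /etc/example-app
        volume:
          config:
            type: hostPath
            path: /etc/example-app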


@ -1,30 +0,0 @@
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
Documentation=man:kube-apiserver
After=network.target
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/default/%p
User=root
ExecStart=/usr/bin/hyperkube \
apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ALLOW_PRIV \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ETCD_SERVERS \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$DAEMON_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target


@ -1,21 +0,0 @@
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
Documentation=man:kube-controller-manager
After=network.target
[Service]
Environment=KUBE_MASTER=--master=127.0.0.1:8080
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/default/%p
User=root
ExecStart=/usr/bin/hyperkube \
controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$DAEMON_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target


@ -1,22 +0,0 @@
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
Documentation=man:kube-proxy
After=network.target
[Service]
Environment=KUBE_MASTER=--master=127.0.0.1:8080
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/default/%p
User=root
ExecStart=/usr/bin/hyperkube \
proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$DAEMON_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target


@ -1,22 +0,0 @@
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/kubernetes/kubernetes
Documentation=man:kube-scheduler
After=network.target
[Service]
Environment=KUBE_MASTER=--master=127.0.0.1:8080
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/default/%p
User=root
ExecStart=/usr/bin/hyperkube \
scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$DAEMON_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target


@ -1,30 +0,0 @@
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/kubernetes/kubernetes
Documentation=man:kubelet
After=network.target
After=docker.service
Requires=docker.service
Conflicts=cadvisor.service
[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/default/%p
User=root
ExecStart=/usr/bin/hyperkube \
kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ALLOW_PRIV \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBELET_API_SERVER \
$DOCKER_ENDPOINT \
$CADVISOR_PORT \
$DAEMON_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Alias=cadvisor.service


@ -1,13 +0,0 @@
{%- if pillar.kubernetes is defined %}
include:
{%- if pillar.kubernetes.master is defined %}
- kubernetes.master
{%- endif %}
{%- if pillar.kubernetes.pool is defined %}
- kubernetes.pool
{%- endif %}
{%- if pillar.kubernetes.control is defined %}
- kubernetes.control
{%- endif %}
{%- endif %}


@ -1,44 +0,0 @@
{% set common = salt['grains.filter_by']({
'Debian': {
'pkgs': ['curl', 'git', 'apt-transport-https', 'python-apt', 'nfs-common', 'socat', 'netcat-traditional', 'openssl'],
'services': [],
},
'RedHat': {
'pkgs': ['curl', 'git', 'nfs-utils', 'socat', 'nmap-ncat', 'openssl', 'python'],
'services': [],
},
}, merge=salt['pillar.get']('kubernetes:common')) %}
{% set master = salt['grains.filter_by']({
'Debian': {
'pkgs': [],
'services': ['kube-apiserver','kube-scheduler','kube-controller-manager'],
},
'RedHat': {
'pkgs': [],
'services': [],
},
}, merge=salt['pillar.get']('kubernetes:master')) %}
{% set pool = salt['grains.filter_by']({
'Debian': {
'pkgs': [],
'services': ['kube-proxy'],
},
'RedHat': {
'pkgs': [],
'services': [],
},
}, merge=salt['pillar.get']('kubernetes:pool')) %}
{% set control = salt['grains.filter_by']({
'Debian': {
'service': {},
'config_type': "default",
},
'RedHat': {
'service': {},
'config_type': "default",
},
}, merge=salt['pillar.get']('kubernetes:control')) %}
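
grains.filter_by picks the per-OS defaults above and then merges the matching pillar subtree over them, so any key set under kubernetes:common, kubernetes:master, kubernetes:pool or kubernetes:control becomes available on the imported variable. As a hedged illustration, a pillar that trims the Debian package list would simply redeclare it (lists are replaced by the merge, not appended to):

kubernetes:
  common:
    pkgs:
    - curl
    - git
    - socat
    - openssl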


@ -1,40 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- if master.enabled %}
/etc/calico/network-environment:
file.managed:
- source: salt://kubernetes/files/calico/network-environment.master
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
- template: jinja
/usr/bin/calicoctl:
file.managed:
- source: {{ master.network.get('source', 'https://github.com/projectcalico/calico-containers/releases/download/') }}{{ master.network.version }}/calicoctl
- source_hash: md5={{ master.network.hash }}
- mode: 751
- user: root
- group: root
{%- if master.network.get('systemd', true) %}
/etc/systemd/system/calico-node.service:
file.managed:
- source: salt://kubernetes/files/calico/calico-node.service.pool.master
- user: root
- group: root
- template: jinja
calico_node:
service.running:
- name: calico-node
- enable: True
- watch:
- file: /etc/systemd/system/calico-node.service
{%- endif %}
{%- endif %}
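
The calicoctl download and the optional systemd unit in this state are driven by the master.network subtree; the version and hash below mirror the test pillar at the end of this change and are illustrative only. When network.source is unset, the binary is fetched from the projectcalico GitHub releases URL hard-coded above.

kubernetes:
  master:
    enabled: true
    network:
      engine: calico
      version: v0.19.0
      hash: fb5e30ebe6154911a66ec3fb5f1195b2
      systemd: true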


@ -1,170 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- if master.enabled %}
/srv/kubernetes/known_tokens.csv:
file.managed:
- source: salt://kubernetes/files/known_tokens.csv
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
/srv/kubernetes/basic_auth.csv:
file.managed:
- source: salt://kubernetes/files/basic_auth.csv
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
{%- if master.get('container', 'true') %}
/var/log/kube-apiserver.log:
file.managed:
- user: root
- group: root
- mode: 644
/etc/kubernetes/manifests/kube-apiserver.manifest:
file.managed:
- source: salt://kubernetes/files/manifest/kube-apiserver.manifest
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
/etc/kubernetes/manifests/kube-controller-manager.manifest:
file.managed:
- source: salt://kubernetes/files/manifest/kube-controller-manager.manifest
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
/var/log/kube-controller-manager.log:
file.managed:
- user: root
- group: root
- mode: 644
/etc/kubernetes/manifests/kube-scheduler.manifest:
file.managed:
- source: salt://kubernetes/files/manifest/kube-scheduler.manifest
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
/var/log/kube-scheduler.log:
file.managed:
- user: root
- group: root
- mode: 644
{%- else %}
/etc/default/kube-apiserver:
file.managed:
- user: root
- group: root
- mode: 644
- contents: DAEMON_ARGS=" --insecure-bind-address={{ master.apiserver.insecure_address }} --etcd-servers={% for member in master.etcd.members %}http://{{ member.host }}:4001{% if not loop.last %},{% endif %}{% endfor %} --admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service-cluster-ip-range={{ master.service_addresses }} --client-ca-file=/etc/kubernetes/ssl/ca-{{ master.ca }}.crt --basic-auth-file=/srv/kubernetes/basic_auth.csv --tls-cert-file=/etc/kubernetes/ssl/kubernetes-server.crt --tls-private-key-file=/etc/kubernetes/ssl/kubernetes-server.key --secure-port={{ master.apiserver.get('secure_port', '443') }} --bind-address={{ master.apiserver.address }} --token-auth-file=/srv/kubernetes/known_tokens.csv --v=2 --allow-privileged=True --etcd-quorum-read=true"
/etc/default/kube-controller-manager:
file.managed:
- user: root
- group: root
- mode: 644
- contents: DAEMON_ARGS=" --master={{ master.apiserver.insecure_address }}:8080 --cluster-name=kubernetes --service-account-private-key-file=/etc/kubernetes/ssl/kubernetes-server.key --v=2 --root-ca-file=/etc/kubernetes/ssl/ca-{{ master.ca }}.crt --leader-elect=true"
/etc/default/kube-scheduler:
file.managed:
- user: root
- group: root
- mode: 644
- contents: DAEMON_ARGS=" --master={{ master.apiserver.insecure_address }}:8080 --v=2 --leader-elect=true"
/etc/systemd/system/kube-apiserver.service:
file.managed:
- source: salt://kubernetes/files/systemd/kube-apiserver.service
- template: jinja
- user: root
- group: root
- mode: 644
/etc/systemd/system/kube-scheduler.service:
file.managed:
- source: salt://kubernetes/files/systemd/kube-scheduler.service
- template: jinja
- user: root
- group: root
- mode: 644
/etc/systemd/system/kube-controller-manager.service:
file.managed:
- source: salt://kubernetes/files/systemd/kube-controller-manager.service
- template: jinja
- user: root
- group: root
- mode: 644
master_services:
service.running:
- names: {{ master.services }}
- enable: True
- watch:
- file: /etc/default/kube-apiserver
- file: /etc/default/kube-scheduler
- file: /etc/default/kube-controller-manager
- file: /usr/bin/hyperkube
{%- endif %}
{%- for name,namespace in master.namespace.iteritems() %}
{%- if namespace.enabled %}
/registry/namespaces/{{ name }}:
etcd.set:
- value: '{"kind":"Namespace","apiVersion":"v1","metadata":{"name":"{{ name }}"},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}}'
{%- else %}
/registry/namespaces/{{ name }}:
etcd.rm
{%- endif %}
{%- endfor %}
{%- if master.registry.secret is defined %}
{%- for name,registry in master.registry.secret.iteritems() %}
{%- if registry.enabled %}
/registry/secrets/{{ registry.namespace }}/{{ name }}:
etcd.set:
- value: '{"kind":"Secret","apiVersion":"v1","metadata":{"name":"{{ name }}","namespace":"{{ registry.namespace }}"},"data":{".dockerconfigjson":"{{ registry.key }}"},"type":"kubernetes.io/dockerconfigjson"}'
{%- else %}
/registry/secrets/{{ registry.namespace }}/{{ name }}:
etcd.rm
{%- endif %}
{%- endfor %}
{%- endif %}
{%- endif %}


@ -1,66 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- if master.enabled %}
/etc/kubernetes/network.json:
file.managed:
- source: salt://kubernetes/files/flannel/network.json
- makedirs: True
- user: root
- group: root
- mode: 755
- template: jinja
/etc/kubernetes/manifests/flannel-server.manifest:
file.managed:
- source: salt://kubernetes/files/manifest/flannel-server.manifest
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
- template: jinja
/var/log/etcd-flannel.log:
file.managed:
- user: root
- group: root
- mode: 644
/var/log/flannel.log:
file.managed:
- user: root
- group: root
- mode: 644
{%- if not pillar.kubernetes.pool is defined %}
flannel-tar:
archive:
- extracted
- user: root
- name: /opt/flannel
- source: https://storage.googleapis.com/kubernetes-release/flannel/flannel-0.5.5-linux-amd64.tar.gz
- tar_options: v
- source_hash: md5=972c717254775bef528f040af804f2cc
- archive_format: tar
- if_missing: /usr/local/src/flannel/flannel-0.5.5/
flannel-symlink:
file.symlink:
- name: /usr/local/bin/flanneld
- target: /usr/local/src/flannel-0.5.5/flanneld
- force: true
- watch:
- archive: flannel-tar
/etc/default/flannel:
file.managed:
- source: salt://kubernetes/files/flannel/default.master
- template: jinja
- user: root
- group: root
- mode: 644
{%- endif %}
{%- endif %}


@ -1,22 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- if master.enabled %}
/etc/kubernetes/glusterfs/glusterfs-endpoints.yml:
file.managed:
- source: salt://kubernetes/files/glusterfs/glusterfs-endpoints.yml
- makedirs: True
- user: root
- group: root
- mode: 644
- template: jinja
/etc/kubernetes/glusterfs/glusterfs-svc.yml:
file.managed:
- source: salt://kubernetes/files/glusterfs/glusterfs-svc.yml
- makedirs: True
- user: root
- group: root
- mode: 644
- template: jinja
{%- endif %}


@ -1,20 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
include:
- kubernetes.master.service
- kubernetes.master.kube-addons
{%- if master.network.engine == "opencontrail" %}
- kubernetes.master.opencontrail-network-manager
{%- endif %}
{%- if master.network.engine == "flannel" %}
- kubernetes.master.flannel
{%- endif %}
{%- if master.network.engine == "calico" %}
{%- if not pillar.kubernetes.pool is defined %}
- kubernetes.master.calico
{%- endif %}
{%- endif %}
{%- if master.storage.get('engine', 'none') == 'glusterfs' %}
- kubernetes.master.glusterfs
{%- endif %}
- kubernetes.master.controller
- kubernetes.master.setup


@ -1,122 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- if master.enabled %}
addon-dir-create:
file.directory:
- name: /etc/kubernetes/addons
- user: root
- group: root
- mode: 0755
{%- if master.addons.dns.enabled %}
/etc/kubernetes/addons/dns/skydns-svc.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/dns/skydns-svc.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
/etc/kubernetes/addons/dns/skydns-rc.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/dns/skydns-rc.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
{% endif %}
{%- if master.addons.dashboard.enabled %}
/etc/kubernetes/addons/dashboard/dashboard-service.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/dashboard/dashboard-service.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
/etc/kubernetes/addons/dashboard/dashboard-controller.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/dashboard/dashboard-controller.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
{%- if master.network.engine == "opencontrail" %}
/etc/kubernetes/addons/dashboard/dashboard-address.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/dashboard/dashboard-address.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
/etc/kubernetes/addons/dashboard/dashboard-endpoint.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/dashboard/dashboard-endpoint.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
{% endif %}
{% endif %}
{%- if master.addons.heapster_influxdb.enabled %}
/etc/kubernetes/addons/heapster-influxdb/heapster-address.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/heapster-influxdb/heapster-address.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
/etc/kubernetes/addons/heapster-influxdb/heapster-controller.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/heapster-influxdb/heapster-controller.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
/etc/kubernetes/addons/heapster-influxdb/heapster-endpoint.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/heapster-influxdb/heapster-endpoint.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
/etc/kubernetes/addons/heapster-influxdb/heapster-service.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/heapster-influxdb/heapster-service.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
/etc/kubernetes/addons/heapster-influxdb/influxdb-controller.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/heapster-influxdb/influxdb-controller.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
/etc/kubernetes/addons/heapster-influxdb/influxdb-service.yaml:
file.managed:
- source: salt://kubernetes/files/kube-addons/heapster-influxdb/influxdb-service.yaml
- template: jinja
- group: root
- dir_mode: 755
- makedirs: True
{% endif %}
{% endif %}


@ -1,23 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- if master.enabled %}
/etc/kubernetes/manifests/kube-network-manager.manifest:
file.managed:
- source: salt://kubernetes/files/manifest/kube-network-manager.manifest
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
/etc/kubernetes/network.conf:
file.managed:
- source: salt://kubernetes/files/opencontrail/network.conf
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
{%- endif %}


@ -1,8 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- from "kubernetes/map.jinja" import common with context %}
{%- if master.enabled %}
include:
- kubernetes._common
{%- endif %}


@ -1,15 +0,0 @@
{%- from "kubernetes/map.jinja" import master with context %}
{%- if master.enabled %}
{%- for addon_name, addon in master.addons.iteritems() %}
{%- if addon.enabled %}
kubernetes_addons_{{ addon_name }}:
cmd.run:
- name: |
hyperkube kubectl apply -f /etc/kubernetes/addons/{{ addon_name }}
- unless: "hyperkube kubectl get rc {{ addon.get('name', addon_name) }} --namespace=kube-system"
{%- endif %}
{%- endfor %}
{%- endif %}
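
Each enabled addon is applied with kubectl and guarded by an unless check for a replication controller named after the addon directory, unless the pillar supplies an explicit name. A hedged example where the dns addon's controller is called kube-dns rather than dns:

kubernetes:
  master:
    addons:
      dns:
        enabled: true
        name: kube-dns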


@ -1,15 +0,0 @@
doc:
name: Kubernetes
description: Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops.
role:
{%- if pillar.kubernetes.pool is defined %}
{%- from "kubernetes/map.jinja" import client with context %}
pool:
name: pool
param: {}
{%- endif %}
{%- if pillar.kubernetes.master is defined %}
master:
name: master
param: {}
{%- endif %}


@ -1,88 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
{%- if pool.enabled %}
/tmp/calico/:
file.directory:
- user: root
- group: root
copy-calico-ctl:
dockerng.running:
- image: {{ pool.network.calicoctl.image }}
copy-calico-ctl-cmd:
cmd.run:
- name: docker cp copy-calico-ctl:calicoctl /tmp/calico/
- require:
- dockerng: copy-calico-ctl
/usr/bin/calicoctl:
file.managed:
- source: /tmp/calico/calicoctl
- mode: 751
- user: root
- group: root
- require:
- cmd: copy-calico-ctl-cmd
copy-calico-cni:
dockerng.running:
- image: {{ pool.network.cni.image }}
- command: cp -vr /opt/cni/bin/ /tmp/calico/
- binds:
- /tmp/calico/:/tmp/calico/
- force: True
{%- for filename in ['calico', 'calico-ipam'] %}
/opt/cni/bin/{{ filename }}:
file.managed:
- source: /tmp/calico/bin/{{ filename }}
- mode: 751
- makedirs: true
- user: root
- group: root
- require:
- dockerng: copy-calico-cni
- require_in:
- service: calico_node
{%- endfor %}
/etc/cni/net.d/10-calico.conf:
file.managed:
- source: salt://kubernetes/files/calico/calico.conf
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
- template: jinja
/etc/calico/network-environment:
file.managed:
- source: salt://kubernetes/files/calico/network-environment.pool
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
- template: jinja
{%- if pool.network.get('systemd', true) %}
/etc/systemd/system/calico-node.service:
file.managed:
- source: salt://kubernetes/files/calico/calico-node.service.pool
- user: root
- group: root
- template: jinja
calico_node:
service.running:
- name: calico-node
- enable: True
- watch:
- file: /etc/systemd/system/calico-node.service
{%- endif %}
{%- endif %}


@ -1,39 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
{%- from "kubernetes/map.jinja" import common with context %}
{%- if pool.enabled %}
{%- if common.hyperkube %}
/tmp/cni/:
file.directory:
- user: root
- group: root
copy-network-cni:
dockerng.running:
- image: {{ common.hyperkube.image }}
- command: cp -vr /opt/cni/bin/ /tmp/cni/
- binds:
- /tmp/cni/:/tmp/cni/
- force: True
- require:
- file: /tmp/cni/
{%- for filename in ['cnitool', 'flannel', 'tuning', 'bridge', 'ipvlan', 'loopback', 'macvlan', 'ptp', 'dhcp', 'host-local', 'noop'] %}
/opt/cni/bin/{{ filename }}:
file.managed:
- source: /tmp/cni/bin/{{ filename }}
- user: root
- group: root
- mode: 755
- makedirs: True
- watch_in:
- service: kubelet_service
- require:
- dockerng: copy-network-cni
{%- endfor %}
{%- endif %}
{%- endif %}


@ -1,31 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
{%- if pool.enabled %}
flannel-tar:
archive:
- extracted
- user: root
- name: /opt/flannel
- source: https://storage.googleapis.com/kubernetes-release/flannel/flannel-0.5.5-linux-amd64.tar.gz
- tar_options: v
- source_hash: md5=972c717254775bef528f040af804f2cc
- archive_format: tar
- if_missing: /usr/local/src/flannel/flannel-0.5.5/
flannel-symlink:
file.symlink:
- name: /usr/local/bin/flanneld
- target: /usr/local/src/flannel-0.5.5/flanneld
- force: true
- watch:
- archive: flannel-tar
/etc/default/flannel:
file.managed:
- source: salt://kubernetes/files/flannel/default.pool
- template: jinja
- user: root
- group: root
- mode: 644
{%- endif %}


@ -1,12 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
include:
- kubernetes.pool.cni
{%- if pool.network.engine == "calico" %}
- kubernetes.pool.calico
{%- endif %}
- kubernetes.pool.service
- kubernetes.pool.kubelet
{%- if pool.network.engine == "flannel" %}
- kubernetes.pool.flannel
{%- endif %}
- kubernetes.pool.kube-proxy


@ -1,52 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
{%- if pool.enabled %}
{%- if pool.get('container', 'true') %}
/etc/kubernetes/manifests/kube-proxy.manifest:
file.managed:
- source: salt://kubernetes/files/manifest/kube-proxy.manifest.pool
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
- dir_mode: 755
{%- else %}
/etc/kubernetes/proxy.kubeconfig:
file.managed:
- source: salt://kubernetes/files/kube-proxy/proxy.kubeconfig
- template: jinja
- user: root
- group: root
- mode: 644
- makedirs: true
/etc/systemd/system/kube-proxy.service:
file.managed:
- source: salt://kubernetes/files/systemd/kube-proxy.service
- template: jinja
- user: root
- group: root
- mode: 644
/etc/default/kube-proxy:
file.managed:
- user: root
- group: root
- mode: 644
- contents: DAEMON_ARGS=" --logtostderr=true --v=2 --kubeconfig=/etc/kubernetes/proxy.kubeconfig --master={%- if pool.apiserver.insecure.enabled %}http://{{ pool.apiserver.host }}:8080{%- else %}https://{{ pool.apiserver.host }}{%- endif %}{%- if pool.network.engine == 'calico' %} --proxy-mode=iptables{% endif %}"
pool_services:
service.running:
- names: {{ pool.services }}
- enable: True
- watch:
- file: /etc/default/kube-proxy
- file: /usr/bin/hyperkube
{%- endif %}
{%- endif %}


@ -1,31 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
{%- if pool.enabled %}
{%- if pool.host.label is defined %}
{%- for name,label in pool.host.label.iteritems() %}
{%- if label.enabled %}
{{ name }}:
k8s.label_present:
- name: {{ name }}
- value: {{ label.value }}
- node: {{ pool.host.name }}
- apiserver: http://{{ pool.apiserver.host }}:8080
{%- else %}
{{ name }}:
k8s.label_absent:
- name: {{ name }}
- node: {{ pool.host.name }}
- apiserver: http://{{ pool.apiserver.host }}:8080
{%- endif %}
{%- endfor %}
{%- endif %}
{%- endif %}
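
A hedged sketch of the pillar shape this state consumes; the label names and values are invented for illustration, only the key layout comes from the state above:

kubernetes:
  pool:
    host:
      name: ${linux:system:name}
      label:
        openstack-control:
          enabled: true
          value: cluster1
        decommissioned:
          enabled: false
    apiserver:
      host: 127.0.0.1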


@ -1,24 +0,0 @@
{%- from "kubernetes/map.jinja" import pool with context %}
{%- from "kubernetes/map.jinja" import common with context %}
{%- if pool.enabled %}
include:
- kubernetes._common
kubernetes_pool_container_grains_dir:
file.directory:
- name: /etc/salt/grains.d
- mode: 700
- makedirs: true
- user: root
kubernetes_pool_container_grain:
file.managed:
- name: /etc/salt/grains.d/kubernetes
- source: salt://kubernetes/files/kubernetes.grain
- template: jinja
- mode: 600
- require:
- file: kubernetes_pool_container_grains_dir
{%- endif %}


@ -1,3 +0,0 @@
name: "kubernetes"
version: "2017.1.2"
source: "https://github.com/openstack/salt-formula-kubernetes"


@ -1,5 +0,0 @@
parameters:
kubernetes:
common:
network:
engine: none


@ -1,6 +0,0 @@
applications:
- kubernetes
parameters:
kubernetes:
control:
enabled: true


@ -1,65 +0,0 @@
applications:
- kubernetes
classes:
- service.kubernetes.support
- service.kubernetes.common
parameters:
kubernetes:
master:
enabled: true
registry:
host: tcpcloud
service_addresses: 10.254.0.0/16
admin:
username: ${_param:kubernetes_admin_user}
password: ${_param:kubernetes_admin_password}
kubelet:
allow_privileged: True
apiserver:
address: ${_param:cluster_local_address}
insecure_address: ${_param:cluster_local_address}
etcd:
members:
- host: ${_param:cluster_node01_address}
name: ${_param:cluster_node01_hostname}
- host: ${_param:cluster_node02_address}
name: ${_param:cluster_node02_hostname}
- host: ${_param:cluster_node03_address}
name: ${_param:cluster_node03_hostname}
addons:
dns:
enabled: true
replicas: 1
domain: cluster.local
server: 10.254.0.10
dnsmasq:
cache-size: 1000
no-resolv:
server: 127.0.0.1#10053
log-facility: "-"
dashboard:
enabled: True
heapster_influxdb:
enabled: False
token:
admin: ${_param:kubernetes_admin_token}
kubelet: ${_param:kubernetes_kubelet_token}
kube_proxy: ${_param:kubernetes_kube-proxy_token}
scheduler: ${_param:kubernetes_scheduler_token}
controller_manager: ${_param:kubernetes_controller-manager_token}
dns: ${_param:kubernetes_dns_token}
ca: kubernetes
storage:
engine: none
namespace:
kube-system:
enabled: True
network:
etcd:
members:
- host: ${_param:cluster_node01_address}
port: 4001
- host: ${_param:cluster_node02_address}
port: 4001
- host: ${_param:cluster_node03_address}
port: 4001


@ -1,57 +0,0 @@
applications:
- kubernetes
classes:
- service.kubernetes.support
- service.kubernetes.common
parameters:
kubernetes:
master:
enabled: true
registry:
host: tcpcloud
service_addresses: 10.254.0.0/16
admin:
username: ${_param:kubernetes_admin_user}
password: ${_param:kubernetes_admin_password}
kubelet:
allow_privileged: True
apiserver:
address: ${_param:single_address}
insecure_address: 0.0.0.0
etcd:
members:
- host: ${_param:single_address}
name: ${linux:system:name}
addons:
dns:
enabled: true
replicas: 1
domain: cluster.local
server: 10.254.0.10
dnsmasq:
cache-size: 1000
no-resolv:
server: 127.0.0.1#10053
log-facility: "-"
dashboard:
enabled: True
heapster_influxdb:
enabled: False
token:
admin: ${_param:kubernetes_admin_token}
kubelet: ${_param:kubernetes_kubelet_token}
kube_proxy: ${_param:kubernetes_kube-proxy_token}
scheduler: ${_param:kubernetes_scheduler_token}
controller_manager: ${_param:kubernetes_controller-manager_token}
dns: ${_param:kubernetes_dns_token}
ca: kubernetes
storage:
engine: none
namespace:
kube-system:
enabled: True
network:
etcd:
members:
- host: ${_param:single_address}
port: 4001


@ -1,43 +0,0 @@
applications:
- kubernetes
classes:
- service.kubernetes.support
- service.kubernetes.common
parameters:
kubernetes:
pool:
enabled: true
registry:
host: tcpcloud
host:
name: ${linux:system:name}
apiserver:
host: ${_param:cluster_vip_address}
insecure:
enabled: True
members:
- host: ${_param:cluster_vip_address}
# Temporary disabled until kubelet HA would be fixed
# - host: ${_param:cluster_node01_address}
# - host: ${_param:cluster_node02_address}
# - host: ${_param:cluster_node03_address}
address: ${_param:cluster_local_address}
cluster_dns: 10.254.0.10
cluster_domain: cluster.local
kubelet:
config: /etc/kubernetes/manifests
allow_privileged: True
frequency: 5s
token:
kubelet: ${_param:kubernetes_kubelet_token}
kube_proxy: ${_param:kubernetes_kube-proxy_token}
ca: kubernetes
network:
etcd:
members:
- host: ${_param:cluster_node01_address}
port: 4001
- host: ${_param:cluster_node02_address}
port: 4001
- host: ${_param:cluster_node03_address}
port: 4001


@ -1,36 +0,0 @@
applications:
- kubernetes
classes:
- service.kubernetes.support
- service.kubernetes.common
parameters:
kubernetes:
pool:
enabled: true
registry:
host: tcpcloud
host:
name: ${linux:system:name}
apiserver:
host: ${_param:master_address}
insecure:
enabled: True
members:
- host: ${_param:master_address}
address: 0.0.0.0
cluster_dns: 10.254.0.10
allow_privileged: True
cluster_domain: cluster.local
kubelet:
config: /etc/kubernetes/manifests
allow_privileged: True
frequency: 5s
token:
kubelet: ${_param:kubernetes_kubelet_token}
kube_proxy: ${_param:kubernetes_kube-proxy_token}
ca: kubernetes
network:
etcd:
members:
- host: ${_param:master_address}
port: 4001


@ -1,11 +0,0 @@
parameters:
kubernetes:
_support:
collectd:
enabled: false
heka:
enabled: false
sensu:
enabled: false
sphinx:
enabled: true


@ -1,66 +0,0 @@
kubernetes:
common:
network:
engine: none
hyperkube:
image: hyperkube-amd64:v1.5.0-beta.3-1
master:
addons:
dns:
domain: cluster.local
enabled: true
replicas: 1
server: 10.254.0.10
heapster_influxdb:
enabled: true
public_ip: 185.22.97.132
dashboard:
enabled: true
public_ip: 185.22.97.131
admin:
password: password
username: admin
registry:
host: tcpcloud
apiserver:
address: 10.0.175.100
port: 8080
ca: kubernetes
enabled: true
etcd:
members:
- host: 10.0.175.100
name: node040
kubelet:
allow_privileged: true
network:
engine: calico
hash: fb5e30ebe6154911a66ec3fb5f1195b2
private_ip_range: 10.150.0.0/16
version: v0.19.0
service_addresses: 10.254.0.0/16
storage:
engine: glusterfs
members:
- host: 10.0.175.101
port: 24007
- host: 10.0.175.102
port: 24007
- host: 10.0.175.103
port: 24007
port: 24007
token:
admin: DFvQ8GJ9JD4fKNfuyEddw3rjnFTkUKsv
controller_manager: EreGh6AnWf8DxH8cYavB2zS029PUi7vx
dns: RAFeVSE4UvsCz4gk3KYReuOI5jsZ1Xt3
kube_proxy: DFvQ8GelB7afH3wClC9romaMPhquyyEe
kubelet: 7bN5hJ9JD4fKjnFTkUKsvVNfuyEddw3r
logging: MJkXKdbgqRmTHSa2ykTaOaMykgO6KcEf
monitoring: hnsj0XqABgrSww7Nqo7UVTSZLJUt2XRd
scheduler: HY1UUxEPpmjW4a1dDLGIANYQp1nZkLDk
version: v1.2.4
namespace:
kube-system:
enabled: True
hyperkube:
hash: hnsj0XqABgrSww7Nqo7UVTSZLJUt2XRd


@ -1,51 +0,0 @@
kubernetes:
common:
network:
engine: none
hyperkube:
image: hyperkube-amd64:v1.5.0-beta.3-1
pool:
enabled: true
version: v1.2.0
host:
name: ${linux:system:name}
apiserver:
host: 127.0.0.1
insecure:
enabled: True
members:
- host: 127.0.0.1
- host: 127.0.0.1
- host: 127.0.0.1
address: 0.0.0.0
cluster_dns: 10.254.0.10
cluster_domain: cluster.local
kubelet:
config: /etc/kubernetes/manifests
allow_privileged: True
frequency: 5s
token:
kubelet: 7bN5hJ9JD4fKjnFTkUKsvVNfuyEddw3r
kube_proxy: DFvQ8GelB7afH3wClC9romaMPhquyyEe
ca: kubernetes
network:
calicoctl:
image: calico/ctl
cni:
image: calico/cni
engine: calico
hash: c15ae251b633109e63bf128c2fbbc34a
ipam:
hash: 6e6d7fac0567a8d90a334dcbfd019a99
version: v1.3.1
version: v0.20.0
etcd:
members:
- host: 127.0.0.1
port: 4001
- host: 127.0.0.1
port: 4001
- host: 127.0.0.1
port: 4001
hyperkube:
hash: hnsj0XqABgrSww7Nqo7UVTSZLJUt2XRd


@ -1,163 +0,0 @@
#!/usr/bin/env bash
set -e
[ -n "$DEBUG" ] && set -x
CURDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
METADATA=${CURDIR}/../metadata.yml
FORMULA_NAME=$(cat $METADATA | python -c "import sys,yaml; print yaml.load(sys.stdin)['name']")
## Overrideable parameters
PILLARDIR=${PILLARDIR:-${CURDIR}/pillar}
BUILDDIR=${BUILDDIR:-${CURDIR}/build}
VENV_DIR=${VENV_DIR:-${BUILDDIR}/virtualenv}
DEPSDIR=${BUILDDIR}/deps
SALT_FILE_DIR=${SALT_FILE_DIR:-${BUILDDIR}/file_root}
SALT_PILLAR_DIR=${SALT_PILLAR_DIR:-${BUILDDIR}/pillar_root}
SALT_CONFIG_DIR=${SALT_CONFIG_DIR:-${BUILDDIR}/salt}
SALT_CACHE_DIR=${SALT_CACHE_DIR:-${SALT_CONFIG_DIR}/cache}
SALT_OPTS="${SALT_OPTS} --retcode-passthrough --local -c ${SALT_CONFIG_DIR} --log-file=/dev/null"
if [ "x${SALT_VERSION}" != "x" ]; then
PIP_SALT_VERSION="==${SALT_VERSION}"
fi
## Functions
log_info() {
echo "[INFO] $*"
}
log_err() {
echo "[ERROR] $*" >&2
}
setup_virtualenv() {
log_info "Setting up Python virtualenv"
virtualenv $VENV_DIR
source ${VENV_DIR}/bin/activate
pip install salt${PIP_SALT_VERSION}
}
setup_pillar() {
[ ! -d ${SALT_PILLAR_DIR} ] && mkdir -p ${SALT_PILLAR_DIR}
echo "base:" > ${SALT_PILLAR_DIR}/top.sls
for pillar in ${PILLARDIR}/*; do
state_name=$(basename ${pillar%.sls})
echo -e " ${state_name}:\n - ${state_name}" >> ${SALT_PILLAR_DIR}/top.sls
done
}
setup_salt() {
[ ! -d ${SALT_FILE_DIR} ] && mkdir -p ${SALT_FILE_DIR}
[ ! -d ${SALT_CONFIG_DIR} ] && mkdir -p ${SALT_CONFIG_DIR}
[ ! -d ${SALT_CACHE_DIR} ] && mkdir -p ${SALT_CACHE_DIR}
echo "base:" > ${SALT_FILE_DIR}/top.sls
for pillar in ${PILLARDIR}/*.sls; do
state_name=$(basename ${pillar%.sls})
echo -e " ${state_name}:\n - ${FORMULA_NAME}" >> ${SALT_FILE_DIR}/top.sls
done
cat << EOF > ${SALT_CONFIG_DIR}/minion
file_client: local
cachedir: ${SALT_CACHE_DIR}
verify_env: False
minion_id_caching: False
file_roots:
base:
- ${SALT_FILE_DIR}
- ${CURDIR}/..
- /usr/share/salt-formulas/env
pillar_roots:
base:
- ${SALT_PILLAR_DIR}
- ${PILLARDIR}
EOF
}
fetch_dependency() {
dep_name="$(echo $1|cut -d : -f 1)"
dep_source="$(echo $1|cut -d : -f 2-)"
dep_root="${DEPSDIR}/$(basename $dep_source .git)"
dep_metadata="${dep_root}/metadata.yml"
[ -d /usr/share/salt-formulas/env/${dep_name} ] && log_info "Dependency $dep_name already present in system-wide salt env" && return 0
[ -d $dep_root ] && log_info "Dependency $dep_name already fetched" && return 0
log_info "Fetching dependency $dep_name"
[ ! -d ${DEPSDIR} ] && mkdir -p ${DEPSDIR}
git clone $dep_source ${DEPSDIR}/$(basename $dep_source .git)
ln -s ${dep_root}/${dep_name} ${SALT_FILE_DIR}/${dep_name}
METADATA="${dep_metadata}" install_dependencies
}
install_dependencies() {
grep -E "^dependencies:" ${METADATA} >/dev/null || return 0
(python - | while read dep; do fetch_dependency "$dep"; done) << EOF
import sys,yaml
for dep in yaml.load(open('${METADATA}', 'ro'))['dependencies']:
print '%s:%s' % (dep["name"], dep["source"])
EOF
}
clean() {
log_info "Cleaning up ${BUILDDIR}"
[ -d ${BUILDDIR} ] && rm -rf ${BUILDDIR} || exit 0
}
salt_run() {
[ -e ${VENV_DIR}/bin/activate ] && source ${VENV_DIR}/bin/activate
salt-call ${SALT_OPTS} $*
}
prepare() {
[ ! -d ${BUILDDIR} ] && mkdir -p ${BUILDDIR}
which salt-call || setup_virtualenv
setup_pillar
setup_salt
install_dependencies
}
run() {
for pillar in ${PILLARDIR}/*.sls; do
state_name=$(basename ${pillar%.sls})
salt_run --id=${state_name} state.show_sls ${FORMULA_NAME} || (log_err "Execution of ${FORMULA_NAME}.${state_name} failed"; exit 1)
done
}
_atexit() {
RETVAL=$?
trap true INT TERM EXIT
if [ $RETVAL -ne 0 ]; then
log_err "Execution failed"
else
log_info "Execution successful"
fi
return $RETVAL
}
## Main
trap _atexit INT TERM EXIT
case $1 in
clean)
clean
;;
prepare)
prepare
;;
run)
run
;;
*)
prepare
run
;;
esac