Kubernetes workload

This workload accomplishes the following:

 1. Provision 3 nodes or the number of nodes configured by stack_size
 2. Create security group
 3. Add security rules to allow ping, ssh, and kubernetes ports
 4. Install common software onto each node such as docker
 5. Download all the required software onto the master node
 6. Setup the master node with kube-apiserver, kube-controller-manager
    and kube-scheduler and configure each kubernetes service on the
    master node
 7. Download software for worker node from the master node.
 8. Setup flanneld, docker, kubelet and kube-proxy on each worker node.
 9. Install kubernetes dashboard and dns services.
 10. Install cockroachdb.

Change-Id: I3f7ec234aaa72dd6b2542e53e7eae2673ef7b408
Tong Li 2017-03-21 15:09:15 -04:00
parent 634e7a72f9
commit 2c89e57bc5
31 changed files with 1683 additions and 0 deletions

workloads/ansible/shade/k8s/.gitignore

@ -0,0 +1,8 @@
*.out
vars/*
run/*
site.retry
*/**/*.log
*/**/.DS_Store
*/**/._
*/**/*.tfstate*


@ -0,0 +1,230 @@
# Kubernetes Ansible deployments on OpenStack Cloud
This ansible playbook installs a 3 node kubernetes cluster. The first
node is used as the master node, and the rest of the nodes are used as
kubernetes worker nodes.
Once the script finishes, a kubernetes cluster should be ready for use.
## Status
In progress
## Requirements
- [Install Ansible](http://docs.ansible.com/ansible/intro_installation.html)
- [Install openstack shade](http://docs.openstack.org/infra/shade/installation.html)
- Make sure there is an Ubuntu cloud image available on your cloud.
- Clone this project into a directory.
If you use an Ubuntu system as the Ansible controller, then you can
easily set up an environment by running the following script. If you have
another system as your Ansible controller, you can follow similar steps to
set up the environment; the commands may not be exactly the same, but the
steps you need to perform are identical.
sudo apt-get update
sudo apt-get install python-dev python-pip libssl-dev libffi-dev -y
sudo pip install --upgrade pip
sudo pip install six==1.10.0
sudo pip install shade==1.16.0
sudo pip install ansible==2.2.1.0
sudo ansible-galaxy install vmware.coreos-bootstrap
git clone https://github.com/openstack/interop-workloads.git
This workload requires that you use Ansible version 2.2.0.0 or above due to
floating IP allocation upgrades in Ansible OpenStack cloud modules.
### Prep
#### Deal with ssh keys for OpenStack Authentication
If you do not have an ssh key, then you should create one by using a tool.
An example command to do that is provided below. Once you have a key pair,
ensure your local ssh-agent is running and your ssh key has been added.
This step is required. If you skip it, you will have to enter the
passphrase manually whenever the script runs, and the script can fail. If
you really do not want to deal with a passphrase, you can create a key pair
without one::
ssh-keygen -t rsa -f ~/.ssh/interop
eval $(ssh-agent -s)
ssh-add ~/.ssh/interop
#### General OpenStack Settings
Ansible's OpenStack cloud modules are used to provision compute resources
against an OpenStack cloud. Before you run the script, the cloud environment
has to be specified. Sample files are provided in the vars directory.
If you target ubuntu, you should use vars/ubuntu.yml as the sample; if you
target coreos, you should use vars/coreos.yml as the sample to create
your own environment file. Here is an example of the file::
auth: {
auth_url: "http://x.x.x.x:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}
app_env: {
target_os: "ubuntu",
image_name: "ubuntu-16.04",
region_name: "RegionOne",
availability_zone: "nova",
validate_certs: True,
ssh_user: "ubuntu",
private_net_name: "my_tenant_net",
flavor_name: "m1.medium",
public_key_file: "/home/ubuntu/.ssh/interop.pub",
private_key_file: "/home/ubuntu/.ssh/interop",
stack_size: 3,
volume_size: 2,
block_device_name: "/dev/vdb",
domain: "cluster.local",
pod_network: {
Network: "172.17.0.0/16",
SubnetLen: 24,
SubnetMin: "172.17.0.0",
SubnetMax: "172.17.255.0",
Backend: {
Type: "udp",
Port: 8285
}
},
service_ip_range: "172.16.0.0/24",
dns_service_ip: "172.16.0.4",
dashboard_service_ip: "172.16.0.5",
cockroachdb_repo: "",
flannel_repo: "https://github.com/coreos/flannel/releases/download/v0.7.0/flannel-v0.7.0-linux-amd64.tar.gz",
k8s_repo: "https://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/"
}
The values of the auth section should be provided by your cloud provider. When
you use the keystone v2.0 API, you will not need to set up domain_name. You can
leave region_name empty if you have just one region. You can also leave
private_net_name empty if your cloud does not support tenant networks or you
only have one tenant network; private_net_name is only needed when you have
multiple tenant networks. validate_certs should normally be set to True when
your cloud uses tls (ssl) and is not using a self signed certificate. If your
cloud is using a self signed certificate, then the certificate can not be
easily validated by ansible, and you can skip validation by setting the
parameter to False. Currently the only values available for target_os are
ubuntu and coreos. Supported ubuntu releases are 16.04 and 16.10. The supported
coreos image is the stable coreos OpenStack image.
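For example, assuming a cloud that only exposes the keystone v2.0 API (the
endpoint address below is a placeholder), the auth section could be reduced to
something like::
auth: {
  auth_url: "http://x.x.x.x:5000/v2.0",
  username: "demo",
  password: "{{ password }}",
  project_name: "demo"
}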
You should use a network for your OpenStack VMs which is able to access the
internet. For example, in the configuration above, the parameter
private_net_name was set to my_tenant_net; this is the network that all your
VMs will be connected to, and the network should be connected to a router
which routes traffic to the external network.
stack_size is set to 3 in the example configuration file. You can change it
to any number you wish, but it must be at least 2. With a stack_size of 2, you
will have one master node and one worker node in the k8s cluster. If you set
stack_size to a bigger number, one node will be used as the master and the
rest of the nodes will be used as workers. Please note that the master node
also acts as a worker node.
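For instance, a hypothetical stack_size of 5 would produce the following
layout (the node names come from the provisioning play)::
stack_size: 5
# master                 - runs etcd, kube-apiserver, kube-controller-manager
#                          and kube-scheduler, and also acts as a worker
# worker-1 .. worker-4   - each runs flanneld, docker, kubelet and kube-proxy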
The public key and private key files should be created before you run the
workload. These keys can be located in any directory you prefer, as long as
they are readable.
volume_size and block_device_name are parameters that you can set to let the
workload script provision the right size of cinder volume to create
k8s volumes. A cinder volume will be created, partitioned, formatted, and
mounted on each worker and master node. The mount point is /storage. A pod or
service should use hostPath to use the volume, as in the sketch below.
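Here is a minimal sketch of a pod that keeps its data on that volume (the pod
name, image and sub-directory are illustrative only and not part of this
workload)::
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: demo-app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    hostPath:
      # /storage is the mount point this workload creates on every node
      path: /storage/demo-app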
The workload is currently developed using the flannel udp backend for k8s
networking. Other networking configurations can be used by simply changing the
Backend settings of the pod_network parameter, but before you change the
values, you will have to make sure that the underlying networking is
configured correctly; see the sketch below.
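For example, assuming your cloud network allows VXLAN traffic between the
nodes, the pod_network section could be switched to the flannel vxlan backend
(VNI 1 and UDP port 8472 are the flannel defaults; remember to also open that
port in the security group rules instead of 8285)::
pod_network: {
  Network: "172.17.0.0/16",
  SubnetLen: 24,
  SubnetMin: "172.17.0.0",
  SubnetMax: "172.17.255.0",
  Backend: {
    Type: "vxlan",
    VNI: 1,
    Port: 8472
  }
},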
The flannel_repo and k8s_repo parameters point to the official repositories of
each component. You may choose to set up a local repository to avoid long
download times, especially when your cloud is very remote from these official
repositories. To do that, you only need to set up an http server, place the
following binaries in your http server directory, and point the two repo
parameters at that server (see the example after the list).
- kubelet
- kubectl
- kube-proxy
- kube-apiserver
- kube-controller-manager
- kube-scheduler
- flannel-v0.7.0-linux-amd64.tar.gz
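A hypothetical local server at 10.0.0.100 would then be referenced like this
in your environment file (the address and paths are placeholders)::
flannel_repo: "http://10.0.0.100/flannel-v0.7.0-linux-amd64.tar.gz",
k8s_repo: "http://10.0.0.100/v1.5.3/"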
## Run the script to create a kubernetes cluster using coreos image
Coreos images do not have python installed and need to be bootstrapped.
To do that, you will have to install a bootstrap role on your ansible
controller first by executing the following command; this only needs to be
done once. We simply use the vmware coreos bootstrap role; you can choose
other ones, but this is the one we have been using for testing.
ansible-galaxy install vmware.coreos-bootstrap
With your cloud environment set, you should be able to run the script::
ansible-playbook -e "action=apply env=coreos password=XXXXX" site.yml
The above command will stand up a kubernetes cluster in the environment
defined in the vars/coreos.yml file. Replace XXXXX with your own password.
## Run the script to create a kubernetes cluster using ubuntu image
With your cloud environment set, you should be able to run the script::
ansible-playbook -e "action=apply env=ubuntu password=XXXXX" site.yml
The above command will stand up a kubernetes cluster in the environment
defined in the vars/ubuntu.yml file. Replace XXXXX with your own password.
## The results of a successful workload run
If everything goes well, it will accomplish the following::
1. Provision 3 nodes or the number of nodes configured by stack_size
2. Create security group
3. Add security rules to allow ping, ssh, and kubernetes ports
4. Install common software onto each node such as docker
5. Download all the required software onto the master node
6. Setup the master node with kube-apiserver, kube-controller-manager and
kube-scheduler and configure each kubernetes service on the master node
7. Download software for worker node from the master node.
8. Setup flanneld, docker, kubelet and kube-proxy on each worker node.
9. Install kubernetes dashboard and dns services.
10. Install cockroachdb.
## The method to run just a play, not the entire playbook
The script will create an ansible inventory file name runhosts at the very
first play, the inventory file will be place at the run directory of the
playbook root. If you like to only run specify plays, you will be able to run
the playbook like the following:
ansible-playbook -i run/runhosts -e "action=apply env=ubuntu password=XXXXX" site.yml \
--tags "common,master"
The above command will use the runhosts inventory file and only run the plays
named common and master; all other plays in the playbook will be skipped.
## Next Steps
### Check it's up
If there are no errors, you can use kubectl to work with your kubernetes
cluster.
## Cleanup
Once you're done with it, don't forget to nuke the whole thing::
ansible-playbook -e "action=destroy env=ubuntu password=XXXXX" site.yml
The above command will destroy all the resources created.


@ -0,0 +1,3 @@
[defaults]
inventory = ./hosts
host_key_checking = False


@ -0,0 +1,7 @@
---
k8suser: "k8suser"
k8spass: "{{ lookup('password',
'/tmp/k8spassword chars=ascii_letters,digits length=8') }}"
proxy_env: {
}


@ -0,0 +1 @@
cloud ansible_host=127.0.0.1 ansible_python_interpreter=python


@ -0,0 +1,66 @@
---
- name: Setup couple variables
set_fact:
service_path: "/etc/systemd/system/"
when: app_env.target_os == "coreos"
- name: Setup couple variables
set_fact:
service_path: "/lib/systemd/system/"
when: app_env.target_os == "ubuntu"
- name: Install Docker Engine
apt:
name: docker.io
update_cache: no
when: app_env.target_os == "ubuntu"
- name: Ensure config directories are present
file:
path: "{{ item }}"
state: directory
mode: 0755
owner: root
with_items:
- "/etc/kubernetes"
- "/opt"
- "/opt/bin"
- "~/.ssh"
- "~/.kube"
- name: Place the certificate in the right place
copy:
src: "{{ item.src }}"
dest: "{{ item.target }}"
mode: 0400
with_items:
- { src: "{{ app_env.public_key_file }}", target: "~/.ssh/id_rsa.pub" }
- { src: "{{ app_env.private_key_file }}", target: "~/.ssh/id_rsa" }
- name: List all k8s service on the node
stat:
path: "{{ service_path }}{{ item }}.service"
with_items:
- kubelet
- kube-proxy
- kube-controller-manager
- kube-scheduler
- kube-apiserver
- docker
- flanneld
register: k8s_services
- name: Stop k8s related services if they exist
service:
name: "{{ item.item }}"
state: stopped
with_items: "{{ k8s_services.results }}"
when: item.stat.exists == true
no_log: True
- name: Setup /etc/hosts on every node
lineinfile:
dest: /etc/hosts
line: "{{ item }}"
state: present
with_lines: cat "{{ playbook_dir }}/run/k8shosts"


@ -0,0 +1 @@
DAEMON_ARGS="{{ item.value }}"


@ -0,0 +1,13 @@
[Unit]
Description=Kubernetes on OpenStack {{ item }} Service
[Service]
EnvironmentFile=/etc/kubernetes/{{ item }}
ExecStart=/opt/bin/{{ item }} "$DAEMON_ARGS"
Restart=always
RestartSec=2s
StartLimitInterval=0
KillMode=process
[Install]
WantedBy=multi-user.target


@ -0,0 +1,134 @@
---
- name: Setup public and private IP variables
set_fact:
public_ip: "{{ ansible_host }}"
private_ip: "{{ hostvars[ansible_host].inter_ip }}"
- name: Setup service path variables for coreos
set_fact:
service_path: "/etc/systemd/system/"
when: app_env.target_os == "coreos"
- name: Setup service path variables for ubuntu
set_fact:
service_path: "/lib/systemd/system/"
when: app_env.target_os == "ubuntu"
- name: Install etcd
apt:
name: etcd
update_cache: no
when: app_env.target_os == "ubuntu"
- name: Download flannel package
get_url:
url: "{{ app_env.flannel_repo }}"
dest: /opt/bin/flanneld.tar.gz
force: no
- name: Unpack flannel binaries
unarchive:
src: /opt/bin/flanneld.tar.gz
dest: /opt/bin
exclude:
- README.md
- mk-docker-opts.sh
copy: no
- name: List all k8s binaries on the node
stat: "path=/opt/bin/{{ item }}"
with_items:
- kubelet
- kubectl
- kube-proxy
- kube-apiserver
- kube-controller-manager
- kube-scheduler
register: k8s_binaries
- name: Download k8s binary files if they are not already on the master node
get_url:
url: "{{ app_env.k8s_repo }}{{ item.item }}"
dest: "/opt/bin/{{ item.item }}"
mode: "0555"
with_items: "{{ k8s_binaries.results }}"
when: item.stat.exists == false
no_log: True
- name: Config services
template:
src: "roles/master/templates/etcd.{{ app_env.target_os }}.j2"
dest: "{{ service_path }}etcd.service"
mode: 0644
- name: Reload services
command: systemctl daemon-reload
- name: Enable and start etcd services
service:
name: "etcd"
enabled: yes
state: restarted
- name: Reset etcd
uri:
url: "http://{{ private_ip }}:2379/v2/keys/{{ item }}?recursive=true"
method: DELETE
status_code: 200,202,204,404
with_items:
- coreos.com
- registry
- name: Initialize the flanneld configuration in etcd
uri:
url: http://{{ private_ip }}:2379/v2/keys/coreos.com/network/config
method: PUT
body: >-
value={{ app_env.pod_network | to_nice_json(indent=2) }}
status_code: 200,201
- name: Setup service parameters
set_fact:
apiserver_params: >-
--etcd-servers=http://{{ private_ip }}:2379
--service-cluster-ip-range={{ app_env.service_ip_range }}
--advertise-address={{ public_ip }}
--bind-address={{ private_ip }}
--insecure-bind-address={{ private_ip }}
controller_params: >-
--master=http://{{ private_ip }}:8080
--cluster-cidr={{ app_env.pod_network.Network }}
--cluster-name=k8sonos
scheduler_params: >-
--master=http://{{ private_ip }}:8080
- name: Configure the services
template:
src: roles/common/templates/k8s.conf.j2
dest: "/etc/kubernetes/{{ item.name }}"
mode: 0644
with_items:
- { name: "kube-apiserver", value: "{{ apiserver_params }}" }
- { name: "kube-controller-manager", value: "{{ controller_params }}" }
- { name: "kube-scheduler", value: "{{ scheduler_params }}"}
- name: Setup services for master node
template:
src: "roles/common/templates/k8s.service.j2"
dest: "{{ service_path }}{{ item }}.service"
mode: 0644
with_items:
- kube-apiserver
- kube-controller-manager
- kube-scheduler
- name: Enable and start the services
service:
name: "{{ item }}"
enabled: yes
state: restarted
with_items:
- kube-apiserver
- kube-controller-manager
- kube-scheduler


@ -0,0 +1,15 @@
[Unit]
Description=etcd2, even though the unit is named etcd
[Service]
Type=notify
ExecStart=/bin/etcd2 \
--advertise-client-urls=http://{{ private_ip }}:2379 \
--listen-client-urls=http://{{ private_ip }}:2379
Restart=always
RestartSec=10s
LimitNOFILE=40000
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target


@ -0,0 +1,15 @@
[Unit]
Description=etcd
[Service]
Type=notify
ExecStart=/usr/bin/etcd \
--advertise-client-urls=http://{{ private_ip }}:2379 \
--listen-client-urls=http://{{ private_ip }}:2379
Restart=always
RestartSec=10s
LimitNOFILE=40000
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target


@ -0,0 +1,30 @@
---
- name: Setup couple variables
set_fact:
public_ip: "{{ ansible_host }}"
private_ip: "{{ hostvars[ansible_host].inter_ip }}"
- name: Upload addon service configuration files
template:
src: "roles/post/templates/{{ item }}.j2"
dest: "/etc/kubernetes/{{ item }}.yaml"
mode: 0644
with_items:
- dnscontroller
- dashboard
- cockroachdb
- name: Label the master node
command: >-
/opt/bin/kubectl --server={{ private_ip }}:8080 label --overwrite=true
nodes master dashboardId=master
- name: Create addon services
command: >-
/opt/bin/kubectl --server={{ private_ip }}:8080 create
-f /etc/kubernetes/{{ item }}.yaml
with_items:
- dnscontroller
- dashboard
- cockroachdb


@ -0,0 +1 @@
---


@ -0,0 +1,146 @@
# Claim: This deployment file was originally developed by Cockroach Labs
#
# For details, please follow the following link:
# https://github.com/cockroachdb/cockroach/tree/master/cloud/kubernetes
#
apiVersion: v1
kind: Service
metadata:
name: cockroachdb-public
labels:
app: cockroachdb
spec:
type: NodePort
ports:
- port: 26257
targetPort: 26257
nodePort: 32257
name: grpc
- port: 8080
targetPort: 8080
nodePort: 32256
name: http
selector:
app: cockroachdb
---
apiVersion: v1
kind: Service
metadata:
name: cockroachdb
labels:
app: cockroachdb
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
prometheus.io/scrape: "true"
prometheus.io/path: "_status/vars"
prometheus.io/port: "8080"
spec:
ports:
- port: 26257
targetPort: 26257
name: grpc
- port: 8080
targetPort: 8080
name: http
clusterIP: None
selector:
app: cockroachdb
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: cockroachdb-budget
labels:
app: cockroachdb
spec:
selector:
matchLabels:
app: cockroachdb
minAvailable: 67%
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: cockroachdb
spec:
serviceName: "cockroachdb"
replicas: {{ app_env.stack_size - 1 }}
template:
metadata:
labels:
app: cockroachdb
annotations:
scheduler.alpha.kubernetes.io/affinity: >
{
"podAntiAffinity": {
"preferredDuringSchedulingIgnoredDuringExecution": [{
"weight": 100,
"labelSelector": {
"matchExpressions": [{
"key": "app",
"operator": "In",
"values": ["cockroachdb"]
}]
},
"topologyKey": "kubernetes.io/hostname"
}]
}
}
pod.alpha.kubernetes.io/init-containers: '[
{
"name": "bootstrap",
"image": "cockroachdb/cockroach-k8s-init",
"imagePullPolicy": "IfNotPresent",
"args": [
"-on-start=/on-start.sh",
"-service=cockroachdb"
],
"env": [
{
"name": "POD_NAMESPACE",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
}
],
"volumeMounts": [
{
"name": "datadir",
"mountPath": "/cockroach/cockroach-data"
}
]
}
]'
spec:
containers:
- name: cockroachdb
image: cockroachdb/cockroach
imagePullPolicy: IfNotPresent
ports:
- containerPort: 26257
name: grpc
- containerPort: 8080
name: http
volumeMounts:
- name: datadir
mountPath: /cockroach/cockroach-data
command:
- "/bin/bash"
- "-ecx"
- |
CRARGS=("start" "--logtostderr" "--insecure" "--host" "$(hostname -f)" "--http-host" "0.0.0.0")
if [ ! "$(hostname)" == "cockroachdb-0" ] || \
[ -e "/cockroach/cockroach-data/cluster_exists_marker" ]
then
CRARGS+=("--join" "cockroachdb-public")
fi
exec /cockroach/cockroach ${CRARGS[*]}
terminationGracePeriodSeconds: 60
volumes:
- name: datadir
hostPath:
path: /storage/cockroachdb


@ -0,0 +1,80 @@
# Copyright 2015 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Configuration to deploy release version of the Dashboard UI.
#
# Example usage: kubectl create -f <this_file>
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
labels:
app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: kubernetes-dashboard
template:
metadata:
labels:
app: kubernetes-dashboard
# Comment the following annotation if Dashboard must not be deployed on master
annotations:
scheduler.alpha.kubernetes.io/tolerations: |
[
{
"key": "dedicated",
"operator": "Equal",
"value": "master",
"effect": "NoSchedule"
}
]
spec:
nodeSelector:
dashboardId: master
containers:
- name: kubernetes-dashboard
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
imagePullPolicy: Always
ports:
- containerPort: 9090
protocol: TCP
args:
- --apiserver-host=http://{{ private_ip }}:8080
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
labels:
app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
type: NodePort
clusterIP: {{ app_env.dashboard_service_ip }}
ports:
- port: 80
targetPort: 9090
nodePort: 30000
selector:
app: kubernetes-dashboard


@ -0,0 +1,151 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
spec:
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
dnsPolicy: Default
volumes:
- name: kube-dns-config
hostPath:
path: /root/.kube/config
nodeSelector:
dashboardId: master
containers:
- name: kubedns
image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.13.0
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 2
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
initialDelaySeconds: 3
timeoutSeconds: 5
args:
- --domain={{ app_env.domain }}.
- --dns-port=10053
- --kubecfg-file=/kube-dns-config
- --kube-master-url=http://{{ private_ip }}:8080
- --v=2
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
volumeMounts:
- name: kube-dns-config
mountPath: /kube-dns-config
- name: dnsmasq
image: gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.13.0
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --cache-size=1000
- --server=/{{ app_env.domain }}/127.0.0.1#10053
- --server=/in-addr.arpa/127.0.0.1#10053
- --server=/ip6.arpa/127.0.0.1#10053
- --log-facility=-
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
resources:
requests:
cpu: 150m
memory: 10Mi
- name: sidecar
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.13.0
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.{{ app_env.domain }},5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.{{ app_env.domain }},5,A
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ app_env.dns_service_ip }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP


@ -0,0 +1,87 @@
---
- name: Setup node group name for coreos
set_fact:
target_interpreter: "/home/core/bin/python"
wgroups: "cworkers"
mgroups: "cmasters"
when: app_env.target_os == "coreos"
- name: Setup node group name for ubuntu
set_fact:
target_interpreter: "python"
wgroups: "uworkers"
mgroups: "umasters"
when: app_env.target_os == "ubuntu"
- name: Remove the runhosts file
file:
path: "{{ playbook_dir }}/run/runhosts"
state: absent
- name: Setup host cloud
lineinfile:
dest: "{{ playbook_dir }}/run/runhosts"
create: yes
insertafter: EOF
line: "cloud ansible_host=127.0.0.1 ansible_python_interpreter=python"
- name: Add the node to host group with private IP
add_host:
name: "{{ hostvars[item].public_ip }}"
inter_ip: "{{ hostvars[item].private_ip }}"
inter_name: "{{ item }}"
ansible_python_interpreter: "{{ hostvars[item].target_interpreter }}"
groups: "{{ hostvars[item].targetgroup }}"
with_items: "{{ groups['prohosts'] }}"
- name: Remove the k8shosts file
file:
path: "{{ playbook_dir }}/run/k8shosts"
state: absent
- name: Build up hosts file
lineinfile:
dest: "{{ playbook_dir }}/run/k8shosts"
line: "{{ hostvars[item].inter_ip }} {{ hostvars[item].inter_name }}"
state: present
create: yes
with_flattened:
- '{{ groups[mgroups] }}'
- '{{ groups[wgroups] }}'
- name: Add all the hosts to the file
lineinfile:
dest: "{{ playbook_dir }}/run/runhosts"
create: yes
insertafter: EOF
line: >-
{{ item }} inter_ip={{ hostvars[item].inter_ip }}
inter_name={{ hostvars[item].inter_name }}
ansible_python_interpreter={{ target_interpreter }}
with_items:
- '{{ groups[mgroups] }}'
- '{{ groups[wgroups] }}'
- name: Setup groups in the inventory file
lineinfile:
dest: "{{ playbook_dir }}/run/runhosts"
insertafter: EOF
line: "{{ item }}"
with_items:
- '[{{ mgroups }}]'
- '{{ groups[mgroups] }}'
- '[{{ wgroups }}]'
- '{{ groups[wgroups] }}'
- name: Wait until servers are up and running
wait_for:
host: "{{ item }}"
port: 22
state: started
delay: 15
connect_timeout: 20
timeout: 300
with_items:
- "{{ groups[mgroups] }}"
- "{{ groups[wgroups] }}"


@ -0,0 +1,20 @@
---
- name: Delete key pairs
os_keypair:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "k8s"
public_key_file: "{{ app_env.public_key_file }}"
- name: Delete security group
os_security_group:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: k8s_sg
description: security group for lampstack


@ -0,0 +1,92 @@
---
- name: Ensure we have a working directory to save runtime files
file: "path={{ playbook_dir }}/run state=directory"
- name: Setup host couple variables
set_fact:
target_interpreter: "/home/core/bin/python"
wgroups: "cworkers"
mgroups: "cmasters"
when: app_env.target_os == "coreos"
- name: Setup couple variables
set_fact:
target_interpreter: "python"
wgroups: "uworkers"
mgroups: "umasters"
when: app_env.target_os == "ubuntu"
- name: Retrieve specified flavor
os_flavor_facts:
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "{{ app_env.flavor_name }}"
- name: Create a key-pair
os_keypair:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "k8s"
public_key_file: "{{ app_env.public_key_file }}"
- name: Create security group
os_security_group:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: k8s_sg
description: security group for lampstack
- name: Add security rules
os_security_group_rule:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
security_group: k8s_sg
protocol: "{{ item.protocol }}"
direction: "{{ item.dir }}"
port_range_min: "{{ item.p_min }}"
port_range_max: "{{ item.p_max }}"
remote_ip_prefix: 0.0.0.0/0
with_items:
- { p_min: 22, p_max: 22, dir: ingress, protocol: tcp }
- { p_min: 80, p_max: 80, dir: ingress, protocol: tcp }
- { p_min: 53, p_max: 53, dir: ingress, protocol: udp }
- { p_min: 53, p_max: 53, dir: egress, protocol: udp }
- { p_min: 8080, p_max: 8080, dir: ingress, protocol: tcp }
- { p_min: 8285, p_max: 8285, dir: ingress, protocol: udp }
- { p_min: 2379, p_max: 2380, dir: ingress, protocol: tcp }
- { p_min: 2379, p_max: 2380, dir: egress, protocol: tcp }
- { p_min: 10250, p_max: 10250, dir: ingress, protocol: tcp }
- { p_min: 30000, p_max: 32767, dir: ingress, protocol: tcp }
- { p_min: -1, p_max: -1, dir: ingress, protocol: icmp }
- { p_min: -1, p_max: -1, dir: egress, protocol: icmp }
- name: Add provisioning host group
add_host:
name: "worker-{{ item }}"
targetgroup: "{{ wgroups }}"
ansible_host: "127.0.0.1"
ansible_python_interpreter: "python"
groups: "prohosts"
with_sequence: count={{ app_env.stack_size - 1 }}
no_log: True
- name: Add provisioning host group
add_host:
name: "master"
targetgroup: "{{ mgroups }}"
ansible_host: "127.0.0.1"
ansible_python_interpreter: "python"
groups: "prohosts"
no_log: True


@ -0,0 +1,34 @@
---
- name: Setup host couple variables
set_fact:
target_interpreter: "/home/core/bin/python"
wgroups: "cworkers"
mgroups: "cmasters"
when: app_env.target_os == "coreos"
- name: Setup couple variables
set_fact:
target_interpreter: "python"
wgroups: "uworkers"
mgroups: "umasters"
when: app_env.target_os == "ubuntu"
- name: Add provisioning host group
add_host:
name: "worker-{{ item }}"
targetgroup: "{{ wgroups }}"
ansible_host: "127.0.0.1"
ansible_python_interpreter: "python"
groups: "prohosts"
with_sequence: count={{ app_env.stack_size - 1 }}
no_log: True
- name: Add provisioning host group
add_host:
name: "master"
targetgroup: "{{ mgroups }}"
ansible_host: "127.0.0.1"
ansible_python_interpreter: "python"
groups: "prohosts"
no_log: True


@ -0,0 +1,72 @@
---
- name: Setup variables
set_fact:
target_interpreter: "/home/core/bin/python"
tp_path: "roles/provision/templates/{{ app_env.target_os }}.j2"
when: app_env.target_os == "coreos"
- name: Setup variables
set_fact:
target_interpreter: "python"
tp_path: "roles/provision/templates/{{ app_env.target_os }}.j2"
when: app_env.target_os == "ubuntu"
- name: Create an OpenStack virtual machine
os_server:
state: "present"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "{{ inventory_hostname }}"
image: "{{ app_env.image_name }}"
key_name: "k8s"
timeout: 200
flavor: "{{ app_env.flavor_name }}"
network: "{{ app_env.private_net_name }}"
floating_ip_pools: "{{ app_env.public_net_name | default(omit) }}"
reuse_ips: False
userdata: "{{ lookup('template', tp_path) }}"
config_drive: "{{ app_env.config_drive | default('no') }}"
security_groups: k8s_sg
meta:
hostname: "{{ inventory_hostname }}"
register: osvm
- name: Setup variables for generate host groups
set_fact:
inter_name: "{{ osvm.openstack.name }}"
public_ip: "{{ osvm.openstack.public_v4 }}"
private_ip: "{{ osvm.openstack.private_v4 }}"
- name: Use public ip address when private ip is empty
set_fact:
private_ip: "{{ osvm.openstack.public_v4 }}"
when: osvm.openstack.private_v4 == ""
- name: Use private ip address when public ip is empty
set_fact:
public_ip: "{{ osvm.openstack.private_v4 }}"
when: osvm.openstack.public_v4 == ""
- name: Create volumes for the node
os_volume:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
size: "{{ app_env.volume_size }}"
wait: yes
display_name: "{{ inventory_hostname }}_volume"
- name: Attach a volume to the node
os_server_volume:
state: present
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
server: "{{ inventory_hostname }}"
volume: "{{ inventory_hostname }}_volume"
device: "{{ app_env.block_device_name }}"


@ -0,0 +1,27 @@
---
- name: Destroy the OpenStack VM
os_server:
state: "absent"
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
name: "{{ inventory_hostname }}"
image: "{{ app_env.image_name }}"
delete_fip: True
key_name: "k8s"
timeout: 200
network: "{{ app_env.private_net_name }}"
meta:
hostname: "{{ inventory_hostname }}"
- name: Destroy the OpenStack volume
os_volume:
state: absent
auth: "{{ auth }}"
region_name: "{{ app_env.region_name }}"
availability_zone: "{{ app_env.availability_zone }}"
validate_certs: "{{ app_env.validate_certs }}"
wait: yes
display_name: "{{ inventory_hostname }}_volume"


@ -0,0 +1,2 @@
#cloud-config
hostname: {{ inventory_hostname }}.


@ -0,0 +1,6 @@
#cloud-config
hostname: {{ inventory_hostname }}.
packages:
- python
- bridge-utils


@ -0,0 +1,150 @@
---
- name: Setup few variables for coreos target
set_fact:
public_ip: "{{ groups['cmasters'][0] }}"
private_ip: "{{ hostvars[groups['cmasters'][0]].inter_ip }}"
this_ip: "{{ hostvars[ansible_host].inter_ip }}"
service_path: "/etc/systemd/system/"
when: app_env.target_os == "coreos"
- name: Setup few variables for ubuntu target
set_fact:
public_ip: "{{ groups['umasters'][0] }}"
private_ip: "{{ hostvars[groups['umasters'][0]].inter_ip }}"
this_ip: "{{ hostvars[ansible_host].inter_ip }}"
service_path: "/lib/systemd/system/"
when: app_env.target_os == "ubuntu"
- stat: path=/tmp/diskflag
register: diskflag
- shell: parted -s "{{ app_env.block_device_name }}" mklabel msdos
when: diskflag.stat.exists == false
- shell: parted -s "{{ app_env.block_device_name }}" mkpart primary ext4 1049kb 100%
when: diskflag.stat.exists == false
- lineinfile: dest=/tmp/diskflag line="disk is now partitioned!" create=yes
- name: Create file system on the volume
filesystem: fstype=ext4 dev="{{ app_env.block_device_name }}1"
- name: Mount the volume at /storage
mount: name=/storage src="{{ app_env.block_device_name }}1" fstype=ext4 state=mounted
- name: Get the network interface name
shell: >-
ip -4 -o addr | grep "{{ this_ip }}" | awk '{print $2}'
register: nodeif_name
- name: List all k8s service on the node
stat: "path=/opt/bin/{{ item }}"
with_items:
- kubelet
- kubectl
- kube-proxy
- flanneld
register: k8s_binaries
- name: Pull k8s binaries from the master
command: >-
scp -i "~/.ssh/id_rsa" -o "StrictHostKeyChecking no" "{{ app_env.
ssh_user }}@{{ private_ip }}:/opt/bin/{{ item.item }}"
"/opt/bin/{{ item.item }}"
with_items: " {{ k8s_binaries.results }} "
when: item.stat.exists == false
no_log: True
- name: Setup services for worker node
template:
src: roles/common/templates/k8s.service.j2
dest: "{{ service_path }}{{ item }}.service"
mode: 0644
with_items:
- flanneld
- kubelet
- kube-proxy
- name: Setup kubeconfig for each node
template:
src: roles/worker/templates/kubeconfig.j2
dest: "~/.kube/config"
mode: 0600
- name: Setup worker node service variables
set_fact:
kubelet_params: >-
--api-servers={{ private_ip }}:8080
--container-runtime=docker
--cluster-dns={{ app_env.dns_service_ip }}
--cluster-domain={{ app_env.domain }}
--hostname-override={{ inter_name }}
--resolv-conf=''
proxy_params: >-
--master={{ private_ip }}:8080
--cluster-cidr={{ app_env.pod_network.Network }}
flanneld_params: >-
-iface={{ nodeif_name.stdout }}
-etcd-endpoints=http://{{ private_ip }}:2379
-ip-masq=false
-etcd-prefix=/coreos.com/network/
- name: Configure the worker node services
template:
src: roles/common/templates/k8s.conf.j2
dest: "/etc/kubernetes/{{ item.name }}"
mode: 0644
with_items:
- { name: "kubelet", value: "{{ kubelet_params }}" }
- { name: "kube-proxy", value: "{{ proxy_params }}" }
- { name: "flanneld", value: "{{ flanneld_params }}" }
- name: Start the flanneld service
service:
name: flanneld
enabled: yes
state: started
- name: Wait for the flannel to setup the subnets
wait_for:
path: /run/flannel/subnet.env
search_regex: FLANNEL_SUBNET
- name: Get the bip address
shell: >-
. /run/flannel/subnet.env && echo $FLANNEL_SUBNET
register: bip
- name: Get the mtu
shell: >-
. /run/flannel/subnet.env && echo $FLANNEL_MTU
register: mtu
- name: Setup Docker service file
template:
src: "roles/worker/templates/docker.{{ app_env.target_os }}.j2"
dest: "{{ service_path }}docker.service"
- name: Reload daemon service
command: systemctl daemon-reload
- name: Start the worker services
service:
name: "{{ item }}"
enabled: yes
state: restarted
with_items:
- docker.socket
- docker
- kubelet
- kube-proxy
- name: Load cockroachdb images
command: "{{ item }}"
with_items:
- "wget -q -O /opt/bin/cockroachdb.tar.gz {{ app_env.cockroachdb_repo }}"
- "tar xf /opt/bin/cockroachdb.tar.gz -C /opt/bin"
- "docker load --input /opt/bin/cockroachdb.tar"
when: app_env.cockroachdb_repo != ""
no_log: True


@ -0,0 +1,27 @@
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=containerd.service docker.socket network.target
Requires=containerd.service docker.socket
[Service]
Type=notify
Environment="DOCKER_OPT_BIP=--bip={{ bip.stdout }}"
Environment="DOCKER_OPT_MTU=--mtu={{ mtu.stdout }}"
ExecStart=/usr/lib/coreos/dockerd --host=fd:// \
--containerd=/var/run/docker/libcontainerd/docker-containerd.sock \
$DOCKER_OPTS $DOCKER_CGROUPS $DOCKER_OPT_BIP $DOCKER_OPT_MTU \
$DOCKER_OPT_IPMASQ
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
[Install]
WantedBy=multi-user.target


@ -0,0 +1,25 @@
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=docker.socket network.target
Requires=docker.socket
[Service]
Type=notify
Environment="DOCKER_OPT_BIP=--bip={{ bip.stdout }}"
Environment="DOCKER_OPT_MTU=--mtu={{ mtu.stdout }}"
ExecStart=/usr/bin/dockerd -H fd:// \
$DOCKER_OPTS $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPT_IPMASQ
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
[Install]
WantedBy=multi-user.target


@ -0,0 +1,15 @@
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: http://{{ private_ip }}:8080
name: k8sonos
contexts:
- context:
cluster: k8sonos
user: ""
name: k8s
current-context: k8s
kind: Config
preferences: {}
users: []


@ -0,0 +1,131 @@
---
- name: Get start timestamp
hosts: cloud
connection: local
tasks:
- set_fact:
starttime: "{{ ansible_date_time }}"
tags: "info"
- name: Prepare to run the workload
hosts: cloud
connection: local
vars_files:
- "vars/{{ env }}.yml"
tasks:
- include: "roles/prepare/tasks/{{ action }}.yml"
roles:
- prepare
tags: "{{ action }}"
- name: provision servers
hosts: prohosts
connection: local
strategy: free
vars_files:
- "vars/{{ env }}.yml"
tasks:
- include: "roles/provision/tasks/{{ action }}.yml"
roles:
- provision
tags: "{{ action }}"
- name: Post provision process
hosts: cloud
connection: local
vars_files:
- "vars/{{ env }}.yml"
tasks:
- include: "roles/postprovision/tasks/{{ action }}.yml"
roles:
- postprovision
tags: "{{ action }}"
- name: Boot strap all the target nodes
hosts: cmasters, cworkers
gather_facts: False
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
strategy: free
vars_files:
- "vars/{{ env }}.yml"
roles:
- vmware.coreos-bootstrap
tags: "apply"
- name: Install required packages for all nodes
hosts: cworkers, cmasters, uworkers, umasters
gather_facts: False
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
strategy: free
vars_files:
- "vars/{{ env }}.yml"
roles:
- common
environment: "{{ proxy_env }}"
tags: "common"
- name: Setup master
hosts: cmasters, umasters
gather_facts: true
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
vars_files:
- "vars/{{ env }}.yml"
roles:
- master
environment: "{{ proxy_env }}"
tags: "master"
- name: Setup workers
hosts: cworkers, cmasters, uworkers, umasters
gather_facts: true
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
strategy: free
vars_files:
- "vars/{{ env }}.yml"
roles:
- worker
environment: "{{ proxy_env }}"
tags: "worker"
- name: Post configurations
hosts: cmasters, umasters
gather_facts: true
user: "{{ app_env.ssh_user }}"
become: true
become_user: root
vars_files:
- "vars/{{ env }}.yml"
tasks:
- include: "roles/post/tasks/{{ action }}.yml"
roles:
- post
environment: "{{ proxy_env }}"
tags: "post"
- name: Inform the installer
hosts: cloud
connection: local
tasks:
- debug:
msg: >-
Access kubernetes dashboard at
http://{{ groups['umasters'][0] }}:30000
when: groups['umasters'] is defined
- debug:
msg: >-
Access kubernetes dashboard at
http://{{ groups['cmasters'][0] }}:30000
when: groups['cmasters'] is defined
- debug:
msg: >-
The work load started at {{ hostvars.cloud.starttime.time }},
ended at {{ ansible_date_time.time }}
tags: "info"


@ -0,0 +1,47 @@
---
# This is an example configuration file when using a coreos image.
horizon_url: "http://9.30.217.9"
auth: {
auth_url: "http://9.30.217.9:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}
app_env: {
target_os: "coreos",
image_name: "coreos",
region_name: "RegionOne",
availability_zone: "nova",
validate_certs: False,
ssh_user: "core",
private_net_name: "demonet",
flavor_name: "m1.large",
public_key_file: "/home/ubuntu/.ssh/interop.pub",
private_key_file: "/home/ubuntu/.ssh/interop",
stack_size: 4,
volume_size: 1,
block_device_name: "/dev/vdb",
domain: "cluster.local",
pod_network: {
Network: "172.17.0.0/16",
SubnetLen: 24,
SubnetMin: "172.17.0.0",
SubnetMax: "172.17.255.0",
Backend: {
Type: "udp",
Port: 8285
}
},
service_ip_range: "172.16.0.0/24",
dns_service_ip: "172.16.0.4",
dashboard_service_ip: "172.16.0.5",
# The following section shows an example when using a local repo.
cockroachdb_repo: "http://10.0.10.12/cockroachdb.tar.gz",
flannel_repo: "http://10.0.10.12/flannel-v0.7.0-linux-amd64.tar.gz",
k8s_repo: "http://10.0.10.12/v1.5.4/"
}


@ -0,0 +1,47 @@
---
# This is an example configuration file when using an ubuntu image.
horizon_url: "http://9.30.217.9"
auth: {
auth_url: "http://9.30.217.9:5000/v3",
username: "demo",
password: "{{ password }}",
domain_name: "default",
project_name: "demo"
}
app_env: {
target_os: "ubuntu",
image_name: "ubuntu-16.04",
region_name: "RegionOne",
availability_zone: "nova",
validate_certs: False,
ssh_user: "ubuntu",
private_net_name: "demonet",
flavor_name: "m1.medium",
public_key_file: "/home/ubuntu/.ssh/interop.pub",
private_key_file: "/home/ubuntu/.ssh/interop",
stack_size: 3,
volume_size: 1,
block_device_name: "/dev/vdb",
domain: "cluster.local",
pod_network: {
Network: "172.17.0.0/16",
SubnetLen: 24,
SubnetMin: "172.17.0.0",
SubnetMax: "172.17.255.0",
Backend: {
Type: "udp",
Port: 8285
}
},
service_ip_range: "172.16.0.0/24",
dns_service_ip: "172.16.0.4",
dashboard_service_ip: "172.16.0.5",
# The following section shows an example when using a remote repo.
cockroachdb_repo: "",
flannel_repo: "https://github.com/coreos/flannel/releases/download/v0.7.0/flannel-v0.7.0-linux-amd64.tar.gz",
k8s_repo: "https://storage.googleapis.com/kubernetes-release/release/v1.5.3/bin/linux/amd64/"
}