Added support for containerized dev bringup

The intent of this change is to bring up kolla-kubernetes
on top of a fresh halcyon-vagrant-kubernetes environment.
At this point the scripts assume the base environment is
using CentOS and Ceph. The scripts still need to be improved
to handle failures and re-runs.

Change-Id: I49f332b97b41be8098b0141696c2ad205e231b00
Borne Mace 2017-02-03 11:21:02 -08:00
parent 502dbf4a1f
commit 06d09b4cd5
9 changed files with 447 additions and 39 deletions


@@ -386,6 +386,11 @@ supported by the Vagrant VirtualBox and OpenStack providers.
Managing and interacting with the environment
=============================================
The kube2 system in your halcyon-vagrant environment should have a minimum
of 4GB of RAM and all others should be set to 2GB of RAM. In your
config.rb script, kube_vcpus should be set to 2 and kube_count should be
set to 4.
Once the environment's dependencies have been resolved and configuration
completed, you can run the following commands to interact with it:
@@ -463,21 +468,26 @@ To test that helm is working you can run the following:
development environment is not setup properly for the proxy server.
Setting up Kubernetes for Kolla-Kubernetes deployment
Containerized development environment requirements and usage
=====================================================
To set the cluster up for developing Kolla-Kubernetes, you will most likely
want to run the following commands to label the nodes for running OpenStack
services:
Make sure to run the ./get-k8s-creds.sh script, or the development environment
container will not be able to connect to the vagrant Kubernetes cluster.
The kolla-kubernetes and kolla-ansible projects should be checked out into
the same base directory as halcyon-vagrant-kubernetes. The default assumed
in kolla-kubernetes/tools/Dockerfile is ~/devel. If that is not the case
in your environment, change that value in the Dockerfile.
.. code-block:: console
kubectl get nodes -L kubeadm.alpha.kubernetes.io/role --no-headers | awk '$NF ~ /^<none>/ { print $1}' | while read NODE ; do
kubectl label node $NODE --overwrite kolla_controller=true
kubectl label node $NODE --overwrite kolla_compute=true
done
git clone https://github.com/openstack/kolla-kubernetes.git
git clone https://github.com/openstack/kolla-ansible.git
# Edit kolla-kubernetes/tools/Dockerfile to match development base dir
kolla-kubernetes/tools/build_dev_image.sh
kolla-kubernetes/tools/run_dev_image.sh
.. end
This will mark all the workers as being available for both storage and API pods.


@@ -0,0 +1,4 @@
---
features:
  - |
    Container and scripts for simple development environment bring-up.


@@ -10,3 +10,4 @@ oslo.log>=3.11.0 # Apache-2.0
six>=1.9.0 # MIT
Jinja2!=2.9.0,!=2.9.1,!=2.9.2,!=2.9.3,!=2.9.4,>=2.8 # BSD License (3 clause)
PyYAML>=3.10.0 # MIT
kubernetes>=1.0.0b1 # Apache-2.0


@@ -1,32 +1,51 @@
#!/bin/bash -xe
VERSION=0.5.0-1
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../.." && pwd )"
IP=172.18.0.1
gate_job="$1"
base_distro="$2"
IP=${3:-172.18.0.1}
tunnel_interface=${4:-docker0}
# Break out devenv behavior since we will use different polling logic
# and we also assume ceph-multi use in the devenv
devenv=false
if [ "x$gate_job" == "xdevenv" ]; then
devenv=true
gate_job="ceph-multi"
fi
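The positional-parameter defaults above (`${3:-172.18.0.1}`, `${4:-docker0}`) substitute a fallback only when the caller omits the argument or passes an empty string. A minimal, self-contained sketch of the pattern, using a hypothetical `parse_args` helper that is not part of the script:

```shell
#!/bin/bash
# Demo of bash default parameter expansion, mirroring the gate
# script's argument handling.
parse_args() {
    gate_job="$1"
    base_distro="$2"
    ip=${3:-172.18.0.1}              # default when $3 is unset or empty
    tunnel_interface=${4:-docker0}   # default when $4 is unset or empty
    echo "$gate_job $base_distro $ip $tunnel_interface"
}

parse_args devenv centos                     # prints: devenv centos 172.18.0.1 docker0
parse_args ceph-multi ubuntu 10.0.0.5 eth1   # prints: ceph-multi ubuntu 10.0.0.5 eth1
```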
. "$DIR/tests/bin/common_workflow_config.sh"
. "$DIR/tests/bin/common_ceph_config.sh"
function wait_for_pods {
if [ "$devenv" = true ]; then
$DIR/tools/wait_for_pods.py $1 $2 $3
else
$DIR/tools/pull_containers.sh $1
$DIR/tools/wait_for_pods.sh $1
fi
}
function general_config {
common_workflow_config $IP $base_distro $tunnel_interface
}
function ceph_config {
common_ceph_config $gate_job
}
tunnel_interface=docker0
if [ "x$gate_job" == "xceph-multi" ]; then
interface=$(netstat -ie | grep -B1 \
$(cat /etc/nodepool/primary_node_private) \
| head -n 1 | awk -F: '{print $1}')
# if this is being run remotely the netstat will fail,
# so fallback to the passed in interface name
if [ ! -z "$interface" ]; then
tunnel_interface=$interface
fi
fi
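The guard above keeps the passed-in interface whenever detection comes back empty (for example when the netstat probe fails on a remote run). The same pattern isolated as a sketch, with a hypothetical `pick_interface` helper:

```shell
#!/bin/bash
# Use a detected value only when detection actually produced something;
# otherwise keep the caller-supplied default.
pick_interface() {
    local default_iface="$1" detected="$2"
    local iface="$default_iface"
    if [ -n "$detected" ]; then
        iface="$detected"
    fi
    echo "$iface"
}

pick_interface docker0 ""     # detection failed -> prints: docker0
pick_interface docker0 eth1   # detection worked -> prints: eth1
```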
base_distro="$2"
gate_job="$1"
general_config > /tmp/general_config.yaml
ceph_config > /tmp/ceph_config.yaml
@@ -44,31 +63,31 @@ helm install kolla/rabbitmq --version $VERSION \
--namespace kolla --name rabbitmq \
--values /tmp/general_config.yaml --values /tmp/ceph_config.yaml
$DIR/tools/pull_containers.sh kolla
$DIR/tools/wait_for_pods.sh kolla
wait_for_pods kolla mariadb,memcached,rabbitmq running,succeeded
helm install kolla/keystone --version $VERSION \
--namespace kolla --name keystone \
--values /tmp/general_config.yaml --values /tmp/ceph_config.yaml
$DIR/tools/pull_containers.sh kolla
$DIR/tools/wait_for_pods.sh kolla
wait_for_pods kolla keystone running,succeeded
helm install kolla/openvswitch --version $VERSION \
--namespace kolla --name openvswitch \
--values /tmp/general_config.yaml --values /tmp/ceph_config.yaml
$DIR/tools/pull_containers.sh kolla
$DIR/tools/wait_for_pods.sh kolla
wait_for_pods kolla openvswitch running
kollakube res create bootstrap openvswitch-set-external-ip
$DIR/tools/pull_containers.sh kolla
$DIR/tools/wait_for_pods.sh kolla
wait_for_pods kolla openvswitch-set-external succeeded
$DIR/tools/build_local_admin_keystonerc.sh
. ~/keystonerc_admin
if [ "$devenv" = true ]; then
$DIR/tools/build_local_admin_keystonerc.sh ext
. ~/keystonerc_admin
else
$DIR/tools/build_local_admin_keystonerc.sh
. ~/keystonerc_admin
fi
[ -d "$WORKSPACE/logs" ] &&
kubectl get jobs -o json > $WORKSPACE/logs/jobs-after-bootstrap.json \
@@ -95,8 +114,7 @@ helm install kolla/neutron --version $VERSION \
--namespace kolla --name neutron \
--values /tmp/general_config.yaml --values /tmp/ceph_config.yaml
$DIR/tools/pull_containers.sh kolla
$DIR/tools/wait_for_pods.sh kolla
wait_for_pods kolla cinder,glance,neutron running,succeeded
helm ls
@@ -114,9 +132,6 @@ helm install kolla/horizon --version $VERSION \
#kollakube res create pod keepalived
$DIR/tools/pull_containers.sh kolla
$DIR/tools/wait_for_pods.sh kolla
wait_for_pods kolla nova,horizon running,succeeded
kollakube res delete bootstrap openvswitch-set-external-ip
$DIR/tools/wait_for_pods.sh kolla

tools/Dockerfile (new file, 90 lines)

@@ -0,0 +1,90 @@
FROM centos:centos7
ENV helm_version=v2.1.3 \
development_env=docker
# build up base os layer
RUN set -e && \
set -x && \
export KUBERNETES_REPO=/etc/yum.repos.d/kubernetes.repo && \
echo '[kubernetes]' > ${KUBERNETES_REPO} && \
echo 'name=Kubernetes' >> ${KUBERNETES_REPO} && \
echo 'baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64' >> ${KUBERNETES_REPO} && \
echo 'enabled=1' >> ${KUBERNETES_REPO} && \
echo 'gpgcheck=1' >> ${KUBERNETES_REPO} && \
echo 'repo_gpgcheck=1' >> ${KUBERNETES_REPO} && \
echo 'gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg' >> ${KUBERNETES_REPO} && \
echo ' https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg' >> ${KUBERNETES_REPO} && \
yum install -y \
epel-release && \
yum install -y \
git \
git-review \
python-virtualenv \
python-devel \
python-pip \
gcc \
openssl-devel \
crudini \
sudo \
jq \
sshpass \
hostname \
kubectl \
iproute2 \
wget \
net-tools && \
pip install --upgrade pip setuptools && \
adduser kolla && \
# install helm
curl -L http://storage.googleapis.com/kubernetes-helm/helm-${helm_version}-linux-amd64.tar.gz | \
tar -zxv --strip 1 -C /tmp && \
chmod +x /tmp/helm && \
mv /tmp/helm /usr/local/bin/helm
# build up temporary dev environment specific layer
COPY repos /opt/
RUN pip install pip --upgrade && \
pip install "ansible" && \
pip install "python-openstackclient" && \
pip install "python-neutronclient" && \
pip install -r /opt/kolla-ansible/requirements.txt && \
pip install -e /opt/kolla-ansible/ && \
pip install pyyaml && \
pip install -r /opt/kolla-kubernetes/requirements.txt && \
pip install -e /opt/kolla-kubernetes/ && \
mkdir -p /etc/nodepool && \
echo "172.16.35.12" > /etc/nodepool/primary_node_private && \
rm -rf /etc/kolla && \
rm -rf /usr/share/kolla && \
rm -rf /etc/kolla-kubernetes && \
ln -s /opt/kolla-ansible/etc/kolla /etc/kolla && \
ln -s /opt/kolla-ansible /usr/share/kolla && \
ln -s /opt/kolla-kubernetes/etc/kolla-kubernetes /etc/kolla-kubernetes && \
mkdir /root/.ssh && \
mv /opt/halcyon-vagrant-kubernetes/ssh-config /root/.ssh/config && \
chmod 644 /root/.ssh/config
WORKDIR /opt/kolla-kubernetes
ENTRYPOINT ["/usr/bin/bash"]
CMD ["/opt/kolla-kubernetes/tools/setup_dev_env.sh"]
LABEL docker.cmd.build = "docker build . --tag 'kolla/k8s-devenv:latest'"
# Run in the hosts network namespace to ensure we have routes to the k8s cluster
# otherwise set up a route to the k8s cluster on the host from the docker0 iface
LABEL docker.cmd.devel = "docker run -it --rm \
--net=host \
-v ~/.kube:/root/.kube:rw \
-v `pwd`:/opt/kolla-kubernetes:rw \
--entrypoint=/bin/bash \
kolla/k8s-devenv:latest"
LABEL docker.cmd.run = "docker run -it --rm \
--net=host \
-v ~/.kube:/root/.kube:rw \
kolla/k8s-devenv:latest"

tools/build_dev_image.sh (new executable file, 45 lines)

@@ -0,0 +1,45 @@
#!/bin/bash -xe
TMP_BUILD_DIR=/tmp/kolla-kubernetes-build
DEV_BASE=~/devel
# Set the values below if you are running behind a proxy
BUILD_ARGS="--tag kolla/k8s-devenv:latest"
if [ ! "x$http_proxy" == "x" ]; then
BUILD_ARGS="$BUILD_ARGS --build-arg http_proxy=$http_proxy"
fi
if [ ! "x$https_proxy" == "x" ]; then
BUILD_ARGS="$BUILD_ARGS --build-arg https_proxy=$https_proxy"
fi
# delete old build environment if it is still there
cleanup_build_dir () {
if [ -d ${TMP_BUILD_DIR} ];
then
rm -rf ${TMP_BUILD_DIR}
fi
}
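`cleanup_build_dir` is written to be safe on re-runs: it only removes the directory when it exists (and `rm -rf` would tolerate a missing path anyway). A self-contained sketch of the idempotent-cleanup idea, using a throwaway path rather than the real /tmp/kolla-kubernetes-build:

```shell
#!/bin/bash
# Idempotent cleanup demo: calling it any number of times leaves the
# same end state (directory absent).
DEMO_BUILD_DIR=$(mktemp -d)/kolla-kubernetes-build

cleanup_demo_dir() {
    if [ -d "${DEMO_BUILD_DIR}" ]; then
        rm -rf "${DEMO_BUILD_DIR}"
    fi
}

mkdir -p "${DEMO_BUILD_DIR}"
cleanup_demo_dir
cleanup_demo_dir   # second call is a no-op, not an error
```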
# create build environment and run
do_build () {
if [ ! -d ${TMP_BUILD_DIR} ];
then
mkdir ${TMP_BUILD_DIR}
mkdir ${TMP_BUILD_DIR}/repos
fi
HALCYON_TMP=${TMP_BUILD_DIR}/repos/halcyon-vagrant-kubernetes
cp ${DEV_BASE}/kolla-kubernetes/tools/Dockerfile ${TMP_BUILD_DIR}
cp -R ${DEV_BASE}/kolla-kubernetes ${TMP_BUILD_DIR}/repos/
cp -R ${DEV_BASE}/kolla-ansible ${TMP_BUILD_DIR}/repos/
cp -R ${DEV_BASE}/halcyon-vagrant-kubernetes ${TMP_BUILD_DIR}/repos/
pushd ${HALCYON_TMP}
vagrant ssh-config > ssh-config
sed -i -e "s/\/tmp\/kolla-kubernetes-build\/repos\/halcyon-vagrant-kubernetes/\/opt\/halcyon-vagrant-kubernetes/g" ssh-config
cp -R ~/.kube ${TMP_BUILD_DIR}/kube
cd ${TMP_BUILD_DIR}; docker build ${BUILD_ARGS} .
}
cleanup_build_dir
do_build

tools/run_dev_image.sh (new executable file, 13 lines)

@@ -0,0 +1,13 @@
#!/bin/bash -xe
# Set the below values if you are running behind a proxy
if [ -f "/etc/environment" ]; then
RUN_ARGS="--env-file=/etc/environment"
else
RUN_ARGS=""
fi
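The branch above forwards host proxy settings only when /etc/environment exists. The same conditional-argument pattern as a testable sketch, with a hypothetical `env_file_args` helper probing an arbitrary path instead of /etc/environment:

```shell
#!/bin/bash
# Emit a --env-file argument only when the file is actually present.
env_file_args() {
    local env_file="$1"
    if [ -f "$env_file" ]; then
        echo "--env-file=$env_file"
    else
        echo ""
    fi
}

probe=$(mktemp)
env_file_args "$probe"         # prints: --env-file=<the temp path>
env_file_args /no/such/file    # prints an empty line
```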
docker run -it --rm \
--net=host \
-v ~/.kube:/root/.kube:rw \
$RUN_ARGS \
kolla/k8s-devenv:latest

tools/setup_dev_env.sh (new executable file, 173 lines)

@@ -0,0 +1,173 @@
#!/bin/bash -xe
DEV_BASE=/opt
KOLLA_K8S=${DEV_BASE}/kolla-kubernetes
pushd ${DEV_BASE}
# configure ceph and kolla settings for the vagrant environment
ceph_setup () {
ssh vagrant@kube2 'bash -s' < kolla-kubernetes/tests/bin/setup_gate_loopback.sh
echo "kolla_base_distro: centos" >> kolla-ansible/etc/kolla/globals.yml
cat kolla-kubernetes/tests/conf/ceph-all-in-one/kolla_config \
>> kolla-ansible/etc/kolla/globals.yml
cat kolla-kubernetes/tests/conf/ceph-all-in-one/kolla_kubernetes_config \
>> kolla-kubernetes/etc/kolla-kubernetes/kolla-kubernetes.yml
sed -i "s/initial_mon:.*/initial_mon: kube2/" \
kolla-kubernetes/etc/kolla-kubernetes/kolla-kubernetes.yml
interface="eth1"
echo "tunnel_interface: $interface" >> kolla-ansible/etc/kolla/globals.yml
echo "storage_interface: $interface" >> \
kolla-kubernetes/etc/kolla-kubernetes/kolla-kubernetes.yml
sed -i "s/172.17.0.1/$(cat /etc/nodepool/primary_node_private)/" \
kolla-kubernetes/etc/kolla-kubernetes/kolla-kubernetes.yml
}
config_setup () {
kolla-ansible/tools/generate_passwords.py
kolla-ansible/tools/kolla-ansible genconfig
crudini --set /etc/kolla/nova-compute/nova.conf libvirt virt_type qemu
crudini --set /etc/kolla/nova-compute/nova.conf libvirt rbd_user nova
UUID=$(awk '{if($1 == "rbd_secret_uuid:"){print $2}}' /etc/kolla/passwords.yml)
crudini --set /etc/kolla/nova-compute/nova.conf libvirt rbd_secret_uuid $UUID
# Keystone does not seem to invalidate its cache on entry point addition.
crudini --set /etc/kolla/keystone/keystone.conf cache enabled False
sed -i 's/log_outputs = "3:/log_outputs = "1:/' /etc/kolla/nova-libvirt/libvirtd.conf
sed -i 's/log_level = 3/log_level = 1/' /etc/kolla/nova-libvirt/libvirtd.conf
sed -i \
'/\[global\]/a osd pool default size = 1\nosd pool default min size = 1\nosd crush chooseleaf type = 0\ndebug default = 5\n'\
/etc/kolla/ceph*/ceph.conf
kolla-kubernetes/tools/fix-mitaka-config.py
}
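Each `crudini --set FILE SECTION KEY VALUE` call above rewrites a single ini key in place. Roughly the same operation expressed with Python's configparser, as an illustrative sketch against a throwaway file (the script itself shells out to crudini; it does not run this):

```python
# Sketch of what `crudini --set nova.conf libvirt virt_type qemu` does.
import configparser
import os
import tempfile

def ini_set(path, section, key, value):
    """Set one key in an ini file, creating the section if needed."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    if not cfg.has_section(section):
        cfg.add_section(section)
    cfg.set(section, key, value)
    with open(path, "w") as f:
        cfg.write(f)

# Demo against a temporary file rather than a real nova.conf.
conf = os.path.join(tempfile.mkdtemp(), "nova.conf")
open(conf, "w").close()
ini_set(conf, "libvirt", "virt_type", "qemu")
ini_set(conf, "libvirt", "rbd_user", "nova")
```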
k8s_setup () {
kubectl get nodes -L kubeadm.alpha.kubernetes.io/role --no-headers | awk '$NF ~ /^<none>/ { print $1}' | while read NODE ; do
kubectl label node $NODE --overwrite kolla_compute=true
done
kubectl label node 172.16.35.12 --overwrite kolla_controller=true
kubectl create namespace kolla
kolla-kubernetes/tools/secret-generator.py create
kolla-kubernetes/tools/setup-resolv-conf.sh kolla
}
ceph_startup () {
kollakube template configmap ceph-mon ceph-osd > /tmp/kube.yaml
kubectl create -f /tmp/kube.yaml
kollakube template bootstrap ceph-bootstrap-initial-mon > /tmp/kube.yaml
sed -i "s|kubernetes.io/hostname: kube2|kubernetes.io/hostname: 172.16.35.12|g" /tmp/kube.yaml
kubectl create -f /tmp/kube.yaml
kolla-kubernetes/tools/wait_for_pods.py kolla ceph-bootstrap-initial-mon succeeded
kolla-kubernetes/tools/setup-ceph-secrets.sh
# ceph mon-bootstrap
kollakube res delete bootstrap ceph-bootstrap-initial-mon
kollakube template pod ceph-mon > /tmp/kube.yaml
sed -i "s|kubernetes.io/hostname: kube2|kubernetes.io/hostname: 172.16.35.12|g" /tmp/kube.yaml
kubectl create -f /tmp/kube.yaml
kolla-kubernetes/tools/wait_for_pods.py kolla ceph-mon running
# ceph-osd0 / osd1 bootstrap
kollakube template pod ceph-bootstrap-osd0 > /tmp/kube.yaml
sed -i "s|kubernetes.io/hostname: kube2|kubernetes.io/hostname: 172.16.35.12|g" /tmp/kube.yaml
sed -i "s|loop0|loop2|g" /tmp/kube.yaml
kubectl create -f /tmp/kube.yaml
kolla-kubernetes/tools/wait_for_pods.py kolla ceph-bootstrap-osd0 succeeded
kollakube template pod ceph-bootstrap-osd1 > /tmp/kube.yaml
sed -i "s|kubernetes.io/hostname: kube2|kubernetes.io/hostname: 172.16.35.12|g" /tmp/kube.yaml
sed -i "s|loop1|loop3|g" /tmp/kube.yaml
kubectl create -f /tmp/kube.yaml
kolla-kubernetes/tools/wait_for_pods.py kolla ceph-bootstrap-osd1 succeeded
# cleanup ceph bootstrap
kollakube res delete pod ceph-bootstrap-osd0
kollakube res delete pod ceph-bootstrap-osd1
# ceph osd0 / osd1 startup
sed -i "s|^ceph_osd_data_kube2:|ceph_osd_data_dev:|g" \
kolla-kubernetes/etc/kolla-kubernetes/kolla-kubernetes.yml
sed -i "s|^ceph_osd_journal_kube2:|ceph_osd_journal_dev:|g" \
kolla-kubernetes/etc/kolla-kubernetes/kolla-kubernetes.yml
sed -i "s|/kube2/loop|/dev/loop|g" \
kolla-kubernetes/etc/kolla-kubernetes/kolla-kubernetes.yml
kollakube template pod ceph-osd0 > /tmp/kube.yaml
sed -i "s|kubernetes.io/hostname: kube2|kubernetes.io/hostname: 172.16.35.12|g" /tmp/kube.yaml
sed -i "s|loop0|loop2|g" /tmp/kube.yaml
kubectl create -f /tmp/kube.yaml
kolla-kubernetes/tools/wait_for_pods.py kolla ceph-osd0 running
kollakube template pod ceph-osd1 > /tmp/kube.yaml
sed -i "s|kubernetes.io/hostname: kube2|kubernetes.io/hostname: 172.16.35.12|g" /tmp/kube.yaml
sed -i "s|loop1|loop3|g" /tmp/kube.yaml
kubectl create -f /tmp/kube.yaml
kolla-kubernetes/tools/wait_for_pods.py kolla ceph-osd1 running
kubectl exec ceph-osd0 -c main --namespace=kolla -- /bin/bash -c \
"cat /etc/ceph/ceph.conf" > /tmp/ceph.conf
kubectl create configmap ceph-conf --namespace=kolla \
--from-file=ceph.conf=/tmp/ceph.conf
# ceph admin startup
kollakube template pod ceph-admin ceph-rbd > /tmp/kube.yaml
kubectl create -f /tmp/kube.yaml
kolla-kubernetes/tools/wait_for_pods.py kolla ceph-admin,ceph-rbd running
kubectl exec ceph-admin -c main --namespace=kolla -- /bin/bash -c "ceph -s"
for x in kollavolumes images volumes vms; do
kubectl exec ceph-admin -c main --namespace=kolla -- /bin/bash \
-c "ceph osd pool create $x 64; ceph osd pool set $x size 1; ceph osd pool set $x min_size 1"
done
kubectl exec ceph-admin -c main --namespace=kolla -- /bin/bash \
-c "ceph osd pool delete rbd rbd --yes-i-really-really-mean-it"
kolla-kubernetes/tools/setup_simple_ceph_users.sh
kolla-kubernetes/tools/setup_rbd_volumes.sh --yes-i-really-really-mean-it 2
}
helm_setup () {
rm -rf ~/.helm
helm init
kubectl delete deployment tiller-deploy --namespace=kube-system; helm init
# wait for tiller service to be up / available
while true; do
echo 'Waiting for tiller to become available.'
helm version | grep Server > /dev/null && \
RUNNING=True || RUNNING=False
[ $RUNNING == "True" ] && \
break || true
sleep 5
done
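The tiller wait loop above polls indefinitely. A variant with a bounded retry budget, shown as a generic sketch (hypothetical `wait_for` helper; the probe command here is arbitrary, standing in for `helm version`):

```shell
#!/bin/bash
# Poll a probe command until it succeeds or the retry budget runs out.
wait_for() {
    local retries="$1"
    shift
    local i
    for i in $(seq "$retries"); do
        if "$@" > /dev/null 2>&1; then
            return 0
        fi
        sleep 1
    done
    return 1
}

wait_for 3 true && echo "service up"   # probe succeeds immediately
wait_for 2 false || echo "gave up"     # exhausts retries, returns 1
```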
kolla-kubernetes/tools/helm_build_all.sh ~/.helm/repository/kolla
helm repo remove kollabuild
kolla-kubernetes/tools/helm_buildrepo.sh ~/.helm/repository/kolla 10192 kolla &
helm repo update
kollakube res create configmap \
mariadb keystone horizon rabbitmq memcached nova-api nova-conductor \
nova-scheduler glance-api-haproxy glance-registry-haproxy glance-api \
glance-registry neutron-server neutron-dhcp-agent neutron-l3-agent \
neutron-metadata-agent neutron-openvswitch-agent openvswitch-db-server \
openvswitch-vswitchd nova-libvirt nova-compute nova-consoleauth \
nova-novncproxy nova-novncproxy-haproxy neutron-server-haproxy \
nova-api-haproxy cinder-api cinder-api-haproxy cinder-backup \
cinder-scheduler cinder-volume keepalived;
kollakube res create secret nova-libvirt
}
ceph_setup
config_setup
k8s_setup
ceph_startup
helm_setup
kolla-kubernetes/tests/bin/ceph_workflow_service.sh devenv centos 172.16.35.11 eth1

tools/wait_for_pods.py (new executable file, 57 lines)

@@ -0,0 +1,57 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from kubernetes import client
from kubernetes import config
import sys
import time
def usage():
    print("wait_for_pods.py requires three arguments: a namespace, a "
          "comma-separated list of pod prefixes to monitor, and a "
          "comma-separated list of end states to wait for "
          "(such as succeeded and running)")


if len(sys.argv) != 4:
    usage()
    sys.exit(1)

namespace = sys.argv[1]
prefix_list = sys.argv[2].lower().split(',')
end_status_list = sys.argv[3].lower().split(',')

config.load_kube_config()
v1 = client.CoreV1Api()

done = False
while not done:
    matches = 0
    finished = 0
    # sleep at the start to give pods time to exist before polling
    time.sleep(5)
    kolla_pods = v1.list_namespaced_pod(namespace)
    for pod in kolla_pods.items:
        pod_name = pod.metadata.name.lower()
        pod_status = pod.status.phase.lower()
        for prefix in prefix_list:
            if pod_name.startswith(prefix):
                matches += 1
                if pod_status in end_status_list:
                    finished += 1
    if matches == finished:
        done = True
    else:
        print('Waiting for pods to be ready. Total: ' + str(matches) +
              ' Ready: ' + str(finished))
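The matching rule inside the loop can be factored into a pure function, which makes the termination condition easier to see (a sketch; the real script re-queries the Kubernetes API each iteration rather than taking a list):

```python
# Sketch: "every pod whose name starts with a watched prefix has reached
# an accepted end state" -- the predicate the polling loop implements.
def pods_ready(pods, prefixes, end_states):
    """pods: iterable of (name, phase) pairs, already lowercased."""
    matches = 0
    finished = 0
    for name, phase in pods:
        for prefix in prefixes:
            if name.startswith(prefix):
                matches += 1
                if phase in end_states:
                    finished += 1
    return matches == finished

pods = [("keystone-abc", "running"),
        ("keystone-db-sync", "succeeded"),
        ("mariadb-0", "pending")]
pods_ready(pods, ["keystone"], ["running", "succeeded"])  # True
pods_ready(pods, ["mariadb"], ["running", "succeeded"])   # False
```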