Kubernetes Formula

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

This formula deploys a production-ready Kubernetes cluster and also generates Kubernetes manifests for services running on it.

Based on the official Kubernetes Salt base: https://github.com/kubernetes/kubernetes/tree/master/cluster/saltbase

Extended with the OpenContrail integration contributed by Juniper: https://github.com/Juniper/kubernetes/blob/opencontrail-integration/docs/getting-started-guides/opencontrail.md

Sample pillars

REQUIRED: Define the images to use for hyperkube, the CNI plugins, and calicoctl:

parameters:
  kubernetes:
    common:
      hyperkube:
        image: gcr.io/google_containers/hyperkube:v1.4.6
    pool:
      network:
        calicoctl:
          image: calico/ctl
        cni:
          image: calico/cni

Containers running on pool nodes are defined in pool.service.local:

parameters:
  kubernetes:
    pool:
      service:
        local:
          enabled: False
          service: libvirt
          cluster: openstack-compute
          namespace: default
          role: ${linux:system:name}
          type: LoadBalancer
          kind: Deployment
          apiVersion: extensions/v1beta1
          replicas: 1
          host_pid: True
          nodeSelector:
          - key: openstack
            value: ${linux:system:name}
          hostNetwork: True
          container:
            libvirt-compute:
              privileged: True
              image: ${_param:docker_repository}/libvirt-compute
              tag: ${_param:openstack_container_tag}

Master definition

kubernetes:
    master:
      addons:
        dns:
          domain: cluster.local
          enabled: true
          replicas: 1
          server: 10.254.0.10
      admin:
        password: password
        username: admin
      apiserver:
        address: 10.0.175.100
        port: 8080
      ca: kubernetes
      enabled: true
      etcd:
        host: 127.0.0.1
        members:
        - host: 10.0.175.100
          name: node040
        name: node040
        token: ca939ec9c2a17b0786f6d411fe019e9b
      kubelet:
        allow_privileged: true
      network:
        engine: calico
        hash: fb5e30ebe6154911a66ec3fb5f1195b2
        private_ip_range: 10.150.0.0/16
        version: v0.19.0
      service_addresses: 10.254.0.0/16
      storage:
        engine: glusterfs
        members:
        - host: 10.0.175.101
          port: 24007
        - host: 10.0.175.102
          port: 24007
        - host: 10.0.175.103
          port: 24007
        port: 24007
      token:
        admin: DFvQ8GJ9JD4fKNfuyEddw3rjnFTkUKsv
        controller_manager: EreGh6AnWf8DxH8cYavB2zS029PUi7vx
        dns: RAFeVSE4UvsCz4gk3KYReuOI5jsZ1Xt3
        kube_proxy: DFvQ8GelB7afH3wClC9romaMPhquyyEe
        kubelet: 7bN5hJ9JD4fKjnFTkUKsvVNfuyEddw3r
        logging: MJkXKdbgqRmTHSa2ykTaOaMykgO6KcEf
        monitoring: hnsj0XqABgrSww7Nqo7UVTSZLJUt2XRd
        scheduler: HY1UUxEPpmjW4a1dDLGIANYQp1nZkLDk
      version: v1.2.4


Pool definition

kubernetes:
    pool:
      address: 0.0.0.0
      allow_privileged: true
      ca: kubernetes
      cluster_dns: 10.254.0.10
      cluster_domain: cluster.local
      enabled: true
      kubelet:
        allow_privileged: true
        config: /etc/kubernetes/manifests
        frequency: 5s
      master:
        apiserver:
          members:
          - host: 10.0.175.100
        etcd:
          members:
          - host: 10.0.175.100
        host: 10.0.175.100
      network:
        engine: calico
        hash: fb5e30ebe6154911a66ec3fb5f1195b2
        version: v0.19.0
      token:
        kube_proxy: DFvQ8GelB7afH3wClC9romaMPhquyyEe
        kubelet: 7bN5hJ9JD4fKjnFTkUKsvVNfuyEddw3r
      version: v1.2.4

Kubernetes with OpenContrail network plugin

On Master:

kubernetes:
  master:
    network:
      engine: opencontrail
      host: 10.0.170.70
      port: 8082
      default_domain: default-domain
      default_project: default-domain:default-project
      public_network: default-domain:default-project:Public
      public_ip_range: 185.22.97.128/26
      private_ip_range: 10.150.0.0/16
      service_cluster_ip_range: 10.254.0.0/16
      network_label: name
      service_label: uses
      cluster_service: kube-system/default
      network_manager:
        image: pupapaik/opencontrail-kube-network-manager
        tag: release-1.1-jpa-final-1

On pools:

kubernetes:
  pool:
    network:
      engine: opencontrail

Kubernetes control plane running in systemd

By default, kube-apiserver, kube-scheduler, kube-controller-manager, kube-proxy, and etcd run in Docker containers defined by manifests. For a stable production environment, these should run under systemd:

kubernetes:
  master:
    container: false

kubernetes:
  pool:
    container: false

Because the Kubernetes services then run under the kube user without root privileges, the apiserver secure port must be changed to an unprivileged one:

kubernetes:
  master:
    apiserver:
      secure_port: 8081

Kubernetes with Flannel

On Master:

kubernetes:
  master:
    network:
      engine: flannel
# If you don't register master as node:
      etcd:
        members:
          - host: 10.0.175.101
            port: 4001
          - host: 10.0.175.102
            port: 4001
          - host: 10.0.175.103
            port: 4001
  common:
    network:
      engine: flannel

On pools:

kubernetes:
  pool:
    network:
      engine: flannel
      etcd:
        members:
          - host: 10.0.175.101
            port: 4001
          - host: 10.0.175.102
            port: 4001
          - host: 10.0.175.103
            port: 4001
  common:
    network:
      engine: flannel

Kubernetes with Calico

On Master:

kubernetes:
  master:
    network:
      engine: calico
# If you don't register master as node:
      etcd:
        members:
          - host: 10.0.175.101
            port: 4001
          - host: 10.0.175.102
            port: 4001
          - host: 10.0.175.103
            port: 4001

On pools:

kubernetes:
  pool:
    network:
      engine: calico
      etcd:
        members:
          - host: 10.0.175.101
            port: 4001
          - host: 10.0.175.102
            port: 4001
          - host: 10.0.175.103
            port: 4001

Post deployment configuration

# Point calicoctl at the etcd cluster
export ETCD_AUTHORITY=10.0.111.201:4001

# Enable outbound NAT for the pods subnet
calicoctl pool add 192.168.0.0/16 --nat-outgoing

# Status commands
calicoctl status
calicoctl node show

Kubernetes with GlusterFS for storage

kubernetes:
  master:
    ...
    storage:
      engine: glusterfs
      port: 24007
      members:
      - host: 10.0.175.101
        port: 24007
      - host: 10.0.175.102
        port: 24007
      - host: 10.0.175.103
        port: 24007
    ...

Kubernetes namespaces

Create namespace:

kubernetes:
  master:
    ...
    namespace:
      kube-system:
        enabled: True
      namespace2:
        enabled: True
      namespace3:
        enabled: False
    ...

Kubernetes labels

Assign labels to a host:

kubernetes:
  pool:
    ...
    host:
      label:
        key01:
          value: value01
          enable: True
        key02:
          value: value02
          enable: False
      name: ${linux:system:name}
    ...

Pull images from private registries

kubernetes:
  master:
    ...
    registry:
      secret:
        registry01:
          enabled: True
          key: (get from `cat /root/.docker/config.json | base64`)
          namespace: default
    ...
  control:
    ...
    service:
      service01:
        ...
        image_pull_secretes: registry01
        ...
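
On the Kubernetes side this corresponds to the standard imagePullSecrets pod field. A sketch of the fragment the generated Deployment is expected to carry (the exact rendering depends on the formula's templates; the secret name matches the registry01 key above):

spec:
  template:
    spec:
      imagePullSecrets:
      - name: registry01   # secret generated from the registry pillar above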

Kubernetes Service Definitions in pillars

The following samples show how to generate Kubernetes manifests from pillar data, providing a single tool for complete infrastructure management.

Deployment manifest

salt:
  control:
    enabled: True
    hostNetwork: True
    service:
      memcached:
        privileged: True
        service: memcached
        role: server
        type: LoadBalancer
        replicas: 3
        kind: Deployment
        apiVersion: extensions/v1beta1
        ports:
        - port: 8774
          name: nova-api
        - port: 8775
          name: nova-metadata
        volume:
          volume_name:
            type: hostPath
            mount: /certs
            path: /etc/certs
        container:
          memcached:
            image: memcached
            tag: 2
            ports:
            - port: 8774
              name: nova-api
            - port: 8775
              name: nova-metadata
            variables:
            - name: HTTP_TLS_CERTIFICATE
              value: /certs/domain.crt
            - name: HTTP_TLS_KEY
              value: /certs/domain.key
            volumes:
            - name: /etc/certs
              type: hostPath
              mount: /certs
              path: /etc/certs

PetSet manifest

service:
  memcached:
    apiVersion: apps/v1alpha1
    kind: PetSet
    service_name: 'memcached'
    container:
      memcached:
    ...

Configmap

You can create ConfigMaps using the support layer between formulas. The mechanism is simple: for example, the nova formula contains a file meta/config.yml which defines the configuration files used by that service and its roles.

The Kubernetes formula can generate these files from a custom pillar and grains structure. This way you can run Docker images built in any fashion while still reusing your configuration management.
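
For orientation, a hypothetical sketch of the shape such a meta/config.yml may take; every key and path below is an illustrative assumption rather than the authoritative schema, so consult the actual nova formula:

config:
  nova.conf:                                        # config file the service consumes (illustrative)
    source: salt://nova/files/nova-controller.conf  # assumed template location
    template: jinja
    service: nova.controller                        # role the file belongs to (assumption)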

Example pillar:

kubernetes:
  control:
    config_type: default|kubernetes  # 'kubernetes' renders ConfigMap YAML; 'default' renders plain config files
    configmap:
      nova-control:
        grains:
          # Alternate grains as OS running in container may differ from
          # salt minion OS. Needed only if grains matters for config
          # generation.
          os_family: Debian
        pillar:
          # Generic pillar for nova controller
          nova:
            controller:
              enabled: true
              version: liberty
              ...

To determine which services support config generation, ensure the pillar contains a structure like this:

nova:
  _support:
    config:
      enabled: true

initContainers

Example pillar:

kubernetes:
  control:
    service:
      memcached:
        init_containers:
        - name: test-mysql
          image: busybox
          command:
          - sleep
          - 3600
          volumes:
          - name: config
            mount: /test
        - name: test-memcached
          image: busybox
          command:
          - sleep
          - 3600
          volumes:
          - name: config
            mount: /test

Affinity

podAffinity

Example pillar:

kubernetes:
  control:
    service:
      memcached:
        affinity:
          pod_affinity:
            name: podAffinity
            expression:
              label_selector:
                name: labelSelector
                selectors:
                - key: app
                  value: memcached
            topology_key: kubernetes.io/hostname

podAntiAffinity

Example pillar:

kubernetes:
  control:
    service:
      memcached:
        affinity:
          anti_affinity:
            name: podAntiAffinity
            expression:
              label_selector:
                name: labelSelector
                selectors:
                - key: app
                  value: opencontrail-control
            topology_key: kubernetes.io/hostname

nodeAffinity

Example pillar:

kubernetes:
  control:
    service:
      memcached:
        affinity:
          node_affinity:
            name: nodeAffinity
            expression:
              match_expressions:
                name: matchExpressions
                selectors:
                - key: key
                  operator: In
                  values:
                  - value1
                  - value2

Volumes

hostPath

service:
  memcached:
    container:
      memcached:
        volumes:
          - name: volume1
            mountPath: /volume
            readOnly: True
    ...
    volume:
      volume1:
        name: /etc/certs
        type: hostPath
        path: /etc/certs

emptyDir

service:
  memcached:
    container:
      memcached:
        volumes:
          - name: volume1
            mountPath: /volume
            readOnly: True
    ...
    volume:
      volume1:
        name: /etc/certs
        type: emptyDir

configMap

service:
  memcached:
    container:
      memcached:
        volumes:
          - name: volume1
            mountPath: /volume
            readOnly: True
    ...
    volume:
      volume1:
        type: config_map
        item:
          configMap1:
            key: config.conf
            path: config.conf
          configMap2:
            key: policy.json
            path: policy.json

To mount a single configuration file instead of the whole directory:

service:
  memcached:
    container:
      memcached:
        volumes:
          - name: volume1
            mountPath: /volume/config.conf
            sub_path: config.conf

Generating Jobs

Example pillar:

kubernetes:
  control:
    job:
      sleep:
        job: sleep
        restart_policy: Never
        container:
          sleep:
            image: busybox
            tag: latest
            command:
            - sleep
            - "3600"

Volumes and variables can be used in the same way as in Deployment generation; a combined sketch follows.
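
A sketch combining both, composed from the Deployment samples above (the variable name and the placement of the volume block beside container are assumptions carried over from those samples):

kubernetes:
  control:
    job:
      sleep:
        job: sleep
        restart_policy: Never
        container:
          sleep:
            image: busybox
            tag: latest
            command:
            - sleep
            - "3600"
            variables:             # same list form as in Deployment containers
            - name: SLEEP_LABEL    # illustrative variable
              value: demo
            volumes:
            - name: config
              mountPath: /config
              readOnly: True
        volume:                    # assumed to sit beside container, as in Deployments
          config:
            type: hostPath
            path: /etc/config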

Custom params:

kubernetes:
  control:
    job:
      host_network: True
      host_pid: True
      container:
        sleep:
          privileged: True
      node_selector:
        key: node
        value: one
      image_pull_secretes: password

Documentation and Bugs

To learn how to deploy OpenStack Salt, consult the documentation available online at:

https://wiki.openstack.org/wiki/OpenStackSalt

In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. In all other cases use the master OpenStack bug tracker, available at:

http://bugs.launchpad.net/openstack-salt

Developers wishing to work on the OpenStack Salt project should always base their work on the latest formula code, available from the master Git repository at:

https://git.openstack.org/cgit/openstack/salt-formula-kubernetes

Developers should also join the discussion at the project IRC meetings, detailed at:

https://wiki.openstack.org/wiki/Meetings/openstack-salt

Copyright 2016 tcp cloud a.s.
Copyright 2016 OpenStack Foundation