initial commit of ceph helm chart

Alan Meadows 2016-11-17 12:40:28 -08:00
parent d6d3215ba3
commit d4292d0c8a
20 changed files with 1027 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,5 @@
*.tgz
**/*.tgz
.idea/
**/_partials.tpl

Makefile Normal file

@@ -0,0 +1,24 @@
.PHONY: ceph all clean base64

B64_DIRS := ceph/secrets
B64_EXCLUDE := $(wildcard ceph/secrets/*.b64)

all: base64 ceph

ceph: build-ceph

clean:
	$(shell find . -name '*.b64' -exec rm {} \;)
	$(shell find . -name '_partials.tpl' -exec rm {} \;)
	echo "Removed all .b64 and _partials.tpl"

base64:
	# rebuild all base64 values
	$(eval B64_OBJS = $(foreach dir,$(B64_DIRS),$(shell find $(dir)/* -type f $(foreach e,$(B64_EXCLUDE), -not -path "$(e)"))))
	$(foreach var,$(B64_OBJS),cat $(var) | base64 | perl -pe 'chomp if eof' > $(var).b64;)

build-%:
	if [ -f $*/Makefile ]; then make -C $*; fi
	if [ -f $*/requirements.yaml ]; then helm dep up $*; fi
	helm package $*
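
# Example flow (illustrative): with files generated under ceph/secrets/,
# `make base64` writes ceph/secrets/*.b64, and `make ceph` rebuilds
# ceph/templates/_partials.tpl via ceph/Makefile and then runs `helm package ceph`.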

README.md Normal file

@@ -0,0 +1,15 @@
# aic-helm
This is a fully self-contained OpenStack deployment on Kubernetes. This collection is a work in progress, so components will continue to be added over time.
The following charts form the foundation to help establish an OpenStack control plane, including shared storage and bare metal provisioning:
- [ceph](ceph/README.md)
- maas (in progress)
- aic-kube (in progress)
These charts, unlike the OpenStack charts below, are designed to run directly. They form the foundational layers necessary to bootstrap an environment and may run in separate namespaces; the intention is to layer them. Please see the direct links above, as they become available, for README instructions crafted for each chart. Walk through each of them, as some require build steps that should be done before running `make`.
The OpenStack charts under development will focus on container images leveraging the entrypoint model. This differs somewhat from the existing [openstack-helm](https://github.com/sapcc/openstack-helm) repository currently maintained by SAP, although we have shamelessly "borrowed" many concepts from it. For these charts, we will follow the same region approach as openstack-helm, namely that they will not install and run directly. Instead, they are included as requirements in an "openstack" chart, which is effectively an abstract region and is intended to be required by a concrete region chart. We will provide an example region chart as well as sample region-specific settings and certificate generation instructions.
Similar to openstack-helm, much of the `make` complexity in this repository stems from the fact that helm does not support directory-based config maps or secrets. This will continue to be the case until [kubernetes/helm#950](https://github.com/kubernetes/helm/issues/950) receives more attention.
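As a rough sketch of the workaround (paths below refer to the ceph chart in this tree), each file under `ceph/secrets/` is base64-encoded by the top-level Makefile and then wrapped into a named template partial by `ceph/Makefile`, which the chart manifests pull in with `include`:
```
# encode a generated secret so it can be embedded in a template
cat ceph/secrets/ceph.conf | base64 | perl -pe 'chomp if eof' > ceph/secrets/ceph.conf.b64

# wrap the encoded file into a named partial in ceph/templates/_partials.tpl;
# templates then reference it with: {{ include "secrets/ceph.conf.b64" . }}
printf '{{ define "secrets/ceph.conf.b64" }}' >> ceph/templates/_partials.tpl
cat ceph/secrets/ceph.conf.b64 >> ceph/templates/_partials.tpl
printf '{{ end }}\n' >> ceph/templates/_partials.tpl
```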

ceph/.helmignore Normal file

@@ -0,0 +1,26 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
secrets/
patches/
*.py
Makefile

ceph/Chart.yaml Executable file

@@ -0,0 +1,3 @@
description: A Helm chart to deploy Ceph on Kubernetes
name: ceph
version: 0.1.0

ceph/Makefile Normal file

@@ -0,0 +1,7 @@
EXCLUDE := templates/* charts/* Chart.yaml requirement* values.yaml Makefile utils/*
FILES := $(shell find * -type f $(foreach e,$(EXCLUDE), -not -path "$(e)") )

templates/_partials.tpl: Makefile $(FILES)
	echo Generating $(CURDIR)/$@
	rm -f $@
	for i in $(FILES); do printf '{{ define "'$$i'" }}' >> $@; cat $$i >> $@; printf "{{ end }}\n" >> $@; done
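
# The generated templates/_partials.tpl contains one named define per file,
# which chart templates consume via `include`, e.g. (illustrative excerpt,
# the real body is the base64 payload):
#   {{ define "secrets/ceph.conf.b64" }}W2dsb2JhbF0uLi4={{ end }}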

ceph/README.md Normal file

@@ -0,0 +1,166 @@
# aic-helm/ceph
This chart installs a working Ceph deployment. It is based on the ceph-docker work and follows closely the setup [examples](https://github.com/ceph/ceph-docker/tree/master/examples/kubernetes) for kubernetes.
It attempts to simplify that process by wrapping much of the setup into a helm chart. A few manual steps are still necessary until they can be refined:
### SkyDNS Resolution
The Ceph MONs are what clients talk to when mounting Ceph storage. Because Ceph MON IPs can change, we need a Kubernetes service to front them; otherwise, your clients will eventually stop working as MONs are rescheduled.
To get skyDNS resolution working, the resolv.conf on your nodes should look something like this:
```
domain <EXISTING_DOMAIN>
search <EXISTING_DOMAIN>
search svc.cluster.local #Your kubernetes cluster ip domain
nameserver 10.0.0.10 #The cluster IP of skyDNS
nameserver <EXISTING_RESOLVER_IP>
```
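To check resolution from a node, you can query the skyDNS cluster IP directly (the service name, namespace, and the 10.0.0.10 address below come from the examples in this README; substitute your own):
```
nslookup ceph-mon.ceph.svc.cluster.local 10.0.0.10
```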
### Ceph and RBD utilities installed on the nodes
The Kubernetes kubelet shells out to system utilities to mount Ceph volumes. This means that every system must have these utilities installed. This requirement extends to the control plane, since there may be interactions between kube-controller-manager and the Ceph cluster.
For Debian-based distros:
```
apt-get install ceph-fs-common ceph-common
```
For Redhat-based distros:
```
yum install ceph
```
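You can verify the utilities are present on each node with something like:
```
which ceph rbd
ceph --version
```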
### Linux Kernel version 4.2.0 or newer
You will need a kernel at version 4.2.0 or newer to use this; kernel panics have been observed on older versions. Your kernel should also have RBD support.
This has been tested on:
- Ubuntu 15.10
This will not work on:
- Debian 8.5
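A quick way to check both on a node (an illustrative spot check, not part of the chart):
```
uname -r                          # expect 4.2.0 or newer
modprobe rbd && lsmod | grep rbd  # verifies RBD kernel support (run as root)
```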
### Override the default network settings
By default, `10.244.0.0/16` is used for the `cluster_network` and `public_network` in ceph.conf. To change these defaults, set the following environment variables according to your network requirements. These IPs should be set according to the range of your Pod IPs in your kubernetes cluster:
```
export osd_cluster_network=192.168.0.0/16
export osd_public_network=192.168.0.0/16
```
For a kubeadm installed weave cluster, you will likely want to run:
```
export osd_cluster_network=10.32.0.0/12
export osd_public_network=10.32.0.0/12
```
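These settings take effect when ceph.conf is generated by the secret generator described in the Quickstart below. If you want to confirm what will land in ceph.conf before deploying, the same keys can be passed to the generator as explicit overrides (illustrative; run from `ceph/utils/generator` with sigil installed):
```
./generate_secrets.sh ceph-conf-raw `./generate_secrets.sh fsid` "osd_cluster_network=10.32.0.0/12" "osd_public_network=10.32.0.0/12" | grep network
```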
### Quickstart
You will need to generate ceph keys and configuration. There is a simple-to-use utility that can do this quickly:
```
cd ceph/utils/generator
./generate_secrets.sh all `./generate_secrets.sh fsid`
cd ../../..
```
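The generator writes its output into `ceph/secrets/` (relative to the repository root); you should end up with something like:
```
ls ceph/secrets
ceph-client-key  ceph.client.admin.keyring  ceph.conf  ceph.mds.keyring  ceph.mon.keyring  ceph.osd.keyring  ceph.rgw.keyring
```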
At this point, you are ready to generate base64-encoded files based on the secrets generated above. This is done automatically if you run `make`, which rebuilds all charts.
```
make
```
You can also trigger it specifically:
```
make base64
make ceph
```
Finally, you can now deploy your ceph chart:
```
helm --debug install local/ceph --namespace=ceph
```
You should see the release reported as DEPLOYED:
```
# helm ls
NAME REVISION UPDATED STATUS CHART
saucy-elk 1 Thu Nov 17 13:43:27 2016 DEPLOYED ceph-0.1.0
```
as well as all kubernetes resources deployed into the ceph namespace:
```
# kubectl get all --namespace=ceph
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/ceph-mon None <none> 6789/TCP 1h
svc/ceph-rgw 100.76.18.187 <pending> 80/TCP 1h
NAME READY STATUS RESTARTS AGE
po/ceph-mds-840702866-0n24u 1/1 Running 3 1h
po/ceph-mon-1870970076-7h5zw 1/1 Running 2 1h
po/ceph-mon-1870970076-d4uu2 1/1 Running 3 1h
po/ceph-mon-1870970076-s6d2p 1/1 Running 1 1h
po/ceph-mon-check-4116985937-ggv4m 1/1 Running 0 1h
po/ceph-osd-2m2mf 1/1 Running 2 1h
po/ceph-rgw-2085838073-02154 0/1 Pending 0 1h
po/ceph-rgw-2085838073-0d6z7 0/1 CrashLoopBackOff 21 1h
po/ceph-rgw-2085838073-3trec 0/1 Pending 0 1h
```
Note that the ceph-rgw pods are crashing because of an issue processing the mon_host name 'ceph-mon' in ceph.conf. This is an upstream issue that still needs to be worked, but it does not block testing ceph rbd or ceph filesystem functionality.
Finally, you can now test a ceph rbd volume:
```
export PODNAME=`kubectl get pods --selector="app=ceph,daemon=mon" --output=template --template="{{with index .items 0}}{{.metadata.name}}{{end}}" --namespace=ceph`
kubectl exec -it $PODNAME --namespace=ceph -- rbd create ceph-rbd-test --size 20G
kubectl exec -it $PODNAME --namespace=ceph -- rbd info ceph-rbd-test
```
If that works, you can create a container and attach it to that volume:
```
cd ceph/utils/test
kubectl create -f ceph-rbd-test.yaml --namespace=ceph
kubectl exec -it --namespace=ceph ceph-rbd-test -- df -h
```
### Cleanup
Always make sure to delete any test instances that have ceph volumes mounted before you delete your ceph cluster. Otherwise, kubelet may get stuck trying to unmount volumes which can only be recovered with a reboot. If you ran the tests above, this can be done with:
```
kubectl delete pod ceph-rbd-test --namespace=ceph
```
The easiest way to delete your environment is to delete the helm install:
```
# helm ls
NAME REVISION UPDATED STATUS CHART
saucy-elk 1 Thu Nov 17 13:43:27 2016 DEPLOYED ceph-0.1.0
# helm delete saucy-elk
```
And finally, because helm does not appear to clean up all artifacts, you will want to delete the ceph namespace to remove any secrets helm installed:
```
kubectl delete namespace ceph
```

@@ -0,0 +1,87 @@
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: ceph-osd
  labels:
    app: ceph
    daemon: osd
spec:
  template:
    metadata:
      labels:
        app: ceph
        daemon: osd
    spec:
      nodeSelector:
        node-type: storage
      volumes:
        - name: devices
          hostPath:
            path: /dev
        - name: ceph
          emptyDir: {}
          # hostPath:
          #   path: /opt/ceph
        - name: ceph-conf
          secret:
            secretName: ceph-conf-combined
        - name: ceph-bootstrap-osd-keyring
          secret:
            secretName: ceph-bootstrap-osd-keyring
        - name: ceph-bootstrap-mds-keyring
          secret:
            secretName: ceph-bootstrap-mds-keyring
        - name: ceph-bootstrap-rgw-keyring
          secret:
            secretName: ceph-bootstrap-rgw-keyring
        - name: osd-directory
          emptyDir: {}
          # hostPath:
          #   path: /home/core/data/ceph/osd
      containers:
        - name: osd-pod
          image: {{ .Values.image_ceph_daemon }}
          imagePullPolicy: Always
          volumeMounts:
            - name: devices
              mountPath: /dev
            - name: ceph
              mountPath: /var/lib/ceph
            - name: ceph-conf
              mountPath: /etc/ceph
            - name: ceph-bootstrap-osd-keyring
              mountPath: /var/lib/ceph/bootstrap-osd
            - name: ceph-bootstrap-mds-keyring
              mountPath: /var/lib/ceph/bootstrap-mds
            - name: ceph-bootstrap-rgw-keyring
              mountPath: /var/lib/ceph/bootstrap-rgw
            - name: osd-directory
              mountPath: /var/lib/ceph/osd
          securityContext:
            privileged: true
          env:
            - name: CEPH_DAEMON
              value: osd_directory
            - name: KV_TYPE
              value: k8s
            - name: CLUSTER
              value: ceph
            - name: CEPH_GET_ADMIN_KEY
              value: "1"
          livenessProbe:
            tcpSocket:
              port: 6800
            initialDelaySeconds: 60
            timeoutSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 6800
            timeoutSeconds: 5
          resources:
            requests:
              memory: "512Mi"
              cpu: "1000m"
            limits:
              memory: "1024Mi"
              cpu: "2000m"

@@ -0,0 +1,306 @@
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: ceph
    daemon: mds
  name: ceph-mds
spec:
  replicas: 1
  template:
    metadata:
      name: ceph-mds
      labels:
        app: ceph
        daemon: mds
    spec:
      nodeSelector:
        node-type: storage
      serviceAccount: default
      volumes:
        - name: ceph-conf
          secret:
            secretName: ceph-conf-combined
        - name: ceph-bootstrap-osd-keyring
          secret:
            secretName: ceph-bootstrap-osd-keyring
        - name: ceph-bootstrap-mds-keyring
          secret:
            secretName: ceph-bootstrap-mds-keyring
        - name: ceph-bootstrap-rgw-keyring
          secret:
            secretName: ceph-bootstrap-rgw-keyring
      containers:
        - name: ceph-mon
          image: {{ .Values.image_ceph_daemon }}
          ports:
            - containerPort: 6800
          env:
            - name: CEPH_DAEMON
              value: MDS
            - name: CEPHFS_CREATE
              value: "1"
            - name: KV_TYPE
              value: k8s
            - name: CLUSTER
              value: ceph
          volumeMounts:
            - name: ceph-conf
              mountPath: /etc/ceph
            - name: ceph-bootstrap-osd-keyring
              mountPath: /var/lib/ceph/bootstrap-osd
            - name: ceph-bootstrap-mds-keyring
              mountPath: /var/lib/ceph/bootstrap-mds
            - name: ceph-bootstrap-rgw-keyring
              mountPath: /var/lib/ceph/bootstrap-rgw
          livenessProbe:
            tcpSocket:
              port: 6800
            initialDelaySeconds: 60
            timeoutSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 6800
            timeoutSeconds: 5
          resources:
            requests:
              memory: "10Mi"
              cpu: "250m"
            limits:
              memory: "50Mi"
              cpu: "500m"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: ceph
    daemon: moncheck
  name: ceph-mon-check
spec:
  replicas: 1
  template:
    metadata:
      name: ceph-mon
      labels:
        app: ceph
        daemon: moncheck
    spec:
      serviceAccount: default
      volumes:
        - name: ceph-conf
          secret:
            secretName: ceph-conf-combined
        - name: ceph-bootstrap-osd-keyring
          secret:
            secretName: ceph-bootstrap-osd-keyring
        - name: ceph-bootstrap-mds-keyring
          secret:
            secretName: ceph-bootstrap-mds-keyring
        - name: ceph-bootstrap-rgw-keyring
          secret:
            secretName: ceph-bootstrap-rgw-keyring
      containers:
        - name: ceph-mon
          image: {{ .Values.image_ceph_daemon }}
          imagePullPolicy: Always
          ports:
            - containerPort: 6789
          env:
            - name: CEPH_DAEMON
              value: MON_HEALTH
            - name: KV_TYPE
              value: k8s
            - name: MON_IP_AUTO_DETECT
              value: "1"
            - name: CLUSTER
              value: ceph
          volumeMounts:
            - name: ceph-conf
              mountPath: /etc/ceph
            - name: ceph-bootstrap-osd-keyring
              mountPath: /var/lib/ceph/bootstrap-osd
            - name: ceph-bootstrap-mds-keyring
              mountPath: /var/lib/ceph/bootstrap-mds
            - name: ceph-bootstrap-rgw-keyring
              mountPath: /var/lib/ceph/bootstrap-rgw
          resources:
            requests:
              memory: "5Mi"
              cpu: "250m"
            limits:
              memory: "50Mi"
              cpu: "500m"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: ceph
    daemon: mon
  name: ceph-mon
spec:
  replicas: 3
  template:
    metadata:
      name: ceph-mon
      labels:
        app: ceph
        daemon: mon
      annotations:
        # alanmeadows: this soft requirement allows single
        # host deployments to spawn several ceph-mon
        # containers
        scheduler.alpha.kubernetes.io/affinity: >
          {
            "podAntiAffinity": {
              "preferredDuringSchedulingIgnoredDuringExecution": [{
                "labelSelector": {
                  "matchExpressions": [{
                    "key": "daemon",
                    "operator": "In",
                    "values": ["mon"]
                  }]
                },
                "topologyKey": "kubernetes.io/hostname",
                "weight": 10
              }]
            }
          }
    spec:
      serviceAccount: default
      volumes:
        - name: ceph-conf
          secret:
            secretName: ceph-conf-combined
        - name: ceph-bootstrap-osd-keyring
          secret:
            secretName: ceph-bootstrap-osd-keyring
        - name: ceph-bootstrap-mds-keyring
          secret:
            secretName: ceph-bootstrap-mds-keyring
        - name: ceph-bootstrap-rgw-keyring
          secret:
            secretName: ceph-bootstrap-rgw-keyring
      containers:
        - name: ceph-mon
          image: {{ .Values.image_ceph_daemon }}
          # imagePullPolicy: Always
          lifecycle:
            preStop:
              exec:
                # remove the mon on Pod stop.
                command:
                  - "/remove-mon.sh"
          ports:
            - containerPort: 6789
          env:
            - name: CEPH_DAEMON
              value: MON
            - name: KV_TYPE
              value: k8s
            - name: NETWORK_AUTO_DETECT
              value: "1"
            - name: CLUSTER
              value: ceph
          volumeMounts:
            - name: ceph-conf
              mountPath: /etc/ceph
            - name: ceph-bootstrap-osd-keyring
              mountPath: /var/lib/ceph/bootstrap-osd
            - name: ceph-bootstrap-mds-keyring
              mountPath: /var/lib/ceph/bootstrap-mds
            - name: ceph-bootstrap-rgw-keyring
              mountPath: /var/lib/ceph/bootstrap-rgw
          livenessProbe:
            tcpSocket:
              port: 6789
            initialDelaySeconds: 60
            timeoutSeconds: 5
          readinessProbe:
            tcpSocket:
              port: 6789
            timeoutSeconds: 5
          resources:
            requests:
              memory: "50Mi"
              cpu: "1000m"
            limits:
              memory: "100Mi"
              cpu: "2000m"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: ceph
    daemon: rgw
  name: ceph-rgw
spec:
  replicas: 3
  template:
    metadata:
      name: ceph-rgw
      labels:
        app: ceph
        daemon: rgw
    spec:
      hostNetwork: true
      nodeSelector:
        node-type: storage
      serviceAccount: default
      volumes:
        - name: ceph-conf
          secret:
            secretName: ceph-conf-combined
        - name: ceph-bootstrap-osd-keyring
          secret:
            secretName: ceph-bootstrap-osd-keyring
        - name: ceph-bootstrap-mds-keyring
          secret:
            secretName: ceph-bootstrap-mds-keyring
        - name: ceph-bootstrap-rgw-keyring
          secret:
            secretName: ceph-bootstrap-rgw-keyring
      containers:
        - name: ceph-rgw
          image: {{ .Values.image_ceph_daemon }}
          ports:
            - containerPort: {{ .Values.ceph_rgw_target_port }}
          env:
            - name: RGW_CIVETWEB_PORT
              value: "{{ .Values.ceph_rgw_target_port }}"
            - name: CEPH_DAEMON
              value: RGW
            - name: KV_TYPE
              value: k8s
            - name: CLUSTER
              value: ceph
          volumeMounts:
            - name: ceph-conf
              mountPath: /etc/ceph
            - name: ceph-bootstrap-osd-keyring
              mountPath: /var/lib/ceph/bootstrap-osd
            - name: ceph-bootstrap-mds-keyring
              mountPath: /var/lib/ceph/bootstrap-mds
            - name: ceph-bootstrap-rgw-keyring
              mountPath: /var/lib/ceph/bootstrap-rgw
          livenessProbe:
            httpGet:
              path: /
              port: {{ .Values.ceph_rgw_target_port }}
            initialDelaySeconds: 120
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /
              port: {{ .Values.ceph_rgw_target_port }}
            timeoutSeconds: 5
          resources:
            requests:
              memory: "500Mi"
              cpu: ".5"
            limits:
              memory: "500Mi"
              cpu: ".5"

@@ -0,0 +1,73 @@
---
apiVersion: v1
kind: Secret
metadata:
  namespace: {{.Release.Namespace}}
  name: "ceph-conf-combined"
  # This declares the resource to be a hook. By convention, we also name the
  # file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
  annotations:
    "helm.sh/hook": pre-install
type: Opaque
data:
  ceph.conf: |
{{ include "secrets/ceph.conf.b64" . | indent 4 }}
  ceph.client.admin.keyring: |
{{ include "secrets/ceph.client.admin.keyring.b64" . | indent 4 }}
  ceph.mon.keyring: |
{{ include "secrets/ceph.mon.keyring.b64" . | indent 4 }}
---
apiVersion: v1
kind: Secret
metadata:
  namespace: {{.Release.Namespace}}
  name: "ceph-bootstrap-rgw-keyring"
  # This declares the resource to be a hook. By convention, we also name the
  # file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
  annotations:
    "helm.sh/hook": pre-install
type: Opaque
data:
  ceph.keyring: |
{{ include "secrets/ceph.rgw.keyring.b64" . | indent 4 }}
---
apiVersion: v1
kind: Secret
metadata:
  namespace: {{.Release.Namespace}}
  name: "ceph-bootstrap-mds-keyring"
  # This declares the resource to be a hook. By convention, we also name the
  # file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
  annotations:
    "helm.sh/hook": pre-install
type: Opaque
data:
  ceph.keyring: |
{{ include "secrets/ceph.mds.keyring.b64" . | indent 4 }}
---
apiVersion: v1
kind: Secret
metadata:
  namespace: {{.Release.Namespace}}
  name: "ceph-bootstrap-osd-keyring"
  # This declares the resource to be a hook. By convention, we also name the
  # file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
  annotations:
    "helm.sh/hook": pre-install
type: Opaque
data:
  ceph.keyring: |
{{ include "secrets/ceph.osd.keyring.b64" . | indent 4 }}
---
apiVersion: v1
kind: Secret
metadata:
  namespace: {{.Release.Namespace}}
  name: "ceph-client-key"
  # This declares the resource to be a hook. By convention, we also name the
  # file "pre-install-XXX.yaml", but Helm itself doesn't care about file names.
  annotations:
    "helm.sh/hook": pre-install
type: Opaque
data:
  ceph-client-key: {{ include "secrets/ceph-client-key.b64" . | quote }}

@@ -0,0 +1,34 @@
---
kind: Service
apiVersion: v1
metadata:
  name: ceph-mon
  labels:
    app: ceph
    daemon: mon
spec:
  ports:
    - port: {{ .Values.ceph_mon_port }}
      protocol: TCP
      targetPort: {{ .Values.ceph_mon_port }}
  selector:
    app: ceph
    daemon: mon
  clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
  name: ceph-rgw
  labels:
    app: ceph
    daemon: rgw
spec:
  ports:
    - port: {{ .Values.ceph_rgw_ingress_port }}
      protocol: TCP
      targetPort: {{ .Values.ceph_rgw_target_port }}
  selector:
    app: ceph
    daemon: rgw
  type: LoadBalancer

@@ -0,0 +1,65 @@
# Ceph Kubernetes Secret Generation
This script will generate ceph keyrings and configs as Kubernetes secrets.
Sigil is required for template handling and must be installed in the system `PATH`. Instructions can be found here: <https://github.com/gliderlabs/sigil>
The following functions are provided:
## Generate raw FSID (can be used for other functions)
```bash
./generate_secrets.sh fsid
```
## Generate raw ceph.conf (For verification)
```bash
./generate_secrets.sh ceph-conf-raw <fsid> "overridekey=value"
```
Take a look at `ceph/ceph.conf.tmpl` for the default values
## Generate encoded ceph.conf secret
```bash
./generate_secrets.sh ceph-conf <fsid> "overridekey=value"
```
## Generate encoded admin keyring secret
```bash
./generate_secrets.sh admin-keyring
```
## Generate encoded mon keyring secret
```bash
./generate_secrets.sh mon-keyring
```
## Generate a combined secret
Contains ceph.conf, the admin keyring, and the mon keyring. Useful for generating the `/etc/ceph` directory.
```bash
./generate_secrets.sh combined-conf
```
## Generate encoded bootstrap keyring secret
```bash
./generate_secrets.sh bootstrap-keyring <osd|mds|rgw>
```
# Kubernetes workflow
```bash
./generator/generate_secrets.sh all `./generate_secrets.sh fsid`
kubectl create secret generic ceph-conf-combined --from-file=ceph.conf --from-file=ceph.client.admin.keyring --from-file=ceph.mon.keyring --namespace=ceph
kubectl create secret generic ceph-bootstrap-rgw-keyring --from-file=ceph.keyring=ceph.rgw.keyring --namespace=ceph
kubectl create secret generic ceph-bootstrap-mds-keyring --from-file=ceph.keyring=ceph.mds.keyring --namespace=ceph
kubectl create secret generic ceph-bootstrap-osd-keyring --from-file=ceph.keyring=ceph.osd.keyring --namespace=ceph
kubectl create secret generic ceph-client-key --from-file=ceph-client-key --namespace=ceph
```
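To confirm the secrets above were created (a simple check, not part of the workflow itself):
```bash
# expect ceph-conf-combined, ceph-bootstrap-rgw-keyring, ceph-bootstrap-mds-keyring,
# ceph-bootstrap-osd-keyring and ceph-client-key in the listing
kubectl get secrets --namespace=ceph
```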

@@ -0,0 +1,15 @@
#!/bin/python
import os
import struct
import time
import base64

key = os.urandom(16)
header = struct.pack(
    '<hiih',
    1,                 # le16 type: CEPH_CRYPTO_AES
    int(time.time()),  # le32 created: seconds
    0,                 # le32 created: nanoseconds
    len(key),          # le16: len(key)
)
print(base64.b64encode(header + key).decode('ascii'))
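
# Usage note (illustrative): generate_secrets.sh calls this script as
# `python ceph-key.py` and substitutes the printed base64 value into the
# `key = ...` field of the keyring templates.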

@@ -0,0 +1,82 @@
#!/bin/bash

gen-fsid() {
  echo "$(uuidgen)"
}

gen-ceph-conf-raw() {
  fsid=${1:?}
  shift
  conf=$(sigil -p -f templates/ceph/ceph.conf.tmpl "fsid=${fsid}" $@)
  echo "${conf}"
}

gen-ceph-conf() {
  fsid=${1:?}
  shift
  conf=$(sigil -p -f templates/ceph/ceph.conf.tmpl "fsid=${fsid}" $@)
  echo "${conf}"
}

gen-admin-keyring() {
  key=$(python ceph-key.py)
  keyring=$(sigil -f templates/ceph/admin.keyring.tmpl "key=${key}")
  echo "${keyring}"
}

gen-mon-keyring() {
  key=$(python ceph-key.py)
  keyring=$(sigil -f templates/ceph/mon.keyring.tmpl "key=${key}")
  echo "${keyring}"
}

gen-combined-conf() {
  fsid=${1:?}
  shift
  conf=$(sigil -p -f templates/ceph/ceph.conf.tmpl "fsid=${fsid}" $@)
  echo "${conf}" > ../../secrets/ceph.conf
  key=$(python ceph-key.py)
  keyring=$(sigil -f templates/ceph/admin.keyring.tmpl "key=${key}")
  echo "${key}" > ../../secrets/ceph-client-key
  echo "${keyring}" > ../../secrets/ceph.client.admin.keyring
  key=$(python ceph-key.py)
  keyring=$(sigil -f templates/ceph/mon.keyring.tmpl "key=${key}")
  echo "${keyring}" > ../../secrets/ceph.mon.keyring
}

gen-bootstrap-keyring() {
  service="${1:-osd}"
  key=$(python ceph-key.py)
  bootstrap=$(sigil -f templates/ceph/bootstrap.keyring.tmpl "key=${key}" "service=${service}")
  echo "${bootstrap}"
}

gen-all-bootstrap-keyrings() {
  gen-bootstrap-keyring osd > ../../secrets/ceph.osd.keyring
  gen-bootstrap-keyring mds > ../../secrets/ceph.mds.keyring
  gen-bootstrap-keyring rgw > ../../secrets/ceph.rgw.keyring
}

gen-all() {
  gen-combined-conf $@
  gen-all-bootstrap-keyrings
}

main() {
  set -eo pipefail
  case "$1" in
    fsid)              shift; gen-fsid $@;;
    ceph-conf-raw)     shift; gen-ceph-conf-raw $@;;
    ceph-conf)         shift; gen-ceph-conf $@;;
    admin-keyring)     shift; gen-admin-keyring $@;;
    mon-keyring)       shift; gen-mon-keyring $@;;
    bootstrap-keyring) shift; gen-bootstrap-keyring $@;;
    combined-conf)     shift; gen-combined-conf $@;;
    all)               shift; gen-all $@;;
  esac
}

main "$@"

@@ -0,0 +1,6 @@
[client.admin]
key = {{ $key }}
auid = 0
caps mds = "allow"
caps mon = "allow *"
caps osd = "allow *"

@@ -0,0 +1,3 @@
[client.bootstrap-{{ $service }}]
key = {{ $key }}
caps mon = "allow profile bootstrap-{{ $service }}"

@@ -0,0 +1,71 @@
[global]
fsid = ${fsid:?}
cephx = ${auth_cephx:-"true"}
cephx_require_signatures = ${auth_cephx_require_signatures:-"false"}
cephx_cluster_require_signatures = ${auth_cephx_cluster_require_signatures:-"true"}
cephx_service_require_signatures = ${auth_cephx_service_require_signatures:-"false"}
# auth
max_open_files = ${global_max_open_files:-"131072"}
osd_pool_default_pg_num = ${global_osd_pool_default_pg_num:-"128"}
osd_pool_default_pgp_num = ${global_osd_pool_default_pgp_num:-"128"}
osd_pool_default_size = ${global_osd_pool_default_size:-"3"}
osd_pool_default_min_size = ${global_osd_pool_default_min_size:-"1"}
mon_osd_full_ratio = ${global_mon_osd_full_ratio:-".95"}
mon_osd_nearfull_ratio = ${global_mon_osd_nearfull_ratio:-".85"}
mon_host = ${global_mon_host:-'ceph-mon'}
[mon]
mon_osd_down_out_interval = ${mon_mon_osd_down_out_interval:-"600"}
mon_osd_min_down_reporters = ${mon_mon_osd_min_down_reporters:-"4"}
mon_clock_drift_allowed = ${mon_mon_clock_drift_allowed:-".15"}
mon_clock_drift_warn_backoff = ${mon_mon_clock_drift_warn_backoff:-"30"}
mon_osd_report_timeout = ${mon_mon_osd_report_timeout:-"300"}
[osd]
journal_size = ${osd_journal_size:-"100"}
cluster_network = ${osd_cluster_network:-'10.244.0.0/16'}
public_network = ${osd_public_network:-'10.244.0.0/16'}
osd_mkfs_type = ${osd_osd_mkfs_type:-"xfs"}
osd_mkfs_options_xfs = ${osd_osd_mkfs_options_xfs:-"-f -i size=2048"}
osd_mon_heartbeat_interval = ${osd_osd_mon_heartbeat_interval:-"30"}
osd_max_object_name_len = ${osd_max_object_name_len:-"256"}
#crush
osd_pool_default_crush_rule = ${osd_pool_default_crush_rule:-"0"}
osd_crush_update_on_start = ${osd_osd_crush_update_on_start:-"true"}
#backend
osd_objectstore = ${osd_osd_objectstore:-"filestore"}
#performance tuning
filestore_merge_threshold = ${osd_filestore_merge_threshold:-"40"}
filestore_split_multiple = ${osd_filestore_split_multiple:-"8"}
osd_op_threads = ${osd_osd_op_threads:-"8"}
filestore_op_threads = ${osd_filestore_op_threads:-"8"}
filestore_max_sync_interval = ${osd_filestore_max_sync_interval:-"5"}
osd_max_scrubs = ${osd_osd_max_scrubs:-"1"}
#recovery tuning
osd_recovery_max_active = ${osd_osd_recovery_max_active:-"5"}
osd_max_backfills = ${osd_osd_max_backfills:-"2"}
osd_recovery_op_priority = ${osd_osd_recovery_op_priority:-"2"}
osd_client_op_priority = ${osd_osd_client_op_priority:-"63"}
osd_recovery_max_chunk = ${osd_osd_recovery_max_chunk:-"1048576"}
osd_recovery_threads = ${osd_osd_recovery_threads:-"1"}
#ports
ms_bind_port_min = ${osd_ms_bind_port_min:-"6800"}
ms_bind_port_max = ${osd_ms_bind_port_max:-"7100"}
[client]
rbd_cache_enabled = ${client_rbd_cache_enabled:-"true"}
rbd_cache_writethrough_until_flush = ${client_rbd_cache_writethrough_until_flush:-"true"}
rbd_default_features = ${client_rbd_default_features:-"1"}
[mds]
mds_cache_size = ${mds_mds_cache_size:-"100000"}

@@ -0,0 +1,3 @@
[mon.]
key = {{ $key }}
caps mon = "allow *"

@@ -0,0 +1,26 @@
apiVersion: v1
kind: Pod
metadata:
  name: ceph-rbd-test
spec:
  containers:
    - name: cephrbd-rw
      image: busybox
      command:
        - sh
        - -c
        - while true; do sleep 1; done
      volumeMounts:
        - mountPath: "/mnt/cephrbd"
          name: cephrbd
  volumes:
    - name: cephrbd
      rbd:
        monitors:
          # This only works if skyDNS is resolvable from the kubernetes node.
          # Otherwise you must manually put in one or more mon pod IPs.
          - ceph-mon.ceph:6789
        user: admin
        image: ceph-rbd-test
        pool: rbd
        secretRef:
          name: ceph-client-key

ceph/values.yaml Normal file

@@ -0,0 +1,10 @@
# Default values for ceph.
# This is a YAML-formatted file.
# Declare name/value pairs to be passed into your templates.
# name: value
image_ceph_daemon: quay.io/attcomdev/ceph-daemon:latest
node_label: storage
ceph_mon_port: 6789
ceph_rgw_ingress_port: 80
ceph_rgw_target_port: 8088
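# Any value above can be overridden at install time, e.g. (illustrative):
#   helm --debug install local/ceph --namespace=ceph --set ceph_rgw_ingress_port=8080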