"configs" section added to "services" group.
services:
some_service:
service_def: ads
configs:
asd: dsa
If you are mapping some service to some another service,
configs will be inherited as well.
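A sketch of how this inheritance might look in practice (the service and config names below are hypothetical, and the inheritance direction is our reading of the message):

```yaml
# Hypothetical sketch: names and keys are illustrative, not from the source.
services:
  keystone-db:
    service_def: mariadb
    configs:
      max_connections: 1024      # defined on the mapped-to service
  keystone:
    service_def: keystone
    mapping:
      database: keystone-db      # keystone-db's configs are inherited via this mapping
```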
Change-Id: Id64c0bf816a639c0b3dee96e5a72fcf964f9f731
A new config section is introduced:

services:
  keystone-db:
    service_def: mariadb
  keystone:
    service_def: keystone
    mapping:
      database: keystone-db

Defined services can be used in the topology definition. In this example the keystone-db service will be created from the mariadb definition, and keystone will use it instead of mariadb.
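A sketch of how such services might then be referenced in a topology definition (node and role names here are hypothetical; the nodes/roles shape mirrors the topology examples elsewhere in this log):

```yaml
# Hypothetical topology fragment.
nodes:
  node1:
    - controller
roles:
  controller:
    - keystone-db   # instance created from the mariadb definition
    - keystone      # uses keystone-db instead of plain mariadb
```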
Change-Id: I274826648390b844d240b7ae545c40264f662452
To support multiple instances of the same service, we should be able to manage their dependencies separately.
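For instance, two independent database instances could be declared from the same definition (a sketch; the glance-db name is hypothetical):

```yaml
services:
  keystone-db:
    service_def: mariadb
  glance-db:
    service_def: mariadb   # a second, separately managed instance of mariadb
```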
Change-Id: I3d1951537e49f56ae01b69c0eeef853dcde3b8b9
Add implementations for some functions in AttrDict in order to avoid recourse to AttrDict._dict. In tests we use dict, but in fact we should use _yaml.AttrDict.
Change-Id: Ie99f3b05bd65f195f2f81191bff67cbacef1b816
Support for k8s secrets is introduced. To create a secret, add an additional 'secrets' section to the definition of the service:

secrets:
  name-for-reference:
    type: "Opaque"
    data:
      "file1": "some content"
      "file2": "another one"
    secret:
      secretName: name-in-k8s
      path: /where/to/mount

You can reference this secret from the container definition:

daemon:
  secrets:
    - name-for-reference

The referenced secret must be defined in the 'secrets' section.
Change-Id: Iaaede4ccb94c99d70f3ecad040d5ab6c41428c5e
Partial-Bug: #1651392
Partial-Bug: #1651394
* nodes on which Pods controlled by Jobs will be scheduled are now determined according to the topology definition
* changes in the order of nodes/roles in the topology no longer trigger an update
Change-Id: Ia41b50ff2b214791bec17577eb6e59fc94d0f2c2
A node definition can now contain a configs map, which adds new configs specifically for that node or overrides the current globals for it.
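The commit message does not show the schema; one plausible sketch of a node-level configs map (the nesting and keys below are assumptions) is:

```yaml
# Sketch only: 'configs' placement and keys are assumptions.
nodes:
  node1:
    configs:
      keystone:
        debug: true   # overrides the global value for this node only
```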
Change-Id: I4de6a0fad94d5f83ca486c952d80d1c87c880c0e
Related-bug: #1653077
Since k8s 1.5, configmap keys do not allow underscores and some other symbols, so we can't use file names as keys, like:

file_name.j2: <file content>

An easy way to fix this is to simply strip non-alphanumeric symbols from the file names:

filenamej2: <file content>

and then, in the volume definition, explicitly set the filenames using the "path" field:

configMap:
  name: exports
  items:
    - key: filenamej2
      path: file_name.j2 <--------
Change-Id: I784f8190147d5a03c0127a2e79805ce78714defe
This patch adds support for actions on an existing ccp deployment. Actions can, for example, run tempest, rotate fernet tokens, and so on. Documentation will be added in another patchset.
Change-Id: If45f1bfb823f2182b0e79ca269c6b0e95066d053
Full annotations support for pods and services is needed so that users can set extra options in Kubernetes.
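The schema is not shown in the message; one hypothetical shape for pod and service annotations might be:

```yaml
# Sketch only: the 'annotations' layout and keys are assumptions.
annotations:
  pod:
    prometheus.io/scrape: "true"
  service:
    example.domain/option: "value"
```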
Change-Id: Icbde776e5e8b44cfabe752fb43cab2ed9978ffe5
Make it possible to share and use common parts of configs (keystone, db, messaging, etc.) as jinja templates (e.g. via macros) located in the 'exports' directories of the related repositories.
Example of usage:
-------------------------------------------------
share rabbitmq configuration as macros:
-------------------------------------------------
# file fuel-ccp-rabbitmq/exports/messaging.j2
{% macro oslo_config() -%}
[DEFAULT]
transport_url=rabbit://{{ rabbitmq.user }}
[oslo_messaging_rabbit]
rabbit_ha_queues = true
{%- endmacro %}
-------------------------------------------------
use it in nova.conf.j2:
-------------------------------------------------
# file fuel-ccp-nova/service/files/nova.conf.j2
[upgrade_levels]
compute = auto
{{ messaging.oslo_config() }} <-----------
[wsgi]
api_paste_config = /etc/nova/api-paste.ini
-------------------------------------------------
During 'ccp deploy' the following occurs:
- template files are loaded from the /exports/ dirs of available repositories
- the files are pushed to k8s as a ConfigMap named 'exports'
- a container volume '/etc/ccp/macros' with the ConfigMap content is added
- jinja imports of these template files are implicitly added to all config files from /fuel-ccp-xxx/service/* to make macro usage possible
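The implicit import in the last step might look roughly like a line prepended to each rendered config template; the exact path and form are assumptions:

```jinja
{# Assumed form of the implicit import; the real name/path may differ. #}
{% import "/etc/ccp/macros/messaging.j2" as messaging with context %}
```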
Change-Id: I4858d62a9713e90c09300f75e01e06a31d3ac0ae
Depends-On: I429656b7eaf6312ee2d27ccaf0cb8802a234e871
If something is deployed as a DaemonSet, it is almost certainly tightly coupled to the host or to some host capabilities, while providing no HA or reliability features, so there is no reason to upgrade such services one by one. Good examples are nova-compute and nova-libvirt: there is no real difference between updating all of them at once or one by one (the default). In the future we will need to implement a much more flexible approach to upgrading such services, as we may prefer to upgrade 10% of the computes, wait for some period of time to ensure that they are working correctly, and only then upgrade the rest of the compute nodes.
Change-Id: I477f3db48f459fad2753816a82575aa4174c96a6
Example:

files:
  keystone-conf: /tmp/keystone

In this case the source file will be taken not from the `content` path, but from the path defined in the files config.
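For context, a service's files section (as we read it; the `path`/`content` keys are assumptions here) normally points at a `content` template, which the shorthand above bypasses:

```yaml
# Sketch of the assumed default files schema.
files:
  keystone-conf:
    path: /etc/keystone/keystone.conf   # destination inside the container
    content: keystone.conf.j2           # template used by default
```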
Change-Id: If2e71887adca9148f98b555ef8d6033211fe6375
This is needed for backup jobs to be run on specific nodes that have backup volumes mounted. Jobs can provide a 'topology_key' value that will be looked up in the topology to find which nodes the job can run on. So for backups one would need to do something like this:

nodes:
  node1:
    - backup
roles:
  backup:
    - backup

and add "topology_key: backup" to the backup jobs.
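A sketch of a job carrying the topology_key (the job name and surrounding keys are hypothetical):

```yaml
jobs:
  backup-db:
    topology_key: backup   # run only on nodes that carry 'backup' in the topology
```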
Change-Id: I3b51b7a957735873b0de098578e1b83c586f111a
To fix an exception during validation:
TypeError: unsupported operand type(s) for -: 'set' and 'list'
Change-Id: Id22c59ca71bdb9043a310ce35ae294cf42dc10eb
It seems better to use the list of topology-defined services directly by default, instead of trying to deploy and validate all components from 'component_map' every time.
Change-Id: I7ff16d523cae2cb8d6cc294b17806709fa188f90