The example file was removed from ceilometer in
318c54648c2c85d4f4f5425c5ffc5e5f3dda86f2
Signed-off-by: Matthew Thode <mthode@mthode.org>
Change-Id: Ic23655e793aca9db344354ef95460858534c518c
Instead of overriding each service separately, it might make
sense for deployers to define a higher-level variable that is
consulted first, falling back to the per-service default.
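A minimal sketch of that fallback pattern; the variable names here are illustrative assumptions, not the role's actual defaults:

```yaml
# defaults/main.yml (hypothetical): prefer a deployment-wide variable
# when it is defined, otherwise fall back to a role-local default.
ceilometer_oslomsg_rpc_transport: "{{ oslomsg_rpc_transport | default('rabbit') }}"
```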
Change-Id: I65ae80452c81c62ef111d0941624ece166f15ed8
Allow ceilometer to read octavia notifications once variable
octavia_ceilometer_enabled is set to true.
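Assuming the variable is consumed from the deployer's overrides file, enabling it might look like:

```yaml
# user_variables.yml
octavia_ceilometer_enabled: true
```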
Change-Id: I08289736fe1b3b23873d131664a00a1b44c7d6d3
This aims to make it possible to define a filepath for
ceilometer_pipeline_user_content instead of making huge overrides.
In the meantime, the default behaviour is kept, with minimal
influence on existing deployments.
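A hedged sketch of such an override; the lookup path is an example only and the exact variable semantics are defined by the role:

```yaml
# user_variables.yml (illustrative): feed the pipeline content from a
# deployer-managed file instead of overriding the whole template inline.
ceilometer_pipeline_user_content: "{{ lookup('file', '/etc/openstack_deploy/ceilometer-pipeline.yaml') }}"
```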
Change-Id: I0a0ac1b9bbdb8b6a68f870f0ae03edbee8c63d68
Instead of bothering with the definition of python lib dirs per distro,
we can simply get the ceilometer library directory with a python command.
The `ceilometer_lib_dir` variable is mainly used by the post_install tasks.
At the moment this variable is not set correctly if the role is run with
the ceilometer-config tag, so this patch also fixes running the role with
that tag.
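A sketch of how such a lookup might be implemented; the task names, the `ceilometer_bin` variable, and the register name are assumptions:

```yaml
- name: Discover the ceilometer library directory
  command: >-
    {{ ceilometer_bin }}/python -c
    "import os, ceilometer; print(os.path.dirname(ceilometer.__file__))"
  register: _ceilometer_lib_dir
  changed_when: false
  tags:
    - ceilometer-config

- name: Set ceilometer_lib_dir from the interpreter's answer
  set_fact:
    ceilometer_lib_dir: "{{ _ceilometer_lib_dir.stdout }}"
  tags:
    - ceilometer-config
```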
Change-Id: Id4fe9a8eef27bdd1c71104007e6b493f3f5109d5
In the designate Ansible role, the default oslo.messaging username is
designate-rpc. If ceilometer is used for designate, the ceilometer
configuration is generated incorrectly and the service does not start,
returning ACCESS_REFUSED.
Change-Id: I14f040c7549fabfe88882dc609f62cd08ab130db
The variables ceilometer_developer_mode and ceilometer_venv_download
no longer carry any meaning. This review changes ceilometer to
do the equivalent of what developer_mode did, all the time,
meaning that it always builds the venv and never requires
the repo server, but it will use a repo server when available.
As part of this, we move the source build out of its own file
because it's now a single task to include the venv build role.
This is just to make it easier to follow the code.
We also change include_tasks to import_tasks and include_role
to import_role so that the tags in the python_venv_build role
will work.
Change-Id: Ib6423249ad6d8c24382a0d82229f766d34255ee9
The files and templates we carry are almost always in a state of
maintenance. The upstream services are maintaining these files and
there's really no reason we need to carry duplicate copies of them. This
change removes all of the files we expect to get from the upstream
service. While the focus of this change is to remove configuration file
maintenance burdens, it also allows the role to execute faster.
* Source installs have the configuration files within the venv at
"<<VENV_PATH>>/etc/<<SERVICE_NAME>>". The role will now link the
default configuration path to this directory. When the service is
upgraded the link will move to the new venv path.
* Distro installs package all of the required configuration files.
To maintain our current capabilities to override configuration the
role will fetch files from the disk whenever an override is provided and
then push the fetched file back to the target using `config_template`.
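The source-install linking described above could be sketched as follows; the `ceilometer_install_dir` variable name is an assumption:

```yaml
- name: Link /etc/ceilometer to the venv-shipped configuration
  file:
    src: "{{ ceilometer_install_dir }}/etc/ceilometer"  # <<VENV_PATH>>/etc/<<SERVICE_NAME>>
    dest: /etc/ceilometer
    state: link
```

On upgrade, `ceilometer_install_dir` points at the new venv, so the link follows automatically.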
Change-Id: Ia467e20c32732152a03579216a0ced0dbb4038c4
This change will allow deployers to override the meters.yaml and
event_definitions.yaml files if they would like to change the
layout/data that will be stored in gnocchi, and the way it is
updated/stored.
Change-Id: I6bfe03dc1f5ad795d91534b1a51c4d8889bb3ed1
gnocchi_resources.yaml has been defined in main.yml and had all required
variables, but user overrides were not actually deployed on containers.
It also had an obsolete path in the git repo.
Change-Id: Id32789b59f913cfbb78c1cb4a73b18df85c36655
The libvirt-python library has C bindings which are very particular
about the version of its companion package (libvirt). To ensure
things run smoothly for stable releases, we opt to use the distro
packages for these and symlink the appropriate library files and
binding into the venv.
This approach has been used successfully for the ceph and libvirt
python bindings in other roles.
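A hedged sketch of the symlink approach; the distro package path, the venv path variable, and the file names vary per distro and are assumptions here:

```yaml
- name: Link distro libvirt python bindings into the venv
  file:
    src: "/usr/lib/python3/dist-packages/{{ item }}"
    dest: "{{ ceilometer_venv_lib_dir }}/{{ item }}"  # hypothetical venv site-packages var
    state: link
  with_items:
    - libvirt.py
    - libvirtmod.so  # illustrative; real .so names carry an ABI suffix
```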
Change-Id: I23bf430087af8b16a3d77d96f6ceb3d7f6d8a9d5
In order to enable the service setup host python interpreter to
be changed easily, we make it a variable. This will be useful
when someone sets the service setup host to be the utility
container, because we'll be able to set this var by default.
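The variable might be wired roughly like this; the names follow the pattern described but are assumptions:

```yaml
ceilometer_service_setup_host: "{{ openstack_service_setup_host | default('localhost') }}"
ceilometer_service_setup_host_python_interpreter: >-
  {{ openstack_service_setup_host_python_interpreter |
     default(ansible_python['executable']) }}
```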
Change-Id: I17b580fa357154214da93a473213e7f37980a45d
Previously, common checks were used in group_vars to test whether
ceilometer exists, but ceilometer doesn't have such a group_var.
As a result, services thought that ceilometer was enabled and were
sending notifications to the message queue.
These messages accumulated, as the consumer was absent and
knew nothing about this service and its queue.
This situation is now fixed.
Change-Id: I29b7f679eed1249f5367c42a2898804f1a436cbc
Ceilometer was missing the designate and trove services. In addition,
services have been sorted in alphabetical order both in defaults and in
the template.
Change-Id: I492a1d8d2bfa567a224e5441c6a15dea3c722741
Previously, all ceilometer services were deployed to all hosts.
However, the notification agent in most cases has nothing to do with
nova-compute hosts, so a condition has been added to differentiate
services based on their groups.
Change-Id: I7bf1a2ef605f6645aa10787b1a53e8a63034cead
A bunch of default oslomsg-related variables were configured. When
enabling ceilometer for services, the role failed unless all required
vars had been specified manually by the deployer.
The vars ceilometer_oslomsg_rpc_servers
and ceilometer_oslomsg_notify_servers were also removed.
They were not used anywhere except ceilometer.conf.j2
and could easily be replaced with host_group vars.
Their removal allowed the macros to be simplified.
The distro_install functional test for ubuntu was also fixed,
as the ceilometer-agent-notification package was missing.
Change-Id: I148ccaff9576b09d33d889b79963e88ca84d2ffe
Closes-Bug: 1794688
The ceilometer requirement is tooz[zake] and not only tooz:
https://github.com/openstack/ceilometer/blob/master/requirements.txt#L36
if zake is not specified, it does not get installed and the dependency gets lost
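In the role's package list this amounts to carrying the extra, for example:

```yaml
ceilometer_pip_packages:
  - ceilometer
  - tooz[zake]  # the [zake] extra ensures the zake backend is installed
```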
Change-Id: Ie80704a1ed6750adb39d6a6fa5a61a0ccd7b4ebd
Signed-off-by: Manuel Buil <mbuil@suse.com>
Instead of ceilometer_service_setup_host, there is a
glance_service_setup_host, so I assume there was a wrong copy/paste
Change-Id: Ib19329bca8636753994ac99822026d54b980a030
Signed-off-by: Manuel Buil <mbuil@suse.com>
In order to radically simplify how we prepare the service
venvs, we use a common role to do the wheel builds and the
venv preparation. This makes the process far simpler to
understand, because the role does its own building and
installing. It also reduces the code maintenance burden,
because instead of duplicating the build processes in the
repo_build role and the service role - we only have it all
done in a single place.
We also change the role venv tag var to use the integrated
build's common venv tag so that we can remove the role's
venv tag in group_vars in the integrated build. This reduces
memory consumption and also reduces the duplication.
This is by no means the final stop in the simplification
process, but it is a step forward. There will be follow-up work
which:
1. Replaces 'developer mode' with an equivalent mechanism
that uses the common role and is simpler to understand.
We will also simplify the provisioning of pip install
arguments when doing this.
2. Simplifies the installation of optional pip packages.
Right now it's more complicated than it needs to be due
to us needing to keep the py_pkgs plugin working in the
integrated build.
3. Deduplicates the distro package installs. Right now the
role installs the distro packages twice - just before
building the venv, and during the python_venv_build role
execution.
Depends-On: https://review.openstack.org/598957
Change-Id: I1e722ad353564a6586e5b597d73b5e1dad6033d9
Implements: blueprint python-build-install-simplification
Signed-off-by: Jesse Pretorius <jesse.pretorius@rackspace.co.uk>
The kazoo and redis packages were added to ceilometer_pip_packages.
These services are recommended by Tooz as coordination backends.
Coordination is required to distribute metrics processing among
service containers/instances.
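With a backend installed, coordination is enabled via the `[coordination]` section of ceilometer.conf. Expressed as a config_template override, this might look like the following; the URL is an example only:

```yaml
ceilometer_ceilometer_conf_overrides:
  coordination:
    backend_url: "redis://172.29.236.100:6379"
```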
Change-Id: I9ac2b7ae2eb4c1d1723a1cb2df592267145a114a
The systemd journal would normally be populated with the standard out of
a service; however, with the use of uwsgi this is not actually happening,
resulting in us only capturing the logs from the uwsgi process instead
of the service itself. This change implements journal logging in the
service config, which is part of OSLO logging.
OSLO logging docs found here: <https://docs.openstack.org/oslo.log/3.28.1/journal.html>
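Expressed as a config_template override, the oslo.log journal option would look like:

```yaml
ceilometer_ceilometer_conf_overrides:
  DEFAULT:
    use_journal: true
```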
Change-Id: I5ae4ee8b0f69a4ed9b6088f043abaa1c4b1291d8
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
When the RPC and Notify service are the same, the credentials
must match - otherwise the tasks to create the user/password
will overwrite with each other.
If the two clusters are different, then the matching credentials
and vhost will not be a problem. However, if the deployer really
wishes to make sure they're different, then the vars can be
overridden.
Also, to ensure that the SSL value is consistently set in the
conf file, we apply the bool filter. We also use the 'notify'
SSL setting as the messaging system for Notifications is more
likely to remain rabbitmq in our default deployment, with qdrouterd
becoming the default for RPC messaging.
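The bool filter usage can be sketched as follows; the variable names are assumed to follow the oslomsg pattern:

```yaml
# Applying | bool guarantees the rendered conf value is always True/False,
# even if the deployer supplies a string like "yes" or "0".
ceilometer_oslomsg_notify_use_ssl: "{{ (oslomsg_notify_use_ssl | default(False)) | bool }}"
```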
Change-Id: Icfebf314963dabd93dd0c5effdbcd0bb4b15571e
There is no record for why we implement the MQ vhost/user creation
outside of the role in the playbook, when we could do it inside the
role.
Implementing it inside the role allows us to reduce the quantity of
group_vars duplicated from the role, and allows us to better document
the required variables in the role. The delegation can still be done
as it is done in the playbook too.
In this patch we implement two new variables:
- ceilometer_oslomsg_rpc_setup_host
- ceilometer_oslomsg_notify_setup_host
These are used in the role to allow delegation of the MQ vhost/user
setup for each type to any host, but they default to using the first
member of the applicable oslomsg host group.
We also adjust some of the defaults to automatically inherit existing
vars set in group_vars from the integrated build so that we do not
need to do the wiring in the integrated build's group vars. We still
default them in the role too for independent role usage.
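Defaulting to the first member of the applicable host group might be expressed as follows; the host group variable name is an assumption:

```yaml
ceilometer_oslomsg_rpc_setup_host: >-
  {{ (ceilometer_oslomsg_rpc_host_group in groups) |
     ternary(groups[ceilometer_oslomsg_rpc_host_group][0], 'localhost') }}
```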
Change-Id: If59922884f893961feeb3b475b0c84160f348c49
This introduces oslo.messaging variables that define the RPC and
Notify transports for the OpenStack services. These parameters replace
the rabbitmq values and are used to generate the messaging
transport_url for the service. The association of the messaging
backend server to the oslo.messaging services will then be transparent
to the ceilometer service.
This patch:
* Add oslo.messaging variables for RPC and Notify to defaults
* Update transport_url generation in config template
* Add oslo.messaging to tests inventory
* Update tests
* Add release note
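For illustration, the transport variables and the shape of the URL they produce; host and credentials below are placeholders:

```yaml
# rendered into ceilometer.conf by the template, e.g.:
#   transport_url = rabbit://ceilometer:SECRET@10.0.0.10:5672//ceilometer
ceilometer_oslomsg_rpc_transport: rabbit
ceilometer_oslomsg_notify_transport: rabbit
```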
Change-Id: Ib14a7a5ec0348933eee4fb1a151010841b29ca1f
In order to reduce the packages required to pip install on to the hosts,
we allow the service setup to be delegated to a specific host, defaulting
to the deploy host. We also switch as many tasks as possible to using the
built-in Ansible modules which make use of the shade library.
The 'virtualenv' package is now installed appropriately by the openstack_hosts
role, so there's no need to install it any more. The 'httplib2' package is a
legacy Ansible requirement for the get_url/get_uri module which is no longer
needed. As there are no required packages left, the task to install them is
also removed.
The openstack_openrc role is now executed once on the designated host, so
it is no longer necessary as a meta-dependency for the role.
Ceilometer no longer has an API service, so the service setup task for it is
removed as it is unnecessary, along with the related variables which are no
longer used.
Depends-On: https://review.openstack.org/579233
Depends-On: https://review.openstack.org/579959
Change-Id: I4072acc1770432526a8bc26ebb5833b6fdd61f0a
Ceilometer no longer requires or supports a database. Instead
it now forwards everything it collects to the message queue
for collection by the other Telemetry services.
Change-Id: Ib8cb1ca3e0ef81c16a6483d20705fa3b237bbc48
The following packages are required in order to run osprofiler.
These packages will provide deployers the ability to profile
a service on demand should they choose to enable the profile
functionality.
Change-Id: I5d62b71ab7a94f994dd4e0f9fa7fc600543433c2
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The keystoneclient package is being installed on the host by PIP but
that means that a whole bunch of required dependencies are being pulled
in as well.
This brings the host to a rather messed up state when installing
keystone from distro packages, since distribution and
PIP packages are being mixed together. We only need the client to
register the service with keystone so we can simply use the distro
package for that to avoid installing lots of PIP packages on the
host.
Change-Id: I1247ae07035451087e70f3e1782dd9c25fe1c554
Implements: blueprint openstack-distribution-packages
Distributions provide packages for the OpenStack services so we add
support for using these instead of the pip ones.
Change-Id: I3edabacc3978cd42988a6c0c04a011c22159b9fa
Implements: blueprint openstack-distribution-packages
This removes the systemd service templates and tasks from this role and
leverages a common systemd service role instead. This change removes a
lot of code duplication across all roles all without sacrificing features
or functionality. The intention of this change is to ensure uniformity and
reduce the maintenance burden on the community when sweeping changes are
needed.
Change-Id: I128bc9736c1a460c161314eeaa6759de5ceba977
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
Option "os_endpoint_type" from groups "service_credentials" and
"keystone_authtoken" is deprecated[1].
Use option "interface" from groups "service_credentials" and
"keystone_authtoken" instead.
[1] https://docs.openstack.org/ceilometer/latest/configuration/
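As a config_template override, the replacement option looks like the following; the interface value shown is an example:

```yaml
ceilometer_ceilometer_conf_overrides:
  service_credentials:
    interface: internal
```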
Change-Id: I44d6f478f5eed7521ab79b29e6a757f0b779d9d6