These updates, on the master branch, are to support testing the caracal
packages and the charms' support for caracal. They do NOT lock the
charms down, and do not change the testing branches to stable branches.
Change-Id: I8207aceaa1426be6d736819e88e34702a4125fe7
Patch out charmhelpers.osplatform.get_platform() and
charmhelpers.core.host.lsb_release() globally in the unit tests to
insulate them from the platform the tests are actually run on.
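A minimal sketch of the approach, assuming a shared unit-test base
class; the class name and the patched return values here are
illustrative, not this charm's actual test scaffolding:
    from unittest import TestCase, mock

    class TestBase(TestCase):
        def setUp(self):
            super().setUp()
            # Pretend the tests always run on Ubuntu jammy, whatever
            # platform is actually executing the suite.
            for target, value in (
                    ('charmhelpers.osplatform.get_platform', 'ubuntu'),
                    ('charmhelpers.core.host.lsb_release',
                     {'DISTRIB_CODENAME': 'jammy'})):
                patcher = mock.patch(target, return_value=value)
                patcher.start()
                # Undo the patch when each test finishes.
                self.addCleanup(patcher.stop)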
Change-Id: I7116d1232d19996e39665f5e6f15dae7b8e74118
* Voting was turned on for jammy-antelope in the
project-template for charm-functional-jobs in zosci-config
* Voting for jammy-antelope bundles with non-standard names
  was turned on in individual charms
* Kinetic-zed bundles/tests are removed
Change-Id: I45bf4ef2d0fc1323132804c7a89cc42a768d18a8
CompareOpenStackReleases is used to handle OpenStack release
comparisons after the z-wrap, now that we are at antelope: release
names have wrapped past 'z', so a plain lexicographic comparison no
longer matches release order.
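A minimal illustration of the wrap problem, assuming charm-helpers'
comparator (the two release strings are just examples):
    from charmhelpers.contrib.openstack.utils import (
        CompareOpenStackReleases,
    )

    # Plain string comparison sorts 'antelope' before 'zed', which is
    # now the wrong way round.
    assert 'antelope' < 'zed'
    # CompareOpenStackReleases knows the actual release ordering.
    assert CompareOpenStackReleases('antelope') > 'zed'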
Change-Id: I31254f50d9befdfc3a54c2ee305cb06e7d19cce3
This patch adds kinetic to the metadata.yaml and ensures
that a run-on base for 22.10 is added in the
charmcraft.yaml
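For reference, a charmcraft.yaml bases fragment of the kind this
change adds; the 22.04 build-on entry shown is an assumption for
illustration, not part of this change:
    bases:
      - build-on:
          - name: ubuntu
            channel: "22.04"
        run-on:
          - name: ubuntu
            channel: "22.04"
          - name: ubuntu
            channel: "22.10"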
Change-Id: If1982cf023556b54837d01411cb6602633ac4cdd
* sync charm-helpers to classic charms
* change openstack-origin/source default to zed
* align testing with zed
* add new zed bundles
* add zed bundles to tests.yaml
* add zed tests to osci.yaml and .zuul.yaml
* update build-on and run-on bases
* add bindep.txt for py310
* sync tox.ini and requirements.txt for ruamel
* use charmcraft_channel 2.0/stable
* drop reactive plugin overrides
* move interface/layer env vars to charmcraft.yaml (see sketch below)
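A rough sketch of the charmcraft.yaml part carrying those env vars;
the paths and part layout follow the usual reactive-build convention
and are illustrative, not verified against this charm:
    parts:
      charm:
        source: src/
        plugin: reactive
        build-snaps:
          - charm/2.x/stable
        build-environment:
          - CHARM_INTERFACES_DIR: /root/project/interfaces/
          - CHARM_LAYERS_DIR: /root/project/layers/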
Change-Id: I34ae94970fc5cfd242df5184fba09b611874ee71
- Add 22.04 to charmcraft.yaml
- Update metadata to include jammy
- Remove impish from metadata
- Update osci.yaml to include py3.10 default job
- Modify tox.ini to remove the py35, py36 and py37 tox targets and
  add a py310 target.
Change-Id: Ia9a713625aa933fe99722df36f42a800af19c790
This update is to ensure that the Zuul Canonical CI builds the charm
before functional tests and that the resulting artifact is used for the
functional tests. This is to try to ensure that the charm that gets
landed to the charmhub is the same charm that was tested.
Change-Id: I76cc7c4c782d60f3558df6b9f96c513eff16331b
This is a subordinate charm and, since a recent
commit [1], it shares a list of its services with
the principal charm nova-compute, which now has
the responsibility to pause and resume services. [2]
The ceilometer-agent-compute service depends on
the nova-compute service anyway, so it is
impossible for this charm to resume its service
while its principal charm nova-compute is paused.
This is also what led to errors in
ceilometer-agent's post-series-upgrade hook, which
attempted to resume its service although the
principal service was still paused. Removing
this logic entirely solves this issue.
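A rough sketch of the mechanism introduced by [1];
the helper and the relation key names here are
illustrative, not the actual interface code:
    import json
    from charmhelpers.core.hookenv import relation_set

    def share_services(relation_id=None):
        # Advertise this subordinate's services on the juju-info
        # relation so the principal (nova-compute) can pause and
        # resume them on our behalf.
        relation_set(relation_id=relation_id,
                     services=json.dumps(['ceilometer-agent-compute']))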
Validated by running openstack-upgrade and
series-upgrade tests. [3]
[1]: https://opendev.org/openstack/charm-ceilometer-agent/commit/be45f779
[2]: https://opendev.org/openstack/charm-nova-compute/commit/8fb37dc0
[3]: https://github.com/openstack-charmers/charmed-openstack-tester
Closes-Bug: #1952882
Change-Id: Ia22b53b52b541250f7f803c6708968d75e64475c
* charm-helpers sync for classic charms
* sync from release-tools
* switch to release-specific zosci functional tests
* run focal-ussuri as smoke tests
* remove trusty, xenial, and groovy metadata/tests
* drop py35 and add py39
Change-Id: If4892692cda73dacc3b9b430cdaf9c82f814b64a
For principal/subordinate plugin-type relations where the
principal Python payload imports code from packages managed by a
subordinate, upgrades can be problematic.
This change will allow a subordinate charm that has opted into the
feature to inform its principal about all implemented release/packages
combinations ahead of time. With this information in place
the principal can do the upgrade in one operation without risk of
charm relation RPC-type processing at a critical moment.
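A hedged sketch of the shape of that information; the relation key
and the package lists are illustrative assumptions only:
    import json

    # OpenStack release -> payload packages the subordinate manages.
    RELEASES_PACKAGES_MAP = {
        'ussuri': ['python3-ceilometer'],
        'victoria': ['python3-ceilometer'],
    }

    def advertise(relation_set):
        # Publish the whole map ahead of time so the principal can
        # upgrade payload packages in a single operation.
        relation_set(releases_packages_map=json.dumps(RELEASES_PACKAGES_MAP))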
This is similar to
https://review.opendev.org/c/openstack/charm-interface-keystone-domain-backend/+/781658
https://review.opendev.org/c/openstack/charm-layer-openstack/+/781624
Change-Id: Ibd5bdcb141fc3103ee97123ff284fb2957802eba
Closes-Bug: #1927277
Recent test run(s) have shown memory exhaustion on the nova
cloud controller units. This exhibits itself as the controller
dropping messages from the compute nodes and logging messages like:
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 268, in _perform_action_on_threads
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 342, in <lambda>
    lambda x: x.wait(),
  File "/usr/lib/python3/dist-packages/oslo_service/threadgroup.py", line 61, in wait
    return self.thread.wait()
  File "/usr/lib/python3/dist-packages/eventlet/greenthread.py", line 180, in wait
    return self._exit_event.wait()
  File "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait
    result = hub.switch()
  File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 298, in switch
    return self.greenlet.switch()
  File "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 350, in run
    self.wait(sleep_time)
  File "/usr/lib/python3/dist-packages/eventlet/hubs/poll.py", line 80, in wait
    presult = self.do_poll(seconds)
  File "/usr/lib/python3/dist-packages/eventlet/hubs/epolls.py", line 31, in do_poll
    return self.poll.poll(seconds)
MemoryError
to the nova-conductor log.
It seems very likely this issue is specific to Bionic Stein, so it
may be a little wasteful to have increased the memory allocation
for all the bundles, but I think consistency between the bundles is
more important.
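For illustration, the kind of test-bundle change involved; the 4G
figure is an assumption, not the exact value used here:
    applications:
      nova-cloud-controller:
        charm: nova-cloud-controller
        constraints: mem=4G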
Change-Id: I1ab3e8f0d71b06fe97fa4b6cdee138c294dca158
Co-authored-by: Liam Young <liam.young@canonical.com>
The ceilometer compute agent uses the default polling.yaml
from the installed packages without the ability to configure its contents.
This change adds two configuration options, 'polling-interval' and
'enable-all-pollsters', borrowing from the implementation in
charm-ceilometer. We start off with the same limited set of meters as
before; if these are not enough, the user can set 'enable-all-pollsters'
to 'true' to collect all available 'Pollster' metrics as listed in the
documentation [1].
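Roughly what the rendered polling.yaml looks like with
'enable-all-pollsters' set to 'true'; the interval shown is
illustrative and the exact template may differ:
    sources:
      - name: all_pollsters
        interval: 300
        meters:
          - "*"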
Verification:
I tested this change on a cluster built from the OpenStack base bundle
and the ceilometer and gnocchi charms. I confirmed that extra metrics
that originate from the Compute Agent pollster (e.g. disk.device.read.latency)
are available in gnocchi after setting 'enable-all-pollsters' to true.
[1] https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html
Closes-Bug: #1914442
Change-Id: I21c9a93e7dd91bced9365e44f3e6a5315976c3bb
These are the test bundles (and any associated changes) for
focal-wallaby and hirsute-wallaby support.
The hirsute-wallaby test is disabled (moved to dev) due to [1], as the
bundle may reference a reactive charm.
[1] https://github.com/juju-solutions/layer-basic/issues/194
Sync charmhelpers.
Change-Id: Ie73cc2223e51b741272c32e4d4a9d4a21949e37c
This patchset updates all the requirements for charms.openstack,
charm-helpers, charms.ceph, zaza and zaza-openstack-tests back
to the master branch.
Change-Id: I94ac02a2232aaffdbf3a6d2b3ef4fa64074cfa83
The 'hirsute' key in c-h/core/host_factory/ubuntu.py:
UBUNTU_RELEASES had been missed out, and is needed for
hirsute support in many of the charms. This sync is to
add just that key. See also [1]
Note that this sync is only for classic charms.
[1] https://github.com/juju/charm-helpers/pull/598
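The gist of the sync, abbreviated; only the tail of the tuple is
shown here:
    # charmhelpers/core/host_factory/ubuntu.py
    UBUNTU_RELEASES = (
        # ... earlier releases elided ...
        'focal',
        'groovy',
        'hirsute',
    )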
Change-Id: Ice7eb3dee63f319d2b534a72fd228853e3bad9bd
* charm-helpers sync for classic charms
* build.lock file for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure stable/21.04 branch for charms.openstack
- ensure stable/21.04 branch for charm-helpers
Change-Id: If8b18dd3488e2c84ef433fab0b144cfdbb65cd01
This update adds the new hirsute Ubuntu release (21.04) and
removes trusty support (14.04, which is EOL as of 21.04).
Change-Id: Id3a5027d4165da8be59f37b9b9a96c34280c78ce
Running network-get --primary-address juju-info fails on pre-2.8.?
versions of juju. This results in a NoNetworkBinding error.
Fall back to unit_get() in local_address() if that occurs.
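A minimal sketch of the fallback using charm-helpers' hookenv API;
the function body is a simplification of what the sync brings in:
    from charmhelpers.core.hookenv import (
        NoNetworkBinding,
        network_get_primary_address,
        unit_get,
    )

    def local_address(unit_get_fallback='public-address'):
        try:
            # Preferred: ask juju for the juju-info binding address.
            return network_get_primary_address('juju-info')
        except NoNetworkBinding:
            # Older juju (pre-2.8.?) cannot answer; fall back.
            return unit_get(unit_get_fallback)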
Change-Id: I9a1ed8146233997cbad86a329e8a0067a6966b00
Includes updates to charmhelpers/charms.openstack for cert_utils
and unit-get, addressing the install hook error on Juju 2.9
* charm-helpers sync for classic charms
* rebuild for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure master branch for charms.openstack
- ensure master branch for charm-helpers
Change-Id: I7e64344f01ebfea2f7d5c4c71053309b0956203a
* charm-helpers sync for classic charms
* charms.ceph sync for ceph charms
* rebuild for reactive charms
* sync tox.ini files as needed
* sync requirements.txt files to the standard
Change-Id: I602a2a9c241de5898f737aacec6f85390d0be4f7
Samples collected can be batched together,
consequently increasing or reducing the
number of API calls and the amount of body
data sent to the configured publisher.
This config option has been available since
Rocky; the value is received from the
ceilometer charm to allow its tuning.
Change-Id: I986073fdacd750cf96d662abf1d58844479c25ba
Closes-Bug: #1885190