This appears to be a typo: self.service_name and self.endpoint_type
were used in the same way as seen in other methods,
but neither self.service_name nor self.endpoint_type is defined in
this class, or in any of its subclasses.
Use the passed name and endpoint_type arguments here instead.
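A minimal sketch of the fix; the class and method names here are hypothetical stand-ins, not the interface's actual code:

```python
class HAServiceEndpoint:
    """Hypothetical stand-in for the affected class."""

    def manage_resources(self, name, endpoint_type):
        # Before the fix this referenced self.service_name and
        # self.endpoint_type, neither of which is defined anywhere
        # in the class hierarchy, so it raised AttributeError.
        # The fix: use the arguments the caller already passes in.
        return (name, endpoint_type)
```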
Closes-Bug: #2037515
Change-Id: I12c561057372c11632b2d97fa1763fc92d89f479
As the cluster is currently configured, for services with clone=False
Pacemaker will monitor only that the daemon is running on the node
where it should be, but will take no action if the same daemon is
also running (e.g. started manually by a sysadmin) on another node of
the cluster. This is a problem for services that are expected to be
configured active/passive (e.g. manila-share).
This change configures two monitors for services with clone=False:
one that checks the daemon is running where it should be, and another
that checks it is not running where it shouldn't be.
primitive res_apache systemd:apache2 \
...
op monitor interval=5s role=Started \
op monitor interval=6s role=Stopped
https://clusterlabs.org/pacemaker/doc/deprecated/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_resource_operations.html#s-resource-monitoring
Closes-Bug: #1904623
Change-Id: I9e5383f5ab6b6967aa0f2318764519989a292227
Charm-helpers dropped py2 support, and charms are in the process of
dropping it too; this change removes the use of the `six` library.
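The change is mechanical: py2/py3 shims from `six` are replaced by plain py3 idioms. An illustrative before/after, not the exact diff:

```python
# Before, via six:
#   import six
#   if isinstance(value, six.string_types): ...
#   for k, v in six.iteritems(mapping): ...

# After, py3 standard library only:
def string_values(mapping):
    """Return the keys whose values are strings, in insertion order."""
    return [k for k, v in mapping.items() if isinstance(v, str)]
```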
Change-Id: I4c76e60780e7dc1189a9fad8f8caf34c3fad5f65
This will enable removal of previously created
resources. Originaly, the empty values were not
propagated so the resource ended up in both fields
"json_delete_resources" and "json_resources".
Closes-Bug: #1953623
Change-Id: I34693bb0e30bce96144a983e55e212e27029ba52
Move common requires code in to the common module so that
requires.py only contains the code which is specific to
reactive charms. This will allow for a subsequent patch which
creates a requires interface consumable by operator framework
charms.
Change-Id: I70037252cc7a677a9394929cb0cd17e9506ab624
The mock third party library was needed for mock support in py2
runtimes. Since we now only support py36 and later, we can use the
standard lib unittest.mock module instead.
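The swap itself is a one-line import change; a small sketch of the resulting test style (the function under test is a toy example):

```python
from unittest import mock  # was: import mock (third-party backport)


def fetch_status(client):
    """Toy function under test; 'client' stands in for a real API."""
    return client.get("/status")


# unittest.mock provides the same Mock/patch API the backport did.
client = mock.Mock()
client.get.return_value = "ok"
assert fetch_status(client) == "ok"
client.get.assert_called_once_with("/status")
```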
Note that https://github.com/openstack/charms.openstack is used during
tests and needs `mock`; unfortunately it does not declare `mock` in
its requirements, so it picks up mock from another charm project (a
cross dependency).
We therefore depend on charms.openstack first; once
Ib1ed5b598a52375e29e247db9ab4786df5b6d142 is merged, CI will pass
without errors.
Depends-On: Ib1ed5b598a52375e29e247db9ab4786df5b6d142
Change-Id: Ibbbcfe51e76537702a2f4612a8b9829b25b2d149
Port of https://github.com/juju/charm-helpers/pull/373 to this
interface for reactive charms. From the ch pull request:
On Eoan we saw errors like:
ERROR: syntax in primitive: Attribute order error: timeout must appear
before any instance attribute parsing 'primitive res_ks_cf9dea1_vip
ocf:heartbeat:IPaddr2 params ip=10.5.253.30 op monitor depth=0
timeout=20s interval=10s'
It would appear that ordering matters; update the resource config
function to emit attributes in the correct order.
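One plausible corrected ordering, with the op's own attributes (timeout, interval) ahead of instance attributes such as depth; illustrative, inferred from the error message above:

```
primitive res_ks_cf9dea1_vip ocf:heartbeat:IPaddr2 \
    params ip=10.5.253.30 \
    op monitor timeout=20s interval=10s depth=0
```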
Change-Id: I1f8a440fb0ad62192307946de42b9b176b3ef4c1
Partial-Bug: #1843830
having any corosync resource available.
Closes-bug: #1864804
Change-Id: I6eb3b9a816a93c4c7894e17935b1e7c8604592c5
Signed-off-by: José Pekkarinen <jose.pekkarinen@canonical.com>
Systemd support is in corosync, but currently the add_init_service
handles adding an upstart service only. This results in Charmed
Kubernetes having incorrectly monitored services.
Change-Id: I935613292ce6b78cf645469fda6d21b0aa695c28
Closes-Bug: #1843933
- removing sitepackages in tox.ini to avoid test env pollution
- setting skip_missing_interpreters to False in tox.ini so that
  missing interpreters fail the run instead of producing false
  positives.
Change-Id: I3e9653753f17e68a302184a734b8fbe6fbf92df8
This is a mechanically generated patch to ensure unit testing is in place
for all of the Tested Runtimes for Train.
See the Train python3-updates goal document for details:
https://governance.openstack.org/tc/goals/train/python3-updates.html
Note that python35-charm-jobs is retained since this charm is supported
on Xenial.
Change-Id: Ie5ed4bc0b8c1ecf1c870b9d7d1147c32950eec28
Story: #2005924
Task: #34228
This technique was borrowed from the tox "cover" environment in
openstack/nova's tox.ini. This leverages the fact that stestr lets
you override the python executable via the PYTHON environment
variable. Doing this allows us to easily generate coverage for our
unit tests.
An important caveat is that this does not provide any coverage for
tests via zaza, amulet, etc. It is purely focused on the unit tests.
Note that this replaces the previous .coveragerc; coverage
configuration is instead pulled from tox.ini.
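The resulting tox stanza looks roughly like this; paraphrased from the nova pattern, so the exact flags and targets here are illustrative:

```ini
[testenv:cover]
setenv =
    # stestr runs its workers via $PYTHON, so point it at coverage
    PYTHON = coverage run --parallel-mode
commands =
    coverage erase
    stestr run {posargs}
    coverage combine
    coverage report
```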
Change-Id: Ia0f190f7de273290d01091fd9211a4bbdb688be5
Currently resources that are marked for deletion are not removed
from the local reactive charm resource map. This means that on
charm upgrade the charm will re-request resources that are marked
for deletion at the same time as requesting their deletion.
Change-Id: I68c57307c9e5b0a5743ac3105e48668b2e436957
Instruct charm tools to not include repo/test config and test code
when consuming interface to build a charm.
Change-Id: Iba7ec239094214a408105359c9f18b53a7bc0919
* Add unit tests for common.py and requires.py.
* Update tox env for the above.
* Update gitignore to ignore common cruft.
* CRM._parse did not add a space when constructing 'results' after
  each argument of 'data'. This caused each element to run into the
  previous one.
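The join bug can be reduced to this simplified stand-in for CRM._parse (not the actual implementation):

```python
def build_results(data):
    """Concatenate resource arguments into one spec string.

    The bug: appending each argument without a separator produced
    e.g. 'op monitorinterval=5s'; joining with a space keeps each
    element distinct.
    """
    return " ".join(str(arg) for arg in data)
```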
Change-Id: I2c35820149618aae02171c89b26bf29ee5e22344
The interface was failing to setup monitoring for Virtual IPs. If for
some reason a VIP failed on an instance corosync would fail to notice
it.
This change adds the op monitor setting to monitor VIPs.
Change-Id: If885340f04b8834fa4604ab742c7facfc6f316ad
We want to default to running all tox environments under python 3, so
set the basepython value in each environment.
We do not want to specify a minor version number, because we do not
want to have to update the file every time we upgrade python.
We do not want to set the override once in testenv, because that
breaks the more specific versions used in default environments like
py35 and py36.
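Concretely, each named environment gets a major-version-only setting (excerpt, illustrative):

```ini
[testenv:pep8]
basepython = python3

[testenv:venv]
basepython = python3

# py35/py36 keep their specific interpreters; no override in [testenv]
```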
Change-Id: Id5511088f9b29a801156231f6b95d6add8b68cd5
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.
Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.
Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.
See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html
Change-Id: I15f8fc216f5bddf9cd1cd29f2c1347f60ed0bee1
Story: #2002586
Task: #24317
If a nic is not specified when setting up a vip resource then
corosync should just do the right thing. This change makes the nic
field optional as there is little benefit in using it.
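A sketch of the conditional, using a hypothetical helper rather than the interface's real signature:

```python
def vip_params(ip, nic=None, cidr_netmask=None):
    """Build the IPaddr2 params string; nic and cidr_netmask are
    optional, as corosync can determine the interface itself."""
    params = {"ip": ip}
    if nic:
        params["nic"] = nic
    if cidr_netmask:
        params["cidr_netmask"] = cidr_netmask
    return " ".join('{}="{}"'.format(k, v) for k, v in params.items())
```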
Change-Id: I022a43df0a50a21df3c5f021dcd563da4d20db53
Ensure that CRM group members are sorted to avoid continual
data mutation on ha relations to the hacluster charm, which
causes restarts of services and vips.
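The essence of the fix is sorting before the data is rendered (simplified sketch, hypothetical function name):

```python
def group_spec(name, members):
    """Render a CRM group definition with members in stable order.

    Without sorted(), iteration order over a set could differ
    between hook invocations, making identical config look like
    changed relation data and retriggering hooks on hacluster.
    """
    return "group {} {}".format(name, " ".join(sorted(members)))
```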
Change-Id: I80ca19fa5f1d573827ea9cd7fe0f18c575518c8c
Closes-Bug: 1754149
The interface was setting ha.available immediately with no gating.
The hacluster charm sets a relation variable, clustered, when it has
enough peers and has set up corosync. This change updates the
interface to react to this setting: only set ha.available when
clustered is set and True.
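The gating condition, reduced to plain Python; relation-data access is paraphrased, as the real handler uses the reactive framework:

```python
def ha_available(remote_units):
    """True only when some hacluster unit reports clustered.

    remote_units maps unit name -> relation data dict.  Previously
    ha.available was set as soon as the relation existed.
    """
    for data in remote_units.values():
        # hacluster may serialise the flag in a few ways
        if data.get("clustered") in (True, "True", "true", "yes"):
            return True
    return False
```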
Change-Id: I23eb5e70537a62d5b9e5e24d09f37519b63a1717
Closes-Bug: #1749280
The CRM.delete_services(...) and CRM.init_services(...) methods
update the internal list used to communicate with the hacluster
charm. This causes the list to continuously grow, which causes
the data in the hacluster relation to change. The constantly
updated relation change causes hooks to fire unintentionally.
Change-Id: Ica823027b1a3fafe277d862fae0a3cdcf5907774
Partial-Bug: #1737776
Switch to using json encoded data when passing configuration
to the hacluster charm.
Note that this requires an up-to-date version of the hacluster
charm, but ensures that data is presented consistently over the
relation due to deterministic encoding using json.
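Deterministic encoding is what makes this safe: with sorted keys, re-rendering unchanged config yields an identical string, so the relation data does not appear to change (sketch, hypothetical helper name):

```python
import json

def encode_resources(resources):
    """Encode the resource map for the relation.  sort_keys=True
    gives byte-for-byte stable output across hook invocations."""
    return json.dumps(resources, sort_keys=True)
```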
Change-Id: If2ace4b37f4152bf8bc44526b609a034433efc67
Closes-Bug: 1741304
Add code for managing DNS Entries via hacluster. This is part of the
effort to enable DNS HA in the reactive charms.
Change-Id: I1a6cdeffa3aa8657b957ba68cd09face27f93b27
Partial-Bug: #1727376
Fix a bug I introduced in:
77c2b25407
where the haproxy resource was also added to the vip group.
Ensure only VIP resources are added to the vip group.
Change-Id: Ie6b4ae89af74151ebb435c72a6df78174bbec87b
Allow for two VIPs on a single interface as in IPv4/IPv6 dual
stack scenarios.
Add missing group membership for VIPs.
Change-Id: Ieba9fd453efcd3d407baaeb8d0d6f3f71750060e
This change stops the requires side of the interface (in a reactive
charm) from setting data on the relation that's already been set.
This has the effect of not asking hacluster to do something every time
a reactive charm (e.g. designate) does update status.
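The idempotence check boils down to a compare-before-write; a hypothetical helper, as the real code operates on the reactive relation object:

```python
def set_if_changed(relation_data, key, value):
    """Write to the relation only when the value differs, so an
    update-status run does not re-trigger the hacluster charm."""
    if relation_data.get(key) != value:
        relation_data[key] = value
        return True
    return False
```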
Change-Id: I750f3c41a2f0447a47cfd19bab1d4958de4577f2
Related-Bug: #1708396
When requesting hacluster manage an init service, the principal
sends options to hacluster but is currently erroneously sending
the word 'params' as well. e.g.
The principal is setting:
'res_designate_haproxy': 'params op monitor interval="5s"'
When it should be:
'res_designate_haproxy': 'op monitor interval="5s"'
Including the word 'params' eventually causes hacluster to fail with:
ERROR: syntax in primitive: params op monitor interval=5s
Change-Id: Id35bb22692914dc3f94465d8ae7d62971a238d4e