With the cluster as it is currently configured, for services with
clone=False Pacemaker only monitors that the daemon is running on the
node where it should be, but takes no action if the same daemon is also
running (e.g. started manually by a sysadmin) on another node of the
cluster. This becomes a problem for services that are expected to be
configured in active/passive mode (e.g. manila-share).
This change configures two monitors for services with clone=False: one
that checks the daemon is running where it should be, and another that
checks the daemon is not running where it shouldn't be. For example:
primitive res_apache systemd:apache2 \
...
op monitor interval=5s role=Started \
op monitor interval=6s role=Stopped
https://clusterlabs.org/pacemaker/doc/deprecated/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_resource_operations.html#s-resource-monitoring
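As a rough illustration (hypothetical helper, not the charm's actual
API), the two operations could be emitted when building the primitive
definition:

def monitor_ops(started_interval='5s', stopped_interval='6s'):
    # One op checks the daemon on the node where it should run, the
    # other checks that it is stopped everywhere else.
    return ('op monitor interval={} role=Started '
            'op monitor interval={} role=Stopped'.format(
                started_interval, stopped_interval))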
Closes-Bug: #1904623
Change-Id: I9e5383f5ab6b6967aa0f2318764519989a292227
This will enable removal of previously created resources. Originally,
the empty values were not propagated, so the resource ended up in both
fields, "json_delete_resources" and "json_resources".
Closes-Bug: #1953623
Change-Id: I34693bb0e30bce96144a983e55e212e27029ba52
Move common requires code into the common module so that requires.py
only contains the code which is specific to reactive charms. This will
allow a subsequent patch to create a requires interface consumable by
operator framework charms.
Change-Id: I70037252cc7a677a9394929cb0cd17e9506ab624
The third-party mock library was needed for mock support on py2
runtimes. Since we now only support py36 and later, we can use the
standard library unittest.mock module instead.
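In practice the change is just the import, roughly:

# py2 era, third-party package:
# import mock
# py36 and later, standard library:
from unittest import mock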
Note that https://github.com/openstack/charms.openstack is used during
tests and it needs `mock`; unfortunately it doesn't declare `mock` in
its requirements, so it pulls mock in from another charm project (a
cross dependency). We therefore depend on charms.openstack first, and
once Ib1ed5b598a52375e29e247db9ab4786df5b6d142 is merged, CI will pass
without errors.
Depends-On: Ib1ed5b598a52375e29e247db9ab4786df5b6d142
Change-Id: Ibbbcfe51e76537702a2f4612a8b9829b25b2d149
Port of https://github.com/juju/charm-helpers/pull/373 to this
interface for reactive charms. From the ch pull request:
On Eoan we saw errors like:
ERROR: syntax in primitive: Attribute order error: timeout must appear
before any instance attribute parsing 'primitive res_ks_cf9dea1_vip
ocf:heartbeat:IPaddr2 params ip=10.5.253.30 op monitor depth=0
timeout=20s interval=10s'
It would appear that ordering matters; update the resource config
function to use the correct order.
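A rough sketch of the corrected ordering (hypothetical helper, not the
actual resource config function): operation attributes such as timeout
and interval come before instance attributes such as depth.

def monitor_op(timeout='20s', interval='10s', depth='0'):
    # timeout/interval first, depth last, to satisfy crmsh parsing.
    return 'op monitor timeout={} interval={} depth={}'.format(
        timeout, interval, depth)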
Change-Id: I1f8a440fb0ad62192307946de42b9b176b3ef4c1
Partial-Bug: #1843830
having any corosync resource available.
Closes-Bug: #1864804
Change-Id: I6eb3b9a816a93c4c7894e17935b1e7c8604592c5
Signed-off-by: José Pekkarinen <jose.pekkarinen@canonical.com>
Systemd support is in corosync, but currently add_init_service only
handles adding an upstart service. This results in Charmed Kubernetes
having incorrectly monitored services.
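A minimal sketch of the idea, assuming a plain init-system check (the
real code may use a charm-helpers utility instead):

import os

def init_service_class():
    # Monitor systemd units as systemd:<name> instead of upstart:<name>.
    if os.path.isdir('/run/systemd/system'):
        return 'systemd'
    return 'upstart'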
Change-Id: I935613292ce6b78cf645469fda6d21b0aa695c28
Closes-Bug: #1843933
Currently, resources that are marked for deletion are not removed
from the local reactive charm resource map. This means that on charm
upgrade the charm will re-request resources that are marked for
deletion as well as requesting that they be deleted.
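Conceptually the fix amounts to something like this (hypothetical
variable and resource names, for illustration only):

resources_to_delete = ['res_old_vip']
local_resource_map = {'res_old_vip': {}, 'res_new_vip': {}}
# Drop resources marked for deletion from the local map so a later
# charm upgrade does not re-request them.
for name in resources_to_delete:
    local_resource_map.pop(name, None)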
Change-Id: I68c57307c9e5b0a5743ac3105e48668b2e436957
* Add unit tests for common.py and requires.py.
* Update tox env for above.
* Update gitignore to ignore common cruft.
* CRM._parse did not add a space when constructing 'results' after
each argument of 'data'. This caused each element to run into the
previous one (see the sketch below).
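A simplified illustration of the _parse fix (the real method's
signature may differ):

def _parse(data):
    results = ''
    for arg in data:
        # Append a trailing space so consecutive arguments do not run
        # into each other.
        results += '{} '.format(arg)
    return results.rstrip()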
Change-Id: I2c35820149618aae02171c89b26bf29ee5e22344