The cloud-compute relation uses the private-address setting to
reflect the hostname/address to be used for vm migrations. This
can be the default management network or an alternate one. When
this charm populates ssh known_hosts entries for compute hosts
it needs to ensure the hostname, address and fqdn for the mgmt network
are included so that Nova resize operations can work if they use
the hostname from the db (which will always be from the mgmt
network).
Change-Id: Ic9e4657453d8f53d1ecbee23475c7b11549ebc14
Closes-Bug: #1969971
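A minimal sketch of the idea, assuming a hypothetical helper name and a plain host-key string; the charm's real known_hosts handling differs:

```python
def known_hosts_entries(hostname, address, fqdn, host_key):
    """Return known_hosts lines covering every name nova may use for a
    compute host on the mgmt network (hostname, address, fqdn), so that
    resize over SSH works whichever form comes out of the db.

    Hypothetical helper for illustration only.
    """
    names = []
    for name in (hostname, address, fqdn):
        # Preserve order but skip duplicates (e.g. hostname == fqdn).
        if name and name not in names:
            names.append(name)
    return ['%s %s' % (n, host_key) for n in names]
```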
This patch checks if HTTPS configuration is enabled in Apache to
determine if the websocket protocol should switch from 'ws' to 'wss' or
not.
Change-Id: I738652373604966b6df079e45a0ad26c83e21688
Closes-Bug: #2039490
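The scheme selection itself is trivial; a sketch with an illustrative function name:

```python
def websocket_scheme(https_enabled):
    """Pick the websocket protocol for the serial console.

    When Apache terminates TLS the browser must connect with 'wss',
    otherwise the mixed-content websocket is refused.
    """
    return 'wss' if https_enabled else 'ws'
```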
Patch out charmhelpers.osplatform.get_platform() and
charmhelpers.core.host.lsb_release() globally in the unit tests to
insulate the unit tests from the platform that the unit tests are being
run on.
Change-Id: I25df688c9ec07168b825815b1e1c27ecf2673d2b
The ``AvailabilityZoneFilter`` was removed from upstream nova in the 2023.2 cycle
via commit 5edd805fe2395f35ecdfe5b589a51dc00565852f.
The nova release note states:
The ``AvailabilityZoneFilter`` was deprecated for removal
in 24.0.0 (Xena) and has now been removed.
The functionality of the ``AvailabilityZoneFilter`` has been
replaced by the ``map_az_to_placement_aggregate`` pre-filter.
The pre-filter was introduced in 18.0.0 (Rocky) and enabled
by default in 24.0.0 (Xena). This pre-filter is now always
enabled and the ``[scheduler] query_placement_for_availability_zone``
config option has been removed.
This change also syncs the charm-helpers change from:
https://github.com/juju/charm-helpers/pull/850
Closes-Bug: #2037751
Closes-Bug: #2036766
Change-Id: I315900a7e32ec66b27fa69961e9b7dcb9fa1f949
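The charm-side effect can be sketched as filtering the removed entry out of the configured scheduler filters on affected releases; the helper name and flag are illustrative:

```python
def scheduler_filters(configured, release_ge_2023_2=True):
    """Drop AvailabilityZoneFilter on releases where nova removed it;
    the map_az_to_placement_aggregate pre-filter covers AZs instead.

    Illustrative helper, not the charm's actual code.
    """
    if not release_ge_2023_2:
        return list(configured)
    return [f for f in configured if f != 'AvailabilityZoneFilter']
```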
This new interface consumes information exposed by openstack-dashboard
to correctly configure nova-serialproxy and allow requests from the
web browser that tries to load the serial console.
Change-Id: I2d82abffb9649f16a792f180806cea36cc5e25df
Closes-Bug: #2030094
When taking the nova-cloud-controller from single unit to full HA by
increasing the number of units from 1 to 3 and relating it to hacluster,
the data set on the cloud-compute relation is not updated, because the
update_nova_relation() function is only called on
cloud-compute-relation-joined and config-changed, and neither of these
hooks is executed when scaling out the application.
This patch introduces a call to update_nova_relation() on
ha-relation-changed.
Test case on an environment deployed with a single unit of
nova-cloud-controller:
export NOVA_CC_VIP=10.0.0.11
juju config nova-cloud-controller vip=$NOVA_CC_VIP
juju deploy --series jammy --channel 2.4/stable hacluster \
nova-cloud-controller-hacluster
juju add-unit -n 2 nova-cloud-controller
juju deploy --series jammy memcached
juju add-relation memcached nova-cloud-controller
juju add-relation nova-cloud-controller nova-cloud-controller-hacluster
Change-Id: Ib08bf9b6e1ce2b69be4d99ffe0726b59d81f4bc9
Closes-Bug: #2002154
- Replace ".has_calls(...)" with ".assert_has_calls(...)"
- Use "self.Client()" instead of "self.Client", because otherwise the
wrong mock object is tested.
Related-Bug: #2012108
Change-Id: I8649ba727a75c8dcb86404ed4c3def12e0fdda01
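The first fix is easy to demonstrate: attribute access on a Mock auto-creates a child mock, so a misspelled assertion like `.has_calls(...)` always "passes" silently, while `.assert_has_calls(...)` actually verifies the recorded calls (similarly, `self.Client` names the class mock while `self.Client()` is the instance the code under test really uses):

```python
from unittest import mock

m = mock.Mock()
m.some_method(1)

# Bug: '.has_calls' is not a Mock assertion; this line just records a
# call on an auto-created child mock and never fails.
m.has_calls([mock.call.some_method(999)])

# Fix: '.assert_has_calls' really checks the recorded calls.
try:
    m.assert_has_calls([mock.call.some_method(999)])
    bogus_call_found = True
except AssertionError:
    bogus_call_found = False
```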
The method was refactored (in part) to use sets to enforce uniqueness of
the hosts. Unfortunately, a list method (.append()) slipped through
that should have been converted to .add(). This fixes that error.
Change-Id: I248430cd1a9156efab745fe110a39441b503b3a5
Closes-Bug: #1992789
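The error class is easy to reproduce: sets deduplicate on `.add()`, but `.append()` is a list method and raises on a set:

```python
hosts = set()
hosts.add('compute-0')
hosts.add('compute-0')   # duplicate; sets enforce uniqueness
hosts.add('compute-1')

# The regression: a leftover list-style .append() fails on a set.
append_failed = False
try:
    hosts.append('compute-2')
except AttributeError:
    append_failed = True
```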
The charm looks for endpoint changes and restarts the nova-scheduler
when the endpoint changes. However, the nova-conductor also needs to be
restarted in order to pick up new endpoints.
Closes-Bug: 1968708
Change-Id: I18dee4eb46bd836805e60427c0afc508e2489111
When using affinity/anti-affinity policies, we sometimes hit a race
condition on the resources that are available on the hypervisors.
This flag allows increasing the number of retries, and hence the
number of hosts to schedule on, thereby allowing instances to be
scheduled successfully.
This option was taken from the following URL, while future work
with placement is done to help with scheduling with affinity:
https://docs.openstack.org/nova/latest/admin/troubleshooting/affinity-policy-violated.html
Signed-off-by: Arif Ali <arif.ali@canonical.com>
Change-Id: I353dbaa38eb0526014888ede27702b428eb66afd
It's useful to force instance creation to fail if ephemeral drives are
requested when cloud admins want users to always use persistent volumes.
Closes-Bug: #1953561
Change-Id: I1c958b4bcf79512c06de6d81fe55c681bb5f38a7
By default, resizing an instance to the same host as the source is
not enabled. This change adds a new charm config option that maps
directly to the nova.conf setting, effectively giving the user the
ability to enable or disable this functionality.
Closes-Bug: #1946620
Change-Id: I13d0c332cd0b110344b7a1645e3e4fd250fce33a
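Since the option maps one-to-one onto nova.conf, the rendering amounts to something like this sketch (the charm actually renders via templates; the helper name is illustrative):

```python
def render_resize_option(allow_resize_to_same_host):
    """Render the [DEFAULT] nova.conf line the charm option maps to."""
    return 'allow_resize_to_same_host = %s' % bool(allow_resize_to_same_host)
```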
nova-consoleauth was removed for OpenStack >= Train; this change
removes the nrpe check associated with it when is_consoleauth_enabled()
returns False.
Change-Id: I891634fc8001597089312801b29a80336543f5f0
Closes-Bug: #1904650
The mock third party library was needed for mock support in py2
runtimes. Since we now only support py36 and later, we can use the
standard lib unittest.mock module instead.
Note that https://github.com/openstack/charms.openstack is used during
tests and needs `mock`; unfortunately it does not declare `mock` in its
requirements, so it retrieves mock from another charm project (a cross
dependency). So we depend on charms.openstack first, and when
Ib1ed5b598a52375e29e247db9ab4786df5b6d142 is merged, CI
will pass without errors.
Depends-On: Ib1ed5b598a52375e29e247db9ab4786df5b6d142
Change-Id: Id925078c5c04c2f89a570bdf7171c666839f9e40
stestr runs tests in parallel and this can cause issues with locking
when SQLite is not mocked out properly and gets used in a test. This
patch just switches Storage to use :memory: so that locking does not
occur.
Closes-Bug: #1908282
Change-Id: Iaa1c7b78dee498e0cc6dc6fccf12e74f22225ecd
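The property that makes this safe: an in-memory SQLite database is private to its connection, so parallel stestr workers can never contend for a shared file lock. A minimal illustration:

```python
import sqlite3

# ':memory:' gives each connection its own private database; there is
# no on-disk file for concurrent test workers to lock.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)')
conn.execute("INSERT INTO kv VALUES ('a', '1')")
row = conn.execute("SELECT value FROM kv WHERE key='a'").fetchone()
```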
SSH keys from nova-compute are now shared across all
nova-compute charm apps.
Closes-Bug: #1468871
Change-Id: Ia142eceff56bb763fcca8ddf5b74b83f84bf3539
Upon running hooks that update ssh_keys, the keys
end up duplicated in the /etc/nova/compute_ssh/* files
and cloud-compute relations because the code that
checks whether the keys already exist is currently
not working.
This change fixes the deduplication code and improves
unit tests, while also handling a special case for
specific ubuntu-version scenarios.
Change-Id: I93f9418d5340e7cb599a42970d78557362c1542f
Closes-bug: #1943753
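The dedup the fix restores amounts to an order-preserving merge; a sketch with an illustrative helper name:

```python
def dedupe_keys(existing, new_keys):
    """Append only keys not already present, preserving file order.

    Illustrative helper; the charm operates on the compute_ssh files
    and relation data rather than plain lists.
    """
    seen = set(existing)
    out = list(existing)
    for k in new_keys:
        if k not in seen:
            seen.add(k)
            out.append(k)
    return out
```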
When a cloud is deployed earlier than the Train release, the placement
service is provided by nova-cloud-controller. As part of an upgrade to
Train, the new placement service is added and updates the placement
endpoint in the service catalog. The nova-cloud-controller charm no
longer advertises the placement service URL, but because the data
exists on the relation until removed, the service catalog changes the
placement URL to the placement endpoints advertised from
nova-cloud-controller.
Fix this by explicitly removing the placement service URLs when the
placement service is not provided by nova-cloud-controller.
Change-Id: Ibb3b1429820a4188fe3d2c1142c295c0de4ee24e
Closes-Bug: #1928992
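The shape of the fix can be sketched as emitting empty values for the placement keys when the service is no longer provided (on a Juju relation an empty value clears the key); key names and the helper are illustrative:

```python
PLACEMENT_KEYS = ('placement_public_url', 'placement_internal_url',
                  'placement_admin_url')

def placement_settings(provides_placement, url):
    """Relation payload for placement endpoints.

    Empty values clear stale keys left behind by pre-Train deployments,
    so the catalog stops pointing at nova-cloud-controller.
    Illustrative sketch only.
    """
    if provides_placement:
        return {k: url for k in PLACEMENT_KEYS}
    return {k: '' for k in PLACEMENT_KEYS}
```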
This action should be used to sync the Juju availability zones,
from the nova-compute units, with the OpenStack availability zones.
The action is meant to be used post-deployment by the operator.
It will setup OpenStack aggregates for each availability zone, and
add the proper compute hosts to them.
Co-Authored-By: Billy Olsen <billy.olsen@canonical.com>
Change-Id: Ibd71cd61e51b04599eadf21b3ef46e47544b8814
Communicate to compute services the cross_az_attach config setting.
Since the cross_az_attach setting needs to be applied at the compute
node, update the relation settings to specify the cross_az_attach
policy configured.
Change-Id: I71e97453453d5d091449caf547e68c6455d091cf
Closes-Bug: #1899084
The charm looked for `keystone_juju_ca_cert` on disk
instead of
`/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt`
Synced charm-helpers for
https://github.com/juju/charm-helpers/pull/570
Change-Id: Ib7cfdadc3a75fca951792ef2c2e2b454b1ad021d
Closes-Bug: #1915504
Includes updates to charmhelpers/charms.openstack for cert_utils
and unit-get for the install hook error on Juju 2.9
* charm-helpers sync for classic charms
* rebuild for reactive charms
* ensure tox.ini is from release-tools
* ensure requirements.txt files are from release-tools
* On reactive charms:
- ensure master branch for charms.openstack
- ensure master branch for charm-helpers
* Remove unit_get mock from unit tests as not in charmhelpers context
Change-Id: Ic828c8d6dd45148a1684c74d3f2b2157cfe6bbc4
Note that part of this fix belongs in c-h, but let's add it here
as a tactical measure given we are practically frozen.
Enable TLS in the functional test for focal-ussuri and onwards.
Also switch to focal-ussuri as target for smoke.
Drop Trusty/Mitaka as it currently does not pass with symptoms
like https://bugs.launchpad.net/charm-nova-compute/+bug/1861094
Closes-Bug: #1911902
Change-Id: I7b12479ce3afb94a0fb21c26b1ac78736b81aba2
If an ep change trigger is received then also look for the
catalog_ttl key on the relation. If it is present then wait for
that long before restarting services; this allows stale ep
entries to expire from the catalogue before restarting.
Change-Id: Ief2fa8286d9fa8058b7a012ec719776c4dd302f5
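The delay decision reduces to: restart no earlier than the trigger time plus the advertised TTL, or immediately when no TTL is on the relation. A sketch with an illustrative helper name:

```python
def earliest_restart(trigger_time, relation_data):
    """Earliest timestamp at which services may restart after an
    endpoint-change trigger: wait catalog_ttl seconds, when advertised,
    so stale catalogue entries expire first.

    Illustrative helper; relation values arrive as strings.
    """
    ttl = relation_data.get('catalog_ttl')
    return trigger_time + (int(ttl) if ttl else 0)
```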
* charm-helpers sync for classic charms
* charms.ceph sync for ceph charms
* rebuild for reactive charms
* sync tox.ini files as needed
* sync requirements.txt files to sync to standard
Change-Id: Ie7640826be5426157c57877348cef43ab6067543
Prior to this change, cell1 was updated when there was a database
or rabbitmq-server relation change, but cell0 wasn't. Ensure that
cell0 is also updated.
Change-Id: I670d0295ea339b21166ef7b18509b04a5beaa959
Closes-Bug: #1892904
This commit introduces a new charm option allowing operators to override
the hardcoded 0.0 that disabled hypervisor demotion on build failures
from pike onward.
In certain environments it may be preferable to retain the upstream
behavior of letting the scheduler work around malfunctional computes and
favor instance building reliability at the cost of a potentially uneven
load distribution.
Change-Id: I2faa5ab8cd505a9d61a9fa26e1b08d16b0c795fb
Closes-Bug: 1892934
Ensure that the public endpoint binding is used to resolve the
path to the SSL certificate and key files as the base access
URL for console access is always via this binding.
Add unit tests to cover the InstanceConsoleContext class.
Change-Id: I27de9445d249b0d670543d250bd02f450764a10f
Closes-Bug: 1871428
A new config option, count_usage_from_placement, was added in Nova in
the Train release to enable/disable counting of quota usage from the
placement service. A corresponding config parameter is required in the
nova-cloud-controller charm.
This patch introduces the quota-count-usage-from-placement config
parameter in the nova-cloud-controller charm. For OpenStack releases
Train or later, this option is rendered in nova.conf on
nova-cloud-controller units.
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/250
Change-Id: I57b9335b7b6aecb8610a66a59cb2e4e506e76a5e
Closes-Bug: #1864859
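The release gate can be sketched by comparing positions in an ordered (abridged) release list; helper and list are illustrative, not the charm's actual comparison code:

```python
RELEASES = ['queens', 'rocky', 'stein', 'train', 'ussuri', 'victoria']

def render_quota_count(release, enabled):
    """Render the [quota] option only for train and later releases."""
    if RELEASES.index(release) < RELEASES.index('train'):
        return None
    return 'count_usage_from_placement = %s' % enabled
```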
Enable PCI Passthrough Filter if neutron-api charm has enabled
support for hardware offloading.
Change-Id: I0ccb4366b03557b316da1507015112c7f378176e
Depends-On: I1f59012ad2d16af18ca310906f6c6b537bb7ec72
Allow attaching a volume to an instance in a different
availability zone. If False, volumes attached to an
instance must be in the same availability zone
in Cinder as the instance's availability zone in Nova.
Change-Id: I21df8e0dfa585133c5ef6a55cdbbc2071c267424
Closes-Bug: #1856776
If the database is in maintenance mode, do not attempt to access
it.
Depends-On: I5d8ed7d3935db5568c50f8d585e37a4d0cc6914f
Change-Id: I7d5b7a20573b38d12b1ead708ee446472f21e9f8
Currently, the Apache ports.conf file is not configured by this
charm. This patch replaces the default ports.conf file with one
that does not open port 80 on SSL environments.
Change-Id: Id0b3ce106e2779ce6a44b59c0b08fb1011dfdd54
Closes-bug: #1845665
If nova-compute is connected to the message broker before
nova-conductor then it times out after a minute and shuts down.
The nova-cloud-controller needs to inform the nova-compute charm
to restart nova-compute when it is connected to the message broker.
The restart is limited to the leader to stop multiple restart
requests.
Change-Id: Icdf47ea80267d421ca14f131f2d1f7cbdeb73641
Closes-Bug: #1861094
When resuming services exclude those managed by hacluster, in
this case haproxy. If pacemaker lacks quorum it may shut haproxy
down which will cause this charm to error.
Charmhelper sync included to bring in required
get_managed_services_and_ports method.
Change-Id: I063c168595bee05c924cb23469f8dc866a43982b