The cloud-compute relation uses the private-address setting to
reflect the hostname/address to be used for vm migrations. This
can be the default management network or an alternate one. When
this charm populates ssh known_hosts entries for compute hosts
it needs to ensure that the hostname, address and FQDN for the mgmt
network are included so that Nova resize operations can work if they use
the hostname from the db (which will always be from the mgmt
network).
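The idea above can be sketched as follows; the function name and the known_hosts entry format here are illustrative assumptions, not the charm's actual code.

```python
# Illustrative sketch: emit one known_hosts line per distinct identity
# of a compute host, so lookups by short hostname, FQDN or IP address
# all succeed (names and format are hypothetical).
def known_hosts_entries(hostname, fqdn, address, host_key):
    """Return known_hosts lines covering every identity of a host."""
    names = sorted({hostname, fqdn, address})  # set removes duplicates
    return ['{} {}'.format(name, host_key) for name in names]
```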
Change-Id: Ic9e4657453d8f53d1ecbee23475c7b11549ebc14
Closes-Bug: #1969971
This patch checks whether HTTPS configuration is enabled in Apache to
determine if the websocket protocol should switch from 'ws' to 'wss'.
Change-Id: I738652373604966b6df079e45a0ad26c83e21688
Closes-Bug: #2039490
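A minimal sketch of the scheme selection described above, assuming a boolean flag for the Apache HTTPS state (the helper name is illustrative, not the charm's actual code):

```python
# Pick the websocket scheme matching the Apache frontend: a TLS
# frontend must serve 'wss', a plain HTTP one serves 'ws'.
def websocket_scheme(https_enabled):
    return 'wss' if https_enabled else 'ws'
```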
The ``AvailabilityZoneFilter`` was removed from upstream nova in the 2023.2 cycle
via commit 5edd805fe2395f35ecdfe5b589a51dc00565852f.
The nova release note states:
The ``AvailabilityZoneFilter`` was deprecated for removal
in 24.0.0 (Xena) and has now been removed.
The functionality of the ``AvailabilityZoneFilter`` has been
replaced by the ``map_az_to_placement_aggregate`` pre-filter.
The pre-filter was introduced in 18.0.0 (Rocky) and enabled
by default in 24.0.0 (Xena). This pre-filter is now always
enabled and the ``[scheduler] query_placement_for_availability_zone``
config option has been removed.
This change also syncs the charm-helpers change from:
https://github.com/juju/charm-helpers/pull/850
Closes-Bug: #2037751
Closes-Bug: #2036766
Change-Id: I315900a7e32ec66b27fa69961e9b7dcb9fa1f949
This new interface consumes information exposed by openstack-dashboard
to correctly configure nova-serialproxy and allow requests coming from
web browsers that try to load the serial console.
Change-Id: I2d82abffb9649f16a792f180806cea36cc5e25df
Closes-Bug: #2030094
This change moves the default return value for the Keystone api_version
to 3.0 instead of 2.0. By this point in time, all supported OpenStack
releases use Keystone API version 3.0 instead of 2.0.
This was previously causing Nova templates to render with 2.0 in the
Keystone auth URL instead of 3.0, which caused auth failures.
Closes-Bug: 1995778
Change-Id: I6463a24fe4aaa654a58cff56720a55f0950db717
When taking the nova-cloud-controller from single unit to full HA by
increasing the number of units from 1 to 3 and relating it to hacluster,
the data set on the cloud-compute relation is not updated, because the
update_nova_relation() function is only called on
cloud-compute-relation-joined and config-changed, and neither of these
hooks is executed when scaling out the application.
This patch introduces a call to update_nova_relation() on
ha-relation-changed.
Test case on an environment deployed with a single unit of
nova-cloud-controller:
export NOVA_CC_VIP=10.0.0.11
juju config nova-cloud-controller vip=$NOVA_CC_VIP
juju deploy --series jammy --channel 2.4/stable hacluster \
nova-cloud-controller-hacluster
juju add-unit -n 2 nova-cloud-controller
juju deploy --series jammy memcached
juju add-relation memcached nova-cloud-controller
juju add-relation nova-cloud-controller nova-cloud-controller-hacluster
Change-Id: Ib08bf9b6e1ce2b69be4d99ffe0726b59d81f4bc9
Closes-Bug: #2002154
This change adds several configuration options to enable HTTP checks
in the HAProxy configuration, instead of the default TCP connection
checks.
Closes-Bug: #1880610
Change-Id: I4a947c5b52eb3283c08a0d39cc9bf14695a63eab
The method was refactored (in part) to use sets to enforce uniqueness of
the hosts. Unfortunately, a list method (.append()) slipped through
that should have been converted to .add(). This fixes that error.
Change-Id: I248430cd1a9156efab745fe110a39441b503b3a5
Closes-Bug: #1992789
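The bug described above can be reproduced in a few lines; the variable names here are illustrative, not the charm's actual code.

```python
# Minimal reproduction: .append() is a list method, so calling it on
# a set raises AttributeError; .add() is the correct call and also
# enforces uniqueness of the hosts.
hosts = set()
try:
    hosts.append('host-a')  # the bug: sets have no .append()
except AttributeError:
    pass
hosts.add('host-a')  # the fix
hosts.add('host-a')  # duplicate entries are silently ignored
```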
The linked bug shows the install of the charm with openstack-origin set
to zed. This happens because configure_installation_source() causes the
openstack-release package to be installed *before* the zed cloud archive
sources are configured into /etc/apt and an apt update done. This means
that the openstack-release package says "yoga" despite the zed packages
actually being installed.
Then, on the config-changed hook, it sees that the installed version is
showing as yoga and tries to do an upgrade. This fails, as the charm
hasn't yet bootstrapped, and the charm tries to bootstrap after
upgrading the packages.
There are a few bugs exposed here, but the tactical fix is to force
the openstack-release package to match the installed packages.
Closes-Bug: #1989538
Change-Id: Icdef04e25e74c0a18fd49997c5f5b0540d583f40
The charm looks for endpoint changes and restarts the nova-scheduler
when the endpoint changes. However, the nova-conductor also needs to be
restarted in order to pick up new endpoints.
Closes-Bug: 1968708
Change-Id: I18dee4eb46bd836805e60427c0afc508e2489111
The original code was adding a str directly to a list rather than
either appending it or concatenating it as a list of one element. The
fix avoids append() to prevent unintentional side effects.
Change-Id: I1466981f1d68f8dea3bbe32fdde6c4825056c0d0
Closes-Bug: #1927698
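An illustrative reproduction of the type error described above (the variable names are assumptions, not the charm's actual code):

```python
# Concatenating a str directly to a list raises TypeError; wrapping
# it in a one-element list works, and plain concatenation returns a
# new list instead of mutating the original (no side effects).
base = ['host-a']
try:
    combined = base + 'host-b'  # the bug: list + str
except TypeError:
    combined = base + ['host-b']  # the fix: list of one element
```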
When using affinity/anti-affinity policies, we sometimes hit a race
condition on the resources available on the hypervisors. This flag
allows increasing the number of retries, and hence the number of
hosts considered for scheduling, allowing instances to be scheduled
successfully.
This option was taken from the following URL; future work with
placement will help with affinity-aware scheduling:
https://docs.openstack.org/nova/latest/admin/troubleshooting/affinity-policy-violated.html
Signed-off-by: Arif Ali <arif.ali@canonical.com>
Change-Id: I353dbaa38eb0526014888ede27702b428eb66afd
It's useful to force instance creation to fail if ephemeral drives
are requested when cloud admins want users to always use persistent
volumes.
Closes-Bug: #1953561
Change-Id: I1c958b4bcf79512c06de6d81fe55c681bb5f38a7
By default resizing an instance to the same host as the source is
not enabled. This change adds a new charm config option that maps
directly to the nova.conf setting, effectively giving the user the
ability to enable/disable this functionality.
Closes-Bug: #1946620
Change-Id: I13d0c332cd0b110344b7a1645e3e4fd250fce33a
nova-consoleauth was removed for OpenStack >= Train; this change
removes the nrpe check associated with it when
is_consoleauth_enabled() returns False.
Change-Id: I891634fc8001597089312801b29a80336543f5f0
Closes-Bug: #1904650
SSH keys from nova-compute are now shared across all
nova-compute charm apps.
Closes-Bug: #1468871
Change-Id: Ia142eceff56bb763fcca8ddf5b74b83f84bf3539
When hooks that update ssh_keys run, the keys end up duplicated in
the /etc/nova/compute_ssh/* files and on the cloud-compute relations,
because the code that checks whether the keys already exist is
currently not working.
This change fixes the deduplication code and improves
unit tests, while also handling a special case for
specific ubuntu-version scenarios.
Change-Id: I93f9418d5340e7cb599a42970d78557362c1542f
Closes-bug: #1943753
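A sketch of the deduplication idea described above; dedupe_keys is an illustrative helper under assumed names, not the charm's actual function.

```python
# Append only keys not already present, preserving the original
# order so existing known_hosts/authorized_keys lines are stable.
def dedupe_keys(existing, new_keys):
    seen = set(existing)
    result = list(existing)
    for key in new_keys:
        if key not in seen:
            result.append(key)
            seen.add(key)
    return result
```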
This commit exposes allocation ratio configuration via the cloud-compute
relation for backwards-compatible behaviour of the charms when
configuration of allocation ratios is delegated to nova-compute.
Closes-Bug: #1677223
Change-Id: I5a6bd1fa06d06dfd3e49182cc72ad83025429b13
When a cloud is deployed earlier than the Train release, the placement
service is provided by nova-cloud-controller. As part of an upgrade to
Train, the new placement service is added and updates the placement
endpoint in the service catalog. The nova-cloud-controller charm no
longer advertises the placement service URL, but because the data
exists on the relation until removed, the service catalog changes the
placement URL to the placement endpoints advertised from
nova-cloud-controller.
Fix this by explicitly removing the placement service URLs when the
placement service is not provided by nova-cloud-controller.
Change-Id: Ibb3b1429820a4188fe3d2c1142c295c0de4ee24e
Closes-Bug: #1928992
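The fix described above can be sketched as follows; the function and relation key names here are hypothetical assumptions, not the charm's actual code.

```python
# Illustrative: explicitly blank the stale placement endpoint keys on
# the relation (rather than leaving old values behind) whenever this
# charm no longer provides the placement service.
def scrub_placement_urls(relation_data, provides_placement):
    if not provides_placement:
        for key in ('placement_public_url', 'placement_internal_url',
                    'placement_admin_url'):
            relation_data[key] = None  # None clears the key on the relation
    return relation_data
```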
This action should be used to sync the Juju availability zones,
from the nova-compute units, with the OpenStack availability zones.
The action is meant to be used post-deployment by the operator.
It will set up OpenStack aggregates for each availability zone, and
add the proper compute hosts to them.
Co-Authored-By: Billy Olsen <billy.olsen@canonical.com>
Change-Id: Ibd71cd61e51b04599eadf21b3ef46e47544b8814
Communicate to compute services the cross_az_attach config setting.
Since the cross_az_attach setting needs to be applied at the compute
node, update the relation settings to specify the cross_az_attach
policy configured.
Change-Id: I71e97453453d5d091449caf547e68c6455d091cf
Closes-Bug: #1899084
The charm looked for `keystone_juju_ca_cert` on disk
instead of
`/usr/local/share/ca-certificates/keystone_juju_ca_cert.crt`
Synced charm-helpers for
https://github.com/juju/charm-helpers/pull/570
Change-Id: Ib7cfdadc3a75fca951792ef2c2e2b454b1ad021d
Closes-Bug: #1915504
Note that part of this fix belongs in c-h, but let's add it here
as a tactical measure given we are practically frozen.
Enable TLS in the functional test for focal-ussuri and onwards.
Also switch to focal-ussuri as target for smoke.
Drop Trusty/Mitaka as it currently does not pass with symptoms
like https://bugs.launchpad.net/charm-nova-compute/+bug/1861094
Closes-Bug: #1911902
Change-Id: I7b12479ce3afb94a0fb21c26b1ac78736b81aba2
There is a 'hole' in the user experience where if you try to
openstack-upgrade from stein to train but have not already related
placement to the nova-cc unit, then the openstack upgrade is aborted and
a workload message indicates that the relation is needed.
However, if you then subsequently add the placement relation, the
warning goes away, but the payload is not upgraded to match the
openstack-origin value.
This patch adds a warning if the openstack-origin is for train, the
payload is stein, action-managed-upgrade is false and the placement
relation is made; i.e. the operator has fallen into the above hole.
Change-Id: I360f2d72cad374c31ee766065af682e2fa6218d1
Closes-Bug: #1910276
If an ep change trigger is received, then also look for the
catalog_ttl key on the relation. If it is present, wait that long
before restarting services; this allows stale ep entries to expire
from the catalogue before restarting.
Change-Id: Ief2fa8286d9fa8058b7a012ec719776c4dd302f5
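The catalog_ttl handling above can be sketched with a small helper; the name and relation-data shape are illustrative assumptions, and the charm's real implementation may differ.

```python
# Return the number of seconds to wait before restarting services:
# the advertised catalog_ttl when present, zero otherwise.
def restart_delay(relation_data):
    ttl = relation_data.get('catalog_ttl')
    return int(ttl) if ttl else 0
```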
* charm-helpers sync for classic charms
* charms.ceph sync for ceph charms
* rebuild for reactive charms
* sync tox.ini files as needed
* sync requirements.txt files to sync to standard
Change-Id: Ie7640826be5426157c57877348cef43ab6067543
Prior to this change, cell1 was updated when there was a database
or rabbitmq-server relation change, but cell0 wasn't. Ensure that
cell0 is also updated.
Change-Id: I670d0295ea339b21166ef7b18509b04a5beaa959
Closes-Bug: #1892904
This commit introduces a new charm option allowing operators to override
the hardcoded 0.0 that disabled hypervisor demotion on build failures
from pike onward.
In certain environments it may be preferable to retain the upstream
behavior of letting the scheduler work around malfunctional computes and
favor instance building reliability at the cost of a potentially uneven
load distribution.
Change-Id: I2faa5ab8cd505a9d61a9fa26e1b08d16b0c795fb
Closes-Bug: 1892934
Ensure that the public endpoint binding is used to resolve the
path to the SSL certificate and key files as the base access
URL for console access is always via this binding.
Add unit tests to cover the InstanceConsoleContext class.
Change-Id: I27de9445d249b0d670543d250bd02f450764a10f
Closes-Bug: 1871428
The new config option count_usage_from_placement was added to Nova in
the Train release to enable/disable counting of quota usage from the
placement service. A corresponding config parameter is required in
the nova-cloud-controller charm.
This patch introduces the quota-count-usage-from-placement config
parameter in the nova-cloud-controller charm. For OpenStack releases
Train or later, this option is rendered in nova.conf on
nova-cloud-controller units.
func-test-pr: https://github.com/openstack-charmers/zaza-openstack-tests/pull/250
Change-Id: I57b9335b7b6aecb8610a66a59cb2e4e506e76a5e
Closes-Bug: #1864859
Enable PCI Passthrough Filter if neutron-api charm has enabled
support for hardware offloading.
Change-Id: I0ccb4366b03557b316da1507015112c7f378176e
Depends-On: I1f59012ad2d16af18ca310906f6c6b537bb7ec72
As with the shared-db hook, do not run db updates if the database
is in maintenance mode.
Closes-Bug: #1866864
Change-Id: I65619271d8a4215c8d9bf68ad0a86136ad87011c
Allow attaching a volume to an instance in a different availability
zone. If False, volumes attached to an instance must be in the same
availability zone in Cinder as the instance's availability zone in
Nova.
Change-Id: I21df8e0dfa585133c5ef6a55cdbbc2071c267424
Closes-Bug: #1856776
Currently the charm masks all services on db departed. The
consequence is that when a db relation is re-joined, these services
are not started back up. In addition, it stops all services,
including memcached and haproxy, which also do not get restarted on a
db re-join.
This change selectively stops, but does not mask, the services that
pertain directly to nova, so that on db joined the correct services
get started.
Change-Id: I81f59c97b33edd5c3e67c379cfdee8f26509075a
Request to be informed of changes to placement and neutron changes.
If a placement change occurs restart nova-scheduler as it will
cache the old endpoint url and tell nova-compute to restart its
services as they will have done the same.
Change-Id: I7537723e40a5a25672fbbdc2d5c3144724f6240a
Closes-Bug: #1862974
If the database is in maintenance mode, do not attempt to access it.
Depends-On: I5d8ed7d3935db5568c50f8d585e37a4d0cc6914f
Change-Id: I7d5b7a20573b38d12b1ead708ee446472f21e9f8