Creating and immediately deleting a snapshot greater than 50GB
fails. This patch rectifies the issue by passing the
WaitForCopyState parameter into the CreateElementReplica SMI-S
function when creating the snapshot.
Change-Id: I5aa4a77cb8208bb805551074a28d8b722d6eb22c
Closes-Bug: #1662950
(cherry picked from commit 3cdbcba1e2)
Secure connections to the ECOM were not being closed after
use. This fix closes each connection after use. Because pywbem
cannot be mocked, there is no unit test for this fix.
Change-Id: I0730c709b9fdc20410de06ca3f9705c9ea77d098
Closes-Bug: #1689760
(cherry picked from commit 430e8c9fd8)
When live migration is used extensively, a regularly attached
volume can end up belonging to two or more masking views. Because
of this, we did not remove the volume from the storage group,
which is not typical behaviour. In this fix we use a temporary
file to determine whether a terminate_connection call is a regular
detach or part of the live migration process.
Change-Id: Ide38fa21d65859a5516c577a9983124d998a2e95
Closes-Bug: #1684595
(cherry picked from commit 9d2466bb29)
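The temporary-file check described above could be sketched as follows; the marker path and function name are illustrative, not the driver's actual code:

```python
import os

def is_live_migration(marker_path):
    # If the temporary marker file exists, this terminate_connection
    # call is part of a live migration: consume the marker and say so.
    # Otherwise treat the call as a regular detach.
    if os.path.exists(marker_path):
        os.remove(marker_path)
        return True
    return False
```

Because the marker is consumed on first use, a later regular detach of the same volume is not misclassified as part of a migration.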
This patch fixes an issue where errorMessage is never reassigned
from None in the _get_port_group_name_from_mv method, with the
knock-on effect that errors are not picked up in subsequent calls
or conditional checks.
Change-Id: I74a85aaa912a7f4c22abbf48f0a5d1bd4538c098
Closes-Bug: #1686174
(cherry picked from commit b44f55c516)
If there is more than one masking view on the same compute node,
each with a different port group, the code did not always pick the
correct masking view to extract the port group from. This can
happen when a system is pre-zoned for FC.
Change-Id: I6787f9415d97ce5988984f3aeac05c02c5217aac
Closes-Bug: #1682176
When using the NFS backup driver, doing multiple backups with the
same container overwrites older backups.
The issue comes from a misunderstanding in the Posix backup driver of
the purpose of the "prefix" metadata used in the ChunkedBackupDriver
base class.
This prefix names the backup objects to store, but unlike the prefix
for the volumes it must be unique, because the base driver only adds
numbers to identify each chunk (for the volume we add the volume id).
Unfortunately the Posix driver assumed that the prefix had the same
meaning as the prefix for volumes, making backups overwrite one
another.
This patch changes the prefix generated by the Posix driver so we have
the following format: "volume_$VOL_ID_$TIMESTAMP_backup_$BACK_ID", thus
allowing multiple backups in the same container.
The new name is backward compatible with existing backups because the
new prefix will only be used on new backups as the prefix for already
existing backups is stored in the DB.
Change-Id: I2903c27633facde6370d95ba0b9e06025ccaef26
Closes-Bug: #1628768
(cherry picked from commit 535e717970)
(cherry picked from commit 640b9dc2b7)
After image cloning the NFS client cache needs to be refreshed.
This can be accomplished by touching the directory hosting the
cached image file.
See also: https://bugs.launchpad.net/nova/+bug/1617299
Co-Authored-By: Sebastian Schee <sebastian.schee@sap.com>
Co-Authored-By: Goutham Pacha Ravi <gouthampravi@gmail.com>
Closes-bug: #1679716
Change-Id: If392f41f65978721668b53cfab94393f074d24e9
(cherry picked from commit ff6acd62ec)
(cherry picked from commit cd3c1c2c37)
Current code retries three times for deactivation to complete,
waiting one second between retries. On some heavily loaded
systems, and apparently the gate, this is not enough time for
the operation to complete.
This attempts to work around those slower systems by increasing
our retries to 5 and adding a backoff of 2 seconds to give it
more time.
Change-Id: I4f40a1984fe828c8ff965033f7e25b1d7516ab1e
Closes-bug: #1687044
(cherry picked from commit 5744301777)
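The new policy (5 retries, 2-second backoff) could be sketched with a generic retry decorator; this is illustrative, not Cinder's actual retry utility:

```python
import time

def retry(exceptions, retries=5, backoff=2):
    # Retry the wrapped operation up to `retries` times, sleeping
    # `backoff` seconds between attempts; re-raise on final failure.
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == retries - 1:
                        raise
                    time.sleep(backoff)
        return wrapper
    return decorator
```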
Unlike fetch_to_volume_format, fetch_verify_image has no try...except
around qemu_img_info, so it prevents a raw image from being used when
qemu-img is not installed.
This change adds a try...except around qemu_img_info for
fetch_verify_image and separates this into its own routine.
This fixes the failure when creating a volume from a raw, non-qemu
image on a system without the qemu packages.
Change-Id: I3aaf43a453e7096161780d9bfc2515c66a3a9f2c
Closes-Bug: #1674771
(cherry picked from commit 70a0cc921f)
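The guard described above amounts to tolerating a missing qemu-img binary; a minimal, illustrative sketch (the real code lives in cinder.image.image_utils and differs in detail):

```python
import subprocess

def try_img_info(path, cmd=('qemu-img', 'info')):
    # Run the image-info command; if the binary is missing, running it
    # raises OSError, and we return None so callers can fall back to
    # treating the image as raw instead of failing outright.
    try:
        return subprocess.check_output(list(cmd) + [path])
    except OSError:
        return None
```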
* In case the new volume is larger than the cloned one,
resize the cloned volume.
* Added unit tests for successful and failed resize.
Co-Authored-By: Xinli Guan <xinli@us.ibm.com>
Change-Id: I2346049c2177a9497750c05c0eb9e7edf8c12c22
Closes-Bug: #1554778
(cherry picked from commit 105e625328)
We should update the 'deleted_at' column when
soft-deleting records; otherwise the 'db purge' command
fails because it depends on the 'deleted_at' column.
Closes-Bug: #1671354
Change-Id: Ib302488d3a007df09a2e7ece40488c00bf732119
(cherry picked from commit f9012466db)
(cherry picked from commit 1695e7c0cb)
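A minimal sketch of the soft-delete behaviour described above, treating a record as a plain dict for illustration:

```python
from datetime import datetime

def soft_delete(record, now=None):
    # Mark the row deleted and stamp 'deleted_at' so 'db purge'
    # (which filters on deleted_at) can later find and remove it.
    record['deleted'] = True
    record['deleted_at'] = now or datetime.utcnow()
    return record
```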
Fixed as part of:
VMAX driver - Implement volume replication for VMAX
Volume replication provides a disaster recovery solution for cases
where a catastrophic event in your data centre has affected the
VMAX array.
Change-Id: I2aafe564cdb31895756b4b8884af2635b054ae59
Implements: blueprint add-vmax-replication
(cherry picked from commit 67a2178eb4)
When Live migrating from one compute node to another the connection
drops and requires the instance to be rebooted. To prevent this
from happening we need to share the storage group and port group
between masking views.
Change-Id: I1483ca38362c5ff1724940c2abf1179e75e02c8e
Closes-Bug: #1676459
(cherry picked from commit 069dd5b80d)
This fix avoids logging an exception when a user
chooses to use an SVM scoped account. The cDOT
driver requires cluster scoped privileges to
gather backend statistics, performance
counters, etc. These APIs are not available for
SVM scoped credentials.
Closes-Bug: #1660870
Change-Id: If2e3bae98db225ff0cfc9e868eaaeef088135562
(cherry picked from commit 0f9b6e9ac2)
(cherry picked from commit 109177a366)
Users have reported that the current CPU limit is not
sufficient for processing large enough images when
downloading images to volumes.
This mirrors a similar increase made in Nova (b78b1f8ce).
Closes-Bug: #1646181
Change-Id: I5edea7d1d19fd991e51dca963d2beb7004177498
(cherry picked from commit 52310fa864)
(cherry picked from commit 2a5e0086e1)
Currently the _create_image_volume() method in solidfire.py doesn't
catch exceptions from initialize_connection() or connect_volume(),
so when something goes wrong at this stage, which is as critical as
image conversion, the empty volume created as the image cache is not
deleted. That invalid image cache then causes all subsequent volume
creations from the same image to fail silently.
This change makes a simple adjustment to the code to ensure that
errors occurring while connecting the volume to the controller are
handled.
Change-Id: Idd192be86a83dc341431593ab27bed9178cc36d1
Closes-bug: #1663782
(cherry picked from commit 55c1863e1d)
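The shape of the fix can be sketched generically: create the cache volume, then delete it if the connect step fails, so a broken cache entry is never left behind. The function and callback names are illustrative:

```python
def create_image_volume(create_volume, connect_volume, delete_volume):
    # Create the image-cache volume, then make sure it is cleaned up
    # if connecting to it fails, so no orphaned cache entry remains.
    vol = create_volume()
    try:
        connect_volume(vol)
    except Exception:
        delete_volume(vol)
        raise
    return vol
```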
For each existing masking view we check whether the initiator group
matches the initiators of the compute host. This is a very
expensive check and should be turned off by default. To turn
it back on, an initiator_check flag can be set in cinder.conf.
Change-Id: Ia0677bafe9d586e9e65cd0d63924259f9a2e6ee8
Closes-Bug: #1663312
(cherry picked from commit 81aae7f712)
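Assuming the option is a boolean as described, re-enabling the check might look like this in cinder.conf (the backend section name is illustrative):

```ini
[vmax_backend]
# Illustrative: re-enable the expensive initiator-group check.
initiator_check = True
```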
While executing the image utils methods, we need to take
care not to use root privileges if the environment is running
in 'secure' mode.
This is a partial back-port of the fix committed in
Iae35c722eb4b6b7d02a95690abbc07a63da77ce7 in ocata.
Change-Id: I8a4912d10797ac8521acc27d02b97b00f8efdb7a
Closes-Bug: #1649209
There is an issue around the locking of storage groups.
Locking is insufficient when multiple processes may
be adding and removing volumes from the same storage group,
e.g. deleting the last replicated volume while another
process is creating one. This patch rectifies the issue.
Change-Id: I0138ace1e3d8f1e62d5422864481221915907a25
Closes-Bug: #1660374
(cherry picked from commit 84d463380a)
Creating a cloned volume can fail when the source volume
is bigger than 50 GB. This occurs when we try to unlink the
source from target before all the tracks are copied.
This patch fixes the issue by adding a 'WaitForCopyState'
parameter to the CreateElementReplica SMI-S function.
Change-Id: Iac8ce48dda6f21a49838e4ca900a4f3ba1a9b2a7
Closes-Bug: #1660378
(cherry picked from commit d43395d0af)
This avoids redundant storage group (SG) creation during volume
attachment, benefiting overall driver performance.
Closes-bug: #1657964
Change-Id: I1d96468248d512b59f77ab93d486efb3ff29d6ca
(cherry picked from commit 2d5bad7928)
Add a simple script to set up the mysql and postgresql databases. The
script can be run by users during testing and will be run by CI systems
for specific setup before running unit tests. This is exactly what
OpenStack CI currently does in project-config.
This allows changing the python-db jobs in project-config to
python-jobs, since python-jobs will call this script initially.
See also
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107784.html
Update devref for this.
Needed-By: I628f825d9200f7951eae46b7be78b111b1a8141c
Change-Id: If549e6f108ec6184a432d5050da4386efe794a4e
(cherry picked from commit 1c91c0f033)
1. Backup of an iSCSI volume with CHAP enabled fails because the
driver chooses the online-copy code path, where the volume copy
runs as a background process on the 3PAR array; we do not wait
for that process to complete, so setting any attribute on the
cloned volume fails during the online copy of data.
This patch ensures the non-online-copy code path is chosen for
the above scenario.
2. Backup of an attached volume fails with a snapCPG error.
Sometimes, during an online clone operation, we set snapCPG and
userCPG on the cloned volume, but when we then read the CPG of
the cloned volume we get an error, because these attributes are
not set for the cloned volume on the 3PAR array.
This patch ensures that if the CPG is not available for the volume
from the 3PAR array, the default value from the 'host' attribute
of the volume parameter is used.
Change-Id: I8dab8fd4e56c38557c46e4ae9a01fb6fead2a2a8
Closes-Bug: #1644238
Closes-Bug: #1646396
(cherry picked from commit 6543c0e13d)
QoS key value was changed from 'qos_specs' to 'qos_spec' in a
feedback comment in _initial_setup. The side effect of this
was that QoS was never enabled. Please refer to
https://review.openstack.org/#/c/307502/
Change-Id: I32a203c1a29e214656f6684268266df589db67bd
Closes-Bug: #1656029
(cherry picked from commit d4d88f31c6)
There is a threading issue in Heat on stack delete: all
detaches happen simultaneously. The fix locks the portion
of code that determines the number of volumes left in the
storage group at any given time, to prevent attempting to
remove a volume that is the last one in the storage group.
Change-Id: I84164a9abac35d962408febbe8d3af759beb6a94
Closes-Bug: #1630535
(cherry picked from commit d9ccfaef8e)
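The locking described above can be sketched with a module-level lock serializing the count-and-remove step; the names are illustrative, not the driver's actual code:

```python
import threading

_sg_lock = threading.Lock()

def remove_from_storage_group(sg, volume):
    # Serialize removals so the "volumes remaining in the group" count
    # stays accurate even when Heat detaches many volumes at once.
    with _sg_lock:
        sg.remove(volume)
        return len(sg)
```

A caller seeing a return value of 0 knows it just removed the last volume and can safely tear down the storage group.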
The NetApp Data ONTAP drivers contain a minor calculation error
that can lead to misstated controller utilization values used by
the filter & goodness functions.
Closes-Bug: #1631460
Change-Id: Idf6a33a388e732acd2f3154ac7d0a6490f8cf88e
(cherry picked from commit f1bbaea40a)
If a reschedulable failure happens in the driver after the volume was
created on the backend, the volume gets orphaned and the only way to
remove it is through backend internals. This commit adds an additional
delete_volume call to the driver after the volume is rescheduled, to
make sure no volumes get orphaned.
Change-Id: Idd86a4842bdc6ecf0cabbeff0a9c9704e030302a
Closes-Bug: 1561579
(cherry picked from commit 7f09229d6c)
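The added cleanup step can be sketched as follows; the helper and its callbacks are illustrative, not the actual Cinder flow code:

```python
def reschedule_with_cleanup(driver, volume, reschedule):
    # Before handing the request back to the scheduler, ask the driver
    # to delete whatever it may have created so nothing is orphaned.
    try:
        driver.delete_volume(volume)
    except Exception:
        # Cleanup is best-effort; rescheduling proceeds regardless.
        pass
    reschedule(volume)
```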
The amount of query data can be large enough to cause REST query
timeouts.
This commit fixes that by matching on name in the REST query.
Change-Id: Ied73574658403ed04cb238c953b11d56c9d35ba1
Closes-Bug: #1633956
(cherry picked from commit d4ae67db31)