The previous method checked whether a line containing "source file"
existed in the dumpxml of every virtual machine, in order to extract
the UUIDs of the virtual machines running on the hypervisor. But this
line only exists when a disk file is present on the host;
boot-from-volume instances, for example, have no "source file" line
in their dumpxml, so the script would kill them.
To avoid these cases, 'virsh list --uuid --all' can be used to list
the UUIDs of nova instances directly.
This change avoids shutting down running customer virtual machines.
This also fixes handling of --noop by correcting the condition checking
for arguments.
Change-Id: Ib9ec04a37bfe69e323f14f6bf4cb72b0fa818803
Closes-Bug: #1816434
Signed-off-by: Davide Panarese <dpanarese@enter.it>
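The fix can be sketched roughly as follows. This is a minimal illustration rather than the actual script: the blank-line cleanup helper is an assumption about how the virsh output would be post-processed, and the virsh call is guarded since it needs a libvirt host.

```shell
# Minimal sketch of the fixed approach (assumes virsh is on the PATH).
# `virsh list --all --uuid` prints one UUID per line for every libvirt
# domain -- including boot-from-volume guests, which have no
# "source file" element in their dumpxml.
clean_uuid_list() {
    # drop the blank lines virsh appends to its UUID listing
    sed '/^$/d'
}

if command -v virsh >/dev/null 2>&1; then
    uuids=$(virsh list --all --uuid | clean_uuid_list)
    echo "$uuids"
fi
```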
The commands used by constraints need at least tox 2.0. Update to
reflect reality, which should help with running the constraints
targets locally.
Change-Id: I2d757bb73f45bcbde86d2dc9bd960365010cc041
Closes-Bug: #1801462
compute_node_stats was removed in Liberty (commit:
8a7b95dccdbe449d5235868781b30edebd34bacd)
Change-Id: I1cdf7e1a0a9ac686724f0d21d769551980b660b0
This script can be useful when a large number of messages are
backlogged and we do not want to delete the full mnesia
database, but only want to selectively clear off certain queues.
Change-Id: I0ecdeea3f4079c90ce9bc0bb31b2b4f0f55c313b
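A rough sketch of the selective-purge idea. The helper and the 10000-message threshold are illustrative assumptions, not taken from the script itself; the rabbitmqctl calls are guarded since they require a running broker.

```shell
# Print queues whose backlog exceeds a threshold, given
# `rabbitmqctl list_queues name messages` output on stdin
# (one "name count" pair per line).
queues_over() {
    awk -v limit="$1" '$2 + 0 > limit { print $1 }'
}

# Illustrative threshold; purge_queue drops all messages from a queue.
if command -v rabbitmqctl >/dev/null 2>&1; then
    rabbitmqctl list_queues name messages | queues_over 10000 |
        while read -r q; do rabbitmqctl purge_queue "$q"; done
fi
```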
In order to fix bug #1534660, change 269530 splits many overly long
lines in Bash scripts using backslash continuations. But in some
cases, these backslashes were inserted within command arguments that
are interpreted as SQL expressions, where they cause syntax errors.
This change splits the corresponding lines differently, so that the
backslashes are no longer passed in SQL expressions.
Change-Id: I4a8940b6fe9ce8563315cd0cc9a9529a02f8cdb8
Closes-Bug: 1596193
Related-Bug: 1534660
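The failure mode can be reproduced in isolation: inside single quotes a continuation backslash is literal, so it reaches the SQL parser. A safe split continues the shell command outside the quoted SQL. The mysql invocation and table name below are illustrative, not the repo's actual query.

```shell
# BROKEN: the backslash sits inside the single-quoted SQL string,
# so MySQL would receive a literal "\" and report a syntax error.
bad_sql='SELECT COUNT(*) \
    FROM nova.instances'

# SAFE: continue the *shell* command line outside the quotes instead,
# keeping the SQL expression itself on a single line.
run_count() {
    mysql --batch \
        -e 'SELECT COUNT(*) FROM nova.instances'
}
```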
This script decodes the information in /proc/cpuinfo and
produces a human readable version displaying:
- Total number of physical CPUs
- Total number of logical CPUs
- Model of the chipset
Change-Id: Ie30ff236fe6dbe61fa247762631072d5a037e110
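A condensed sketch of the parsing logic; the real script may differ, and the field names assume an x86-style /proc/cpuinfo layout.

```shell
# Summarise /proc/cpuinfo-style input from stdin: distinct "physical id"
# values give the physical CPU count, "processor" entries the logical
# count, and the last "model name" seen is reported as the chipset model.
cpuinfo_summary() {
    awk -F': *' '
        /^physical id/ { if (!($2 in phys)) { phys[$2] = 1; nphys++ } }
        /^processor/   { nlog++ }
        /^model name/  { model = $2 }
        END {
            printf "physical CPUs: %d\n", nphys
            printf "logical CPUs: %d\n", nlog
            printf "model: %s\n", model
        }'
}

if [ -r /proc/cpuinfo ]; then
    cpuinfo_summary < /proc/cpuinfo
fi
```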
There are times when you might want to enable or disable neutron agent
admin state en masse. This tool lets you enable/disable any of the
main neutron agents.
Change-Id: I9bc4b38b7e6680359141624a64fbb16bfafdae3e
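A hedged sketch of the bulk-toggle idea using the unified openstack CLI (`network agent set --enable/--disable`); the helper function and the agent-listing pipeline in the comment are assumptions about how the IDs would be gathered, not the tool's actual code.

```shell
# Flip admin state for a batch of Neutron agents.
# $1 is --enable or --disable; agent IDs arrive one per line on stdin.
toggle_agents() {
    state=$1
    while read -r agent_id; do
        [ -n "$agent_id" ] && openstack network agent set "$state" "$agent_id"
    done
}

# Example (not run here): disable every L3 agent.
# openstack network agent list --agent-type l3 -f value -c ID | toggle_agents --disable
```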
If OS_REGION_NAME is set in the environment, pass it to the API/CLI
along with the other credentials. This allows selecting a non-default
region for multi-region clouds.
Change-Id: I19ed33fe1428b97b6aa526fc19b6b592e1a11ca9
Closes-Bug: 1540535
Signed-off-by: Simon Leinen <simon.leinen@gmail.com>
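The pattern reduces to: emit the extra flag only when the variable is set. `--os-region-name` is the standard client flag; the helper and the example command are illustrative, not the script's literal code.

```shell
# Emit "--os-region-name <region>" only when OS_REGION_NAME is set,
# so single-region setups keep their default behaviour.
region_args() {
    if [ -n "${OS_REGION_NAME:-}" ]; then
        printf -- '--os-region-name %s' "$OS_REGION_NAME"
    fi
}

# Example (illustrative): openstack $(region_args) server list
```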
I've ported this tool to work with Juno and thought I'd share this for
all to use.
We've done basic functional testing, an internal code review and used it
against our production cloud. I would encourage further review and
testing, but this should be usable for others.
The backend changes in Juno required more changes than I would have
liked and the code does throw the odd exception (handled by the code)
when run against every project in our production system, but this hasn't
caused any issues for us.
Change-Id: I3913251ede4949149e63d1337bd92dd836f98763
Fixing https://bugs.launchpad.net/osops/+bug/1534660. Some of these
fixes are not pretty, and I've not been able to verify that the tools
still work.
I think the bashate rules should be relaxed for operations tools ... or
people shouldn't use bash for such tools. Sometimes it's pretty
difficult to shorten lines and still have readable code.
Co-Authored-By: Peter Jenkins <mail@peter-jenkins.com>
Co-Authored-By: Mike Dorman <mdorman@godaddy.com>
Change-Id: I70cfc2420cc9a2a4ec553ab7b7ca43a7fc38a9f0
listorphans.py lists orphaned Neutron objects. 'Orphans' in this
context are objects which OpenStack knows about and manages but which do
not have a valid project (tenant) ID.
The previous version was very inefficient: for every object being
checked, it made a discrete Keystone API call to see whether the
associated tenant ID was valid. For an installation of any
reasonable size, e.g. one with hundreds of Neutron routers, this
method was particularly slow.
The script has been updated to first build a list of all tenant IDs, and
then for every Neutron object check project ownership validity against
this list instead.
Output has also changed slightly to print a list of discovered
orphans, simplifying workflows, e.g. when piping to another command
that cleans up these objects.
Closes-Bug: #1515300
Change-Id: I72ca84fe48beb623d43ee446a32ea1bb30730bcc
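The core of the optimization is one bulk fetch followed by cheap membership tests. A minimal shell rendering of that idea (the script itself is Python and uses the API; the helper and the commented commands here are only illustrative):

```shell
# Membership test: is tenant id $1 present in the newline-separated
# list of known ids $2? grep -xF matches the whole line as a fixed
# string, so UUIDs need no escaping.
is_known_tenant() {
    printf '%s\n' "$2" | grep -qxF "$1"
}

# Illustrative flow (commands assumed, not run here):
# tenants=$(openstack project list -f value -c ID)     # one bulk fetch
# for each Neutron object:
#     is_known_tenant "$owner" "$tenants" || echo "orphan: $object_id"
```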
listorphans.py lists certain 'orphaned' objects - routers, floating IPs,
subnets, and networks - present in Neutron. Orphans in this context are
objects that exist but whose project ID is no longer valid, e.g.
tenants that have been deleted.
Change-Id: I41ea6f115d0b7a1a84e7f23005d333d39b800beb
Add ability to pass the --nosafe-auto-increment flag to
pt-archiver, which may be necessary in some situations.
Change-Id: I841193dc97b36ad365eee89447b33f3e96cd1c41
Add support for archiving more tables in the nova database
that must be done before the nova.instances table, due to
foreign key constraints.
Change-Id: Ie433caa96de898d5cf64a0b03ac4681a410f5dfa
Add options so the user may specify the hostname, database name,
username, and password for the openstack_db_archive scripts.
Also fix some minor whitespace issues.
Change-Id: Ib4b2f8282db6c9958d2e0a7d5d72abd4a192de32
orphaned_volumes.sh is a script that gets a list of all volumes and
their owner (as reported by cinder) and compares it to a list of all
tenants (as reported by keystone). If any volume has an owner who does
not exist in keystone, it is returned in the output. Orphaned cinder
volumes can occur when a tenant is deleted but still had volumes
provisioned.
Change-Id: I3e20826be54595b4a9a35aaac95be881ce658fa0
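The comparison described above is a set difference between volume owners and existing tenants. A sketch of that core step; the helper name is hypothetical, and how the two input lists are gathered from cinder and keystone is left to the real script.

```shell
# Print owner ids that appear in the volume list but not in the
# tenant list. $1 = volume owner ids, $2 = valid tenant ids
# (whitespace- or newline-separated).
orphaned_owners() {
    # Tag tenants with "T" and owners with "O", then print each owner
    # exactly once if no tenant line marked its id as valid.
    { printf 'T %s\n' $2; printf 'O %s\n' $1; } |
        awk '$1 == "T" { valid[$2] = 1 }
             $1 == "O" && !($2 in valid) && !seen[$2]++ { print $2 }'
}

# Inputs would come from cinder (owners) and keystone (tenants),
# as described in the commit message above.
```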
orphaned_vms.sh is a script that searches through the current VMs (as
reported by nova) and retrieves their owners. Then, it cross-checks
that list against a tenant listing (as reported by keystone). Any VM
whose owner does not exist in keystone is assumed to be orphaned. This
can happen if a tenant is deleted while it still has VMs online.
Change-Id: I880b66e6d303e3348ac1d7fde1762633ae9ac07a
This simple script will show you where you have qemu processes running
that are no longer managed by nova. If you find any, you should
alert the customers, who may still be using these VMs. Ghost VMs use
compute/memory resources on your compute hosts, so it's a good idea to
find and remove them.
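One way to surface candidates, sketched under the assumption that the guests were started with `-name guest=<instance>,...` (the modern libvirt convention; older qemu command lines use a bare `-name <instance>`, which this sed pattern would miss):

```shell
# Extract guest names from qemu command lines read on stdin.
qemu_guest_names() {
    sed -n 's/.*-name guest=\([^,]*\).*/\1/p'
}

# Candidates: every qemu process on this host. Compare the result with
# what nova / `virsh list --all` believes should be running here; names
# qemu knows but nova does not are the ghosts. ("|| true" keeps the
# pipeline quiet on hosts with no qemu processes.)
ps -eo args | grep '[q]emu' | qemu_guest_names || true
```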