Automatically prune the inventory backup

The inventory backup process takes the running inventory JSON file and
appends it to a tar archive. This process has no limit: files are added
to the archive until the underlying operating system can no longer
accommodate it. This change automatically prunes the backup file,
retaining only the last 15 inventory files. This provides the same
backup capability we have had, without saving archives indefinitely.

> It should be noted that this change is using a subprocess call to
  prune the tar file. This is being done because the "tarfile" library
  does not provide an interface for deleting a file within an archive.
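
For context, the only pure-Python alternative is to rewrite the whole
archive, since `tarfile` can read and append but cannot delete in place.
A minimal sketch of that approach (the `prune_tar_archive` name and
`keep` parameter are illustrative, not part of this change):

```python
import os
import tarfile
import tempfile


def prune_tar_archive(archive_path, keep=15):
    """Rewrite ``archive_path`` so only the last ``keep`` members remain.

    The stdlib ``tarfile`` module has no delete operation, so the
    pure-Python workaround is to copy the members we want into a fresh
    archive and swap it into place.
    """
    with tarfile.open(archive_path, 'r') as src:
        # getmembers() preserves append order, so the tail is the newest.
        survivors = src.getmembers()[-keep:]
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(archive_path))
        os.close(fd)
        with tarfile.open(tmp_path, 'w') as dst:
            for member in survivors:
                fileobj = src.extractfile(member) if member.isfile() else None
                dst.addfile(member, fileobj)
    # Atomically replace the old archive with the pruned copy.
    os.replace(tmp_path, archive_path)
```

The commit avoids this rewrite-and-swap pattern in favor of shelling out
to GNU tar, which can delete members from an uncompressed archive in
place.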

Change-Id: Ida5a9be0d0910c223fe05401bc4f75aef100e456
Closes-Bug: #1750233
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This commit is contained in:
Kevin Carter 2018-05-02 23:29:50 -05:00 committed by Shannon Mitchell
parent 271d67b4a0
commit ac28ad1329
2 changed files with 23 additions and 1 deletions


@@ -21,10 +21,12 @@ import datetime
 import json
 import logging
 import os
-from osa_toolkit import dictutils as du
+import subprocess
 import tarfile
 import yaml
+
+from osa_toolkit import dictutils as du

 logger = logging.getLogger('osa-inventory')
@@ -154,6 +156,19 @@ def _make_backup(backup_path, source_file_path):
         'backup_openstack_inventory.tar'
     )
     with tarfile.open(inventory_backup_file, 'a') as tar:
+        # tar.getmembers() is always ordered with the
+        # tar standard append file order
+        members = [i.name for i in tar.getmembers()]
+        if len(members) > 15:
+            with open(os.devnull, 'w') as null:
+                for member in members[:-15]:
+                    subprocess.call(
+                        ['tar', '-vf', inventory_backup_file,
+                         '--delete', member],
+                        stdout=null,
+                        stderr=subprocess.STDOUT
+                    )
+
         basename = os.path.basename(source_file_path)
         backup_name = _get_backup_name(basename)
         tar.add(source_file_path, arcname=backup_name)
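
The subprocess call in the hunk above drives GNU tar's `--delete`
operation. Run by hand against a throwaway archive, the equivalent looks
like this (paths and member names are illustrative; `--delete` requires
GNU tar and an uncompressed archive):

```shell
set -e
workdir="$(mktemp -d)"
cd "$workdir"

# Build a small archive with three "inventory" members.
for i in 1 2 3; do echo "{}" > "inv-$i.json"; done
tar -cf backup.tar inv-1.json inv-2.json inv-3.json

# Delete the oldest member in place, as the Python code does via subprocess.
tar -vf backup.tar --delete inv-1.json

tar -tf backup.tar   # lists inv-2.json and inv-3.json
```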


@ -0,0 +1,7 @@
---
issues:
  - We are limiting the tarred inventory backups to 15, in addition to changes
    that only create backups when the config has changed. These changes
    address an issue where the inventory was corrupted by parallel runs on
    large clusters.