Clean up docs and specs directories

This patch proposes to clean up unused or redundant docs
as well as moving config-generator to the etc dir

Change-Id: I37d4c4ad1d46ee273575021ca90d5b31121ce9e8
Trinh Nguyen 2018-09-05 10:03:07 +09:00
parent e7f6b83833
commit 1467df83a6
18 changed files with 4 additions and 1233 deletions

View File

@@ -30,12 +30,12 @@ oslo.config
- After adding new options to freezer-scheduler please use the following command to update the sample configuration file::
oslo-config-generator --config-file config-generator/scheduler.conf
oslo-config-generator --config-file etc/config-generator.conf
- If you added support for a new oslo library, you have to edit the following file adding a new namespace for the new oslo library:
for example adding oslo.db::
# edit config-generator/scheduler.conf
# edit etc/config-generator.conf
[DEFAULT]
output_file = etc/scheduler.conf.sample
wrap_width = 79
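For example, after adding oslo.db support as described above, etc/config-generator.conf might look like the following (a sketch: output_file and wrap_width are taken from the hunk above, and the namespace entry is the illustrative oslo.db example; the real file would list further namespaces)::
[DEFAULT]
output_file = etc/scheduler.conf.sample
wrap_width = 79
namespace = oslo.db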

View File

@@ -1,18 +0,0 @@
Install instructions
====================
Please check README.rst for further installation instructions.
Install from sources
--------------------
You now have the freezer-agent tool installed in /usr/local/bin/freezer-agent.
Please execute the following command to see all the available options::
$ freezer-agent --help [...]
Please read README.rst or HACKING.rst to see the requirements and more
technical details about how to run Freezer.
Thanks, The Freezer Team.

View File

@@ -1 +0,0 @@

View File

@@ -46,7 +46,7 @@ extensions = ['sphinxcontrib.apidoc',
]
config_generator_config_file = (
'../../config-generator/scheduler.conf')
'../../etc/config-generator.conf')
sample_config_basename = '_static/freezer'
# autodoc generation is a bit aggressive and a nuisance

View File

@@ -586,7 +586,7 @@ To get an updated sample of the freezer-scheduler configuration use the following command
.. code:: bash
oslo-config-generator --config-file config-generator/scheduler.conf
oslo-config-generator --config-file etc/config-generator.conf
The updated sample file will be generated in etc/scheduler.conf.sample

Binary files not shown: six deleted images (49 KiB, 22 KiB, 33 KiB, 35 KiB, 70 KiB and 49 KiB).

View File

@@ -1,652 +0,0 @@
=============
Test Scenario
=============
Summary
=======
* Intro
* 1. Setup Devstack machine with swift and elasticsearch
* 2. Setup the client machine
* 3. Freezer test scenarios
* 4. Automated Integration Tests
Intro
=====
Test environment nodes layout:
1) Server Machine (192.168.20.100)
* devstack
* elasticsearch
* freezer api
2) Client Machine(s)
* freezerc
* freezer-scheduler
Freezer requires Python version 2.7. Using a virtualenv is also recommended.
The devstack, elasticsearch and freezer-api services can also be distributed
across different nodes.
In this example the server machine has a (main) interface with ip 192.168.20.100
1. Setup Devstack machine with swift and elasticsearch
======================================================
1.1 devstack
------------
Install devstack with swift support. See https://docs.openstack.org/devstack/latest/
::
$ cat local.conf
LOGDAYS=1
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
ADMIN_PASSWORD=quiet
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
SWIFT_REPLICAS=1
SWIFT_DATA_DIR=$DEST/data/swift
SWIFT_HASH=66a3d6b56c1f479c8b4e70ab5c2000f5
enable_service s-proxy s-object s-container s-account
1.1.1 OpenStack variables
-------------------------
Quick script to set OS variables:
::
$ cat > ~/os_variables
#!/bin/bash
# optional parameters:
# 1 - os_tenant_name
# 2 - os_user_name
# 3 - os_auth_url
#
export OS_TENANT_NAME=${1:-admin}
export OS_USERNAME=${2:-admin}
export ADDR=${3:-192.168.20.100}
export OS_REGION_NAME=RegionOne
export OS_PASSWORD=quiet
export OS_AUTH_URL=http://${ADDR}:5000/v2.0
export OS_TENANT_ID=`openstack project list | awk -v tenant="$OS_TENANT_NAME" '$4==tenant {print $2}'`
env | grep OS_
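The script can then be sourced, optionally passing the tenant and user (the fproject/fuser pair is created in section 1.1.3 below)::
source ~/os_variables fproject fuser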
1.1.2 Register freezer service in keystone
------------------------------------------
::
export my_devstack_machine=192.168.20.100
openstack user create --password FREEZER_PWD freezer
openstack role add --user freezer --project service admin
openstack service create --name freezer \
--description "Freezer Backup Service" backup
openstack endpoint create --region regionOne \
--publicurl http://${my_devstack_machine}:9090 \
--internalurl http://${my_devstack_machine}:9090 \
--adminurl http://${my_devstack_machine}:9090 backup
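As a quick sanity check that the service and endpoint were registered (using the standard openstack CLI; the exact output format depends on the client version)::
openstack service list | grep backup
openstack endpoint list | grep 9090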
1.1.3 Add a user for the tests in keystone
------------------------------------------
::
source os_variables
openstack project create --description "Testing project for Freezer" fproject
openstack user create --password quiet fuser
openstack role add --project fproject --user fuser Member
openstack role add --project fproject --user fuser admin
1.2 Elasticsearch
-----------------
Visit https://www.elastic.co for installation instructions.
Relevant lines in /etc/elasticsearch/elasticsearch.yml
::
cluster.name: elasticfreezer # choose your own
network.host: 192.168.20.100
1.3 Python Virtualenv
---------------------
Not required, but recommended
::
apt-get install virtualenv
virtualenv ~/.venv
source ~/.venv/bin/activate
1.4 Freezer Service
-------------------
1.4.1 Freezer API installation steps and requirements
-----------------------------------------------------
::
cd ~ && source ~/.venv/bin/activate
git clone https://github.com/openstack/freezer.git
cd freezer/freezer_api
pip install -r requirements.txt
python setup.py install
1.4.2 Freezer API Configuration
-------------------------------
::
$ cat /etc/freezer-api.conf
[DEFAULT]
verbose = false
logging_file = freezer-api.log
[keystone_authtoken]
identity_uri = http://192.168.20.100:5000/
www_authenticate_uri = http://192.168.20.100:5000/
admin_user = freezer
admin_password = FREEZER_PWD
admin_tenant_name = service
include_service_catalog = False
delay_auth_decision = False
insecure=true
[storage]
db=elasticsearch
endpoint=http://192.168.20.100:9200
If you plan to use a devstack installation on a different machine, update the
[keystone_authtoken] section with the correct URIs.
The same applies to the elasticsearch endpoint in the [storage] section.
1.4.3 Start API service
-----------------------
Quick-start the API for testing:
::
$ freezer-api 192.168.20.100
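To verify the API is answering, the root URL can be queried (assuming the default port 9090 registered in keystone above)::
curl -i http://192.168.20.100:9090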
2. Setup the client machine
===========================
::
git clone https://github.com/openstack/freezer.git
cd freezer
pip install -r requirements.txt
python setup.py install
3. Freezer test scenarios
=========================
While executing the freezer script it can be useful to monitor the logs:
::
tail -f /var/log/freezer.log /var/log/freezer-scheduler.log
3.1 File system tree backup/restore (no snapshot involved)
----------------------------------------------------------
* backup mode: fs
* directory
* local storage
* no lvm
3.1.1 Setup
-----------
::
mkdir -p ~/test/data_dir ~/test/data_dir/subdir1 ~/test/data_dir/subdir2 ~/test/data_dir_restore ~/test/storage
echo 'alpha bravo' > ~/test/data_dir/file01.txt
echo 'charlie delta' > ~/test/data_dir/subdir1/file11.txt
ln -s ~/test/data_dir/file01.txt ~/test/data_dir/subdir2/link_file01.txt
3.1.2 Backup
------------
::
freezerc --path-to-backup ~/test/data_dir --container ~/test/storage --backup-name my_test_backup --max-level 3 --storage local
# add a file
echo 'echo foxtrot' > ~/test/data_dir/subdir2/file21.txt
# take another backup, level will be 1
freezerc --path-to-backup ~/test/data_dir --container ~/test/storage --backup-name my_test_backup --max-level 3 --storage local
3.1.3 Restore
-------------
::
freezerc --action restore --restore-abs-path ~/test/data_dir_restore --container ~/test/storage --backup-name my_test_backup --storage local
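To verify the restore, the restored tree can be compared with the original::
diff -r ~/test/data_dir ~/test/data_dir_restore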
3.2 Backup apache folder using lvm snapshot and restore on a different machine
------------------------------------------------------------------------------
* backup mode: fs
* directory
* swift storage
* lvm snapshot
The commands need to be executed with superuser privileges, because of
file access rights and also lvm-snapshot creation.
We also need the hostname of the source machine to restore on a
different machine.
::
$ hostname
test_machine_1
Since we're going to use swift, we also need to source the env vars containing our OS credentials.
3.2.1 Check available space for the lvm snapshot
------------------------------------------------
::
# sudo vgdisplay
--- Volume group ---
VG Name freezer1-vg
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 13
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size 49.76 GiB
PE Size 4.00 MiB
Total PE 12738
Alloc PE / Size 11159 / 43.59 GiB
Free PE / Size 1579 / 6.17 GiB
VG UUID Ns35jE-eTAT-dy1j-ArWw-8ztM-Wvw2-3nTJOn
Here we have 6.17 GiB available for lvm snapshots.
3.2.2 Backup
------------
Source the env variables containing the OS credentials. The simple script above
accepts the OS tenant and OS user as parameters:
::
sudo -s
source ~/.venv/bin/activate
source os_variables fproject fuser
freezerc --action backup --container freezer_test_backups --backup-name apache_backup \
--max-level 3 --max-segment-size 67108864 \
--lvm-auto-snap /etc/apache2 \
--lvm-dirmount /var/freezer/freezer-apache2 \
--lvm-snapsize 1G \
--lvm-snapname freezer-apache2-snap \
--path-to-backup /var/freezer/freezer-apache2/etc/apache2
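Once the backup completes, the uploaded objects can be listed in the container with the swift client (the same kind of check used in section 3.5)::
swift list freezer_test_backups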
3.2.3 Restore on a different machine
------------------------------------
We need to use the --restore-from-host parameter because we are restoring on
another machine
::
sudo -s
source ~/.venv/bin/activate
source os_variables fproject fuser
freezerc --action restore --container freezer_test_backups --backup-name apache_backup \
--restore-abs-path /etc/apache2 \
--restore-from-host test_machine_1
3.3 Use an INI config file to back up the /etc/apache2 directory
----------------------------------------------------------------
3.3.1 Execute a backup using a config file
------------------------------------------
::
cat > backup_apache.ini
[job]
action=backup
container=freezer_test_backups
backup_name=apache_backup
max_level=3
max_segment_size=67108864
lvm_auto_snap=/etc/apache2
lvm_dirmount=/var/freezer/freezer-apache2
lvm_snapsize=1G
lvm_snapname=freezer-apache2-snap
path_to_backup=/var/freezer/freezer-apache2/etc/apache2
freezerc --config backup_apache.ini
3.3.2 Execute a restore using a config file
-------------------------------------------
::
cat > restore_apache.ini
[job]
action=restore
container=freezer_test_backups
backup_name=apache_backup
restore_abs_path=/etc/apache2
restore_from_host=test_machine_1
freezerc --config restore_apache.ini
3.4 Incremental backup and restore of mysql using the freezer-scheduler
-----------------------------------------------------------------------
We want to push jobs to be executed on the test machine. For that we need
to know the client_id of the machine we want to execute the jobs on.
When no client_id parameter is provided, the scheduler uses the default value
::
client_id = <tenant_id>_<hostname>
For example, if the tenant_id is 03a81f73595c46b38e0cabf047cb0206 and the host running
the scheduler is "pluto" the default client_id will be 03a81f73595c46b38e0cabf047cb0206_pluto:
::
# openstack project list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 03a81f73595c46b38e0cabf047cb0206 | fproject |
.....
# hostname
pluto
# freezer-scheduler client-list
+-------------------------------------------+----------------------------+-------------+
| client_id | hostname | description |
+-------------------------------------------+----------------------------+-------------+
| 03a81f73595c46b38e0cabf047cb0206_pluto | pluto | |
.....
We are going to use "client_node_1" as a client_id. We are therefore going to start the
scheduler using the parameter
::
-c client_node_1
We also use that parameter when using the freezer-scheduler to interact with the api.
3.4.1 Start the freezer scheduler on the target client machine
--------------------------------------------------------------
Start the scheduler with the custom client_id.
The scheduler connects to the freezer api service registered in keystone.
If there's no api service registered, you need to specify it using
the command-line option --os-endpoint
Since this is a demo, we want the freezer-scheduler to poll the api every 10 seconds
instead of the default 60 seconds, so we use the parameter "-i 10".
::
sudo -s
source ~/.venv/bin/activate
source os_variables fproject fuser
freezer-scheduler -c client_node_1 -i 10 start
You can check that the daemon is running with:
::
# freezer-scheduler status
Running with pid: 9972
Then we clean the /var/lib/mysql folder and leave a terminal open with the
freezer-scheduler logs:
::
rm -rf /var/lib/mysql/*
tail -f /var/log/freezer-scheduler.log
3.4.2 Create the job configuration and upload it to the api service
-------------------------------------------------------------------
Log in to any machine and create the backup job.
Remember to source the OS variables and use the custom client_id
::
source os_variables fproject fuser
cat > job-backup-mysql.conf
{
"job_actions": [
{
"freezer_action": {
"mode" : "mysql",
"mysql_conf" : "/etc/mysql/debian.cnf",
"path_to_backup": "/var/freezer/freezer-db-mysql/var/lib/mysql/",
"lvm_auto_snap": "/var/lib/mysql",
"lvm_dirmount": "/var/freezer/freezer-db-mysql",
"lvm_snapsize": "1G",
"backup_name": "freezer-db-mysql",
"max_level": 6,
"lvm_snapname": "freezer_db-mysql-snap",
"max_priority": true,
"remove_older_than": 90,
"max_segment_size": 67108864,
"container": "freezer_backup_devstack_1"
},
"max_retries": 3,
"max_retries_interval": 10,
"mandatory": true
}
],
"job_schedule" : {
},
"description": "mysql backup"
}
If we want the backup to be executed every day at 3am,
we can specify the following scheduling properties:
::
"job_schedule" : {
"schedule_interval": "1 days",
"schedule_start_date": "2015-06-30T03:00:00"
},
Upload it to the API using the correct client_id
::
freezer-scheduler job-create -c client_node_1 --file job-backup-mysql.conf
The status of the jobs can be checked with
::
freezer-scheduler -c client_node_1 job-list
If no scheduling information is provided, the job will be executed as soon
as possible, so its status will go into "running" state, then "completed".
Information about the scheduling and backup-execution can be found in
/var/log/freezer-scheduler.log and /var/log/freezer.log, respectively.
**NOTE**: Recurring jobs never go into "completed" state, as they go back
into "scheduled" state.
3.4.3 Create a restore job and push it into the api
---------------------------------------------------
If we want to restore on a different node, we need to provide the
restore_from_host parameter.
::
cat > job-restore-mysql.conf
{
"job_actions": [
{
"freezer_action": {
"action": "restore",
"restore_abs_path": "/var/lib/mysql",
"restore_from_host": "test_machine_1",
"backup_name": "freezer-db-mysql",
"container": "freezer_backup_devstack_1"
},
"max_retries": 1,
"max_retries_interval": 10,
"mandatory": true
}
],
"description": "mysql test restore"
}
freezer-scheduler job-create -c client_node_1 --file job-restore-mysql.conf
3.5 Differential backup and restore
-----------------------------------
The difference is in the use of the parameter "always_level": 1
We also specify a different container, so it's easier to spot
the files created in the swift container:
::
swift list freezer_backup_devstack_1_alwayslevel
3.5.1 Backup job
----------------
::
cat > job-backup.conf
{
"job_actions": [
{
"freezer_action": {
"mode" : "mysql",
"mysql_conf" : "/etc/mysql/debian.cnf",
"path_to_backup": "/var/freezer/freezer-db-mysql/var/lib/mysql/",
"lvm_auto_snap": "/var/lib/mysql",
"lvm_dirmount": "/var/freezer/freezer-db-mysql",
"lvm_snapsize": "1G",
"backup_name": "freezer-db-mysql",
"always_level": 1,
"lvm_snapname": "freezer_db-mysql-snap",
"max_priority": true,
"remove_older_than": 90,
"max_segment_size": 67108864,
"container": "freezer_backup_devstack_1_alwayslevel"
},
"max_retries": 3,
"max_retries_interval": 10,
"mandatory": true
}
],
"job_schedule" : {
},
"description": "mysql backup"
}
freezer-scheduler job-create -c client_node_1 --file job-backup.conf
3.5.2 Restore job
-----------------
The restore job is the same as in 3.4.3
::
cat > job-restore.conf
{
"job_actions": [
{
"freezer_action": {
"action": "restore",
"restore_abs_path": "/var/lib/mysql",
"restore_from_host": "test_machine_1",
"backup_name": "freezer-db-mysql",
"container": "freezer_backup_devstack_1_alwayslevel"
},
"max_retries": 1,
"max_retries_interval": 10,
"mandatory": true
}
],
"description": "mysql test restore"
}
freezer-scheduler job-create -c client_node_1 --file job-restore.conf
4. Automated Integration Tests
==============================
Automated integration tests are provided in the
freezer/tests/integration directory.
Since they require external resources - such as swift or ssh storage -
they are executed only when some environment variables are defined.
4.1 local storage tests
-----------------------
Always executed automatically, using temporary local directories under /tmp
(or whatever temporary path is available)
4.2 ssh storage
---------------
The ssh storage tests need the following environment variables to be defined:
::
* FREEZER_TEST_SSH_KEY
* FREEZER_TEST_SSH_USERNAME
* FREEZER_TEST_SSH_HOST
* FREEZER_TEST_CONTAINER
For example:
::
export FREEZER_TEST_SSH_KEY=/home/myuser/.ssh/id_rsa
export FREEZER_TEST_SSH_USERNAME=myuser
export FREEZER_TEST_SSH_HOST=127.0.0.1
export FREEZER_TEST_CONTAINER=/home/myuser/freezer_test_backup_storage_ssh
4.3 swift storage
-----------------
To enable the swift integration tests - besides having a working swift node -
the following variables need to be defined accordingly:
::
* FREEZER_TEST_OS_TENANT_NAME
* FREEZER_TEST_OS_USERNAME
* FREEZER_TEST_OS_REGION_NAME
* FREEZER_TEST_OS_PASSWORD
* FREEZER_TEST_OS_AUTH_URL
For example:
::
export FREEZER_TEST_OS_TENANT_NAME=fproject
export FREEZER_TEST_OS_USERNAME=fuser
export FREEZER_TEST_OS_REGION_NAME=RegionOne
export FREEZER_TEST_OS_PASSWORD=freezer
export FREEZER_TEST_OS_AUTH_URL=http://192.168.56.223:5000/v2.0
The cloud user/tenant must already have been created.
4.4 LVM and MySQL
-----------------
Some tests, like LVM snapshots and access to privileged files, need
to be executed with superuser privileges; such tests are not executed
when run with normal-user privileges.
In cases where LVM snapshot capability is not available (for example,
the filesystem does not make use of LVM or there is not enough space
available) the LVM tests can be skipped by defining the following
env variable:
::
* FREEZER_TEST_NO_LVM
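For example, to skip the LVM tests while running the rest of the suite (assuming the usual OpenStack tox setup; the exact test runner may differ)::
export FREEZER_TEST_NO_LVM=1
tox -e py27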

View File

@@ -1,223 +0,0 @@
# This is a config file example of a freezer job. It can be used
# for backup, restore or any action/job that needs to be executed
# by the freezer client. The naming convention is consistent with
# the option argument metavars provided by the command line: the
# same command line arguments, but with "-" substituted by "_" and
# the leading "--" removed.
# For every single option it is possible to get verbose help
# from the freezer client help (i.e. freezerc --help, freezerc, etc)
# Values that take no arguments can be disabled by using None or
# False.
# Job name
[job:var-log]
# OS auth version, could be 1, 2 or 3
os_auth_ver = 2
# List the Swift objects stored in a container on remote
# Object Storage Server.
list_objects = False
# The Object name you want to download on the local file
# system.
get_object = False
# Suppress verbose output
quiet = False
# Automatically guess the volume group and volume name
# for a given PATH
lvm_auto_snap = False
# Specify the volume group of your logical volume. This
# is important to mount your snapshot volume
lvm_volgroup = False
# Set the date from which you want your data
# restored. Please provide the datetime in format "YYYY-MM-
# DDThh:mm:ss" i.e. "1979-10-03T23:23:23". Make sure the
# "T" is between date and time
restore_from_date = False
# Exclude files, given as a PATTERN. Ex: --exclude
# '*.log' will exclude any file with a name ending in .log
exclude = False
# Set the SQL Server configuration file where freezer
# retrieves the sql server instance. Following is an
# example of config file: instance = <db-instance>
sql_server_conf = False
# The backup name you want to use to identify your
# backup on the storage media
backup_name = freezer-windows-restore-2
# The Swift container used to upload files to or retrieve from
container = freezer-windows-restore
# Disable the incremental feature. By default freezer builds
# the metadata even for a level 0 backup. By setting this
# option, incremental metadata is not created at all.
no_incremental = False
# Set the maximum file chunk size in bytes to upload to
# the storage media. Default 67108864 bytes (64MB)
max_segment_size = 67108864
# Set the lvm volume you want to take a snapshot of
lvm_srcvol = False
# Download bandwidth limit in Bytes per sec. Can be
# invoked with dimensions (10K, 120M, 10G)
download_limit = -1
# Set the hostname to execute actions on. If you are executing
# freezer from one host but you want to delete objects
# belonging to another host, then you can set this option to
# that hostname and execute the appropriate actions. Default:
# the current node hostname.
hostname = False
# Checks the specified container and removes objects
# older than the provided datetime in the form
# "YYYY-MM-DDThh:mm:ss i.e. "1974-03-25T23:23:23".
# Make sure the "T" is between date and time
remove_from_date = False
# Restart the backup from level 0 after n days. Valid
# only if the --always-level option is set. If --always-
# level is used together with --remove-older-than, there
# is a chance that the initial level 0 will be removed
restart_always_level = False
# The file name used to save the object on your local
# disk
dst_file = False
# Follow hard and soft links and archive and dump the
# files they refer to. Possible options are {None,soft,hard,all}
dereference_symlink = None
# Set the hostname used to identify the data you want to
# restore from. If you want to restore data in the same
# host where the backup was executed just type from your
# shell: "$ hostname" and the output is the value that
# needs to be passed to this option. Mandatory with action restore
restore_from_host = False
# Config file abs path. Option arguments are provided
# from the config file. When a config file is used, any option
# provided on the command line takes precedence.
config = /home/anakin/.freezer/jobs-name.conf
# Set the MySQL configuration file where freezer
# retrieve important information as db_name, user,
# password, host, port. Following is an example of
# config file: # cat ~/.freezer/backup_mysql_conf
# host = <db-host>
# user = <mysqluser>
# password = <mysqlpass>
# port = <db-port>
mysql_conf = False
# Set the directory you want to mount the lvm snapshot to
lvm_dirmount = False
# Allows access to swift servers without checking SSL certs
insecure = False
# Set the lvm snapshot name to use. If the snapshot name
# already exists, the old one will be used and no new one
# will be created. Default freezer_backup_snap.
lvm_snapname = False
# Set the cpu process to the highest priority (i.e. -20
# on Linux) and real-time for I/O. The process priority
# will be set only if nice and ionice are installed
# Default disabled. Use with caution.
max_priority = False
# Set the backup level used with tar to implement
# incremental backup. If a level 1 is specified but no
# level 0 is available yet, a level 0 will be done first
# and the backup subsequently goes to level 1. Default 0 (no incremental)
max_level = False
# The file or directory you want to back up to the storage media
path_to_backup = False
# Passing a private key to this option allows you to
# encrypt the files before they are uploaded to the storage media, or
# to decrypt the data stream before it touches the disk when restoring
encrypt_pass_file = False
# Create a snapshot of the selected volume
volume = False
# Enforce proxy that alters system HTTP_PROXY and
# HTTPS_PROXY, use '' to eliminate all system proxies
proxy = False
# ID of cinder volume for backup or restore
volume_id = False
# List the Swift containers on remote Object Storage
# Server
list_containers = False
# Checks in the specified container for objects older
# than the specified number of days. If i.e. 30 is specified, it
# will remove the remote objects older than 30 days.
# Default False (disabled)
remove_older_than = None
# Upload bandwidth limit in Bytes per sec. Can be
# invoked with dimensions (10K, 120M, 10G).
upload_limit = -1
# Set the backup maximum level used with tar to implement
# incremental backup. If a level 3 is specified, the
# backup will be executed from level 0 to level 3 and
# from that point on a level 3 backup will always be executed.
# It will not restart from level 0. This option has
# precedence over --max-backup-level. Default False
always_level = False
# Print out the freezer client (freezerc) version
version = False
# Do everything except writing or removing objects
dry_run = False
# Set the lvm snapshot size when creating a new
# snapshot. Please add G for Gigabytes or M for
# Megabytes, i.e. 500M or 8G. Default 5G.
# WARNING: It is important that the volume snapshot
# does not fill up to 100% while executing the backup,
# or the data on the volume snapshot will be corrupted.
# This is an LVM behavior
lvm_snapsize = False
# Set the absolute path where you want your data
# restored. Default False
restore_abs_path = /home/anakin/freezer-restore-test/
# Upload data to the media storage. Default True
upload = True
# Set the technology to back up. Options are: fs
# (filesystem), mongo (MongoDB), mysql (MySQL),
# sqlserver (SQL Server). Default set to fs
mode = fs
# Set the action to be taken. backup and restore are
# self-explanatory, info is used to retrieve info from
# the storage media, while admin is used to delete old
# backups and other admin actions.
# Possible options: {backup,restore,info,admin}. Default backup.
action = restore
# Set the log file. By default freezer logs to
# /var/log/freezer.log. If that file is not writable,
# freezer tries to log to ~/.freezer/freezer.log
log_file = None
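The job defined above can then be executed by pointing the freezer client at the file, e.g.::
freezerc --config /home/anakin/.freezer/jobs-name.conf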

Binary file not shown (one deleted image, 85 KiB).

View File

@@ -1,85 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
..
==========================================
Creation of the python-freezerclient repo
==========================================
Include the URL of your launchpad blueprint:
* https://blueprints.launchpad.net/freezer/+spec/freezerclient
Freezer needs to align with the other OpenStack projects and have a dedicated
repo/client to communicate with the API and the storage media.
Problem description
===================
Currently the freezer code that talks to the API and
queries the storage media is hosted in the openstack/freezer repo.
We would like to follow the same convention/organization as
other OpenStack projects and place the code in a dedicated repo,
hoping also to reduce complexity and increase readability.
Proposed change
===============
Split the code and place it in a new dedicated repo. Currently some related
code (i.e. listing backups, registered clients, etc.) is located in the scheduler
code in the openstack/freezer repo. The following code most likely needs to be
moved to the new openstack/python-freezerclient repo:
* freezer/freezer/apiclient
* apiclient needs to be renamed to freezerclient
* apiclient is used by the scheduler (freezer/freezer/scheduler) and by the web ui
We need to make sure the namespace change, module imports and dependencies
are also reflected in the scheduler and the web ui
python-freezerclient responsibilities
-------------------------------------
* Retrieve and nicely display metadata information, metrics and stats from the freezer-api
* Retrieve and nicely display metadata information, metrics and stats from the storage media (i.e. swift)
* Perform basic maintenance instruction like remove old backups
Projects
========
List the projects that this spec affects. (for now only Freezer) For example:
* openstack/freezer
* openstack/freezer-web-ui
* openstack/freezer-api
* openstack/python-freezerclient
Implementation
==============
Milestones
----------
Target Milestone for completion:
Mitaka-2
Work Items
----------
1) Create the python-freezerclient repo on openstack-infra/project-config
2) Move freezer/freezer/apiclient to the python-freezerclient repo
3) Change the naming convention and imports from apiclient to freezerclient within the apiclient
4) Change the naming convention, imports and deps in the freezer-scheduler and freezer-web-ui
5) Create a PyPI package called python-freezerclient
6) Add the code to extract the information from the object media in case the freezer api is not available (most probably a separate item).

View File

@@ -1,87 +0,0 @@
..
This work is licensed under a Creative Commons Attribution 3.0 Unported
License.
https://creativecommons.org/licenses/by/3.0/legalcode
..
This template should be in ReSTructured text. The filename in the git
repository should match the launchpad URL, for example a URL of
https://blueprints.launchpad.net/freezer/+spec/awesome-thing should be named
awesome-thing.rst . Please do not delete any of the sections in this
template. If you have nothing to say for a whole section, just write: None
For help with syntax, see http://www.sphinx-doc.org/en/stable/rest.html
To test out your formatting, see http://rest.lurkingideas.net/
=============================
The title of your blueprint
=============================
Include the URL of your launchpad blueprint:
https://blueprints.launchpad.net/freezer/+spec/example
Introduction paragraph -- why are we doing anything?
Problem description
===================
A detailed description of the problem.
Proposed change
===============
Here is where you cover the change you propose to make in detail. How do you
propose to solve this problem?
If this is one part of a larger effort make it clear where this piece ends. In
other words, what's the scope of this effort?
Projects
========
List the projects that this spec affects. (for now only Freezer) For example:
* /openstack/freezer
Implementation
==============
Assignee(s)
-----------
Who is leading the writing of the code? Or is this a blueprint where you're
throwing it out there to see who picks it up?
If more than one person is working on the implementation, please designate the
primary author and contact.
Primary assignee:
<launchpad-id or None>
Optionally list additional ids if they intend to do
substantial implementation work on this blueprint.
Milestones
----------
Target Milestone for completion:
Kilo-2
Work Items
----------
Work items or tasks -- break the feature up into the things that need to be
done to implement it. Those parts might end up being done by different people,
but we're mostly trying to understand the timeline for implementation.
Dependencies
============
- Include specific references to specs and/or blueprints in freezer, or in other
projects, that this one either depends on or is related to.
- Does this feature require any new library dependencies or code otherwise not
included in OpenStack? Or does it depend on a specific version of a library?

View File

@@ -1,163 +0,0 @@
===============================
Tenant based backup and restore
===============================
Blueprint URL:
- https://blueprints.launchpad.net/freezer/+spec/tenant-backup
Problem description
===================
As a tenant, I need to use Freezer to back up all my data and metadata from an OS Cloud and restore it
at my convenience. With this approach all the data can be restored on the same Cloud platform (in case anything gets lost) or on an independent cloud (i.e. a new one freshly deployed in a different geographic location).
Tenants need to selectively back up all their resources from the OS services.
These resources/services are:
- Users [meta]data stored in keystone (email, tenants)
- VMs in Nova
- Volumes in Cinder
- Objects in Swift
- Images in Glance
- Networks and other settings as FWaaS, LBaaS, VPNaaS in Neutron
Proposed change
===============
Backup Work Items
-----------------
For tenant backups, the data and metadata need to be retrieved from the service APIs, such as:
- Keystone:
- Retrieve all the user data from the keystone API in json format
- Download and backup the data in stream using the Freezer backup block based incremental, tenant mode
- Nova:
- Retrieve the list of VMs of the tenant from the Nova API
For each VM (a CLI sketch is given after this list):
- Save the metadata
- Create a vm snapshot
- Generate an image from a snapshot
- Download the image file and process it in stream using the Freezer backup block based incremental, tenant mode
- Remove the image and the snapshot
- Cinder:
- Retrieve the list of volumes of the tenant from the Cinder API
For each Volume:
- Save the metadata
- Create a volume snapshot
- Generate an image from snapshot
- Download the image file and process it in stream using Freezer backup stream block based incremental tenant mode
- Remove the image and the snapshot
- Swift (does it make sense to have Swift object backups?):
- Retrieve the list of containers from Swift
- For each container
- Save the metadata
- Download and backup all the objects in the container in stream using the Freezer backup stream block based incremental tenant mode
- Glance:
- Retrieve all the image list owned by the user from the Glance API
- Save the metadata
- Download the image file and process it in stream using Freezer backup stream block based incremental tenant mode
- Neutron:
- Retrieve all the sub services enabled in neutron such as:
- FwaaS, VPNaaS, LBaaS
- Retrieve all the existing networks and routing for the tenant
- Download and backup all the network and routing data/metadata in stream using the Freezer backup stream block based incremental tenant mode
- The same applies to FWaaS, LBaaS and VPNaaS
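As referenced in the Nova item above, a hypothetical sketch of that work item using the standard openstack CLI (names and local paths are illustrative; the freezer-agent would process the image in stream rather than writing local files)::
for vm in $(openstack server list -f value -c ID); do
    # save the instance metadata
    openstack server show -f json "${vm}" > "meta/${vm}.json"
    # snapshot the instance into a Glance image
    openstack server image create --wait --name "freezer-tmp-${vm}" "${vm}"
    # download the image file (freezer would back this up in stream)
    openstack image save --file "data/${vm}.img" "freezer-tmp-${vm}"
    # remove the temporary image
    openstack image delete "freezer-tmp-${vm}"
done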
Restore Work Items
------------------
The restore process should consist of downloading the data from the media storage, decompressing/decrypting it,
processing it and recreating on each service the settings/configuration as they were at the backup point in time.
A distinction needs to be made between metadata and data (i.e. the metadata of the vm and the vm image file itself).
The metadata needs to be downloaded in full before restoring the tenant data, as it is probably not possible to upload partial or incomplete metadata to the openstack api services when restoring.
When restoring, the order of the services/components to restore should be the following:
- Keystone:
- Retrieve all meta data from the freezer media storage
- Recreate all the metadata on the Keystone API endpoint
- Upload any data (non metadata)
- Neutron:
- Retrieve all meta data from the freezer media storage
- Recreate all the metadata on the Neutron API endpoint
- Upload any data (non metadata)
- Glance:
- Retrieve all meta data from the freezer media storage
- Recreate all the metadata on the Glance API endpoint
- Upload any data (non metadata, i.e. image files)
- Cinder:
- Retrieve all meta data from the freezer media storage
- Recreate all the metadata on the Cinder API endpoint
- Upload any data (non metadata, i.e. volumes)
- Make sure the user/tenant that owns the volume is correct
- Nova:
- Retrieve all meta data from the freezer media storage
- Recreate all the metadata on the Nova API endpoint
- Upload any data (non metadata, i.e. VMs)
- Make sure networking, volumes and user/tenants are correct
- Swift (If Swift backup doesn't make sense, probably the restore is in the same situation.):
- Retrieve all meta data from the freezer media storage
- Recreate all the metadata and containers using the Swift API endpoint
- Upload any data object to the containers
- Update the metadata (acl, read only, etc)
Further questions and considerations:
- How would the data input be configured?
- How would the tenant backup be configured?
Currently path_to_backup is used as the source of where the data is read from. For tenant-based backups the data to be backed up
would be read from a stream. The challenge is to specify the input as a stream and the tenant mode.
Possible options:
- Provide "stream" to path_to_backup rather than the file system path, and use tenant as backup mode.
- Create a new backup mode called stream (in this case, how do we then set the tenant backup mode?)
- We can add an additional option called data_input_type, setting the default to fs (file system) or stream and use tenant as backup mode
- When restoring, it would probably be good to recreate the tenant resources with a tenant name provided by the user (i.e. provided by an OpenStack environment variable like the OS_TENANT_NAME var). This has the advantage of recreating the resources under a different tenant in case it is needed.
- Do the Cinder volumes need to be attached to the VMs, in case they were attached when the backup was taken?
- How do we store the metadata of all the services? Any particular structure? Do we need to have freezer metadata on top of that to make the restore easy and to display the information from the web ui?
- If the admin user/role executes the backup, actions can probably be taken on all_tenants for services like Nova and Cinder.
We need to take this into consideration for ALL_TENANTS backups.
- Freezer needs to make sure the tenant data is backed up in a consistent manner; therefore the snapshots
of the resources (i.e. Volumes and VMs) need to be taken in the shortest time window possible.
How do we make this happen? At least for the first release this will probably be best effort
(i.e. vm and volume snapshots will happen in parallel). We need to evaluate if Job Sessions can help with this use case.
- If something goes wrong during the tenant backup, will the backup stop or keep executing?
Is there some service/data for which, even if the backup fails, the execution can proceed?
- Where should the backups be executed, and by which Freezer component?
The backup can be executed from any node (virtual or physical, either part of or totally independent
from the infrastructure, i.e. a compute node, a storage node or a totally detached, cloud-independent node).
The component that executes all these actions should probably be the freezer-agent.
Milestones
----------
Target Milestone for completion:
Mitaka
Dependencies
============
- Block-based incremental backups (need to be implemented abstracted from the fs, as the same code can also be reused for other stream-based backups).
- A new backup mode and incremental type need to be defined.