Tools and automation to achieve Disaster Recovery of OpenStack Cloud Platforms


Team and repository tags


Freezer Disaster Recovery

freezer-dr provides compute-node high availability for OpenStack. It monitors all compute nodes running in a cloud deployment; if one of them fails, freezer-dr fences the failed node, evacuates all instances that were running on it, and finally notifies both the users whose workloads/instances ran on that node and the cloud administrators.

freezer-dr has a pluggable architecture, so it can be used with:

  1. Any monitoring system to monitor the compute nodes (currently only native OpenStack service status is supported)
  2. Any fencing driver (currently IPMI, libvirt, ...)
  3. Any evacuation driver (currently the evacuate API call; migration may be added later)
  4. Any notification system (currently email-based notifications, ...)

All that is needed is a simple plugin and a configuration change to use it, or, in the future, a combination of plugins if required.
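Plugin selection of this kind is typically expressed in an ini-style configuration file. The sketch below is illustrative only: the section names, option names, and driver identifiers are assumptions, not the project's exact schema (see the shipped sample under ``etc/`` for the real one):

```ini
# Hypothetical freezer-dr configuration sketch -- section and option
# names are assumptions, not the project's exact schema.
[monitoring]
driver = osservices      ; native OpenStack service status

[fencing]
driver = ipmi            ; or libvirt

[evacuation]
driver = evacuate        ; nova evacuate API call

[notifiers]
driver = email
endpoint = admin@example.com
```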

freezer-dr should run in the control plane; however, the architecture supports different scenarios. To run freezer-dr itself in high-availability mode, deploy it in active/passive mode.

How it works

When freezer-dr starts:

  1. The monitoring manager loads the required monitoring driver according to the configuration
  2. freezer-dr queries the monitoring system to check whether it considers any compute nodes to be down
  3. If not, freezer-dr exits, reporting "No failed nodes"
  4. If so, freezer-dr calls the fencing manager to fence the failed compute node
  5. The fencing manager loads the correct fencer according to the configuration
  6. Once the compute node is fenced and powered off, the evacuation process starts
  7. freezer-dr loads the correct evacuation driver
  8. freezer-dr evacuates all instances to other compute nodes
  9. Once the evacuation process completes, freezer-dr calls the notification manager
  10. The notification manager loads the correct driver based on the configuration
  11. freezer-dr starts the notification process
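The steps above can be sketched as a single control-flow cycle. This is a minimal illustration with stub drivers standing in for the pluggable components; the class and method names are assumptions for the sketch, not freezer-dr's actual API:

```python
# Minimal sketch of the freezer-dr cycle described above.
# All names here are illustrative assumptions; the real project
# loads its monitoring/fencing/evacuation/notification drivers
# from configuration instead of hard-coding them.

class StubMonitor:
    """Pretends exactly one compute node is down."""
    def get_failed_nodes(self):
        return ["compute-2"]

class StubFencer:
    def fence(self, node):
        return True  # pretend the power-off succeeded

class StubEvacuator:
    def evacuate(self, node):
        return ["instance-a", "instance-b"]  # instances moved elsewhere

class StubNotifier:
    def notify(self, node, instances):
        return f"notified owners of {len(instances)} instances on {node}"

def run_cycle(monitor, fencer, evacuator, notifier):
    failed = monitor.get_failed_nodes()                    # step 2
    if not failed:                                         # step 3
        return "No failed nodes"
    messages = []
    for node in failed:
        if not fencer.fence(node):                         # steps 4-6
            continue  # never evacuate a node that is not confirmed off
        instances = evacuator.evacuate(node)               # steps 7-8
        messages.append(notifier.notify(node, instances))  # steps 9-11
    return messages

if __name__ == "__main__":
    print(run_cycle(StubMonitor(), StubFencer(),
                    StubEvacuator(), StubNotifier()))
```

The key ordering constraint the sketch preserves is that evacuation only happens after fencing succeeds: evacuating instances from a node that might still be running them risks running the same instance twice.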