Tools and automation to achieve Disaster Recovery of OpenStack Cloud Platforms


Freezer Disaster Recovery

freezer-dr, the OpenStack compute node high availability service, provides high availability for compute nodes in an OpenStack cloud. freezer-dr monitors all compute nodes running in a cloud deployment; if one of them fails, freezer-dr fences the failed node, then tries to evacuate all instances running on it, and finally notifies both the users who have workloads/instances on that node and the cloud administrators.

freezer-dr has a pluggable architecture so it can be used with:

  1. Any monitoring system to monitor the compute nodes (currently only the native OpenStack service status is supported)
  2. Any fencing driver (currently IPMI, libvirt, ...)
  3. Any evacuation driver (currently the Nova evacuate API call; migration may be supported in the future)
  4. Any notification system (currently email-based notifications, ...)

All of this requires only adding a simple plugin and adjusting the configuration file to use it, or, in the future, a combination of plugins if required.
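As an illustration, selecting one driver per stage could look roughly like the oslo.config-style INI fragment below. The section and option names here are assumptions for illustration, not the exact freezer-dr schema; consult the sample files under etc/ for the real names:

```ini
# Hypothetical freezer-dr configuration: one driver per pluggable stage.

[monitoring]
driver = osservicesstatus   # native OpenStack service status (illustrative name)

[fencer]
driver = ipmi               # power off the failed node via IPMI

[evacuation]
driver = evacuate           # Nova evacuate API call

[notifiers]
driver = email              # email-based notifications
```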

freezer-dr should run in the control plane; however, the architecture supports other deployment scenarios. To run freezer-dr itself in a highly available fashion, deploy it in active/passive mode.

How it works

When freezer-dr starts:

  1. The freezer-dr monitoring manager loads the required monitoring driver according to the configuration
  2. freezer-dr queries the monitoring system to check whether it considers any compute nodes to be down
  3. If no nodes are down, freezer-dr exits, reporting "No failed nodes"
  4. If one or more nodes are down, freezer-dr calls the fencing manager to fence each failed compute node
  5. The fencing manager loads the correct fencer according to the configuration
  6. Once the compute node is fenced and powered off, the evacuation process starts
  7. freezer-dr loads the correct evacuation driver
  8. freezer-dr evacuates all instances to other compute nodes
  9. Once the evacuation process completes, freezer-dr calls the notification manager
  10. The notification manager loads the correct driver based on the configuration
  11. freezer-dr starts the notification process
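The steps above can be sketched as a single freezer-dr pass: detect failed nodes, fence, evacuate, notify. The classes below are minimal illustrative stand-ins, not the real freezer-dr plugin APIs:

```python
# Minimal sketch of the freezer-dr control flow. All class and method
# names here are hypothetical stand-ins for the real plugin interfaces.

class MonitorDriver:
    """Stand-in monitoring plugin (e.g. native OpenStack service status)."""
    def __init__(self, failed_nodes):
        self._failed = failed_nodes

    def get_down_nodes(self):
        return list(self._failed)


class IpmiFencer:
    """Stand-in fencing plugin; a real one would power off the node via IPMI."""
    def fence(self, node):
        print(f"fencing {node} (power off)")
        return True  # True means the node is confirmed powered off


class EvacuationDriver:
    """Stand-in evacuation plugin; a real one would use the Nova evacuate API."""
    def evacuate(self, node):
        print(f"evacuating instances from {node}")
        return ["instance-1", "instance-2"]  # evacuated instances


class EmailNotifier:
    """Stand-in notification plugin (email-based)."""
    def notify(self, node, instances):
        print(f"notifying owners of {len(instances)} instances from {node}")


def run_cycle(monitor, fencer, evacuator, notifier):
    """One freezer-dr pass: detect, fence, evacuate, notify."""
    failed = monitor.get_down_nodes()
    if not failed:
        print("No failed nodes")
        return []
    handled = []
    for node in failed:
        if not fencer.fence(node):   # never evacuate a node that is not fenced
            continue
        instances = evacuator.evacuate(node)
        notifier.notify(node, instances)
        handled.append(node)
    return handled


if __name__ == "__main__":
    run_cycle(MonitorDriver(["compute-3"]),
              IpmiFencer(), EvacuationDriver(), EmailNotifier())
```

Note that fencing strictly precedes evacuation: evacuating instances from a node that might still be running would risk two copies of the same instance writing to shared storage.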