
[[ha-aa-controllers]]
=== OpenStack controller nodes
OpenStack controller nodes contain:

* All OpenStack API services
* All OpenStack schedulers
* Memcached service

==== Run OpenStack API and schedulers

===== API services
All OpenStack projects provide an API service for controlling all the resources in the cloud.
In Active/Active mode, the most common setup is to scale these services out on at least two nodes
and use load balancing with a virtual IP (provided by HAProxy and Keepalived in this setup).
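
For illustration only, a Keepalived instance along the following lines can hold the virtual IP on the controller nodes; the interface name, the virtual router ID, and the 10.0.0.10 address are assumptions for this sketch, and the complete load-balancer setup is covered in the HAProxy part of this guide.

----
vrrp_instance VI_1 {
    state MASTER              # the second controller runs the same block with state BACKUP
    interface eth0            # interface that carries the virtual IP (assumed name)
    virtual_router_id 51      # must be identical on both controllers
    priority 101              # use a lower value (for example 100) on the backup node
    virtual_ipaddress {
        10.0.0.10             # virtual IP referenced by all OpenStack endpoints (assumed address)
    }
}
----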

*Configure OpenStack API services*

To configure highly available and scalable API services in our cloud, ensure that:

* You use the virtual IP when configuring OpenStack Identity endpoints (see the sketch below).
* All OpenStack configuration files refer to the virtual IP.
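
For example, registering the Compute API endpoint against the virtual IP might look like the following sketch; the 10.0.0.10 address, the RegionOne region, and the $SERVICE_ID variable are placeholders, and the exact client syntax depends on your release.

----
keystone endpoint-create \
  --region RegionOne \
  --service-id=$SERVICE_ID \
  --publicurl='http://10.0.0.10:8774/v2/%(tenant_id)s' \
  --internalurl='http://10.0.0.10:8774/v2/%(tenant_id)s' \
  --adminurl='http://10.0.0.10:8774/v2/%(tenant_id)s'
----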

*In case of failure*

The monitor check performed by the load balancer is quite simple: it only establishes a TCP connection to the API port. Compared with the
Active/Passive mode that uses Corosync and resource agents, it does not check whether the service is actually running.
That is why all OpenStack API services should also be monitored by another tool (for example, Nagios) in order to detect
failures in the cloud framework infrastructure.
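
As a rough illustration of such a front end, the HAProxy stanza below exposes the Compute API on the virtual IP; the 10.0.0.10 address, the controller addresses, and port 8774 are assumptions for this sketch. The plain check option only verifies that the TCP port accepts connections, which is exactly the limitation described above.

----
listen nova_api_cluster
  bind 10.0.0.10:8774
  balance source
  option tcpka
  option tcplog
  server controller1 10.0.0.11:8774 check inter 2000 rise 2 fall 5
  server controller2 10.0.0.12:8774 check inter 2000 rise 2 fall 5
----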

===== Schedulers

OpenStack schedulers are used to determine how to dispatch compute, network, and volume requests. The most
common setup is to use RabbitMQ as the messaging system, which is already documented in this guide.
The following services connect to the messaging back end and can scale out:

* nova-scheduler
* nova-conductor
* cinder-scheduler
* neutron-server
* ceilometer-collector
* heat-engine

Refer to the RabbitMQ section of this guide for the procedure to configure these services with multiple messaging servers; a minimal example is sketched below.
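
The sketch assumes two brokers named controller1 and controller2; rabbit_hosts lists every messaging server and rabbit_ha_queues enables mirrored queues, as detailed in the RabbitMQ section.

----
rabbit_hosts = controller1:5672,controller2:5672
rabbit_ha_queues = True
----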

==== Memcached

Most OpenStack services rely on an application to store ephemeral data (such as tokens).
Memcached is one of them and can scale out easily without any specific trick.
To install and configure it, read the http://code.google.com/p/memcached/wiki/NewStart[official documentation].
Memory caching is managed by oslo-incubator, so the way to use multiple Memcached servers is the same for all projects.

Example with two hosts:

----
memcached_servers = controller1:11211,controller2:11211
----

By default, controller1 handles the caching service. If this host goes down, controller2 takes over.
For more information about Memcached installation, see the OpenStack Cloud Administrator Guide.