Fixing keepalived bug when 2+ backup nodes have the same priority

The issue is present if you're running 2 or more backup nodes with
keepalived < 1.2.8. This commit bumps the version of the keepalived role
(installing a more recent version of keepalived by default) AND
edits the keepalived configuration file to avoid giving nodes
the same priority.

This will restart your keepalived service.

Please note this commit is not meant for backporting. Deployers
running Mitaka and below should instead follow the documentation here:
https://review.openstack.org/#/c/279664/
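
For illustration only (this sketch is not part of the commit): with the old
role defaults, the first haproxy node received the master priority and every
other node received the same backup priority, leaving VRRP with tied
candidates.

    # Sketch of the pre-fix behaviour, mirroring the old vars file defaults
    # (haproxy_keepalived_priority_master: '100', _backup: '20').
    hosts = ["haproxy1", "haproxy2", "haproxy3"]
    old_priorities = [100 if i == 0 else 20 for i in range(len(hosts))]
    print(old_priorities)  # [100, 20, 20] -- both backups tie, hitting the bug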

Bug: #1545066

Change-Id: Ie28d2d3fa8670212c64ecbdf5a87314e7ca0a2d9
Jean-Philippe Evrard, 2016-02-12 20:09:24 +01:00 (committed by Jean-Philippe Evrard)
parent 6ff23113bd
commit 12a3fbafd0
4 changed files with 58 additions and 72 deletions


@@ -57,28 +57,31 @@ Otherwise, edit at least the following variables in
    haproxy_keepalived_external_interface: br-flat
    haproxy_keepalived_internal_interface: br-mgmt

-``haproxy_keepalived_internal_interface`` represents the interface
-on the deployed node where the keepalived master will bind the
-internal vip. By default the ``br-mgmt`` will be used.
+- ``haproxy_keepalived_internal_interface`` and
+  ``haproxy_keepalived_external_interface`` represent the interfaces on the
+  deployed node where the keepalived nodes will bind the internal/external
+  vip. By default the ``br-mgmt`` will be used.
-``haproxy_keepalived_external_interface`` represents the interface
-on the deployed node where the keepalived master will bind the
-external vip. By default the ``br-mgmt`` will be used.
+- ``haproxy_keepalived_internal_vip_cidr`` and
+  ``haproxy_keepalived_external_vip_cidr`` represent the internal and
+  external (respectively) vips (with their prefix length) that will be used
+  on the keepalived host with the master status, on the interface listed
+  above.
-``haproxy_keepalived_external_vip_cidr`` represents the external
-vip (and its netmask) that will be used on keepalived master host.
+- Additional variables can be set to adapt keepalived in the deployed
+  environment. Please refer to the ``user_variables.yml`` for more
+  descriptions.
-``haproxy_keepalived_internal_vip_cidr`` represents the internal
-vip (and its netmask) that will be used on keepalived master host.
+To always deploy (or upgrade to) the latest stable version of keepalived,
+edit the ``/etc/openstack_deploy/user_variables.yml`` by setting:
-Additional variables can be set to adapt keepalived in the deployed
-environment. Please refer to the ``user_variables.yml``
-for more descriptions.
+
+.. code-block:: yaml
+
+   keepalived_use_latest_stable: True
-All the variables mentioned above are used in the variable file
-``vars/configs/keepalived_haproxy.yml`` to feed the
-keepalived role. More information can be found in the keepalived
-role documentation. You can use your own variable file by setting
+
+The HAProxy playbook makes use of the variable file
+``vars/configs/keepalived_haproxy.yml``, and feeds its content
+to the keepalived role, for keepalived master and backup nodes.
+You can use your own variable file by setting
 the path in your ``/etc/openstack_deploy/user_variables.yml``:

 .. code-block:: yaml
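
To make the variable descriptions above concrete, here is a minimal sketch
(hypothetical values, not repository code; requires the jinja2 package) that
renders a keepalived vip entry the same way the Jinja2 expressions in the
vars file below do:

    # Sketch: render a keepalived "vips" entry from the documented variables.
    from jinja2 import Template

    settings = {
        "haproxy_keepalived_internal_vip_cidr": "172.29.236.9/22",  # example value
        "haproxy_keepalived_internal_interface": "br-mgmt",         # documented default
    }
    vip_entry = Template(
        "{{ haproxy_keepalived_internal_vip_cidr }} dev "
        "{{ haproxy_keepalived_internal_interface }}"
    ).render(settings)
    print(vip_entry)  # -> 172.29.236.9/22 dev br-mgmt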


@@ -54,19 +54,7 @@
     - "{{ haproxy_keepalived_vars_file | default('vars/configs/keepalived_haproxy.yml')}}"
   roles:
     - role: "keepalived"
-      keepalived_sync_groups: "{{ keepalived_master_sync_groups }}"
-      keepalived_scripts: "{{ keepalived_master_scripts }}"
-      keepalived_instances: "{{ keepalived_master_instances }}"
-      when: >
-        haproxy_use_keepalived|bool and
-        inventory_hostname in groups['haproxy'][0]
-    - role: "keepalived"
-      keepalived_sync_groups: "{{ keepalived_backup_sync_groups }}"
-      keepalived_scripts: "{{ keepalived_backup_scripts }}"
-      keepalived_instances: "{{ keepalived_backup_instances }}"
-      when: >
-        haproxy_use_keepalived|bool and
-        inventory_hostname in groups['haproxy'][1:]
+      when: haproxy_use_keepalived | bool

 - name: Install haproxy
   hosts: haproxy
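
As a sketch of what this playbook simplification changes (illustrative
Python, not repository code): the old play applied the keepalived role twice,
once to groups['haproxy'][0] with master variables and once to
groups['haproxy'][1:] with backup variables; the new play applies the role
once, and each host derives its state from its position in play_hosts, as the
vars file below does:

    # Sketch of the host-selection change, assuming three haproxy hosts.
    play_hosts = ["haproxy1", "haproxy2", "haproxy3"]

    # Old playbook: two role invocations, split by group position.
    master_hosts = play_hosts[:1]   # groups['haproxy'][0]
    backup_hosts = play_hosts[1:]   # groups['haproxy'][1:]

    # New playbook: one invocation; each host computes its own state, like
    # the ternary filter used by the new keepalived_instances defaults.
    for host in play_hosts:
        state = "MASTER" if play_hosts.index(host) == 0 else "BACKUP"
        print(host, state)  # haproxy1 MASTER, haproxy2 BACKUP, haproxy3 BACKUP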


@@ -13,21 +13,18 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-keepalived_global_sync_groups:
+keepalived_sync_groups:
   haproxy:
     instances:
       - external
       - internal
     notify_script: /etc/keepalived/haproxy_notify.sh
-    ##if a src_*_script is defined, it will be uploaded from src_*_script on the deploy host to the *_script location. Make sure *_script is a location in that case.
-    #src_notify_script: /opt/os-ansible-deployment/playbooks/vars/configs/keepalived_haproxy_notifications.sh
+    ##if a src_*_script is defined, it will be uploaded from src_*_script
+    ##on the deploy host to the *_script location. Make sure *_script is
+    ##a location in that case.
+    src_notify_script: vars/configs/keepalived_haproxy_notifications.sh

-# Master and backup sync groups should normally be the same.
-keepalived_master_sync_groups: "{{ keepalived_global_sync_groups }}"
-keepalived_backup_sync_groups: "{{ keepalived_global_sync_groups }}"
-
-keepalived_global_scripts:
+keepalived_scripts:
   haproxy_check_script:
     check_script: "killall -0 haproxy"
   pingable_check_script:
@@ -36,17 +33,17 @@ keepalived_global_scripts:
     fall: 2
     rise: 4

-# Master and backup scripts should be the same.
-# The two variables (master/backup) are kept if the deployer wants different checks for backup and master.
-keepalived_master_scripts: "{{ keepalived_global_scripts }}"
-keepalived_backup_scripts: "{{ keepalived_global_scripts }}"
-
-keepalived_master_instances:
+# If you have more than 5 keepalived nodes, you should build your own script
+# (handling master and backup servers), and replace in keepalived_instances:
+# priority: "{{ ((play_hosts|length-play_hosts.index(inventory_hostname))*100)-((play_hosts|length-play_hosts.index(inventory_hostname))*50) }}"
+# by
+# priority: "{{ (play_hosts.index(inventory_hostname) == 0) | ternary('100','50') }}"
+keepalived_instances:
   external:
     interface: "{{ haproxy_keepalived_external_interface | default(management_bridge) }}"
-    state: MASTER
+    state: "{{ (play_hosts.index(inventory_hostname) == 0) | ternary('MASTER', 'BACKUP') }}"
     virtual_router_id: "{{ haproxy_keepalived_external_virtual_router_id | default ('10') }}"
-    priority: "{{ haproxy_keepalived_priority_master | default('100') }}"
+    priority: "{{ ((play_hosts|length-play_hosts.index(inventory_hostname))*100)-((play_hosts|length-play_hosts.index(inventory_hostname))*50) }}"
     authentication_password: "{{ haproxy_keepalived_authentication_password }}"
     vips:
       - "{{ haproxy_keepalived_external_vip_cidr }} dev {{ haproxy_keepalived_external_interface | default(management_bridge) }}"
@@ -55,33 +52,9 @@ keepalived_master_instances:
       - pingable_check_script
   internal:
     interface: "{{ haproxy_keepalived_internal_interface | default(management_bridge) }}"
-    state: MASTER
+    state: "{{ (play_hosts.index(inventory_hostname) == 0) | ternary('MASTER', 'BACKUP') }}"
     virtual_router_id: "{{ haproxy_keepalived_internal_virtual_router_id | default ('11') }}"
-    priority: "{{ haproxy_keepalived_priority_master | default('100') }}"
-    authentication_password: "{{ haproxy_keepalived_authentication_password }}"
-    track_scripts:
-      - haproxy_check_script
-      - pingable_check_script
-    vips:
-      - "{{ haproxy_keepalived_internal_vip_cidr }} dev {{ haproxy_keepalived_internal_interface | default(management_bridge) }}"
-
-keepalived_backup_instances:
-  external:
-    interface: "{{ haproxy_keepalived_external_interface | default(management_bridge) }}"
-    state: BACKUP
-    virtual_router_id: "{{ haproxy_keepalived_external_virtual_router_id | default ('10') }}"
-    priority: "{{ haproxy_keepalived_priority_backup | default('20') }}"
-    authentication_password: "{{ haproxy_keepalived_authentication_password }}"
-    vips:
-      - "{{ haproxy_keepalived_external_vip_cidr }} dev {{ haproxy_keepalived_external_interface | default(management_bridge) }}"
-    track_scripts:
-      - haproxy_check_script
-      - pingable_check_script
-  internal:
-    interface: "{{ haproxy_keepalived_internal_interface | default(management_bridge) }}"
-    state: BACKUP
-    virtual_router_id: "{{ haproxy_keepalived_internal_virtual_router_id | default ('11') }}"
-    priority: "{{ haproxy_keepalived_priority_backup | default('20') }}"
+    priority: "{{ ((play_hosts|length-play_hosts.index(inventory_hostname))*100)-((play_hosts|length-play_hosts.index(inventory_hostname))*50) }}"
     authentication_password: "{{ haproxy_keepalived_authentication_password }}"
     track_scripts:
       - haproxy_check_script
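
The new default priority expression above reduces to
(number_of_hosts - host_index) * 50, which is unique per host. A small sketch
(the 1-254 VRRP priority range is background knowledge, not stated in the
diff) shows both the fix and the five-node ceiling mentioned in the comments
and release notes:

    # Sketch: evaluate the new default priority expression from the vars file,
    #   ((play_hosts|length - index) * 100) - ((play_hosts|length - index) * 50)
    # which simplifies to (n_hosts - host_index) * 50.

    def keepalived_priority(n_hosts: int, host_index: int) -> int:
        return ((n_hosts - host_index) * 100) - ((n_hosts - host_index) * 50)

    for n in (3, 6):
        print(n, "nodes:", [keepalived_priority(n, i) for i in range(n)])
    # 3 nodes: [150, 100, 50]        -- unique priorities, host 0 wins election
    # 6 nodes: [300, 250, 200, ...]  -- 300 overflows VRRP's 1-254 priority
    #                                   range, hence the five-node limit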


@@ -0,0 +1,22 @@
+---
+features:
+  - There is a new default configuration for keepalived, supporting more than 2 nodes.
+  - In order to make use of the latest stable keepalived version, the variable
+    ``keepalived_use_latest_stable`` must be set to ``True``.
+issues:
+  - In the latest stable version of keepalived there is a problem with the priority
+    calculation when a deployer has more than five keepalived nodes. The problem causes the
+    whole keepalived cluster to fail to work. To work around this issue it is recommended that
+    deployers limit the number of keepalived nodes to no more than five, or that the priority
+    for each node is set as part of the configuration (cf. the ``haproxy_keepalived_vars_file``
+    variable).
+upgrade:
+  - There is a new default configuration for keepalived. When running the haproxy playbook,
+    the configuration change will cause a keepalived restart unless the deployer has used a custom
+    configuration file. The restart will cause the virtual IP addresses managed by keepalived to
+    be briefly unconfigured, then reconfigured.
+  - A new version of keepalived will be installed on the haproxy nodes if the variable
+    ``keepalived_use_latest_stable`` is set to ``True`` and more than one haproxy node is
+    configured. The update of the package will cause keepalived to restart and therefore will
+    cause the virtual IP addresses managed by keepalived to be briefly unconfigured, then
+    reconfigured.