Add a known issue release note regarding inotify watch limits

This patch also bumps the openstack_hosts role SHA to include the
backported patch which increases the inotify watch limit.

Change-Id: I3b5cadb510fd0804365c63cdc4d026d4a92d7508
(cherry picked from commit 3c7005360b)
This commit is contained in:
Jonathan Rosser 2019-02-18 09:22:45 +00:00
parent 3abb4b391d
commit 6808523cec
2 changed files with 17 additions and 1 deletions


@@ -45,7 +45,7 @@
- name: openstack_hosts
scm: git
src: https://git.openstack.org/openstack/openstack-ansible-openstack_hosts
-  version: f789af66b93b4c02f829c3c92959eb02090aec04
+  version: e1330b74148158067d962189b529c5dfbae9868b
- name: os_keystone
scm: git
src: https://git.openstack.org/openstack/openstack-ansible-os_keystone


@@ -0,0 +1,16 @@
---
issues:
- |
The number of inotify watch instances available is limited system-wide
via a sysctl setting. It is possible for certain processes, such as
pypi-server or elasticsearch from the ops repo, to consume a large number
of inotify watches. If the system-wide maximum is reached, any process
on the host or in any container on the host will be unable to create a
new inotify watch. Systemd uses inotify watches, and if there are none
available it is unable to restart services. The processes which
synchronise the repo server contents between infra nodes also rely on
inotify watches. If the repo servers fail to synchronise, or services
fail to restart when expected, check the inotify watch limit, which is
defined in the sysctl value fs.inotify.max_user_watches. Patches have
merged to increase this limit, but existing environments, or those which
have not upgraded to a recent enough point release, may need to apply an
increased limit manually.
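
The manual workaround mentioned in the note can be sketched as follows.
This uses standard Linux sysctl commands; the value 1048576 is an example
only, not necessarily the limit applied by the merged patches, and the
drop-in filename 60-inotify.conf is an arbitrary choice:

```shell
# Inspect the current system-wide inotify watch limit
sysctl fs.inotify.max_user_watches

# Raise the limit immediately (requires root); 1048576 is an example
# value, not the value set by the merged patches
sysctl -w fs.inotify.max_user_watches=1048576

# Persist the change across reboots via a sysctl drop-in file
# (the filename is an arbitrary example)
echo 'fs.inotify.max_user_watches = 1048576' > /etc/sysctl.d/60-inotify.conf
sysctl --system
```

After raising the limit, previously failing service restarts and repo
synchronisation should be able to obtain inotify watches again.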