Merge "Add a known issue release note regarding inotify watch limits" into stable/queens
commit f674bbe519
@@ -45,7 +45,7 @@
 - name: openstack_hosts
   scm: git
   src: https://git.openstack.org/openstack/openstack-ansible-openstack_hosts
-  version: 6a34bb2f376e242645fd2e608ab06927df36f6ff
+  version: fa1245375e07222f1ce60fd54ab49b7d1811ecb2
 - name: os_keystone
   scm: git
   src: https://git.openstack.org/openstack/openstack-ansible-os_keystone
@@ -0,0 +1,16 @@
+---
+issues:
+  - |
+    The number of inotify watch instances available is limited system wide
+    via a sysctl setting. It is possible for certain processes, such as
+    pypi-server, or elasticsearch from the ops repo, to consume a large number
+    of inotify watches. If the system wide maximum is reached then any process
+    on the host, or in any container on the host, will be unable to create a
+    new inotify watch. Systemd uses inotify watches, and if there are none
+    available it is unable to restart services. The processes which synchronise
+    the repo server contents between infra nodes also rely on inotify watches.
+    If the repo servers fail to synchronise, or services fail to restart when
+    expected, check the inotify watch limit which is defined in the sysctl
+    value fs.inotify.max_user_watches. Patches have merged to increase these
+    limits, but existing environments, or those which have not upgraded to a
+    recent enough point release, may have to apply an increased limit manually.
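The manual workaround described in the release note can be sketched as a quick check and adjustment from a shell on the affected host. This is a hedged example, not part of the patch itself: the value `1048576` is an arbitrary illustrative limit, and the drop-in file name `99-inotify.conf` is a hypothetical choice.

```shell
# Check the current system-wide inotify watch limit.
cat /proc/sys/fs/inotify/max_user_watches

# Raise the limit at runtime (requires root; 1048576 is only an example value):
#   sysctl -w fs.inotify.max_user_watches=1048576
#
# Persist the change across reboots via a sysctl drop-in file
# (file name is a hypothetical example):
#   echo "fs.inotify.max_user_watches = 1048576" > /etc/sysctl.d/99-inotify.conf
#   sysctl --system
```

The read-only check is safe to run anywhere; the write commands are shown as comments because they require root and modify host-wide state.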