Remove unused packages from openstack-armada-app

The purpose of this change is to remove packages that are not used by
STX-Openstack from this repository. When these folders were copied from
the `starlingx/upstream` repo in [1], the goal was simply to bring over
the folder structure and all the packages. Now that this repository is
being actively used for the STX-Openstack upversion to Antelope, we
noticed that these packages are either not used (barbican, keystone,
openstack-resource-agents, horizon, python-osc-lib,
python-oslo.messaging, python-wsme, rabbitmq-server) or were dropped
from Openstack (gnocchi and python-gnocchiclient are not present in
Antelope, as these services were removed from the Openstack project
[2] [3]).

Except for gnocchi and python-gnocchiclient, as explained above, these
packages were present in the `starlingx/upstream` repository because
they are required for the StarlingX builds and ISO, but not for the
STX-Openstack application. For that reason, they are safe to drop from
this repository.

[1] https://review.opendev.org/c/starlingx/openstack-armada-app/+/886027
[2] https://opendev.org/openstack/gnocchi
[3] https://opendev.org/openstack/python-gnocchiclient

Test Plan:
PASS: Removed packages are not found when issuing `build-pkgs`
PASS: Build all STX-Openstack images
PASS: Build STX-Openstack tarball
PASS: Upload / apply / remove STX-Openstack

Related-Bug: 2027589

Change-Id: I1697bce6b606fb603a0cb4a2c35b6255e129a080
Signed-off-by: Lucas de Ataides <lucas.deataidesbarreto@windriver.com>
This commit is contained in:
Lucas de Ataides 2023-08-14 16:12:32 -03:00
parent 310f677d29
commit a8969f2988
101 changed files with 1 addition and 11723 deletions


@@ -2,25 +2,15 @@ openstack-helm
openstack-helm-infra
python3-k8sapp-openstack
stx-openstack-helm-fluxcd
#upstream/openstack/barbican
#upstream/openstack/keystone
#upstream/openstack/openstack-pkg-tools
#upstream/openstack/openstack-ras
#upstream/openstack/python-aodhclient
#upstream/openstack/python-barbicanclient
#upstream/openstack/python-cinderclient
#upstream/openstack/python-glanceclient
#upstream/openstack/python-gnocchiclient
#upstream/openstack/python-heatclient
#upstream/openstack/python-horizon
#upstream/openstack/python-ironicclient
#upstream/openstack/python-keystoneclient
#upstream/openstack/python-neutronclient
#upstream/openstack/python-novaclient
#upstream/openstack/python-openstackclient
#upstream/openstack/python-openstacksdk
#upstream/openstack/python-osc-lib
#upstream/openstack/python-oslo-messaging
#upstream/openstack/python-pankoclient
#upstream/openstack/python-wsme
#upstream/openstack/rabbitmq-server
#upstream/openstack/python-openstacksdk


@@ -4,7 +4,6 @@
#upstream/openstack/python-ceilometer
#upstream/openstack/python-cinder
#upstream/openstack/python-glance
#upstream/openstack/python-gnocchi
#upstream/openstack/python-heat/openstack-heat
#upstream/openstack/python-horizon
#upstream/openstack/python-keystone


@@ -1,8 +0,0 @@
This repo is for https://opendev.org/openstack/barbican
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.
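The extract-at-a-SHA-plus-patches flow described above can be sketched as follows. This is a minimal, self-contained illustration using a stand-in source directory and a toy patch; the directory name and patch are hypothetical, and the real build resolves the tarball and SHA through each package's meta_data.yaml:

```shell
# Sketch of "extract upstream at a pinned version, apply local patches on top".
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Stand-in for the upstream source extracted at a pinned SHA/version:
mkdir barbican-11.0.0
echo "original" > barbican-11.0.0/README

# A local patch carried in this repository:
cat > 0001-example.patch <<'EOF'
--- a/README
+++ b/README
@@ -1 +1 @@
-original
+patched
EOF

# Apply the patch on top of the extracted source:
cd barbican-11.0.0
patch -p1 < ../0001-example.patch
cat README   # now reads "patched"
```

Once an equivalent change merges upstream, the pinned SHA is advanced and the local patch deleted, keeping the divergence temporary.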


@@ -1,297 +0,0 @@
From cb87c126b41efdc0956c5e9e9350a9edf8129f3d Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Mon, 22 Nov 2021 14:46:16 +0000
Subject: [PATCH] Remove dbconfig and openstack-pkg-tools config
Remove the dbconfig and openstack-pkg-tools post configuration
since we use puppet to configure the services and doing
both will lead to problems with integration.
Story: 2009101
Task: 44026
Signed-off-by: Charles Short <charles.short@windriver.com>
diff -Naurp barbican-11.0.0.orig/debian/barbican-api.config.in barbican-11.0.0/debian/barbican-api.config.in
--- barbican-11.0.0.orig/debian/barbican-api.config.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-api.config.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,12 +0,0 @@
-#!/bin/sh
-
-set -e
-
-. /usr/share/debconf/confmodule
-
-#PKGOS-INCLUDE#
-
-pkgos_register_endpoint_config barbican
-db_go
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-api.postinst.in barbican-11.0.0/debian/barbican-api.postinst.in
--- barbican-11.0.0.orig/debian/barbican-api.postinst.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-api.postinst.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,17 +0,0 @@
-#!/bin/sh
-
-set -e
-
-#PKGOS-INCLUDE#
-
-if [ "$1" = "configure" ] || [ "$1" = "reconfigure" ] ; then
- . /usr/share/debconf/confmodule
- . /usr/share/dbconfig-common/dpkg/postinst
-
- pkgos_register_endpoint_postinst barbican barbican key-manager "Barbican Key Management Service" 9311 ""
- db_stop
-fi
-
-#DEBHELPER#
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.config.in barbican-11.0.0/debian/barbican-common.config.in
--- barbican-11.0.0.orig/debian/barbican-common.config.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.config.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,17 +0,0 @@
-#!/bin/sh
-
-set -e
-
-. /usr/share/debconf/confmodule
-CONF=/etc/barbican/barbican.conf
-API_CONF=/etc/barbican/barbican-api-paste.ini
-
-#PKGOS-INCLUDE#
-
-pkgos_var_user_group barbican
-pkgos_dbc_read_conf -pkg barbican-common ${CONF} DEFAULT sql_connection barbican $@
-pkgos_rabbit_read_conf ${CONF} DEFAULT barbican
-pkgos_read_admin_creds ${CONF} keystone_authtoken barbican
-db_go
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.install barbican-11.0.0/debian/barbican-common.install
--- barbican-11.0.0.orig/debian/barbican-common.install 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.install 2021-11-26 17:57:04.417749768 +0000
@@ -1,2 +1,5 @@
bin/barbican-api /usr/bin
usr/bin/*
+etc/barbican/barbican-api-paste.ini etc/barbican
+etc/barbican/barbican.conf etc/barbican
+etc/barbican/vassals/barbican-api.ini etc/barbican/vassals
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.posinst barbican-11.0.0/debian/barbican-common.posinst
--- barbican-11.0.0.orig/debian/barbican-common.posinst 1970-01-01 00:00:00.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.posinst 2021-11-26 17:11:12.770838698 +0000
@@ -0,0 +1,28 @@
+#!/bin/sh
+
+set -e
+
+set -e
+
+if [ "$1" = "configure" ]; then
+ if ! getent group barbican > /dev/null 2>&1; then
+ addgroup --system barbican >/dev/null
+ fi
+
+ if ! getent passwd barbican > /dev/null 2>&1; then
+ adduser --system --home /var/lib/barbican --ingroup barbican --no-create-home --shell /bin/false barbican
+ fi
+
+ chown barbican:adm /var/log/barbican
+ chmod 0750 /var/log/barbican
+
+ find /etc/barbican -exec chown root:barbican "{}" +
+ find /etc/barbican -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +
+
+ find /var/lib/barbican -exec chown barbican:barbican "{}" +
+ find /var/lib/barbican -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +
+fi
+
+#DEBHELPER#
+
+exit 0
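The `find ... -type f -exec chmod 0640 + -o -type d -exec chmod 0750 +` idiom used in the postinst above sets file and directory modes in a single traversal. A small self-contained demonstration (temporary paths here are illustrative, not the real /etc/barbican tree):

```shell
set -e
d=$(mktemp -d)
mkdir -p "$d/conf/sub"
touch "$d/conf/a.conf" "$d/conf/sub/b.conf"

# One pass: regular files get 0640, directories get 0750.
# "-exec ... +" always evaluates true, so the -o branch only fires for non-files.
find "$d/conf" -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +

stat -c '%a' "$d/conf/a.conf"   # 640
stat -c '%a' "$d/conf/sub"      # 750
```

The single-pass form avoids walking the tree twice (once for files, once for directories), which matters little here but is the conventional shape for this kind of permission fixup.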
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.postinst.in barbican-11.0.0/debian/barbican-common.postinst.in
--- barbican-11.0.0.orig/debian/barbican-common.postinst.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.postinst.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,46 +0,0 @@
-#!/bin/sh
-
-set -e
-
-CONF=/etc/barbican/barbican.conf
-API_CONF=/etc/barbican/barbican-api-paste.ini
-
-#PKGOS-INCLUDE#
-
-if [ "$1" = "configure" ] || [ "$1" = "reconfigure" ] ; then
- . /usr/share/debconf/confmodule
- . /usr/share/dbconfig-common/dpkg/postinst
-
- pkgos_var_user_group barbican
- mkdir -p /var/lib/barbican/temp
- chown barbican:barbican /var/lib/barbican/temp
-
- pkgos_write_new_conf barbican api_audit_map.conf
- pkgos_write_new_conf barbican barbican-api-paste.ini
- pkgos_write_new_conf barbican barbican.conf
- pkgos_write_new_conf barbican barbican-functional.conf
- if [ -r /etc/barbican/policy.json ] ; then
- mv /etc/barbican/policy.json /etc/barbican/disabled.policy.json.old
- fi
-
- db_get barbican/configure_db
- if [ "$RET" = "true" ]; then
- pkgos_dbc_postinst ${CONF} DEFAULT sql_connection barbican $@
- fi
-
- pkgos_rabbit_write_conf ${CONF} DEFAULT barbican
- pkgos_write_admin_creds ${CONF} keystone_authtoken barbican
-
- db_get barbican/configure_db
- if [ "$RET" = "true" ]; then
- echo "Now calling barbican-db-manage upgrade: this may take a while..."
-# echo "TODO: barbican-db-manage upgrade: Disabled for now..."
- su -s /bin/sh -c 'barbican-db-manage upgrade' barbican
- fi
-
- db_stop
-fi
-
-#DEBHELPER#
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.postrm barbican-11.0.0/debian/barbican-common.postrm
--- barbican-11.0.0.orig/debian/barbican-common.postrm 1970-01-01 00:00:00.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.postrm 2021-11-26 17:11:12.774838632 +0000
@@ -0,0 +1,14 @@
+#!/bin/sh
+
+set -e
+
+if [ "$1" = "purge" ] ; then
+ echo "Purging barbican. Backup of /var/lib/barbican can be found at /var/lib/barbican.tar.bz2" >&2
+ [ -e /var/lib/barbican ] && rm -rf /var/lib/barbican
+ [ -e /var/log/barbican ] && rm -rf /var/log/barbican
+fi
+
+
+#DEBHELPER#
+
+exit 0
diff -Naurp barbican-11.0.0.orig/debian/barbican-common.postrm.in barbican-11.0.0/debian/barbican-common.postrm.in
--- barbican-11.0.0.orig/debian/barbican-common.postrm.in 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/barbican-common.postrm.in 1970-01-01 00:00:00.000000000 +0000
@@ -1,25 +0,0 @@
-#!/bin/sh
-
-set -e
-
-#PKGOS-INCLUDE#
-
-if [ "$1" = "purge" ] ; then
- # Purge the db
- pkgos_dbc_postrm barbican barbican-common $@
-
- # Purge config files copied in postinst
- for i in barbican.conf barbican-admin-paste.ini barbican-api.conf barbican-api-paste.ini barbican-functional.conf policy.json api_audit_map.conf ; do
- rm -f /etc/barbican/$i
- done
- # and the folders
- rmdir --ignore-fail-on-non-empty /etc/barbican || true
-
- echo "Purging barbican. Backup of /var/lib/barbican can be found at /var/lib/barbican.tar.bz2" >&2
- [ -e /var/lib/barbican ] && rm -rf /var/lib/barbican
- [ -e /var/log/barbican ] && rm -rf /var/log/barbican
-fi
-
-#DEBHELPER#
-
-exit 0
diff -Naurp barbican-11.0.0.orig/debian/control barbican-11.0.0/debian/control
--- barbican-11.0.0.orig/debian/control 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/control 2021-11-26 17:11:12.774838632 +0000
@@ -96,7 +96,6 @@ Package: barbican-common
Architecture: all
Depends:
adduser,
- dbconfig-common,
debconf,
python3-barbican (= ${binary:Version}),
${misc:Depends},
diff -Naurp barbican-11.0.0.orig/debian/rules barbican-11.0.0/debian/rules
--- barbican-11.0.0.orig/debian/rules 2021-04-20 09:59:15.000000000 +0000
+++ barbican-11.0.0/debian/rules 2021-11-26 17:56:48.926004150 +0000
@@ -3,22 +3,12 @@
include /usr/share/openstack-pkg-tools/pkgos.make
%:
- dh $@ --buildsystem=python_distutils --with python3,systemd,sphinxdoc
+ dh $@ --buildsystem=pybuild --with python3,systemd,sphinxdoc
override_dh_auto_clean:
rm -f debian/*.init debian/*.service debian/*.upstart
rm -rf build
rm -rf barbican.sqlite
- rm -f debian/barbican-api.postinst debian/barbican-api.config debian/barbican-common.postinst debian/barbican-common.config debian/barbican-common.postrm
-
-override_dh_auto_build:
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_func barbican-api.postinst
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_func barbican-api.config
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_func barbican-common.postinst
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_func barbican-common.config
- /usr/share/openstack-pkg-tools/pkgos_insert_include pkgos_postrm barbican-common.postrm
- pkgos-merge-templates barbican-api barbican endpoint
- pkgos-merge-templates barbican-common barbican db rabbit ksat
override_dh_auto_test:
echo "Do nothing..."
@@ -35,46 +25,9 @@ ifeq (,$(findstring nocheck, $(DEB_BUILD
pkgos-dh_auto_test --no-py2 'barbican\.tests\.(?!(.*common.test_utils\.WhenTestingAcceptEncodingGetter\.test_get_correct_fullname_for_class.*|.*common\.test_utils\.WhenTestingGenerateFullClassnameForInstance\.test_returns_qualified_name.*|.*plugin\.interface\.test_certificate_manager\.WhenTestingCertificateEventPluginManager\.test_get_plugin_by_name.*|.*plugin\.interface\.test_certificate_manager\.WhenTestingCertificatePluginManager\.test_get_plugin_by_ca_id.*|.*plugin\.interface\.test_certificate_manager\.WhenTestingCertificatePluginManager\.test_get_plugin_by_name.*|.*plugin\.interface\.test_certificate_manager\.WhenTestingCertificatePluginManager\.test_refresh_ca_list.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_delete_secret_assert_called.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_generate_asymmetric_key_assert_called.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_generate_symmetric_key_assert_called.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_get_secret_opaque.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_get_secret_private_key.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_get_secret_public_key.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_get_secret_symmetric.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_store_private_key_secret_assert_called.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_store_symmetric_secret_assert_called.*|.*tasks\.test_keystone_consumer\.WhenUsingKeystoneEventConsumerProcessMethod\.test_existing_project_entities_cleanup_for_plain_secret.*|.*plugin\.test_kmip\.WhenTestingKMIPSecretStore\.test_credential.*|.*test_hacking\.HackingTestCase\.test_logging_with_tuple_argument.*|.*common\.test_validators\.WhenTestingSecretMetadataValidator\.test_should_validate_all_fields_and_make_key_lowercase.*|.*test_hacking\.HackingTestCase\.test_str_on_exception.*|.*test_hacking\.HackingTestCase\.test_str_on_multiple_exceptions.*|.*test_hacking\.HackingTestCase\.test_str_unicode_on_multiple_exceptions.*|.*test_hacking\.HackingTestCase\.test_unicode_on_exception.*))'
endif
-
- # Generate the barbican.conf config using installed python-barbican files.
- mkdir -p $(CURDIR)/debian/barbican-common/usr/share/barbican-common
- PYTHONPATH=$(CURDIR)/debian/tmp/usr/lib/python3/dist-packages oslo-config-generator \
- --output-file $(CURDIR)/debian/barbican-common/usr/share/barbican-common/barbican.conf \
- --wrap-width 140 \
- --namespace barbican.certificate.plugin \
- --namespace barbican.certificate.plugin.snakeoil \
- --namespace barbican.common.config \
- --namespace barbican.plugin.crypto \
- --namespace barbican.plugin.crypto.p11 \
- --namespace barbican.plugin.crypto.simple \
- --namespace barbican.plugin.dogtag \
- --namespace barbican.plugin.secret_store \
- --namespace barbican.plugin.secret_store.kmip \
- --namespace keystonemiddleware.auth_token \
- --namespace oslo.log \
- --namespace oslo.messaging \
- --namespace oslo.middleware.cors \
- --namespace oslo.middleware.http_proxy_to_wsgi \
- --namespace oslo.policy \
- --namespace oslo.service.periodic_task \
- --namespace oslo.service.sslutils \
- --namespace oslo.service.wsgi
- pkgos-readd-keystone-authtoken-missing-options $(CURDIR)/debian/barbican-common/usr/share/barbican-common/barbican.conf keystone_authtoken barbican
-
- # Same with policy.conf
- mkdir -p $(CURDIR)/debian/barbican-common/etc/barbican/policy.d
- PYTHONPATH=$(CURDIR)/debian/tmp/usr/lib/python3/dist-packages oslopolicy-sample-generator \
- --output-file $(CURDIR)/debian/barbican-common/etc/barbican/policy.d/00_default_policy.yaml \
- --format yaml \
- --namespace barbican
-
- # Use the policy.d folder
- pkgos-fix-config-default $(CURDIR)/debian/barbican-common/usr/share/barbican-common/barbican.conf oslo_policy policy_dirs /etc/barbican/policy.d
-
-
- # Restore sanity...
- pkgos-fix-config-default $(CURDIR)/debian/barbican-common/usr/share/barbican-common/barbican.conf keystone_notifications enable True
-
+ PYTHONPATH=$(CURDIR) oslo-config-generator \
+ --config-file etc/oslo-config-generator/barbican.conf \
+ --output-file etc/barbican/barbican.conf
dh_install
rm -rf $(CURDIR)/debian/tmp/usr/etc
dh_missing --fail-missing
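The `--config-file` passed to oslo-config-generator above points at a declarative generator config rather than the long list of `--namespace` flags removed by this patch. The contents below are an assumption following the standard oslo.config generator format, using namespaces taken from the removed rules section:

```ini
[DEFAULT]
output_file = etc/barbican/barbican.conf
wrap_width = 79
namespace = barbican.common.config
namespace = keystonemiddleware.auth_token
namespace = oslo.log
namespace = oslo.messaging
namespace = oslo.policy
```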


@@ -1,83 +0,0 @@
From 31cab241e50e2fc99f257c5e9a1a006c66b7041f Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Thu, 3 Mar 2022 19:34:02 +0000
Subject: [PATCH] Start barbican-api with gunicorn during bootstrap for Debian
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
debian/barbican-api.install | 2 +-
debian/barbican-api.service.in | 19 +++++++++++++++++++
debian/barbican-common.install | 1 +
debian/gunicorn-config.py | 16 ++++++++++++++++
4 files changed, 37 insertions(+), 1 deletion(-)
create mode 100644 debian/barbican-api.service.in
create mode 100644 debian/gunicorn-config.py
diff --git a/debian/barbican-api.install b/debian/barbican-api.install
index 05ddad9..3d8f2b4 100644
--- a/debian/barbican-api.install
+++ b/debian/barbican-api.install
@@ -1 +1 @@
-debian/barbican-api-uwsgi.ini /etc/barbican
+debian/gunicorn-config.py /etc/barbican
diff --git a/debian/barbican-api.service.in b/debian/barbican-api.service.in
new file mode 100644
index 0000000..197a281
--- /dev/null
+++ b/debian/barbican-api.service.in
@@ -0,0 +1,19 @@
+[Unit]
+Description=Openstack Barbican API server
+After=syslog.target network.target
+Before=httpd.service
+
+[Service]
+PIDFile=/run/barbican/pid
+User=barbican
+Group=barbican
+RuntimeDirectory=barbican
+RuntimeDirectoryMode=770
+ExecStart=/usr/bin/gunicorn --pid /run/barbican/pid -c /etc/barbican/gunicorn-config.py --paste /etc/barbican/barbican-api-paste.ini
+ExecReload=/usr/bin/kill -s HUP $MAINPID
+ExecStop=/usr/bin/kill -s TERM $MAINPID
+StandardError=syslog
+Restart=on-failure
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/barbican-common.install b/debian/barbican-common.install
index 663fdc8..f1944b5 100644
--- a/debian/barbican-common.install
+++ b/debian/barbican-common.install
@@ -1,5 +1,6 @@
bin/barbican-api /usr/bin
usr/bin/*
+etc/barbican/api_audit_map.conf etc/barbican
etc/barbican/barbican-api-paste.ini etc/barbican
etc/barbican/barbican.conf etc/barbican
etc/barbican/vassals/barbican-api.ini etc/barbican/vassals
diff --git a/debian/gunicorn-config.py b/debian/gunicorn-config.py
new file mode 100644
index 0000000..c8c1e07
--- /dev/null
+++ b/debian/gunicorn-config.py
@@ -0,0 +1,16 @@
+import multiprocessing
+
+bind = '0.0.0.0:9311'
+user = 'barbican'
+group = 'barbican'
+
+timeout = 30
+backlog = 2048
+keepalive = 2
+
+workers = multiprocessing.cpu_count() * 2
+
+loglevel = 'info'
+errorlog = '-'
+accesslog = '-'
+
--
2.30.2


@@ -1,55 +0,0 @@
From a729c3af80ec8b045ba8f04dfb7db4c90ab8b9c5 Mon Sep 17 00:00:00 2001
From: Dan Voiculeasa <dan.voiculeasa@windriver.com>
Date: Thu, 31 Mar 2022 18:31:00 +0300
Subject: [PATCH 3/3] Create barbican user, group, log dir
Signed-off-by: Dan Voiculeasa <dan.voiculeasa@windriver.com>
---
debian/barbican-common.dirs | 1 +
...{barbican-common.posinst => barbican-common.postinst} | 9 +--------
2 files changed, 2 insertions(+), 8 deletions(-)
create mode 100644 debian/barbican-common.dirs
rename debian/{barbican-common.posinst => barbican-common.postinst} (52%)
diff --git a/debian/barbican-common.dirs b/debian/barbican-common.dirs
new file mode 100644
index 0000000..3a4ef46
--- /dev/null
+++ b/debian/barbican-common.dirs
@@ -0,0 +1 @@
+/var/log/barbican
diff --git a/debian/barbican-common.posinst b/debian/barbican-common.postinst
similarity index 52%
rename from debian/barbican-common.posinst
rename to debian/barbican-common.postinst
index 9cf6a4c..bcf54d1 100644
--- a/debian/barbican-common.posinst
+++ b/debian/barbican-common.postinst
@@ -2,8 +2,6 @@
set -e
-set -e
-
if [ "$1" = "configure" ]; then
if ! getent group barbican > /dev/null 2>&1; then
addgroup --system barbican >/dev/null
@@ -13,14 +11,9 @@ if [ "$1" = "configure" ]; then
adduser --system --home /var/lib/barbican --ingroup barbican --no-create-home --shell /bin/false barbican
fi
- chown barbican:adm /var/log/barbican
+ chown barbican:barbican /var/log/barbican
chmod 0750 /var/log/barbican
- find /etc/barbican -exec chown root:barbican "{}" +
- find /etc/barbican -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +
-
- find /var/lib/barbican -exec chown barbican:barbican "{}" +
- find /var/lib/barbican -type f -exec chmod 0640 "{}" + -o -type d -exec chmod 0750 "{}" +
fi
#DEBHELPER#
--
2.30.0


@@ -1,3 +0,0 @@
0001-Remove-dbconfig-and-openstack-pkg-tools-config.patch
0002-Start-barbican-api-with-gunicorn-during-bootstrap-fo.patch
0003-Create-barbican-user-group-log-dir.patch


@@ -1,12 +0,0 @@
---
debname: barbican
debver: 1:11.0.0-3
dl_path:
name: barbican-debian-11.0.0-3.tar.gz
url: https://salsa.debian.org/openstack-team/services/barbican/-/archive/debian/11.0.0-3/barbican-debian-11.0.0-3.tar.gz
md5sum: 44caa91c9df25e29f399a3bbdb22d375
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/barbican


@@ -1,36 +0,0 @@
From 754fc74974be3b854173f7ce51ed0e248eb24b03 Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Tue, 24 May 2022 10:33:02 -0400
Subject: [PATCH] Store secret data in ascii format in DB
Store secret data (plugin_meta and cypher_text) in ascii format
instead of hex format in database.
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
barbican/plugin/store_crypto.py | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/barbican/plugin/store_crypto.py b/barbican/plugin/store_crypto.py
index c13e59c..843d5a8 100644
--- a/barbican/plugin/store_crypto.py
+++ b/barbican/plugin/store_crypto.py
@@ -311,7 +311,8 @@ def _store_secret_and_datum(
# setup and store encrypted datum
datum_model = models.EncryptedDatum(secret_model, kek_datum_model)
datum_model.content_type = context.content_type
- datum_model.cypher_text = base64.b64encode(generated_dto.cypher_text)
+ datum_model.cypher_text = \
+ base64.b64encode(generated_dto.cypher_text).decode('utf-8')
datum_model.kek_meta_extended = generated_dto.kek_meta_extended
repositories.get_encrypted_datum_repository().create_from(
datum_model)
@@ -333,4 +334,4 @@ def _indicate_bind_completed(kek_meta_dto, kek_datum):
kek_datum.algorithm = kek_meta_dto.algorithm
kek_datum.bit_length = kek_meta_dto.bit_length
kek_datum.mode = kek_meta_dto.mode
- kek_datum.plugin_meta = kek_meta_dto.plugin_meta
+ kek_datum.plugin_meta = kek_meta_dto.plugin_meta.decode('utf-8')
--
2.25.1
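The `.decode('utf-8')` calls in the patch above matter because, on Python 3, `base64.b64encode` returns `bytes`, while the database columns store text. A short self-contained illustration (the sample payload is hypothetical):

```python
import base64

# base64.b64encode always returns bytes on Python 3:
cypher_text = b"secret-bytes"
encoded = base64.b64encode(cypher_text)   # b'c2VjcmV0LWJ5dGVz'

# Decoding yields a str suitable for a text column; base64 output
# is pure ASCII, so utf-8 decoding can never fail here:
text_value = encoded.decode('utf-8')      # 'c2VjcmV0LWJ5dGVz'

# Round trip recovers the original secret bytes:
assert base64.b64decode(text_value) == cypher_text
```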


@@ -1 +0,0 @@
0001-Store-secret-data-in-ascii-format-in-DB.patch


@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@@ -1,8 +0,0 @@
This repo is for https://opendev.org/openstack/keystone
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.
@@ -1,446 +0,0 @@
From 6f55cd9922280ee5f4d119aa4a9924a51dea8068 Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Tue, 15 Feb 2022 15:59:20 +0000
Subject: [PATCH] Add stx support
Apply Centos 7 patches to the debian packaging.
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 2 +
debian/keystone.dirs | 1 +
debian/keystone.install | 4 +
debian/keystone.logrotate | 8 -
debian/keystone.postinst.in | 10 +-
debian/python3-keystone.install | 1 +
debian/rules | 6 +
debian/stx/keystone-all | 156 ++++++++++++++++++
debian/stx/keystone-fernet-keys-rotate-active | 64 +++++++
debian/stx/keystone.service | 14 ++
debian/stx/password-rules.conf | 34 ++++
debian/stx/public.py | 21 +++
12 files changed, 304 insertions(+), 17 deletions(-)
delete mode 100644 debian/keystone.logrotate
create mode 100644 debian/stx/keystone-all
create mode 100644 debian/stx/keystone-fernet-keys-rotate-active
create mode 100644 debian/stx/keystone.service
create mode 100644 debian/stx/password-rules.conf
create mode 100644 debian/stx/public.py
diff --git a/debian/control b/debian/control
index 9d0a3a41f..9a67234fa 100644
--- a/debian/control
+++ b/debian/control
@@ -31,6 +31,8 @@ Build-Depends-Indep:
python3-jwt,
python3-keystoneclient,
python3-keystonemiddleware (>= 7.0.0),
+ python3-keyring,
+ python3-keyrings.alt,
python3-ldap,
python3-ldappool,
python3-lxml (>= 4.5.0),
diff --git a/debian/keystone.dirs b/debian/keystone.dirs
index a4b3a9e86..6c6e31faf 100644
--- a/debian/keystone.dirs
+++ b/debian/keystone.dirs
@@ -2,3 +2,4 @@
/var/lib/keystone
/var/lib/keystone/cache
/var/log/keystone
+usr/share/keystone
diff --git a/debian/keystone.install b/debian/keystone.install
index c0d62c45b..8d68859c0 100644
--- a/debian/keystone.install
+++ b/debian/keystone.install
@@ -1,3 +1,7 @@
debian/keystone-uwsgi.ini /etc/keystone
etc/default_catalog.templates /etc/keystone
etc/logging.conf.sample /usr/share/doc/keystone
+debian/stx/keystone-fernet-keys-rotate-active usr/bin
+debian/stx/password-rules.conf /etc/keystone
+debian/stx/keystone.service lib/systemd/system
+debian/stx/keystone-all usr/bin
diff --git a/debian/keystone.logrotate b/debian/keystone.logrotate
deleted file mode 100644
index 2709c72aa..000000000
--- a/debian/keystone.logrotate
+++ /dev/null
@@ -1,8 +0,0 @@
-/var/log/keystone/*.log {
- daily
- missingok
- rotate 5
- compress
- minsize 100k
- copytruncate
-}
\ No newline at end of file
diff --git a/debian/keystone.postinst.in b/debian/keystone.postinst.in
index 207cbc22e..4b464a236 100755
--- a/debian/keystone.postinst.in
+++ b/debian/keystone.postinst.in
@@ -170,15 +170,7 @@ if [ "$1" = "configure" ] ; then
su keystone -s /bin/sh -c 'keystone-manage credential_setup --keystone-user keystone --keystone-group keystone'
fi
- chown keystone:adm /var/log/keystone
-
- if [ -n $(which systemctl)"" ] ; then
- systemctl enable keystone
- fi
- if [ -n $(which update-rc.d)"" ] ; then
- update-rc.d keystone defaults
- fi
- invoke-rc.d keystone start
+ chown -R keystone:keystone /var/log/keystone
db_get keystone/create-admin-tenant
if [ "$RET" = "true" ] ; then
diff --git a/debian/python3-keystone.install b/debian/python3-keystone.install
index 44d7fcb64..3c76ffb99 100644
--- a/debian/python3-keystone.install
+++ b/debian/python3-keystone.install
@@ -1,2 +1,3 @@
usr/bin/*
usr/lib/python3/*
+debian/stx/public.py usr/share/keystone
diff --git a/debian/rules b/debian/rules
index 3744142f9..f827d1b68 100755
--- a/debian/rules
+++ b/debian/rules
@@ -106,6 +106,12 @@ ifeq (,$(findstring nodocs, $(DEB_BUILD_OPTIONS)))
dh_installman
endif
+override_dh_installsystemd:
+ dh_installsystemd --no-enable --no-start
+
+override_dh_installinit:
+ dh_installinit --no-enable --no-start
+
override_dh_python3:
dh_python3 --shebang=/usr/bin/python3
diff --git a/debian/stx/keystone-all b/debian/stx/keystone-all
new file mode 100644
index 000000000..de339caa6
--- /dev/null
+++ b/debian/stx/keystone-all
@@ -0,0 +1,156 @@
+#!/bin/sh
+# Copyright (c) 2013-2018 Wind River Systems, Inc.
+#
+# SPDX-License-Identifier: Apache-2.0
+#
+
+### BEGIN INIT INFO
+# Provides: OpenStack Keystone-wsgi
+# Required-Start: networking
+# Required-Stop: networking
+# Default-Start: 2 3 4 5
+# Default-Stop: 0 1 6
+# Short-Description: OpenStack Keystone
+# Description: Openstack Identitiy service running on WSGI compatable gunicorn web server
+#
+### END INIT INFO
+
+RETVAL=0
+#public 5000
+
+DESC_PUBLIC="openstack-keystone"
+
+PIDFILE_PUBLIC="/var/run/$DESC_PUBLIC.pid"
+
+PYTHON=`which python`
+
+source /etc/keystone/keystone-extra.conf
+source /etc/platform/platform.conf
+
+if [ -n ${@:2:1} ] ; then
+ if [ ${@:2:1}="--public-bind-addr" ] ; then
+ PUBLIC_BIND_ADDR_CMD=${@:3:1}
+ fi
+fi
+
+
+###
+EXEC="/usr/bin/gunicorn"
+
+WORKER="eventlet"
+# Increased timeout to facilitate large image uploads
+TIMEOUT="200"
+
+# Calculate the no of workers based on the number of workers retrieved by
+# Platform Eng which is retreived from the keystone-extra.conf
+
+if [ "$system_type" == "All-in-one" ]; then
+ TIS_WORKERS_FACTOR=1
+else
+ TIS_WORKERS_FACTOR=1.5
+fi
+TIS_WORKERS=$(echo "${TIS_WORKERS_FACTOR}*${TIS_PUBLIC_WORKERS}"|bc )
+TIS_WORKERS=${TIS_WORKERS%.*}
+
+#--max-requests , --max-requests-jitter Configuration
+#--max-requests = The max number of requests a worker will process before restarting
+#--max-requests-jitter = The maximum jitter to add to the max_requests setting.
+MAX_REQUESTS=100000
+MAX_REQ_JITTER_CAP_FACTOR=0.5
+MAX_REQ_JITTER_PUBLIC=$(echo "${TIS_WORKERS}*${MAX_REQ_JITTER_CAP_FACTOR}+1"|bc)
+MAX_REQ_JITTER_PUBLIC=${MAX_REQ_JITTER_PUBLIC%.*}
+
+
+start()
+{
+ # Got proper no of workers . Starting gunicorn now
+ echo -e "Initialising keystone service using gunicorn .. \n"
+
+ if [ -z "$PUBLIC_BIND_ADDR" ]; then
+ echo "Keystone floating ip not found . Cannot start services. Exiting .."
+ exit 1
+ fi
+ BIND_PUBLIC=$PUBLIC_BIND_ADDR:5000
+
+ if [ -e $PIDFILE_PUBLIC ]; then
+ PIDDIR=/proc/$(cat $PIDFILE_PUBLIC)
+ if [ -d ${PIDDIR} ]; then
+ echo "$DESC_PUBLIC already running."
+ exit 1
+ else
+ echo "Removing stale PID file $PIDFILE_PUBLIC"
+ rm -f $PIDFILE_PUBLIC
+ fi
+ fi
+
+ echo -e "Starting $DESC_PUBLIC...\n";
+ echo -e "Worker is ${WORKER} --workers ${TIS_WORKERS} --timeout ${TIMEOUT} --max_requests ${MAX_REQUESTS} --max_request_jitter public ${MAX_REQ_JITTER_PUBLIC}\n" ;
+
+ echo -e "Starting keystone process at port 5000 \n" ;
+
+ start-stop-daemon --start --quiet --background --pidfile ${PIDFILE_PUBLIC} \
+ --make-pidfile --exec ${PYTHON} -- ${EXEC} --bind ${BIND_PUBLIC} \
+ --worker-class ${WORKER} --workers ${TIS_WORKERS} --timeout ${TIMEOUT} \
+ --max-requests ${MAX_REQUESTS} --max-requests-jitter ${MAX_REQ_JITTER_PUBLIC} \
+ --log-syslog \
+ --pythonpath '/usr/share/keystone' public:application --name keystone-public
+
+ RETVAL=$?
+ if [ $RETVAL -eq 0 ]; then
+ echo -e "Keystone started at port 5000... \n"
+ else
+ echo -e "Failed to start Keystone .. \n"
+ fi
+}
+
+stop()
+{
+ if [ -e $PIDFILE_PUBLIC ]; then
+ start-stop-daemon --stop --quiet --pidfile $PIDFILE_PUBLIC
+ RETVAL_PUBLIC=$?
+ if [ $RETVAL_PUBLIC -eq 0 ]; then
+ echo "Stopped $DESC_PUBLIC."
+ else
+ echo "Stopping failed - $PIDFILE_PUBLIC"
+ fi
+ rm -f $PIDFILE_PUBLIC
+ else
+ echo "Already stopped - $PIDFILE_PUBLIC"
+ fi
+}
+
+status()
+{
+ pid_public=`cat $PIDFILE_PUBLIC 2>/dev/null`
+
+ if [ -n "$pid_public" ]; then
+ echo -e "\033[32m $DESC_PUBLIC is running..\033[0m"
+ else
+ echo -e "\033[31m $DESC_PUBLIC is not running..\033[0m"
+ fi
+}
+
+
+
+case "$1" in
+ start)
+ start
+ ;;
+ stop)
+ stop
+ ;;
+ restart|force-reload|reload)
+ stop
+ start
+ ;;
+ status)
+ status
+ ;;
+ *)
+ #echo "Usage: $0 {start|stop|force-reload|restart|reload|status} OR {/usr/bin/keystone-all start --public-bind-addr xxx.xxx.xxx}"
+ start
+ #RETVAL=1
+ ;;
+esac
+
+exit $RETVAL
diff --git a/debian/stx/keystone-fernet-keys-rotate-active b/debian/stx/keystone-fernet-keys-rotate-active
new file mode 100644
index 000000000..e2124eee3
--- /dev/null
+++ b/debian/stx/keystone-fernet-keys-rotate-active
@@ -0,0 +1,64 @@
+#!/bin/bash
+
+#
+# Wrapper script to rotate keystone fernet keys on active controller only
+#
+KEYSTONE_KEYS_ROTATE_INFO="/var/run/keystone-keys-rotate.info"
+KEYSTONE_KEYS_ROTATE_CMD="/usr/bin/nice -n 2 /usr/bin/keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone"
+
+function is_active_pgserver()
+{
+ # Determine whether we're running on the same controller as the service.
+ local service=postgres
+ local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
+ if [ "x$enabledactive" == "x" ]
+ then
+ # enabled-active not found for that service on this controller
+ return 1
+ else
+ # enabled-active found for that resource
+ return 0
+ fi
+}
+
+if is_active_pgserver
+then
+ if [ ! -f ${KEYSTONE_KEYS_ROTATE_INFO} ]
+ then
+ echo delay_count=0 > ${KEYSTONE_KEYS_ROTATE_INFO}
+ fi
+
+ source ${KEYSTONE_KEYS_ROTATE_INFO}
+ sudo -u postgres psql -d fm -c "SELECT alarm_id, entity_instance_id from alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
+ if [ $? -eq 0 ]
+ then
+ source /etc/platform/platform.conf
+ if [ "${system_type}" = "All-in-one" ]
+ then
+ source /etc/init.d/task_affinity_functions.sh
+ idle_core=$(get_most_idle_core)
+ if [ "$idle_core" -ne "0" ]
+ then
+ sh -c "exec taskset -c $idle_core ${KEYSTONE_KEYS_ROTATE_CMD}"
+ sed -i "/delay_count/s/=.*/=0/" ${KEYSTONE_KEYS_ROTATE_INFO}
+ exit 0
+ fi
+ fi
+
+ if [ "$delay_count" -lt "3" ]
+ then
+ newval=$(($delay_count+1))
+ sed -i "/delay_count/s/=.*/=$newval/" ${KEYSTONE_KEYS_ROTATE_INFO}
+ (sleep 3600; /usr/bin/keystone-fernet-keys-rotate-active) &
+ exit 0
+ fi
+
+ fi
+
+ eval ${KEYSTONE_KEYS_ROTATE_CMD}
+ sed -i "/delay_count/s/=.*/=0/" ${KEYSTONE_KEYS_ROTATE_INFO}
+
+fi
+
+exit 0
+
diff --git a/debian/stx/keystone.service b/debian/stx/keystone.service
new file mode 100644
index 000000000..a72aa84be
--- /dev/null
+++ b/debian/stx/keystone.service
@@ -0,0 +1,14 @@
+[Unit]
+Description=OpenStack Identity Service (code-named Keystone)
+After=syslog.target network.target
+
+[Service]
+Type=forking
+#ReminAfterExit is set to yes as we have 2 pids to monitor
+RemainAfterExit=yes
+ExecStart=/usr/bin/keystone-all start
+ExecStop=/usr/bin/keystone-all stop
+ExecReload=/usr/bin/keystone-all reload
+
+[Install]
+WantedBy=multi-user.target
diff --git a/debian/stx/password-rules.conf b/debian/stx/password-rules.conf
new file mode 100644
index 000000000..e7ce65602
--- /dev/null
+++ b/debian/stx/password-rules.conf
@@ -0,0 +1,34 @@
+# The password rules captures the [security_compliance]
+# section of the generic Keystone configuration (keystone.conf)
+# This configuration is used to statically define the password
+# rules for password validation in pre-Keystone environments
+#
+# N.B: Only set non-default keys here (default commented configuration
+# items not needed)
+
+[security_compliance]
+
+#
+# From keystone
+#
+
+# This controls the number of previous user password iterations to keep in
+# history, in order to enforce that newly created passwords are unique. Setting
+# the value to one (the default) disables this feature. Thus, to enable this
+# feature, values must be greater than 1. This feature depends on the `sql`
+# backend for the `[identity] driver`. (integer value)
+# Minimum value: 1
+unique_last_password_count = 3
+
+# The regular expression used to validate password strength requirements. By
+# default, the regular expression will match any password. The following is an
+# example of a pattern which requires at least 1 letter, 1 digit, and have a
+# minimum length of 7 characters: ^(?=.*\d)(?=.*[a-zA-Z]).{7,}$ This feature
+# depends on the `sql` backend for the `[identity] driver`. (string value)
+password_regex = ^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&*()<>{}+=_\\\[\]\-?|~`,.;:]).{7,}$
+
+# Describe your password regular expression here in language for humans. If a
+# password fails to match the regular expression, the contents of this
+# configuration variable will be returned to users to explain why their
+# requested password was insufficient. (string value)
+password_regex_description = Password must have a minimum length of 7 characters, and must contain at least 1 upper case, 1 lower case, 1 digit, and 1 special character
diff --git a/debian/stx/public.py b/debian/stx/public.py
new file mode 100644
index 000000000..d3a29f3b3
--- /dev/null
+++ b/debian/stx/public.py
@@ -0,0 +1,21 @@
+# Copyright (c) 2013-2017 Wind River Systems, Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+
+from keystone.server import wsgi as wsgi_server
+
+import sys
+sys.argv = sys.argv[:1]
+
+application = wsgi_server.initialize_public_application()
--
2.34.1
@@ -1,44 +0,0 @@
From 8cf5b37f70ade287cb5eaea7dd48d1eeb1ae737d Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Mon, 14 Mar 2022 10:35:39 -0400
Subject: [PATCH] Add login fail lockout security compliance options
Added two login fail lockout security compliance options:
lockout_duration
lockout_failure_attempts
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
debian/stx/password-rules.conf | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/debian/stx/password-rules.conf b/debian/stx/password-rules.conf
index e7ce656..ac18ef9 100644
--- a/debian/stx/password-rules.conf
+++ b/debian/stx/password-rules.conf
@@ -32,3 +32,22 @@ password_regex = ^(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&*()<>{}+=_\\\[\]\-?
# configuration variable will be returned to users to explain why their
# requested password was insufficient. (string value)
password_regex_description = Password must have a minimum length of 7 characters, and must contain at least 1 upper case, 1 lower case, 1 digit, and 1 special character
+
+# The number of seconds a user account will be locked when the maximum number
+# of failed authentication attempts (as specified by `[security_compliance]
+# lockout_failure_attempts`) is exceeded. Setting this option will have no
+# effect unless you also set `[security_compliance] lockout_failure_attempts`
+# to a non-zero value. This feature depends on the `sql` backend for the
+# `[identity] driver`. (integer value)
+# Minimum value: 1
+lockout_duration=1800
+
+# The maximum number of times that a user can fail to authenticate before the
+# user account is locked for the number of seconds specified by
+# `[security_compliance] lockout_duration`. This feature is disabled by
+# default. If this feature is enabled and `[security_compliance]
+# lockout_duration` is not set, then users may be locked out indefinitely
+# until the user is explicitly enabled via the API. This feature depends on
+# the `sql` backend for the `[identity] driver`. (integer value)
+# Minimum value: 1
+lockout_failure_attempts=5
--
2.25.1
@@ -1,2 +0,0 @@
0001-Add-stx-support.patch
0002-Add-login-fail-lockout-security-compliance-options.patch
@@ -1,13 +0,0 @@
---
debname: keystone
debver: 2:18.0.0-3
dl_path:
name: keystone-debian-18.0.0-3.tar.gz
url: https://salsa.debian.org/openstack-team/services/keystone/-/archive/debian/18.0.0-3/keystone-debian-18.0.0-3.tar.gz
md5sum: fba7c47672b976cdcab5c33f49a5d2fd
revision:
dist: $STX_DIST
PKG_GITREVCOUNT: true
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/keystone
@@ -1,151 +0,0 @@
From 45b5c5b71b4ad70c5694f06126adfc60a31c51fc Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Tue, 5 Apr 2022 10:39:32 -0400
Subject: [PATCH] Support storing users in keyring
This patch added support to store keystone users in keyring in
"CGCS" service.
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
keystone/exception.py | 6 +++++
keystone/identity/core.py | 54 +++++++++++++++++++++++++++++++++++++++
requirements.txt | 1 +
3 files changed, 61 insertions(+)
diff --git a/keystone/exception.py b/keystone/exception.py
index c62338b..3cbddfb 100644
--- a/keystone/exception.py
+++ b/keystone/exception.py
@@ -227,6 +227,12 @@ class CredentialLimitExceeded(ForbiddenNotSecurity):
"of %(limit)d already exceeded for user.")
+class WRSForbiddenAction(Error):
+ message_format = _("That action is not permitted")
+ code = 403
+ title = 'Forbidden'
+
+
class SecurityError(Error):
"""Security error exception.
diff --git a/keystone/identity/core.py b/keystone/identity/core.py
index 38ebe2f..31d6cd6 100644
--- a/keystone/identity/core.py
+++ b/keystone/identity/core.py
@@ -17,6 +17,7 @@
import copy
import functools
import itertools
+import keyring
import operator
import os
import threading
@@ -54,6 +55,7 @@ MEMOIZE_ID_MAPPING = cache.get_memoization_decorator(group='identity',
DOMAIN_CONF_FHEAD = 'keystone.'
DOMAIN_CONF_FTAIL = '.conf'
+KEYRING_CGCS_SERVICE = "CGCS"
# The number of times we will attempt to register a domain to use the SQL
# driver, if we find that another process is in the middle of registering or
@@ -1125,6 +1127,26 @@ class Manager(manager.Manager):
if new_ref['domain_id'] != orig_ref['domain_id']:
raise exception.ValidationError(_('Cannot change Domain ID'))
+ def _update_keyring_password(self, user, new_password):
+ """Update user password in Keyring backend.
+ This method Looks up user entries in Keyring backend
+ and accordingly update the corresponding user password.
+ :param user : keyring user struct
+ :param new_password : new password to set
+ """
+ if (new_password is not None) and ('name' in user):
+ try:
+ # only update if an entry exists
+ if (keyring.get_password(KEYRING_CGCS_SERVICE, user['name'])):
+ keyring.set_password(KEYRING_CGCS_SERVICE,
+ user['name'], new_password)
+ except (keyring.errors.PasswordSetError, RuntimeError):
+ msg = ('Failed to Update Keyring Password for the user %s')
+ LOG.warning(msg, user['name'])
+ # only raise an exception if this is the admin user
+ if (user['name'] == 'admin'):
+ raise exception.WRSForbiddenAction(msg % user['name'])
+
def _update_user_with_federated_objects(self, user, driver, entity_id):
# If the user did not pass a federated object along inside the user
# object then we simply update the user as normal and add the
@@ -1181,6 +1203,17 @@ class Manager(manager.Manager):
ref = self._update_user_with_federated_objects(user, driver, entity_id)
+ # Certain local Keystone users are stored in Keystone as opposed
+ # to the default SQL Identity backend, such as the admin user.
+ # When its password is updated, we need to update Keyring as well
+ # as certain services retrieve this user context from Keyring and
+ # will get auth failures
+ # Need update password before send out notification. Otherwise,
+ # any process monitor the notification will still get old password
+ # from Keyring.
+ if ('password' in user) and ('name' in ref):
+ self._update_keyring_password(ref, user['password'])
+
notifications.Audit.updated(self._USER, user_id, initiator)
enabled_change = ((user.get('enabled') is False) and
@@ -1210,6 +1243,7 @@ class Manager(manager.Manager):
hints.add_filter('user_id', user_id)
fed_users = PROVIDERS.shadow_users_api.list_federated_users_info(hints)
+ username = user_old.get('name', "")
driver.delete_user(entity_id)
PROVIDERS.assignment_api.delete_user_assignments(user_id)
self.get_user.invalidate(self, user_id)
@@ -1223,6 +1257,18 @@ class Manager(manager.Manager):
PROVIDERS.credential_api.delete_credentials_for_user(user_id)
PROVIDERS.id_mapping_api.delete_id_mapping(user_id)
+
+ # Delete the keyring entry associated with this user (if present)
+ try:
+ keyring.delete_password(KEYRING_CGCS_SERVICE, username)
+ except keyring.errors.PasswordDeleteError:
+ LOG.warning(('delete_user: PasswordDeleteError for %s'),
+ username)
+ pass
+ except exception.UserNotFound:
+ LOG.warning(('delete_user: UserNotFound for %s'),
+ username)
+ pass
notifications.Audit.deleted(self._USER, user_id, initiator)
# Invalidate user role assignments cache region, as it may be caching
@@ -1475,6 +1521,14 @@ class Manager(manager.Manager):
notifications.Audit.updated(self._USER, user_id, initiator)
self._persist_revocation_event_for_user(user_id)
+ user = self.get_user(user_id)
+ # Update Keyring password for the 'user' if it
+ # has an entry in Keyring
+ if (original_password) and ('name' in user):
+ # Change the 'user' password in keyring, provided the user
+ # has an entry in Keyring backend
+ self._update_keyring_password(user, new_password)
+
@MEMOIZE
def _shadow_nonlocal_user(self, user):
try:
diff --git a/requirements.txt b/requirements.txt
index 33a2c42..1119c52 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -36,3 +36,4 @@ pycadf!=2.0.0,>=1.1.0 # Apache-2.0
msgpack>=0.5.0 # Apache-2.0
osprofiler>=1.4.0 # Apache-2.0
pytz>=2013.6 # MIT
+keyring>=5.3
--
2.25.1
@@ -1 +0,0 @@
0001-Support-storing-users-in-keyring.patch
@@ -1,8 +0,0 @@
This repo is for https://github.com/starlingx-staging/stx-openstack-ras
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.
@@ -1,26 +0,0 @@
From 254b2348d105c86438bf4057a4d428c67d51ed37 Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Fri, 5 Nov 2021 11:45:54 -0300
Subject: [PATCH] update package dependencies
Signed-off-by: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
---
debian/control | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index 1e4f8c5..ffeb41e 100644
--- a/debian/control
+++ b/debian/control
@@ -9,7 +9,7 @@ Homepage: http://github.com/madkiss/openstack-resource-agents
Package: openstack-resource-agents
Architecture: all
-Depends: ${misc:Depends}, netstat, python-keystoneclient, python-glanceclient, python-novaclient, curl
+Depends: ${misc:Depends}, net-tools, python3-keystoneclient, python3-glanceclient, python3-novaclient, curl
Description: pacemaker resource agents for OpenStack
This package contains resource agents to run most of the OpenStack
components inside a pacemaker-controlled high availability cluster.
--
2.17.1
@@ -1 +0,0 @@
0001-update-package-dependencies.patch
@@ -1,11 +0,0 @@
debver: 2012.2~f3-1
debname: openstack-resource-agents
dl_path:
name: openstack-resource-agents-2012.2~f3-1.tar.gz
url: https://github.com/starlingx-staging/stx-openstack-ras/tarball/4ba6047db1b70ee2bb3dd43739de7d2fb4e85ebd
md5sum: 58b82fa1d64ea59bad345d01bafb71be
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/openstack-ras
@@ -1,24 +0,0 @@
From c63d0c06606969ddfb85538706a1665122e69c44 Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Wed, 3 Nov 2021 12:10:34 -0300
Subject: [PATCH] remove unwanted files
Signed-off-by: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
---
Makefile | 3 +++
1 file changed, 3 insertions(+)
diff --git a/Makefile b/Makefile
index c95c187..08c9fa6 100644
--- a/Makefile
+++ b/Makefile
@@ -26,3 +26,6 @@ install:
for file in ocf/*; do \
$(INSTALL) -t $(DESTDIR)/usr/lib/ocf/resource.d/openstack -m 0755 $${file} ; \
done
+ rm -rf $(DESTDIR)/usr/lib/ocf/resource.d/openstack/ceilometer-agent-central
+ rm -rf $(DESTDIR)/usr/lib/ocf/resource.d/openstack/ceilometer-alarm-evaluator
+ rm -rf $(DESTDIR)/usr/lib/ocf/resource.d/openstack/ceilometer-alarm-notifier
--
2.17.1
@@ -1 +0,0 @@
0001-remove-unwanted-files.patch
@@ -1,221 +0,0 @@
Index: git/ocf/cinder-api
===================================================================
--- git.orig/ocf/cinder-api 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/cinder-api 2014-09-23 10:22:33.294302829 -0400
@@ -244,18 +244,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username"] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Cinder API (cinder-api) monitor succeeded"
Index: git/ocf/glance-api
===================================================================
--- git.orig/ocf/glance-api 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/glance-api 2014-09-23 10:16:35.903826295 -0400
@@ -236,11 +236,9 @@
fi
# Monitor the RA by retrieving the image list
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \
--os_username "$OCF_RESKEY_os_username" \
- --os_password "$OCF_RESKEY_os_password" \
--os_tenant_name "$OCF_RESKEY_os_tenant_name" \
--os_auth_url "$OCF_RESKEY_os_auth_url" \
index > /dev/null 2>&1
Index: git/ocf/glance-registry
===================================================================
--- git.orig/ocf/glance-registry 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/glance-registry 2014-09-23 10:22:58.078475044 -0400
@@ -246,18 +246,27 @@
# Check whether we are supposed to monitor by logging into glance-registry
# and do it if that's the case.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack ImageService (glance-registry): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack ImageService (glance-registry): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ #suppress the information displayed while checking detailed information about this specific version of the API
+ if [ -n "$OCF_RESKEY_os_username"] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Cinder API (cinder-api): $rc and $http_code"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack ImageService (glance-registry) monitor succeeded"
Index: git/ocf/keystone
===================================================================
--- git.orig/ocf/keystone 2014-09-17 13:13:09.768471050 -0400
+++ git/ocf/keystone 2014-09-23 10:18:30.736618732 -0400
@@ -237,12 +237,10 @@
# Check whether we are supposed to monitor by logging into Keystone
# and do it if that's the case.
- if [ -n "$OCF_RESKEY_client_binary" ] && [ -n "$OCF_RESKEY_os_username" ] \
- && [ -n "$OCF_RESKEY_os_password" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] \
+ if [ -n "$OCF_RESKEY_client_binary" ] && [ -n "$OCF_RESKEY_os_password" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] \
&& [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \
--os-username "$OCF_RESKEY_os_username" \
- --os-password "$OCF_RESKEY_os_password" \
--os-tenant-name "$OCF_RESKEY_os_tenant_name" \
--os-auth-url "$OCF_RESKEY_os_auth_url" \
user-list > /dev/null 2>&1
Index: git/ocf/neutron-server
===================================================================
--- git.orig/ocf/neutron-server 2014-09-17 13:13:13.872502871 -0400
+++ git/ocf/neutron-server 2014-09-23 10:23:39.358761926 -0400
@@ -256,18 +256,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Neutron API (neutron-server): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Neutron API (neutron-server): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ # Check detailed information about this specific version of the API via the validation helper (output suppressed)
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Neutron API (neutron-server): $rc"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Neutron Server (neutron-server) monitor succeeded"
Index: git/ocf/nova-api
===================================================================
--- git.orig/ocf/nova-api 2014-09-17 13:13:15.240513478 -0400
+++ git/ocf/nova-api 2014-09-23 10:23:20.454630543 -0400
@@ -244,18 +244,27 @@
fi
# Check detailed information about this specific version of the API.
- if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
- && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
- token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
- \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
- -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
- | cut -d'"' -f4 | head --lines 1`
- http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
- rc=$?
- if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
- ocf_log err "Failed to connect to the OpenStack Nova API (nova-api): $rc and $http_code"
- return $OCF_NOT_RUNNING
- fi
+# if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_password" ] \
+# && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+# token=`curl -s -d "{\"auth\":{\"passwordCredentials\": {\"username\": \"$OCF_RESKEY_os_username\", \
+# \"password\": \"$OCF_RESKEY_os_password\"}, \"tenantName\": \"$OCF_RESKEY_os_tenant_name\"}}" \
+# -H "Content-type: application/json" $OCF_RESKEY_keystone_get_token_url | tr ',' '\n' | grep '"id":' \
+# | cut -d'"' -f4 | head --lines 1`
+# http_code=`curl --write-out %{http_code} --output /dev/null -sH "X-Auth-Token: $token" $OCF_RESKEY_url`
+# rc=$?
+# if [ $rc -ne 0 ] || [ $http_code -ne 200 ]; then
+# ocf_log err "Failed to connect to the OpenStack Nova API (nova-api): $rc and $http_code"
+# return $OCF_NOT_RUNNING
+# fi
+# fi
+ # Check detailed information about this specific version of the API via the validation helper (output suppressed)
+ if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_keystone_get_token_url" ]; then
+ ./validation $OCF_RESKEY_keystone_get_token_url $OCF_RESKEY_os_username $OCF_RESKEY_os_tenant_name
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Failed to connect to the OpenStack Nova API (nova-api): $rc"
+ return $OCF_NOT_RUNNING
+ fi
fi
ocf_log debug "OpenStack Nova API (nova-api) monitor succeeded"
Index: git/ocf/validation
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ git/ocf/validation 2014-09-23 10:06:37.011706573 -0400
@@ -0,0 +1,5 @@
+#!/usr/bin/env python
+
+from keystoneclient import probe
+
+probe.main()
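The token probe that these patches comment out parsed the token id out of a Keystone v2-style JSON reply with a tr/grep/cut pipeline. A minimal sketch of just that parsing step, run against an invented sample reply (real replies carry many more fields):

```shell
#!/bin/sh
# Extract the first "id" value from a Keystone v2-style JSON reply using
# the same tr/grep/cut pipeline as the commented-out monitor code.
extract_token() {
    echo "$1" | tr ',' '\n' | grep '"id":' | cut -d'"' -f4 | head --lines 1
}

# Hypothetical sample reply, trimmed for illustration.
sample='{"access":{"token":{"expires":"2015-01-01","id":"abc123"},"user":{"name":"admin","id":"u1"}}}'
extract_token "$sample"    # prints: abc123
```

Splitting on commas only works while the token id sits in its own comma-delimited field, which is one reason these patches replace the pipeline with a dedicated keystoneclient-based probe.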

File diff suppressed because it is too large


@ -1,374 +0,0 @@
Index: git/ocf/ceilometer-mem-db
===================================================================
--- /dev/null
+++ git/ocf/ceilometer-mem-db
@@ -0,0 +1,369 @@
+#!/bin/sh
+#
+#
+# OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+#
+# Description: Manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
+#
+# Authors: Emilien Macchi
+# Mainly inspired by the Nova Scheduler resource agent written by Sebastien Han
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+# Copyright (c) 2014 Wind River Systems, Inc.
+# SPDX-License-Identifier: Apache-2.0
+#
+#
+#
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_amqp_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="ceilometer-mem-db"
+OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_user_default="root"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_amqp_server_port_default="5672"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
+
+ The 'start' operation starts the scheduler service.
+ The 'stop' operation stops the scheduler service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the scheduler service is running
+ The 'monitor' operation reports whether the scheduler service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="ceilometer-mem-db">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+May manage a ceilometer-mem-db instance or a clone set that
+creates a distributed ceilometer-mem-db cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB (ceilometer-mem-db registry) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="amqp_server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the AMQP server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">AMQP listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_amqp_server_port_default}" />
+</parameter>
+
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
+</longdesc>
+<shortdesc lang="en">Additional parameters for ceilometer-mem-db</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+ceilometer_mem_db_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+ceilometer_mem_db_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ ceilometer_mem_db_check_port $OCF_RESKEY_amqp_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+ceilometer_mem_db_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
+ rm -f $OCF_RESKEY_pid
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+ceilometer_mem_db_monitor() {
+ local rc
+ local pid
+ local scheduler_amqp_check
+
+ ceilometer_mem_db_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the connections according to the PID.
+ # We are sure to hit the Mem DB process and not another Ceilometer process with the same connection behavior
+ pid=`cat $OCF_RESKEY_pid`
+ scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Mem DB is not connected to the AMQP server : $rc"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+ceilometer_mem_db_start() {
+ local rc
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual ceilometer-mem-db daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ ceilometer_mem_db_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) started"
+ return $OCF_SUCCESS
+}
+
+ceilometer_mem_db_confirm_stop() {
+ local my_binary
+ local my_processes
+
+ my_binary=`which ${OCF_RESKEY_binary}`
+ my_processes=`pgrep -l -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"`
+
+ if [ -n "${my_processes}" ]
+ then
+ ocf_log info "About to SIGKILL the following: ${my_processes}"
+ pkill -KILL -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"
+ fi
+}
+
+ceilometer_mem_db_stop() {
+ local rc
+ local pid
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already stopped"
+ ceilometer_mem_db_confirm_stop
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) couldn't be stopped"
+ ceilometer_mem_db_confirm_stop
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) still hasn't stopped yet. Waiting ..."
+ done
+
+ ceilometer_mem_db_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+ ceilometer_mem_db_confirm_stop
+
+ ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+ceilometer_mem_db_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) ceilometer_mem_db_start;;
+ stop) ceilometer_mem_db_stop;;
+ status) ceilometer_mem_db_status;;
+ monitor) ceilometer_mem_db_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
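The port validation above combines an egrep pattern with a fixed length-4 check. A standalone sketch of that check, returning a status instead of exiting the agent with `$OCF_ERR_CONFIGURED` (a simplification so it can be exercised directly):

```shell
#!/bin/sh
# Sketch of ceilometer_mem_db_check_port: accept only a 4-character
# digit string (ranges/lists allowed by the egrep), e.g. "5672", "0080".
check_port() {
    int="$1"
    cnt=${#int}
    if echo "$int" | egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*' \
        && [ "$cnt" -eq 4 ]; then
        return 0
    fi
    return 1
}

check_port 5672 && echo "5672 ok"    # prints: 5672 ok
```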


@ -1,28 +0,0 @@
Index: git/ocf/ceilometer-collector
===================================================================
--- git.orig/ocf/ceilometer-collector 2014-08-07 21:08:46.637211162 -0400
+++ git/ocf/ceilometer-collector 2014-08-07 21:09:24.893475317 -0400
@@ -223,15 +223,16 @@
return $rc
fi
- # Check the connections according to the PID.
- # We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
- pid=`cat $OCF_RESKEY_pid`
- scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc=$?
- if [ $rc -ne 0 ]; then
+ # Check the connections according to the PID of the child process since
+ # the parent is not the one with the AMQP connection
+ ppid=`cat $OCF_RESKEY_pid`
+ pid=`pgrep -P $ppid`
+ scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc=$?
+ if [ $rc -ne 0 ]; then
ocf_log err "Collector is not connected to the AMQP server : $rc"
return $OCF_NOT_RUNNING
- fi
+ fi
ocf_log debug "OpenStack Ceilometer Collector (ceilometer-collector) monitor succeeded"
return $OCF_SUCCESS
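The fix above works because the collector's pidfile records the parent while the AMQP connection belongs to a forked child. A small sketch of the parent-to-child lookup (the netstat part is omitted; the demo process tree is invented):

```shell
#!/bin/sh
# Resolve a daemon's child from the parent PID recorded in a pidfile,
# as the collector monitor fix does; `pgrep -P` lists children of the
# given parent PID.
child_pid() {
    pgrep -P "$1" | head -1
}

# Demo: a shell that keeps `sleep` as a child process.
sh -c 'sleep 3 & wait' &
ppid=$!
sleep 1
cpid=$(child_pid "$ppid")
echo "parent=$ppid child=$cpid"
kill "$ppid" 2>/dev/null || true
```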


@ -1,22 +0,0 @@
Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -183,7 +183,7 @@ ceilometer_api_validate() {
local rc
check_binary $OCF_RESKEY_binary
- check_binary netstat
+ check_binary lsof
ceilometer_api_check_port $OCF_RESKEY_api_listen_port
# A config file on shared storage that is not available
@@ -244,7 +244,7 @@ ceilometer_api_monitor() {
# Check the connections according to the PID.
# We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
pid=`cat $OCF_RESKEY_pid`
- scheduler_amqp_check=`netstat -apunt | grep -s "$OCF_RESKEY_api_listen_port" | grep -s "$pid" | grep -qs "LISTEN"`
+ scheduler_amqp_check=`lsof -nPp ${pid} | grep -s ":${OCF_RESKEY_api_listen_port}\s\+(LISTEN)"`
rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "API is not listening for connections: $rc"
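The switch from netstat to lsof keeps the same idea: find a LISTEN socket on the API port owned by the monitored PID. A sketch of the new grep pattern applied to a hypothetical lsof output line (GNU grep assumed for the `\s` and `\+` escapes):

```shell
#!/bin/sh
# Match an lsof line showing a LISTEN socket on the API port, using the
# same pattern as the patched ceilometer_api_monitor; the sample line is
# invented for illustration.
port=8777
sample='python   1234 root    5u  IPv4  12345  0t0  TCP *:8777 (LISTEN)'
echo "$sample" | grep -s ":${port}\s\+(LISTEN)"
```

The trailing `\s\+(LISTEN)` is what keeps port 8777 from also matching 87771, and ESTABLISHED connections from counting as listeners.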


@ -1,63 +0,0 @@
Index: git/ocf/ceilometer-agent-central
===================================================================
--- git.orig/ocf/ceilometer-agent-central
+++ git/ocf/ceilometer-agent-central
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-agent-central"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
Index: git/ocf/ceilometer-agent-notification
===================================================================
--- git.orig/ocf/ceilometer-agent-notification
+++ git/ocf/ceilometer-agent-notification
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-agent-notification"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -34,6 +34,7 @@
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+. /usr/bin/tsconfig
#######################################################################
@@ -41,7 +42,7 @@
OCF_RESKEY_binary_default="ceilometer-api"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/${SW_VERSION}/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_api_listen_port_default="8777"

File diff suppressed because it is too large


@ -1,150 +0,0 @@
Index: git/ocf/ceilometer-agent-central
===================================================================
--- git.orig/ocf/ceilometer-agent-central
+++ git/ocf/ceilometer-agent-central
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-agent-central"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer Cen
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Central Agent Service (ceilometer-agent-central) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Central Agent (ceilometer-agent-central registry) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer Central Agent Service (ceilometer-agent-central)
@@ -247,6 +258,7 @@ ceilometer_agent_central_start() {
# run the actual ceilometer-agent-central daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
Index: git/ocf/ceilometer-agent-notification
===================================================================
--- git.orig/ocf/ceilometer-agent-notification
+++ git/ocf/ceilometer-agent-notification
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-agent-notification"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_amqp_server_port_default="5672"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer Cen
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer Central Agent Service (ceilometer-agent-notification) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer Central Agent (ceilometer-agent-notification registry) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer Central Agent Service (ceilometer-agent-notification)
@@ -247,6 +258,7 @@ ceilometer_agent_notification_start() {
# run the actual ceilometer-agent-notification daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
Index: git/ocf/ceilometer-api
===================================================================
--- git.orig/ocf/ceilometer-api
+++ git/ocf/ceilometer-api
@@ -23,6 +23,7 @@
# OCF instance parameters:
# OCF_RESKEY_binary
# OCF_RESKEY_config
+# OCF_RESKEY_pipeline
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_monitor_binary
@@ -40,12 +41,14 @@
OCF_RESKEY_binary_default="ceilometer-api"
OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
+OCF_RESKEY_pipeline_default="/opt/cgcs/ceilometer/pipeline.yaml"
OCF_RESKEY_user_default="root"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_api_listen_port_default="8777"
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_pipeline=${OCF_RESKEY_pipeline_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_api_listen_port=${OCF_RESKEY_api_listen_port_default}}
@@ -99,6 +102,14 @@ Location of the OpenStack Ceilometer API
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
+<parameter name="pipeline" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Ceilometer API Service (ceilometer-api) pipeline file
+</longdesc>
+<shortdesc lang="en">OpenStack Ceilometer API (ceilometer-api registry) pipeline file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pipeline_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Ceilometer API Service (ceilometer-api)
@@ -257,6 +268,7 @@ ceilometer_api_start() {
# run the actual ceilometer-api daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ --pipeline_cfg_file=$OCF_RESKEY_pipeline \
$OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
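Each of these start functions launches the daemon with the same idiom: background the process inside `sh -c` and capture its PID into the pidfile via `echo $!`. A su-less sketch of that idiom (the ceilometer flags are dropped and `start_daemon` is a name invented here):

```shell
#!/bin/sh
# Background a command and record its PID in a pidfile via `echo $!`,
# the same idiom the RA start functions use (su omitted for brevity).
start_daemon() {
    cmd="$1"; pidfile="$2"
    sh -c "$cmd"' >> /dev/null 2>&1 & echo $!' > "$pidfile"
}

pidfile=$(mktemp)
start_daemon "sleep 5" "$pidfile"
cat "$pidfile"            # the backgrounded process's PID
kill "$(cat "$pidfile")" 2>/dev/null || true
rm -f "$pidfile"
```

The quoting matters: the redirection and `echo $!` are single-quoted so they run inside the inner shell, where `$!` is the daemon's PID rather than the caller's last background job.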


@ -1,141 +0,0 @@
--- a/ocf/cinder-volume
+++ b/ocf/cinder-volume
@@ -221,10 +221,73 @@ cinder_volume_status() {
fi
}
+cinder_volume_get_service_status() {
+ source /etc/nova/openrc
+ python - <<'EOF'
+from __future__ import print_function
+
+from cinderclient import client as cinder_client
+import keyring
+from keystoneclient import session as keystone_session
+from keystoneclient.auth.identity import v3
+import os
+import sys
+
+DEFAULT_OS_VOLUME_API_VERSION = 2
+CINDER_CLIENT_TIMEOUT_SEC = 3
+
+def create_cinder_client():
+ password = keyring.get_password('CGCS', os.environ['OS_USERNAME'])
+ auth = v3.Password(
+ user_domain_name=os.environ['OS_USER_DOMAIN_NAME'],
+ username = os.environ['OS_USERNAME'],
+ password = password,
+ project_domain_name = os.environ['OS_PROJECT_DOMAIN_NAME'],
+ project_name = os.environ['OS_PROJECT_NAME'],
+ auth_url = os.environ['OS_AUTH_URL'])
+ session = keystone_session.Session(auth=auth)
+ return cinder_client.Client(
+ DEFAULT_OS_VOLUME_API_VERSION,
+ username = os.environ['OS_USERNAME'],
+ auth_url = os.environ['OS_AUTH_URL'],
+ region_name=os.environ['OS_REGION_NAME'],
+ session = session, timeout = CINDER_CLIENT_TIMEOUT_SEC)
+
+def service_is_up(s):
+ return s.state == 'up'
+
+def cinder_volume_service_status(cc):
+ services = cc.services.list(
+ host='controller',
+ binary='cinder-volume')
+ if not len(services):
+ return (False, False)
+ exists, is_up = (True, service_is_up(services[0]))
+ for s in services[1:]:
+ # attempt to merge statuses
+ if is_up != service_is_up(s):
+ raise Exception(('Found multiple cinder-volume '
+ 'services with different '
+ 'statuses: {}').format(
+ [s.to_dict() for s in services]))
+ return (exists, is_up)
+
+try:
+ status = cinder_volume_service_status(
+ create_cinder_client())
+ print(('exists={0[0]}\n'
+ 'is_up={0[1]}').format(status))
+except Exception as e:
+ print(str(e), file=sys.stderr)
+ sys.exit(1)
+EOF
+}
+
cinder_volume_monitor() {
local rc
local pid
local volume_amqp_check
+ local check_service_status=$1; shift
cinder_volume_status
rc=$?
@@ -279,6 +342,46 @@ cinder_volume_monitor() {
touch $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ if [ $check_service_status == "check-service-status" ]; then
+ local retries_left
+ local retry_interval
+
+ retries_left=3
+ retry_interval=3
+ while [ $retries_left -gt 0 ]; do
+ retries_left=`expr $retries_left - 1`
+ status=$(cinder_volume_get_service_status)
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "Unable to get Cinder Volume status"
+ if [ $retries_left -gt 0 ]; then
+ sleep $retry_interval
+ continue
+ else
+ return $OCF_ERR_GENERIC
+ fi
+ fi
+
+ local exists
+ local is_up
+ eval $status
+
+ if [ "$exists" == "True" ] && [ "$is_up" == "False" ]; then
+ ocf_log err "Cinder Volume service status is down"
+ if [ $retries_left -gt 0 ]; then
+ sleep $retry_interval
+ continue
+ else
+ ocf_log info "Trigger Cinder Volume guru meditation report"
+ ocf_run kill -s USR2 $pid
+ return $OCF_ERR_GENERIC
+ fi
+ fi
+
+ break
+ done
+ fi
+
ocf_log debug "OpenStack Cinder Volume (cinder-volume) monitor succeeded"
return $OCF_SUCCESS
}
@@ -386,7 +489,7 @@ cinder_volume_stop() {
# SIGTERM didn't help either, try SIGKILL
ocf_log info "OpenStack Cinder Volume (cinder-volume) failed to stop after ${shutdown_timeout}s \
using SIGTERM. Trying SIGKILL ..."
- ocf_run kill -s KILL $pid
+ ocf_run kill -s KILL -$pid
fi
cinder_volume_confirm_stop
@@ -414,7 +517,7 @@ case "$1" in
start) cinder_volume_start;;
stop) cinder_volume_stop;;
status) cinder_volume_status;;
- monitor) cinder_volume_monitor;;
+ monitor) cinder_volume_monitor "check-service-status";;
validate-all) ;;
*) usage
exit $OCF_ERR_UNIMPLEMENTED;;
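The monitor path above shells out for an `exists=<bool>`/`is_up=<bool>` pair, `eval`s it, and retries a few times before declaring the service down. A minimal standalone sketch of that retry-and-eval pattern (the function names and the hard-coded "down" status are illustrative, not taken from the agent):

```shell
#!/bin/sh
# Stand-in for cinder_volume_get_service_status: prints shell-evaluable
# "exists=<bool>" / "is_up=<bool>" lines, like the Python helper above.
get_service_status() {
    echo "exists=True"
    echo "is_up=False"
}

# Retry a few times; "exists but down" only becomes an error once the
# retries are exhausted (the agent would then trigger a guru report).
monitor_with_retries() {
    retries_left=3
    while [ "$retries_left" -gt 0 ]; do
        retries_left=$((retries_left - 1))
        status=$(get_service_status) || return 1
        eval "$status"              # defines $exists and $is_up
        if [ "$exists" = "True" ] && [ "$is_up" = "False" ]; then
            if [ "$retries_left" -gt 0 ]; then
                continue            # the agent sleeps 3s between attempts
            fi
            echo "service is down"
            return 1
        fi
        break
    done
    echo "service is up"
    return 0
}

monitor_with_retries
echo "rc=$?"
```

With a healthy status string (`is_up=True`) the same loop breaks on the first pass and reports success.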
@@ -1,18 +0,0 @@
Index: git/ocf/cinder-volume
===================================================================
--- git.orig/ocf/cinder-volume
+++ git/ocf/cinder-volume
@@ -224,6 +224,13 @@ cinder_volume_monitor() {
pid=`cat $OCF_RESKEY_pid`
if ocf_is_true "$OCF_RESKEY_multibackend"; then
+ pids=`ps -o pid --no-headers --ppid $pid`
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "No child processes from Cinder Volume (yet...): $rc"
+ return $OCF_NOT_RUNNING
+ fi
+
# Grab the child's PIDs
for i in `ps -o pid --no-headers --ppid $pid`
do
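The hunk above guards the multibackend path against a parent that has not forked its workers yet. A self-contained sketch of the same child-PID probe, using a throwaway `sleep` as the child (assumes a procps-style `ps`; PIDs here are not from cinder-volume):

```shell
#!/bin/sh
# Report whether a parent PID has any children, the way the patch does
# with `ps -o pid --no-headers --ppid`; an empty result maps to
# "not running (yet)" instead of failing the per-child AMQP checks.
has_children() {
    parent=$1
    pids=$(ps -o pid --no-headers --ppid "$parent")
    if [ -z "$pids" ]; then
        echo "no child processes (yet...)"
        return 1
    fi
    echo "found child processes"
    return 0
}

sleep 5 &        # give this shell one child to discover
child=$!
has_children $$
echo "rc=$?"
kill "$child" 2>/dev/null
```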
@@ -1,93 +0,0 @@
Index: git/ocf/cinder-volume
===================================================================
--- git.orig/ocf/cinder-volume
+++ git/ocf/cinder-volume
@@ -55,6 +55,20 @@ OCF_RESKEY_multibackend_default="false"
#######################################################################
+#######################################################################
+
+#
+# The following file is used to determine if Cinder-Volume should be
+# failed if the AMQP check does not pass. Cinder-Volume initializes
+# its backend before connecting to Rabbit. In Ceph configurations,
+# Cinder-Volume will not connect to Rabbit until the storage blades
+# are provisioned (this can take a long time, no need to restart the
+# process over and over again).
+VOLUME_FAIL_ON_AMQP_CHECK_FILE="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.fail_on_amqp_check"
+
+#######################################################################
+
+
usage() {
cat <<UEND
usage: $0 (start|stop|validate-all|meta-data|status|monitor)
@@ -237,8 +251,13 @@ cinder_volume_monitor() {
volume_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$i" | grep -qs "ESTABLISHED"`
rc=$?
if [ $rc -ne 0 ]; then
- ocf_log err "This child process from Cinder Volume is not connected to the AMQP server: $rc"
- return $OCF_NOT_RUNNING
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ]; then
+ ocf_log err "This child process from Cinder Volume is not connected to the AMQP server: $rc"
+ return $OCF_NOT_RUNNING
+ else
+ ocf_log info "Cinder Volume initializing, child process is not connected to the AMQP server: $rc"
+ return $OCF_SUCCESS
+ fi
fi
done
else
@@ -248,11 +267,18 @@ cinder_volume_monitor() {
volume_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
rc=$?
if [ $rc -ne 0 ]; then
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ]; then
ocf_log err "Cinder Volume is not connected to the AMQP server: $rc"
return $OCF_NOT_RUNNING
+ else
+ ocf_log info "Cinder Volume initializing, not connected to the AMQP server: $rc"
+ return $OCF_SUCCESS
+ fi
fi
fi
+ touch $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+
ocf_log debug "OpenStack Cinder Volume (cinder-volume) monitor succeeded"
return $OCF_SUCCESS
}
@@ -260,6 +286,10 @@ cinder_volume_monitor() {
cinder_volume_start() {
local rc
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
cinder_volume_status
rc=$?
if [ $rc -eq $OCF_SUCCESS ]; then
@@ -293,6 +323,10 @@ cinder_volume_confirm_stop() {
local my_bin
local my_processes
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
my_binary=`which ${OCF_RESKEY_binary}`
my_processes=`pgrep -l -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"`
@@ -307,6 +341,10 @@ cinder_volume_stop() {
local rc
local pid
+ if [ -e "$VOLUME_FAIL_ON_AMQP_CHECK_FILE" ] ; then
+ rm $VOLUME_FAIL_ON_AMQP_CHECK_FILE >> /dev/null 2>&1
+ fi
+
cinder_volume_status
rc=$?
if [ $rc -eq $OCF_NOT_RUNNING ]; then
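The sentinel file threaded through the hunks above arms the AMQP check only after it has passed once: while the backend is still initializing (e.g. waiting for storage to be provisioned) a failed probe is tolerated, afterwards it is fatal, and start/stop remove the file again. A condensed sketch of that gating; the always-failing `amqp_check` stub and the temp path are stand-ins for the real probe and `$HA_RSCTMP` file:

```shell
#!/bin/sh
# Sentinel gating: a failing AMQP probe is tolerated until the sentinel
# exists; the first successful probe creates it ("initialization
# finished"), making later failures fatal.
SENTINEL=$(mktemp -u)    # stands in for $HA_RSCTMP/<instance>.fail_on_amqp_check

amqp_check() {
    return 1             # stub: pretend the connection test failed
}

monitor() {
    if ! amqp_check; then
        if [ -e "$SENTINEL" ]; then
            echo "fatal: not connected to the AMQP server"
            return 1
        fi
        echo "initializing: AMQP not connected yet"
        return 0
    fi
    touch "$SENTINEL"    # first success arms the check
    return 0
}

monitor; echo "first rc=$?"      # tolerated while the sentinel is absent
touch "$SENTINEL"                # simulate an earlier successful check
monitor; echo "second rc=$?"     # now treated as a real failure
rm -f "$SENTINEL"
```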
@@ -1,95 +0,0 @@
From 3ba260dbc2d69a797c8deb55ff0871e752dddebd Mon Sep 17 00:00:00 2001
From: Chris Friesen <chris.friesen@windriver.com>
Date: Tue, 11 Aug 2015 18:48:45 -0400
Subject: [PATCH] CGTS-1851: enable multiple nova-conductor workers
Enable multiple nova-conductor workers by properly handling
the fact that when there are multiple workers the first one just
coordinates the others and doesn't itself connect to AMQP or the DB.
This also fixes up a bunch of whitespace issues, replacing a number
of hard tabs with spaces to make it easier to follow the code.
---
ocf/nova-conductor | 58 ++++++++++++++++++++++++++++++++++++++----------------
1 file changed, 41 insertions(+), 17 deletions(-)
diff --git a/ocf/nova-conductor b/ocf/nova-conductor
index aa1ee2a..25e5f8f 100644
--- a/ocf/nova-conductor
+++ b/ocf/nova-conductor
@@ -239,6 +239,18 @@ nova_conductor_status() {
fi
}
+check_port() {
+ local port=$1
+ local pid=$2
+ netstat -punt | grep -s "$port" | grep -s "$pid" | grep -qs "ESTABLISHED"
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return 0
+ else
+ return 1
+ fi
+}
+
nova_conductor_monitor() {
local rc
local pid
@@ -258,24 +270,36 @@ nova_conductor_monitor() {
# Check the connections according to the PID.
# We are sure to hit the conductor process and not other nova process with the same connection behavior (for example nova-cert)
if ocf_is_true "$OCF_RESKEY_zeromq"; then
- pid=`cat $OCF_RESKEY_pid`
- conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_db=$?
- if [ $rc_db -ne 0 ]; then
- ocf_log err "Nova Conductor is not connected to the database server: $rc_db"
- return $OCF_NOT_RUNNING
- fi
- else
pid=`cat $OCF_RESKEY_pid`
- conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_db=$?
- conductor_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc_amqp=$?
- if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
- ocf_log err "Nova Conductor is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
- return $OCF_NOT_RUNNING
- fi
- fi
+ check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor is not connected to the database server: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+ check_port $OCF_RESKEY_amqp_server_port $pid; rc_amqp=$?
+ if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
+ # may have multiple workers, in which case $pid is the parent and we want to check the children
+ # If there are no children or at least one child is not connected to both DB and AMQP then we fail.
+ KIDPIDS=`pgrep -P $pid -f nova-conductor`
+ if [ ! -z "$KIDPIDS" ]; then
+ for pid in $KIDPIDS
+ do
+ check_port $OCF_RESKEY_database_server_port $pid; rc_db=$?
+ check_port $OCF_RESKEY_amqp_server_port $pid; rc_amqp=$?
+ if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor pid $pid is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ done
+ else
+ ocf_log err "Nova Conductor pid $pid is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ fi
+ fi
ocf_log debug "OpenStack Nova Conductor (nova-conductor) monitor succeeded"
return $OCF_SUCCESS
--
1.9.1
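The conductor change above falls back to the worker children when the parent PID holds no connections, and fails only if there are no children or some child is missing a connection. A control-flow sketch of that logic with `check_port` and the PID values stubbed out (the real agent probes `netstat` and enumerates children with `pgrep -P`); note the helper reports through its exit status, so results are read back with `$?` or `if !` rather than command substitution:

```shell
#!/bin/sh
# Stub: pretend only worker PIDs 201 and 202 hold the connection.
check_port() {
    port=$1; pid=$2
    [ "$pid" = "201" ] || [ "$pid" = "202" ]
}

# Healthy if the parent is connected, or if every worker child is.
conductor_monitor() {
    parent=$1; shift
    kids="$*"
    if ! check_port 5672 "$parent"; then
        if [ -z "$kids" ]; then
            echo "no workers and parent not connected"
            return 1
        fi
        for pid in $kids; do
            if ! check_port 5672 "$pid"; then
                echo "worker $pid not connected"
                return 1
            fi
        done
    fi
    echo "conductor healthy"
    return 0
}

conductor_monitor 100 201 202; echo "rc=$?"   # parent 100 delegates to workers
conductor_monitor 100; echo "rc=$?"           # no workers at all -> failure
```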
@@ -1,16 +0,0 @@
---
ocf/glance-api | 3 +++
1 file changed, 3 insertions(+)
--- a/ocf/glance-api
+++ b/ocf/glance-api
@@ -243,6 +243,9 @@ glance_api_monitor() {
return $rc
fi
+ ### DPENNEY: Bypass monitor until keyring functionality is ported
+ return $OCF_SUCCESS
+
# Monitor the RA by retrieving the image list
if [ -n "$OCF_RESKEY_os_username" ] && [ -n "$OCF_RESKEY_os_tenant_name" ] && [ -n "$OCF_RESKEY_os_auth_url" ]; then
ocf_run -q $OCF_RESKEY_client_binary \
@@ -1,13 +0,0 @@
Index: git/ocf/glance-api
===================================================================
--- git.orig/ocf/glance-api
+++ git/ocf/glance-api
@@ -249,7 +249,7 @@ glance_api_monitor() {
--os_username "$OCF_RESKEY_os_username" \
--os_tenant_name "$OCF_RESKEY_os_tenant_name" \
--os_auth_url "$OCF_RESKEY_os_auth_url" \
- index > /dev/null 2>&1
+ image-list > /dev/null 2>&1
rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "Failed to connect to the OpenStack ImageService (glance-api): $rc"
@@ -1,349 +0,0 @@
Index: git/ocf/heat-api-cloudwatch
===================================================================
--- /dev/null
+++ git/ocf/heat-api-cloudwatch
@@ -0,0 +1,344 @@
+#!/bin/sh
+#
+#
+# OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+#
+# Description: Manages an OpenStack Orchestration Engine Service (heat-api-cloudwatch) process as an HA resource
+#
+# Authors: Emilien Macchi
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="heat-api-cloudwatch"
+OCF_RESKEY_config_default="/etc/heat/heat.conf"
+OCF_RESKEY_user_default="heat"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_server_port_default="8000"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_server_port=${OCF_RESKEY_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Orchestration Engine Service (heat-api-cloudwatch) process as an HA resource
+
+ The 'start' operation starts the heat-api-cloudwatch service.
+ The 'stop' operation stops the heat-api-cloudwatch service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the heat-api-cloudwatch service is running
+ The 'monitor' operation reports whether the heat-api-cloudwatch service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="heat-api-cloudwatch">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+May manage a heat-api-cloudwatch instance or a clone set that
+creates a distributed heat-api-cloudwatch cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Orchestration Engine Service (heat-api-cloudwatch)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine server binary (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine server binary (heat-api-cloudwatch)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine Service (heat-api-cloudwatch) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine (heat-api-cloudwatch) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cloudwatch) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Orchestration Engine Service (heat-api-cloudwatch) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cloudwatch) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the heat-api-cloudwatch server.
+
+</longdesc>
+<shortdesc lang="en">heat-api-cloudwatch listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_server_port_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Orchestration Engine Service (heat-api-cloudwatch)
+</longdesc>
+<shortdesc lang="en">Additional parameters for heat-api-cloudwatch</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+heat_api_cloudwatch_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+heat_api_cloudwatch_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ heat_api_cloudwatch_check_port $OCF_RESKEY_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+heat_api_cloudwatch_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Orchestration Engine (heat-api-cloudwatch) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+heat_api_cloudwatch_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local engine_db_check
+
+ heat_api_cloudwatch_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the server is listening on the server port
+ engine_db_check=`netstat -an | grep -s "$OCF_RESKEY_server_port" | grep -qs "LISTEN"`
+ rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "heat-api-cloudwatch is not listening on $OCF_RESKEY_server_port: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cloudwatch) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+heat_api_cloudwatch_start() {
+ local rc
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual heat-api-cloudwatch daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ heat_api_cloudwatch_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cloudwatch) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) started"
+ return $OCF_SUCCESS
+}
+
+heat_api_cloudwatch_stop() {
+ local rc
+ local pid
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cloudwatch) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cloudwatch) still hasn't stopped yet. Waiting ..."
+ done
+
+ heat_api_cloudwatch_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cloudwatch) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+heat_api_cloudwatch_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) heat_api_cloudwatch_start;;
+ stop) heat_api_cloudwatch_stop;;
+ status) heat_api_cloudwatch_status;;
+ monitor) heat_api_cloudwatch_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+
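The `heat_api_cloudwatch_check_port` helper above (borrowed from the squid RA) combines an `egrep` pattern with a length check. A standalone version of that validation, using `grep -E` in place of the deprecated `egrep`:

```shell
#!/bin/sh
# Accept exactly-4-character numeric port specs (single port, a:b range,
# or comma-separated list), as the RA's check does; reject anything else.
check_port_spec() {
    int="$1"
    cnt=${#int}
    if echo "$int" | grep -Eqx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*' \
            && [ "$cnt" -eq 4 ]; then
        echo "valid: $int"
        return 0
    fi
    echo "invalid: $int"
    return 1
}

check_port_spec "1080bad"   # fails the numeric pattern
check_port_spec ""          # fails both checks
check_port_spec "0080"      # leading zeros are accepted
check_port_spec "8000"
```

Because the length must be exactly 4, plain ports like `80` or range syntax like `8000:8010` are rejected even though the regex itself allows them; the RA then exits with `$OCF_ERR_CONFIGURED`.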
@@ -1,52 +0,0 @@
---
ocf/heat-engine | 24 +++++++++++++++++++++---
1 file changed, 21 insertions(+), 3 deletions(-)
--- a/ocf/heat-engine
+++ b/ocf/heat-engine
@@ -238,6 +238,24 @@ heat_engine_status() {
fi
}
+# Function to check a process for port usage, as well as children
+check_port() {
+ local port=$1
+ local pid=$2
+
+ local children=`ps -ef | awk -v ppid=$pid '$3 == ppid { print $2}'`
+
+ for p in $pid $children
+ do
+ netstat -punt | grep -s "$port" | grep -s "$p" | grep -qs "ESTABLISHED"
+ if [ $? -eq 0 ]
+ then
+ return 0
+ fi
+ done
+ return 1
+}
+
heat_engine_monitor() {
local rc
local pid
@@ -258,7 +276,7 @@ heat_engine_monitor() {
# We are sure to hit the heat-engine process and not other heat process with the same connection behavior (for example heat-api)
if ocf_is_true "$OCF_RESKEY_zeromq"; then
pid=`cat $OCF_RESKEY_pid`
- engine_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ engine_db_check=`check_port "$OCF_RESKEY_database_server_port" "$pid"`
rc_db=$?
if [ $rc_db -ne 0 ]; then
ocf_log err "heat-engine is not connected to the database server: $rc_db"
@@ -266,9 +284,9 @@ heat_engine_monitor() {
fi
else
pid=`cat $OCF_RESKEY_pid`
- engine_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ engine_db_check=`check_port "$OCF_RESKEY_database_server_port" "$pid"`
rc_db=$?
- engine_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ engine_amqp_check=`check_port "$OCF_RESKEY_amqp_server_port" "$pid"`
rc_amqp=$?
if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
ocf_log err "Heat Engine is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
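The heat-engine hunks above widen the connection probe so that a port held by the parent or any child worker counts as connected. A control-flow sketch with the `netstat` probe stubbed out (`connected` and the PID values are illustrative, not from the agent):

```shell
#!/bin/sh
# Stub for: netstat -punt | grep -s "$port" | grep -s "$pid" | grep -qs ESTABLISHED
connected() {
    [ "$1" = "302" ]     # pretend only PID 302 holds the connection
}

# Succeed if the parent PID or any of its children holds the port.
check_port() {
    port=$1; pid=$2; children=$3
    for p in $pid $children; do
        if connected "$p"; then
            return 0
        fi
    done
    return 1
}

check_port 5672 300 "301 302" && echo "connected via a child"
check_port 5672 300 "301"     || echo "no PID holds the port"
```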
@@ -1,698 +0,0 @@
Index: git/ocf/heat-api
===================================================================
--- /dev/null
+++ git/ocf/heat-api
@@ -0,0 +1,344 @@
+#!/bin/sh
+#
+#
+# OpenStack Orchestration Engine Service (heat-api)
+#
+# Description: Manages an OpenStack Orchestration Engine Service (heat-api) process as an HA resource
+#
+# Authors: Emilien Macchi
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="heat-api"
+OCF_RESKEY_config_default="/etc/heat/heat.conf"
+OCF_RESKEY_user_default="heat"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_server_port_default="8004"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_server_port=${OCF_RESKEY_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Orchestration Engine Service (heat-api) process as an HA resource
+
+ The 'start' operation starts the heat-api service.
+ The 'stop' operation stops the heat-api service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the heat-api service is running
+ The 'monitor' operation reports whether the heat-api service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="heat-api">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Orchestration Engine Service (heat-api)
+May manage a heat-api instance or a clone set that
+creates a distributed heat-api cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Orchestration Engine Service (heat-api)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine server binary (heat-api)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine server binary (heat-api)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine Service (heat-api) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine (heat-api) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Orchestration Engine Service (heat-api)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Orchestration Engine Service (heat-api) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the heat-api server.
+
+</longdesc>
+<shortdesc lang="en">heat-api listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_server_port_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Orchestration Engine Service (heat-api)
+</longdesc>
+<shortdesc lang="en">Additional parameters for heat-api</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+heat_api_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+heat_api_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ heat_api_check_port $OCF_RESKEY_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+heat_api_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Orchestration Engine (heat-api) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+heat_api_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local engine_db_check
+
+ heat_api_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the server is listening on the server port
+ engine_db_check=`netstat -an | grep -s "$OCF_RESKEY_server_port" | grep -qs "LISTEN"`
+ rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "heat-api is not listening on $OCF_RESKEY_server_port: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Orchestration Engine (heat-api) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+heat_api_start() {
+ local rc
+
+ heat_api_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual heat-api daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ heat_api_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api) started"
+ return $OCF_SUCCESS
+}
+
+heat_api_stop() {
+ local rc
+ local pid
+
+ heat_api_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ heat_api_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Orchestration Engine (heat-api) still hasn't stopped yet. Waiting ..."
+ done
+
+ heat_api_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Orchestration Engine (heat-api) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+heat_api_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) heat_api_start;;
+ stop) heat_api_stop;;
+ status) heat_api_status;;
+ monitor) heat_api_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+
Index: git/ocf/heat-api-cfn
===================================================================
--- /dev/null
+++ git/ocf/heat-api-cfn
@@ -0,0 +1,344 @@
+#!/bin/sh
+#
+#
+# OpenStack Orchestration Engine Service (heat-api-cfn)
+#
+# Description: Manages an OpenStack Orchestration Engine Service (heat-api-cfn) process as an HA resource
+#
+# Authors: Emilien Macchi
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_server_port
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="heat-api-cfn"
+OCF_RESKEY_config_default="/etc/heat/heat.conf"
+OCF_RESKEY_user_default="heat"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_server_port_default="8000"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_server_port=${OCF_RESKEY_server_port_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack Orchestration Engine Service (heat-api-cfn) process as an HA resource
+
+ The 'start' operation starts the heat-api-cfn service.
+ The 'stop' operation stops the heat-api-cfn service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the heat-api-cfn service is running
+ The 'monitor' operation reports whether the heat-api-cfn service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="heat-api-cfn">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Orchestration Engine Service (heat-api-cfn)
+May manage a heat-api-cfn instance or a clone set that
+creates a distributed heat-api-cfn cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Orchestration Engine Service (heat-api-cfn)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine server binary (heat-api-cfn)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine server binary (heat-api-cfn)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Orchestration Engine Service (heat-api-cfn) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine (heat-api-cfn) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Orchestration Engine Service (heat-api-cfn)
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cfn) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Orchestration Engine Service (heat-api-cfn) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Orchestration Engine Service (heat-api-cfn) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the heat-api-cfn server.
+
+</longdesc>
+<shortdesc lang="en">heat-api-cfn listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_server_port_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Orchestration Engine Service (heat-api-cfn)
+</longdesc>
+<shortdesc lang="en">Additional parameters for heat-api-cfn</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+heat_api_cfn_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
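The check above accepts only values that match the port/range regex and are exactly four characters long. A standalone sketch of the same rule (the helper name `check_port` is illustrative, not part of the resource agent):

```shell
# check_port mirrors heat_api_cfn_check_port: the value must match the
# port/range regex AND be exactly 4 characters ("0080" passes, "80" and
# "1080bad" do not).
check_port() {
    int="$1"
    cnt=${#int}
    if echo "$int" | grep -Eqx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*' \
        && [ "$cnt" -eq 4 ]; then
        echo "valid"
        return 0
    fi
    echo "invalid"
    return 1
}

check_port 8000        # valid
check_port 80 || true  # invalid: right characters, wrong length
```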
+
+heat_api_cfn_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ heat_api_cfn_check_port $OCF_RESKEY_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
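Besides the config file, the validate action checks that the service account exists via `getent`, which consults every configured NSS passwd source. A standalone sketch (`user_exists` is an illustrative name; the missing user name is arbitrary):

```shell
# getent passwd returns 0 when the account exists in any NSS source
# (/etc/passwd, LDAP, ...), non-zero otherwise.
user_exists() {
    getent passwd "$1" >/dev/null 2>&1
}

user_exists root && echo "root exists"
user_exists no-such-user-xyz || echo "no-such-user-xyz is absent"
```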
+
+heat_api_cfn_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Orchestration Engine (heat-api-cfn) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
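The status check above reduces to "the pid file exists and `kill -s 0` can signal that pid". A self-contained sketch, with a temporary pid file and a detached `sleep` standing in for the daemon:

```shell
PIDFILE=$(mktemp)

# Stand-in daemon: launch inside a subshell so it is reparented to init
# and our shell never holds a zombie for it.
pid=$( (sleep 60 >/dev/null 2>&1 & echo $!) )
echo "$pid" > "$PIDFILE"

status() {
    if [ ! -f "$PIDFILE" ]; then
        echo "not running"
        return 1
    fi
    p=$(cat "$PIDFILE")
    if kill -s 0 "$p" 2>/dev/null; then    # signal 0: existence check only
        echo "running"
        return 0
    fi
    echo "stale pid file"
    return 1
}

status   # running
```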
+
+heat_api_cfn_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local engine_db_check
+
+ heat_api_cfn_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the server is listening on the server port
+    engine_db_check=`netstat -an | grep -s "$OCF_RESKEY_server_port" | grep -qs "LISTEN"`
+    rc_db=$?
+    if [ $rc_db -ne 0 ]; then
+        ocf_log err "heat-api-cfn is not listening on $OCF_RESKEY_server_port: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cfn) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+heat_api_cfn_start() {
+ local rc
+
+ heat_api_cfn_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual heat-api-cfn daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ heat_api_cfn_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cfn) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) started"
+ return $OCF_SUCCESS
+}
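The quoting trick in the start action backgrounds the daemon inside the quoted command and redirects the `echo $!` into the pid file in one shot. A minimal sketch of the same shape without `su` (temporary paths, `sleep` standing in for the daemon):

```shell
PIDFILE=$(mktemp)

# Same shape as the RA's start: the single-quoted tail runs in the child
# shell, backgrounds the process, and echoes its pid into the pid file.
sh -c 'sleep 60 >/dev/null 2>&1 & echo $!' > "$PIDFILE"

pid=$(cat "$PIDFILE")
kill -s 0 "$pid" && echo "started (pid $pid)"
```

In the real agent the command additionally runs under `su` as `$OCF_RESKEY_user`; the pid-capture mechanism is identical.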
+
+heat_api_cfn_stop() {
+ local rc
+ local pid
+
+ heat_api_cfn_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Orchestration Engine (heat-api-cfn) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ heat_api_cfn_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Orchestration Engine (heat-api-cfn) still hasn't stopped yet. Waiting ..."
+ done
+
+ heat_api_cfn_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Orchestration Engine (heat-api-cfn) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
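The stop path is: send SIGTERM, poll up to a budget derived from the CRM meta timeout (which Pacemaker supplies in milliseconds), then escalate to SIGKILL. A standalone sketch with a stand-in process and a hypothetical 20 s meta timeout:

```shell
# Stand-in daemon, detached via a subshell so our shell holds no zombie.
pid=$( (sleep 300 >/dev/null 2>&1 & echo $!) )

CRM_META_TIMEOUT_MS=20000                              # stand-in for OCF_RESKEY_CRM_meta_timeout
shutdown_timeout=$(((CRM_META_TIMEOUT_MS / 1000) - 5)) # 15s budget, 5s spare for KILL

kill -s TERM "$pid"
count=0
while [ "$count" -lt "$shutdown_timeout" ]; do
    kill -s 0 "$pid" 2>/dev/null || break   # gone: stop waiting
    count=$((count + 1))
    sleep 1
done

# SIGTERM didn't do it within the budget: escalate.
if kill -s 0 "$pid" 2>/dev/null; then
    kill -s KILL "$pid"
fi
```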
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+heat_api_cfn_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) heat_api_cfn_start;;
+ stop) heat_api_cfn_stop;;
+ status) heat_api_cfn_status;;
+ monitor) heat_api_cfn_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+

@@ -1,15 +0,0 @@
---
ocf/neutron-server | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/ocf/neutron-server
+++ b/ocf/neutron-server
@@ -288,7 +288,7 @@ neutron_server_start() {
# Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
- --config-file=$OCF_RESKEY_plugin_config --log-file=/var/log/neutron/server.log $OCF_RESKEY_additional_parameters"' >> \
+ --config-file=$OCF_RESKEY_plugin_config $OCF_RESKEY_additional_parameters"' >> \
/dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.

@@ -1,52 +0,0 @@
Index: openstack-resource-agents-git-64e633d/ocf/neutron-server
===================================================================
--- openstack-resource-agents-git-64e633d.orig/ocf/neutron-server 2016-08-09 19:09:49.981633000 -0400
+++ openstack-resource-agents-git-64e633d/ocf/neutron-server 2016-08-10 09:31:41.221558000 -0400
@@ -25,6 +25,7 @@
# OCF_RESKEY_binary
# OCF_RESKEY_config
# OCF_RESKEY_plugin_config
+# OCF_RESKEY_sriov_plugin_config
# OCF_RESKEY_user
# OCF_RESKEY_pid
# OCF_RESKEY_os_username
@@ -45,6 +46,7 @@
OCF_RESKEY_binary_default="neutron-server"
OCF_RESKEY_config_default="/etc/neutron/neutron.conf"
OCF_RESKEY_plugin_config_default="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
+OCF_RESKEY_sriov_plugin_config_default="/etc/neutron/plugins/ml2/ml2_conf_sriov.ini"
OCF_RESKEY_user_default="neutron"
OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
OCF_RESKEY_url_default="http://127.0.0.1:9696"
@@ -53,6 +55,7 @@
: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
: ${OCF_RESKEY_plugin_config=${OCF_RESKEY_plugin_config_default}}
+: ${OCF_RESKEY_sriov_plugin_config=${OCF_RESKEY_sriov_plugin_config_default}}
: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
: ${OCF_RESKEY_url=${OCF_RESKEY_url_default}}
@@ -115,6 +118,14 @@
<content type="string" default="${OCF_RESKEY_plugin_config_default}" />
</parameter>
+<parameter name="sriov_plugin_config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack sriov plugin configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack neutron sriov config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_sriov_plugin_config_default}" />
+</parameter>
+
<parameter name="user" unique="0" required="0">
<longdesc lang="en">
User running OpenStack Neutron Server (neutron-server)
@@ -288,7 +299,7 @@
# Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
- --config-file=$OCF_RESKEY_plugin_config $OCF_RESKEY_additional_parameters"' >> \
+ --config-file=$OCF_RESKEY_plugin_config --config-file=$OCF_RESKEY_sriov_plugin_config $OCF_RESKEY_additional_parameters"' >> \
/dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.

@@ -1,64 +0,0 @@
---
ocf/nova-novnc | 23 ++++++++++++++++++++++-
1 file changed, 22 insertions(+), 1 deletion(-)
--- a/ocf/nova-novnc
+++ b/ocf/nova-novnc
@@ -139,7 +139,7 @@ Additional parameters to pass on to the
<actions>
<action name="start" timeout="10" />
-<action name="stop" timeout="10" />
+<action name="stop" timeout="15" />
<action name="status" timeout="10" />
<action name="monitor" timeout="5" interval="10" />
<action name="validate-all" timeout="5" />
@@ -260,6 +260,23 @@ nova_vnc_console_start() {
return $OCF_SUCCESS
}
+nova_vnc_console_stop_all() {
+ # Make sure nova-novncproxy and all the children are stopped.
+ for sig in TERM KILL
+ do
+ for pid in $(ps -eo pid,cmd | grep python |\
+ grep "nova-novncproxy" | \
+ grep -v grep | awk '{print $1}')
+ do
+ ocf_log info "Manually killing $pid with $sig"
+ kill -$sig $pid
+ done
+ sleep 1
+ done
+
+ return $OCF_SUCCESS
+}
+
nova_vnc_console_stop() {
local rc
local pid
@@ -268,6 +285,7 @@ nova_vnc_console_stop() {
rc=$?
if [ $rc -eq $OCF_NOT_RUNNING ]; then
ocf_log info "OpenStack Nova VNC Console (nova-novncproxy) already stopped"
+ nova_vnc_console_stop_all
return $OCF_SUCCESS
fi
@@ -277,6 +295,7 @@ nova_vnc_console_stop() {
rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "OpenStack Nova VNC Console (nova-novncproxy) couldn't be stopped"
+ nova_vnc_console_stop_all
exit $OCF_ERR_GENERIC
fi
@@ -310,6 +329,8 @@ nova_vnc_console_stop() {
rm -f $OCF_RESKEY_pid
+ nova_vnc_console_stop_all
+
return $OCF_SUCCESS
}
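The `nova_vnc_console_stop_all` helper added above sweeps up any leftover processes by name with `ps`/`grep`/`awk`, escalating from TERM to KILL. A standalone sketch of that sweep (the duration and match pattern are arbitrary stand-ins, built from variables so the sketch's own command line never matches itself in `ps`):

```shell
dur=3033
pat="sleep $dur"

# Stand-in leftover process, detached from our shell.
pid=$( (sleep "$dur" >/dev/null 2>&1 & echo $!) )

for sig in TERM KILL; do
    # Harvest every matching pid, as the patch does; grep -v grep drops
    # the grep process itself from the candidate list.
    for p in $(ps -eo pid,cmd | grep "$pat" | grep -v grep | awk '{print $1}'); do
        kill -s "$sig" "$p" 2>/dev/null || true
    done
    sleep 1
done
```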

@@ -1,42 +0,0 @@
diff --git a/ocf/nova-api b/ocf/nova-api
index 5764adc..b67c4e5 100644
--- a/ocf/nova-api
+++ b/ocf/nova-api
@@ -275,6 +275,9 @@ nova_api_start() {
# Change the working dir to /, to be sure it's accessible
cd /
+ # Run the pre-start hooks. This can be used to trigger a nova database sync, for example.
+ /usr/bin/nova-controller-runhooks
+
# run the actual nova-api daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
diff --git a/ocf/nova-conductor b/ocf/nova-conductor
index dfcff97..aa1ee2a 100644
--- a/ocf/nova-conductor
+++ b/ocf/nova-conductor
@@ -294,6 +294,9 @@ nova_conductor_start() {
# Change the working dir to /, to be sure it's accessible
cd /
+ # Run the pre-start hooks. This can be used to trigger a nova database sync, for example.
+ /usr/bin/nova-controller-runhooks
+
# run the actual nova-conductor daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
diff --git a/ocf/nova-scheduler b/ocf/nova-scheduler
index afaf8e9..45378ca 100644
--- a/ocf/nova-scheduler
+++ b/ocf/nova-scheduler
@@ -294,6 +294,9 @@ nova_scheduler_start() {
# Change the working dir to /, to be sure it's accessible
cd /
+ # Run the pre-start hooks. This can be used to trigger a nova database sync, for example.
+ /usr/bin/nova-controller-runhooks
+
# run the actual nova-scheduler daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \

@@ -1,94 +0,0 @@
---
ocf/nova-api | 3 +++
ocf/nova-cert | 3 +++
ocf/nova-conductor | 3 +++
ocf/nova-consoleauth | 3 +++
ocf/nova-network | 3 +++
ocf/nova-novnc | 3 +++
ocf/nova-scheduler | 3 +++
7 files changed, 21 insertions(+)
--- a/ocf/nova-api
+++ b/ocf/nova-api
@@ -272,6 +272,9 @@ nova_api_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accessible
+ cd /
+
# run the actual nova-api daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-cert
+++ b/ocf/nova-cert
@@ -285,6 +285,9 @@ nova_cert_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accessible
+ cd /
+
# run the actual nova-cert daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-conductor
+++ b/ocf/nova-conductor
@@ -284,6 +284,9 @@ nova_conductor_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accessible
+ cd /
+
# run the actual nova-conductor daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-consoleauth
+++ b/ocf/nova-consoleauth
@@ -285,6 +285,9 @@ nova_consoleauth_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accessible
+ cd /
+
# run the actual nova-consoleauth daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-network
+++ b/ocf/nova-network
@@ -264,6 +264,9 @@ nova_network_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accessible
+ cd /
+
# run the actual nova-network daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
--- a/ocf/nova-novnc
+++ b/ocf/nova-novnc
@@ -235,6 +235,9 @@ nova_vnc_console_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accessible
+ cd /
+
# run the actual nova-novncproxy daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config --web /usr/share/novnc/ \
--- a/ocf/nova-scheduler
+++ b/ocf/nova-scheduler
@@ -284,6 +284,9 @@ nova_scheduler_start() {
return $OCF_SUCCESS
fi
+ # Change the working dir to /, to be sure it's accessible
+ cd /
+
# run the actual nova-scheduler daemon. Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \

@@ -1,405 +0,0 @@
---
ocf/nova-conductor | 383 +++++++++++++++++++++++++++++++++++++++++++++++++++++
ocf/nova-novnc | 5
2 files changed, 387 insertions(+), 1 deletion(-)
--- /dev/null
+++ b/ocf/nova-conductor
@@ -0,0 +1,383 @@
+#!/bin/sh
+#
+#
+# OpenStack Conductor Service (nova-conductor)
+#
+# Description: Manages an OpenStack Conductor Service (nova-conductor) process as an HA resource
+#
+# Authors: Sébastien Han
+# Mainly inspired by the Glance API resource agent written by Martin Gerhard Loschwitz from Hastexo: http://goo.gl/whLpr
+#
+# Support: openstack@lists.launchpad.net
+# License: Apache Software License (ASL) 2.0
+#
+#
+# See usage() function below for more details ...
+#
+# OCF instance parameters:
+# OCF_RESKEY_binary
+# OCF_RESKEY_config
+# OCF_RESKEY_user
+# OCF_RESKEY_pid
+# OCF_RESKEY_monitor_binary
+# OCF_RESKEY_database_server_port
+# OCF_RESKEY_amqp_server_port
+# OCF_RESKEY_zeromq
+# OCF_RESKEY_additional_parameters
+#######################################################################
+# Initialization:
+
+: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
+. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
+
+#######################################################################
+
+# Fill in some defaults if no values are specified
+
+OCF_RESKEY_binary_default="nova-conductor"
+OCF_RESKEY_config_default="/etc/nova/nova.conf"
+OCF_RESKEY_user_default="nova"
+OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
+OCF_RESKEY_database_server_port_default="3306"
+OCF_RESKEY_amqp_server_port_default="5672"
+OCF_RESKEY_zeromq_default="false"
+
+: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
+: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
+: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
+: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
+: ${OCF_RESKEY_database_server_port=${OCF_RESKEY_database_server_port_default}}
+: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
+: ${OCF_RESKEY_zeromq=${OCF_RESKEY_zeromq_default}}
+
+#######################################################################
+
+usage() {
+ cat <<UEND
+ usage: $0 (start|stop|validate-all|meta-data|status|monitor)
+
+ $0 manages an OpenStack ConductorService (nova-conductor) process as an HA resource
+
+ The 'start' operation starts the conductor service.
+ The 'stop' operation stops the conductor service.
+ The 'validate-all' operation reports whether the parameters are valid
+ The 'meta-data' operation reports this RA's meta-data information
+ The 'status' operation reports whether the conductor service is running
+ The 'monitor' operation reports whether the conductor service seems to be working
+
+UEND
+}
+
+meta_data() {
+ cat <<END
+<?xml version="1.0"?>
+<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
+<resource-agent name="nova-conductor">
+<version>1.0</version>
+
+<longdesc lang="en">
+Resource agent for the OpenStack Nova Conductor Service (nova-conductor)
+May manage a nova-conductor instance or a clone set that
+creates a distributed nova-conductor cluster.
+</longdesc>
+<shortdesc lang="en">Manages the OpenStack Conductor Service (nova-conductor)</shortdesc>
+<parameters>
+
+<parameter name="binary" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Nova Conductor server binary (nova-conductor)
+</longdesc>
+<shortdesc lang="en">OpenStack Nova Conductor server binary (nova-conductor)</shortdesc>
+<content type="string" default="${OCF_RESKEY_binary_default}" />
+</parameter>
+
+<parameter name="config" unique="0" required="0">
+<longdesc lang="en">
+Location of the OpenStack Conductor Service (nova-conductor) configuration file
+</longdesc>
+<shortdesc lang="en">OpenStack Nova Conductor (nova-conductor) config file</shortdesc>
+<content type="string" default="${OCF_RESKEY_config_default}" />
+</parameter>
+
+<parameter name="user" unique="0" required="0">
+<longdesc lang="en">
+User running OpenStack Conductor Service (nova-conductor)
+</longdesc>
+<shortdesc lang="en">OpenStack Conductor Service (nova-conductor) user</shortdesc>
+<content type="string" default="${OCF_RESKEY_user_default}" />
+</parameter>
+
+<parameter name="pid" unique="0" required="0">
+<longdesc lang="en">
+The pid file to use for this OpenStack Conductor Service (nova-conductor) instance
+</longdesc>
+<shortdesc lang="en">OpenStack Conductor Service (nova-conductor) pid file</shortdesc>
+<content type="string" default="${OCF_RESKEY_pid_default}" />
+</parameter>
+
+<parameter name="database_server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the database server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">Database listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_database_server_port_default}" />
+</parameter>
+
+<parameter name="amqp_server_port" unique="0" required="0">
+<longdesc lang="en">
+The listening port number of the AMQP server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">AMQP listening port</shortdesc>
+<content type="integer" default="${OCF_RESKEY_amqp_server_port_default}" />
+</parameter>
+
+<parameter name="zeromq" unique="0" required="0">
+<longdesc lang="en">
+If zeromq is used, this will disable the connection test to the AMQP server. Use for monitoring purposes
+</longdesc>
+<shortdesc lang="en">Zero-MQ usage</shortdesc>
+<content type="boolean" default="${OCF_RESKEY_zeromq_default}" />
+</parameter>
+
+<parameter name="additional_parameters" unique="0" required="0">
+<longdesc lang="en">
+Additional parameters to pass on to the OpenStack Conductor Service (nova-conductor)
+</longdesc>
+<shortdesc lang="en">Additional parameters for nova-conductor</shortdesc>
+<content type="string" />
+</parameter>
+
+</parameters>
+
+<actions>
+<action name="start" timeout="20" />
+<action name="stop" timeout="20" />
+<action name="status" timeout="20" />
+<action name="monitor" timeout="30" interval="20" />
+<action name="validate-all" timeout="5" />
+<action name="meta-data" timeout="5" />
+</actions>
+</resource-agent>
+END
+}
+
+#######################################################################
+# Functions invoked by resource manager actions
+
+nova_conductor_check_port() {
+# This function has been taken from the squid RA and improved a bit
+# The length of the integer must be 4
+# Examples of valid port: "1080", "0080"
+# Examples of invalid port: "1080bad", "0", "0000", ""
+
+ local int
+ local cnt
+
+ int="$1"
+ cnt=${#int}
+ echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
+
+ if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
+ ocf_log err "Invalid port number: $1"
+ exit $OCF_ERR_CONFIGURED
+ fi
+}
+
+nova_conductor_validate() {
+ local rc
+
+ check_binary $OCF_RESKEY_binary
+ check_binary netstat
+ nova_conductor_check_port $OCF_RESKEY_database_server_port
+ nova_conductor_check_port $OCF_RESKEY_amqp_server_port
+
+ # A config file on shared storage that is not available
+ # during probes is OK.
+ if [ ! -f $OCF_RESKEY_config ]; then
+ if ! ocf_is_probe; then
+ ocf_log err "Config $OCF_RESKEY_config doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+ ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
+ fi
+
+ getent passwd $OCF_RESKEY_user >/dev/null 2>&1
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "User $OCF_RESKEY_user doesn't exist"
+ return $OCF_ERR_INSTALLED
+ fi
+
+ true
+}
+
+nova_conductor_status() {
+ local pid
+ local rc
+
+ if [ ! -f $OCF_RESKEY_pid ]; then
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) is not running"
+ return $OCF_NOT_RUNNING
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ fi
+
+ ocf_run -warn kill -s 0 $pid
+ rc=$?
+ if [ $rc -eq 0 ]; then
+ return $OCF_SUCCESS
+ else
+ ocf_log info "Old PID file found, but OpenStack Nova Conductor (nova-conductor) is not running"
+ return $OCF_NOT_RUNNING
+ fi
+}
+
+nova_conductor_monitor() {
+ local rc
+ local pid
+ local rc_db
+ local rc_amqp
+ local conductor_db_check
+ local conductor_amqp_check
+
+ nova_conductor_status
+ rc=$?
+
+ # If status returned anything but success, return that immediately
+ if [ $rc -ne $OCF_SUCCESS ]; then
+ return $rc
+ fi
+
+ # Check the connections according to the PID.
+ # We are sure to hit the conductor process and not other nova process with the same connection behavior (for example nova-cert)
+ if ocf_is_true "$OCF_RESKEY_zeromq"; then
+ pid=`cat $OCF_RESKEY_pid`
+ conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc_db=$?
+ if [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor is not connected to the database server: $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ else
+ pid=`cat $OCF_RESKEY_pid`
+ conductor_db_check=`netstat -punt | grep -s "$OCF_RESKEY_database_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc_db=$?
+ conductor_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
+ rc_amqp=$?
+ if [ $rc_amqp -ne 0 ] || [ $rc_db -ne 0 ]; then
+ ocf_log err "Nova Conductor is not connected to the AMQP server and/or the database server: AMQP connection test returned $rc_amqp and database connection test returned $rc_db"
+ return $OCF_NOT_RUNNING
+ fi
+ fi
+
+ ocf_log debug "OpenStack Nova Conductor (nova-conductor) monitor succeeded"
+ return $OCF_SUCCESS
+}
+
+nova_conductor_start() {
+ local rc
+
+ nova_conductor_status
+ rc=$?
+ if [ $rc -eq $OCF_SUCCESS ]; then
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) already running"
+ return $OCF_SUCCESS
+ fi
+
+ # run the actual nova-conductor daemon. Don't use ocf_run as we're sending the tool's output
+ # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+
+ # Spin waiting for the server to come up.
+ while true; do
+ nova_conductor_monitor
+ rc=$?
+ [ $rc -eq $OCF_SUCCESS ] && break
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ ocf_log err "OpenStack Nova Conductor (nova-conductor) start failed"
+ exit $OCF_ERR_GENERIC
+ fi
+ sleep 1
+ done
+
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) started"
+ return $OCF_SUCCESS
+}
+
+nova_conductor_stop() {
+ local rc
+ local pid
+
+ nova_conductor_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) already stopped"
+ return $OCF_SUCCESS
+ fi
+
+ # Try SIGTERM
+ pid=`cat $OCF_RESKEY_pid`
+ ocf_run kill -s TERM $pid
+ rc=$?
+ if [ $rc -ne 0 ]; then
+ ocf_log err "OpenStack Nova Conductor (nova-conductor) couldn't be stopped"
+ exit $OCF_ERR_GENERIC
+ fi
+
+ # stop waiting
+ shutdown_timeout=15
+ if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
+ shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
+ fi
+ count=0
+ while [ $count -lt $shutdown_timeout ]; do
+ nova_conductor_status
+ rc=$?
+ if [ $rc -eq $OCF_NOT_RUNNING ]; then
+ break
+ fi
+ count=`expr $count + 1`
+ sleep 1
+ ocf_log debug "OpenStack Nova Conductor (nova-conductor) still hasn't stopped yet. Waiting ..."
+ done
+
+ nova_conductor_status
+ rc=$?
+ if [ $rc -ne $OCF_NOT_RUNNING ]; then
+ # SIGTERM didn't help either, try SIGKILL
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) failed to stop after ${shutdown_timeout}s \
+ using SIGTERM. Trying SIGKILL ..."
+ ocf_run kill -s KILL $pid
+ fi
+
+ ocf_log info "OpenStack Nova Conductor (nova-conductor) stopped"
+
+ rm -f $OCF_RESKEY_pid
+
+ return $OCF_SUCCESS
+}
+
+#######################################################################
+
+case "$1" in
+ meta-data) meta_data
+ exit $OCF_SUCCESS;;
+ usage|help) usage
+ exit $OCF_SUCCESS;;
+esac
+
+# Anything except meta-data and help must pass validation
+nova_conductor_validate || exit $?
+
+# What kind of method was invoked?
+case "$1" in
+ start) nova_conductor_start;;
+ stop) nova_conductor_stop;;
+ status) nova_conductor_status;;
+ monitor) nova_conductor_monitor;;
+ validate-all) ;;
+ *) usage
+ exit $OCF_ERR_UNIMPLEMENTED;;
+esac
+
--- a/ocf/nova-novnc
+++ b/ocf/nova-novnc
@@ -214,7 +214,10 @@ nova_vnc_console_monitor() {
# Check whether we are supposed to monitor by logging into nova-novncproxy
# and do it if that's the case.
vnc_list_check=`netstat -a | grep -s "$OCF_RESKEY_console_port" | grep -qs "LISTEN"`
- rc=$?
+ #rc=$?
+ # not sure why grep is returning 1 .. should root cause at some point.
+ # return success for now since service and port are both up
+ rc=0
if [ $rc -ne 0 ]; then
ocf_log err "Nova VNC Console doesn't seem to listen on its default port: $rc"
return $OCF_NOT_RUNNING

@@ -1,57 +0,0 @@
---
ocf/nova-novnc | 8 +++-----
ocf/neutron-agent-dhcp | 2 +-
ocf/neutron-agent-l3 | 2 +-
ocf/neutron-server | 2 +-
4 files changed, 6 insertions(+), 8 deletions(-)
--- a/ocf/neutron-agent-dhcp
+++ b/ocf/neutron-agent-dhcp
@@ -95,7 +95,7 @@ Location of the OpenStack Quantum Servic
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
-<parameter name="plugin config" unique="0" required="0">
+<parameter name="plugin_config" unique="0" required="0">
<longdesc lang="en">
Location of the OpenStack DHCP Service (neutron-dhcp-agent) configuration file
</longdesc>
--- a/ocf/neutron-agent-l3
+++ b/ocf/neutron-agent-l3
@@ -95,7 +95,7 @@ Location of the OpenStack Quantum Servic
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
-<parameter name="plugin config" unique="0" required="0">
+<parameter name="plugin_config" unique="0" required="0">
<longdesc lang="en">
Location of the OpenStack L3 Service (neutron-l3-agent) configuration file
</longdesc>
--- a/ocf/neutron-server
+++ b/ocf/neutron-server
@@ -101,7 +101,7 @@ Location of the OpenStack Quantum Server
<content type="string" default="${OCF_RESKEY_config_default}" />
</parameter>
-<parameter name="plugin config" unique="0" required="0">
+<parameter name="plugin_config" unique="0" required="0">
<longdesc lang="en">
Location of the OpenStack Default Plugin (Open-vSwitch) configuration file
</longdesc>
--- a/ocf/nova-novnc
+++ b/ocf/nova-novnc
@@ -213,11 +213,9 @@ nova_vnc_console_monitor() {
# Check whether we are supposed to monitor by logging into nova-novncproxy
# and do it if that's the case.
- vnc_list_check=`netstat -a | grep -s "$OCF_RESKEY_console_port" | grep -qs "LISTEN"`
- #rc=$?
- # not sure why grep is returning 1 .. should root cause at some point.
- # return success for now since service and port are both up
- rc=0
+ # Adding -n to netstat so that dns delays will not impact this.
+ vnc_list_check=`netstat -an | grep -s "$OCF_RESKEY_console_port" | grep -qs "LISTEN"`
+ rc=$?
if [ $rc -ne 0 ]; then
ocf_log err "Nova VNC Console doesn't seem to listen on his default port: $rc"
return $OCF_NOT_RUNNING
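The follow-up fix above switches to `netstat -an` so the check cannot stall on reverse-DNS lookups. A hedged illustration of why `-n` matters, with a shell function standing in for `netstat -an` (the port number 6080 is an assumption):

```shell
# fake_netstat_an stands in for 'netstat -an': numeric output keeps the raw
# ':6080' port instead of a resolved service name, and needs no DNS lookups.
fake_netstat_an() {
    printf 'tcp        0      0 0.0.0.0:6080            0.0.0.0:*               LISTEN\n'
}
fake_netstat_an | grep -s "6080" | grep -qs "LISTEN"
rc=$?
echo "vnc rc=$rc"    # 0: the port shows up as LISTEN
```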
@@ -1,20 +0,0 @@
---
ocf/neutron-server | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
--- a/ocf/neutron-server
+++ b/ocf/neutron-server
@@ -287,8 +287,11 @@ neutron_server_start() {
# run the actual neutron-server daemon with correct configurations files (server + plugin)
# Don't use ocf_run as we're sending the tool's output
# straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
- su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
- --config-file=$OCF_RESKEY_plugin_config $OCF_RESKEY_additional_parameters"' >> \
+ ## DPENNEY: Removing plugin ref
+ ##su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
+ ## --config-file=$OCF_RESKEY_plugin_config $OCF_RESKEY_additional_parameters"' >> \
+ ## /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
+ su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config"' >> \
/dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
# Spin waiting for the server to come up.
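The `su ... '>> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid` idiom in the hunk above daemonizes the server and records its PID in one step. A rough sketch under stated assumptions (`sleep 30` stands in for the real daemon binary, and the `su` user switch is dropped so the sketch runs unprivileged):

```shell
# The quoted trailer backgrounds the command and echoes its PID; the outer
# redirection captures that PID into the pid file. 'sleep 30' is a stand-in
# for the managed daemon, and no su/user switch is performed here.
pidfile=$(mktemp)
/bin/sh -c 'sleep 30 >> /dev/null 2>&1 & echo $!' > "$pidfile"
pid=$(cat "$pidfile")
kill -0 "$pid" && echo "daemon running as pid $pid"
kill "$pid"              # clean up the stand-in daemon
rm -f "$pidfile"
```

In the real agent the trailer is kept in single quotes so that `$!` expands inside the daemonizing shell rather than in the agent's own shell.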
@@ -1,388 +0,0 @@
From daaf82a9e83f28e1e1072fc6d77ca57d4eb22c5d Mon Sep 17 00:00:00 2001
From: Angie Wang <Angie.Wang@windriver.com>
Date: Mon, 14 Nov 2016 13:58:27 -0500
Subject: [PATCH] remove-ceilometer-mem-db
---
ocf/ceilometer-mem-db | 369 --------------------------------------------------
1 file changed, 369 deletions(-)
delete mode 100644 ocf/ceilometer-mem-db
diff --git a/ocf/ceilometer-mem-db b/ocf/ceilometer-mem-db
deleted file mode 100644
index d7112d8..0000000
--- a/ocf/ceilometer-mem-db
+++ /dev/null
@@ -1,369 +0,0 @@
-#!/bin/sh
-#
-#
-# OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
-#
-# Description: Manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
-#
-# Authors: Emilien Macchi
-# Mainly inspired by the Nova Scheduler resource agent written by Sebastien Han
-#
-# Support: openstack@lists.launchpad.net
-# License: Apache Software License (ASL) 2.0
-#
-# Copyright (c) 2014-2016 Wind River Systems, Inc.
-# SPDX-License-Identifier: Apache-2.0
-#
-#
-#
-#
-#
-# See usage() function below for more details ...
-#
-# OCF instance parameters:
-# OCF_RESKEY_binary
-# OCF_RESKEY_config
-# OCF_RESKEY_user
-# OCF_RESKEY_pid
-# OCF_RESKEY_monitor_binary
-# OCF_RESKEY_amqp_server_port
-# OCF_RESKEY_additional_parameters
-#######################################################################
-# Initialization:
-
-: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
-. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
-
-#######################################################################
-
-# Fill in some defaults if no values are specified
-
-OCF_RESKEY_binary_default="ceilometer-mem-db"
-OCF_RESKEY_config_default="/etc/ceilometer/ceilometer.conf"
-OCF_RESKEY_user_default="root"
-OCF_RESKEY_pid_default="$HA_RSCTMP/$OCF_RESOURCE_INSTANCE.pid"
-OCF_RESKEY_amqp_server_port_default="5672"
-
-: ${OCF_RESKEY_binary=${OCF_RESKEY_binary_default}}
-: ${OCF_RESKEY_config=${OCF_RESKEY_config_default}}
-: ${OCF_RESKEY_user=${OCF_RESKEY_user_default}}
-: ${OCF_RESKEY_pid=${OCF_RESKEY_pid_default}}
-: ${OCF_RESKEY_amqp_server_port=${OCF_RESKEY_amqp_server_port_default}}
-
-#######################################################################
-
-usage() {
- cat <<UEND
- usage: $0 (start|stop|validate-all|meta-data|status|monitor)
-
- $0 manages an OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) process as an HA resource
-
- The 'start' operation starts the scheduler service.
- The 'stop' operation stops the scheduler service.
- The 'validate-all' operation reports whether the parameters are valid
- The 'meta-data' operation reports this RA's meta-data information
- The 'status' operation reports whether the scheduler service is running
- The 'monitor' operation reports whether the scheduler service seems to be working
-
-UEND
-}
-
-meta_data() {
- cat <<END
-<?xml version="1.0"?>
-<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
-<resource-agent name="ceilometer-mem-db">
-<version>1.0</version>
-
-<longdesc lang="en">
-Resource agent for the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
-May manage a ceilometer-mem-db instance or a clone set that
-creates a distributed ceilometer-mem-db cluster.
-</longdesc>
-<shortdesc lang="en">Manages the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)</shortdesc>
-<parameters>
-
-<parameter name="binary" unique="0" required="0">
-<longdesc lang="en">
-Location of the OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)
-</longdesc>
-<shortdesc lang="en">OpenStack Ceilometer Mem DB server binary (ceilometer-mem-db)</shortdesc>
-<content type="string" default="${OCF_RESKEY_binary_default}" />
-</parameter>
-
-<parameter name="config" unique="0" required="0">
-<longdesc lang="en">
-Location of the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) configuration file
-</longdesc>
-<shortdesc lang="en">OpenStack Ceilometer Mem DB (ceilometer-mem-db registry) config file</shortdesc>
-<content type="string" default="${OCF_RESKEY_config_default}" />
-</parameter>
-
-<parameter name="user" unique="0" required="0">
-<longdesc lang="en">
-User running OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
-</longdesc>
-<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) user</shortdesc>
-<content type="string" default="${OCF_RESKEY_user_default}" />
-</parameter>
-
-<parameter name="pid" unique="0" required="0">
-<longdesc lang="en">
-The pid file to use for this OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) instance
-</longdesc>
-<shortdesc lang="en">OpenStack Ceilometer Mem DB Service (ceilometer-mem-db) pid file</shortdesc>
-<content type="string" default="${OCF_RESKEY_pid_default}" />
-</parameter>
-
-<parameter name="amqp_server_port" unique="0" required="0">
-<longdesc lang="en">
-The listening port number of the AMQP server. Use for monitoring purposes
-</longdesc>
-<shortdesc lang="en">AMQP listening port</shortdesc>
-<content type="integer" default="${OCF_RESKEY_amqp_server_port_default}" />
-</parameter>
-
-
-<parameter name="additional_parameters" unique="0" required="0">
-<longdesc lang="en">
-Additional parameters to pass on to the OpenStack Ceilometer Mem DB Service (ceilometer-mem-db)
-</longdesc>
-<shortdesc lang="en">Additional parameters for ceilometer-mem-db</shortdesc>
-<content type="string" />
-</parameter>
-
-</parameters>
-
-<actions>
-<action name="start" timeout="20" />
-<action name="stop" timeout="20" />
-<action name="status" timeout="20" />
-<action name="monitor" timeout="30" interval="20" />
-<action name="validate-all" timeout="5" />
-<action name="meta-data" timeout="5" />
-</actions>
-</resource-agent>
-END
-}
-
-#######################################################################
-# Functions invoked by resource manager actions
-
-ceilometer_mem_db_check_port() {
-# This function has been taken from the squid RA and improved a bit
-# The length of the integer must be 4
-# Examples of valid port: "1080", "0080"
-# Examples of invalid port: "1080bad", "0", "0000", ""
-
- local int
- local cnt
-
- int="$1"
- cnt=${#int}
- echo $int |egrep -qx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*'
-
- if [ $? -ne 0 ] || [ $cnt -ne 4 ]; then
- ocf_log err "Invalid port number: $1"
- exit $OCF_ERR_CONFIGURED
- fi
-}
-
-ceilometer_mem_db_validate() {
- local rc
-
- check_binary $OCF_RESKEY_binary
- check_binary netstat
- ceilometer_mem_db_check_port $OCF_RESKEY_amqp_server_port
-
- # A config file on shared storage that is not available
- # during probes is OK.
- if [ ! -f $OCF_RESKEY_config ]; then
- if ! ocf_is_probe; then
- ocf_log err "Config $OCF_RESKEY_config doesn't exist"
- return $OCF_ERR_INSTALLED
- fi
- ocf_log_warn "Config $OCF_RESKEY_config not available during a probe"
- fi
-
- getent passwd $OCF_RESKEY_user >/dev/null 2>&1
- rc=$?
- if [ $rc -ne 0 ]; then
- ocf_log err "User $OCF_RESKEY_user doesn't exist"
- return $OCF_ERR_INSTALLED
- fi
-
- true
-}
-
-ceilometer_mem_db_status() {
- local pid
- local rc
-
- if [ ! -f $OCF_RESKEY_pid ]; then
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
- return $OCF_NOT_RUNNING
- else
- pid=`cat $OCF_RESKEY_pid`
- fi
-
- ocf_run -warn kill -s 0 $pid
- rc=$?
- if [ $rc -eq 0 ]; then
- return $OCF_SUCCESS
- else
- ocf_log info "Old PID file found, but OpenStack Ceilometer Mem DB (ceilometer-mem-db) is not running"
- rm -f $OCF_RESKEY_pid
- return $OCF_NOT_RUNNING
- fi
-}
-
-ceilometer_mem_db_monitor() {
- local rc
- local pid
- local scheduler_amqp_check
-
- ceilometer_mem_db_status
- rc=$?
-
- # If status returned anything but success, return that immediately
- if [ $rc -ne $OCF_SUCCESS ]; then
- return $rc
- fi
-
- # Check the connections according to the PID.
- # We are sure to hit the scheduler process and not other Cinder process with the same connection behavior (for example cinder-api)
- pid=`cat $OCF_RESKEY_pid`
- scheduler_amqp_check=`netstat -punt | grep -s "$OCF_RESKEY_amqp_server_port" | grep -s "$pid" | grep -qs "ESTABLISHED"`
- rc=$?
- if [ $rc -ne 0 ]; then
- ocf_log err "Mem DB is not connected to the AMQP server : $rc"
- return $OCF_NOT_RUNNING
- fi
-
- ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) monitor succeeded"
- return $OCF_SUCCESS
-}
-
-ceilometer_mem_db_start() {
- local rc
-
- ceilometer_mem_db_status
- rc=$?
- if [ $rc -eq $OCF_SUCCESS ]; then
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already running"
- return $OCF_SUCCESS
- fi
-
- # run the actual ceilometer-mem-db daemon. Don't use ocf_run as we're sending the tool's output
- # straight to /dev/null anyway and using ocf_run would break stdout-redirection here.
- su ${OCF_RESKEY_user} -s /bin/sh -c "${OCF_RESKEY_binary} --config-file=$OCF_RESKEY_config \
- $OCF_RESKEY_additional_parameters"' >> /dev/null 2>&1 & echo $!' > $OCF_RESKEY_pid
-
- # Spin waiting for the server to come up.
- while true; do
- ceilometer_mem_db_monitor
- rc=$?
- [ $rc -eq $OCF_SUCCESS ] && break
- if [ $rc -ne $OCF_NOT_RUNNING ]; then
- ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) start failed"
- exit $OCF_ERR_GENERIC
- fi
- sleep 1
- done
-
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) started"
- return $OCF_SUCCESS
-}
-
-ceilometer_mem_db_confirm_stop() {
- local my_bin
- local my_processes
-
- my_binary=`which ${OCF_RESKEY_binary}`
- my_processes=`pgrep -l -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"`
-
- if [ -n "${my_processes}" ]
- then
- ocf_log info "About to SIGKILL the following: ${my_processes}"
- pkill -KILL -f "^(python|/usr/bin/python|/usr/bin/python2) ${my_binary}([^\w-]|$)"
- fi
-}
-
-ceilometer_mem_db_stop() {
- local rc
- local pid
-
- ceilometer_mem_db_status
- rc=$?
- if [ $rc -eq $OCF_NOT_RUNNING ]; then
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) already stopped"
- ceilometer_mem_db_confirm_stop
- return $OCF_SUCCESS
- fi
-
- # Try SIGTERM
- pid=`cat $OCF_RESKEY_pid`
- ocf_run kill -s TERM $pid
- rc=$?
- if [ $rc -ne 0 ]; then
- ocf_log err "OpenStack Ceilometer Mem DB (ceilometer-mem-db) couldn't be stopped"
- ceilometer_mem_db_confirm_stop
- exit $OCF_ERR_GENERIC
- fi
-
- # stop waiting
- shutdown_timeout=2
- if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
- shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
- fi
- count=0
- while [ $count -lt $shutdown_timeout ]; do
- ceilometer_mem_db_status
- rc=$?
- if [ $rc -eq $OCF_NOT_RUNNING ]; then
- break
- fi
- count=`expr $count + 1`
- sleep 1
- ocf_log debug "OpenStack Ceilometer Mem DB (ceilometer-mem-db) still hasn't stopped yet. Waiting ..."
- done
-
- ceilometer_mem_db_status
- rc=$?
- if [ $rc -ne $OCF_NOT_RUNNING ]; then
- # SIGTERM didn't help either, try SIGKILL
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) failed to stop after ${shutdown_timeout}s \
- using SIGTERM. Trying SIGKILL ..."
- ocf_run kill -s KILL $pid
- fi
- ceilometer_mem_db_confirm_stop
-
- ocf_log info "OpenStack Ceilometer Mem DB (ceilometer-mem-db) stopped"
-
- rm -f $OCF_RESKEY_pid
-
- return $OCF_SUCCESS
-}
-
-#######################################################################
-
-case "$1" in
- meta-data) meta_data
- exit $OCF_SUCCESS;;
- usage|help) usage
- exit $OCF_SUCCESS;;
-esac
-
-# Anything except meta-data and help must pass validation
-ceilometer_mem_db_validate || exit $?
-
-# What kind of method was invoked?
-case "$1" in
- start) ceilometer_mem_db_start;;
- stop) ceilometer_mem_db_stop;;
- status) ceilometer_mem_db_status;;
- monitor) ceilometer_mem_db_monitor;;
- validate-all) ;;
- *) usage
- exit $OCF_ERR_UNIMPLEMENTED;;
-esac
--
1.8.3.1
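The removed agent's `ceilometer_mem_db_check_port` helper accepts only a 4-character digit string (with the regex also allowing `:`-ranges and `,`-lists). A hedged re-creation of that check — `grep -E` is substituted for the agent's deprecated `egrep`, and the function name and outputs here are illustrative only:

```shell
# Re-creation of the removed port check: the value must match the
# digit/range/list pattern AND be exactly four characters long.
check_port() {
    int="$1"
    cnt=${#int}
    if echo "$int" | grep -Eqx '[0-9]+(:[0-9]+)?(,[0-9]+(:[0-9]+)?)*' \
            && [ "$cnt" -eq 4 ]; then
        echo "valid: $int"
    else
        echo "invalid: $int"
    fi
}
check_port 5672       # valid: the default AMQP port is 4 digits
check_port 80         # invalid: too short
check_port 1080bad    # invalid: not purely numeric
```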
@@ -1,87 +0,0 @@
---
ocf/ceilometer-agent-notification | 4 ++--
ocf/ceilometer-api | 4 ++--
ocf/ceilometer-collector | 4 ++--
ocf/ceilometer-mem-db | 4 ++--
4 files changed, 8 insertions(+), 8 deletions(-)
--- a/ocf/ceilometer-api
+++ b/ocf/ceilometer-api
@@ -11,7 +11,7 @@
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
-# Copyright (c) 2014 Wind River Systems, Inc.
+# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -324,7 +324,7 @@ ceilometer_api_stop() {
fi
# stop waiting
- shutdown_timeout=15
+ shutdown_timeout=2
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
--- a/ocf/ceilometer-agent-notification
+++ b/ocf/ceilometer-agent-notification
@@ -11,7 +11,7 @@
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
-# Copyright (c) 2014 Wind River Systems, Inc.
+# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -314,7 +314,7 @@ ceilometer_agent_notification_stop() {
fi
# stop waiting
- shutdown_timeout=15
+ shutdown_timeout=2
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
--- a/ocf/ceilometer-collector
+++ b/ocf/ceilometer-collector
@@ -11,7 +11,7 @@
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
-# Copyright (c) 2014 Wind River Systems, Inc.
+# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -313,7 +313,7 @@ ceilometer_collector_stop() {
fi
# stop waiting
- shutdown_timeout=15
+ shutdown_timeout=2
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
--- a/ocf/ceilometer-mem-db
+++ b/ocf/ceilometer-mem-db
@@ -11,7 +11,7 @@
# Support: openstack@lists.launchpad.net
# License: Apache Software License (ASL) 2.0
#
-# Copyright (c) 2014 Wind River Systems, Inc.
+# Copyright (c) 2014-2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -312,7 +312,7 @@ ceilometer_mem_db_stop() {
fi
# stop waiting
- shutdown_timeout=15
+ shutdown_timeout=2
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
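The timeout derivation these hunks keep can be sketched as follows: Pacemaker passes the operation timeout in milliseconds via `OCF_RESKEY_CRM_meta_timeout`, so a 90000 ms `stop` timeout (an assumed example value) yields an 85 s wait budget, with 5 s reserved for the SIGKILL fallback; otherwise the patched 2 s default applies.

```shell
# Example value; in a real cluster Pacemaker exports this in milliseconds.
OCF_RESKEY_CRM_meta_timeout=90000
shutdown_timeout=2                       # fallback default from the patch
if [ -n "$OCF_RESKEY_CRM_meta_timeout" ]; then
    # Convert ms to s, then hold back 5 s for the SIGKILL escalation.
    shutdown_timeout=$((($OCF_RESKEY_CRM_meta_timeout/1000)-5))
fi
echo "shutdown_timeout=${shutdown_timeout}"    # prints shutdown_timeout=85
```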
@@ -1 +0,0 @@
This repo is for the stx-gnocchi image, build on top of https://opendev.org/openstack/gnocchi
@@ -1,23 +0,0 @@
BUILDER=loci
LABEL=stx-gnocchi
PROJECT=gnocchi
PROJECT_REPO=https://github.com/gnocchixyz/gnocchi.git
PROJECT_REF=4.3.2
PROJECT_UID=42425
PROJECT_GID=42425
DIST_REPOS="OS"
PIP_PACKAGES="\
gnocchiclient \
keystonemiddleware \
pymemcache \
psycopg2 \
oslo.db \
SQLAlchemy \
SQLAlchemy-Utils
"
DIST_PACKAGES="python3-rados"
PROFILES="gnocchi apache"
CUSTOMIZATION="\
ln -s /etc/apache2/mods-available/wsgi.load /etc/apache2/mods-enabled/wsgi.load && \
ln -s /etc/apache2/mods-available/wsgi.conf /etc/apache2/mods-enabled/wsgi.conf
"
@@ -1,8 +0,0 @@
This repo is for https://opendev.org/openstack/python-gnocchiclient
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.
@@ -1,55 +0,0 @@
From 1cdba6b7884878b91b34321d8e6cb48aadb18165 Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Tue, 26 Oct 2021 23:51:34 +0000
Subject: [PATCH] Add python3 wheel
Add python3-gnocchiclient-wheel
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 18 ++++++++++++++++++
debian/rules | 2 +-
2 files changed, 19 insertions(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index c80f5f7..e4341b6 100644
--- a/debian/control
+++ b/debian/control
@@ -81,3 +81,21 @@ Description: bindings to the OpenStack Gnocchi API - Python 3.x
HTTP REST API.
.
This package contains the Python 3.x module.
+
+Package: python3-gnocchiclient-wheel
+Architecture: all
+Depends:
+ python3-wheel,
+ ${misc:Depends},
+ ${python3:Depends},
+Description: bindings to the OpenStack Gnocchi API - Python 3.x
+ This is a client for OpenStack gnocchi API. There's a Python API (the
+ gnocchiclient module), and a command-line script. Each implements the entire
+ OpenStack Gnocchi API.
+ .
+ Gnocchi is a service for managing a set of resources and storing metrics about
+ them, in a scalable and resilient way. Its functionalities are exposed over an
+ HTTP REST API.
+ .
+ This package contains the Python wheel.
+
diff --git a/debian/rules b/debian/rules
index df1b32a..0cee15d 100755
--- a/debian/rules
+++ b/debian/rules
@@ -13,7 +13,7 @@ override_dh_auto_build:
echo "Do nothing..."
override_dh_auto_install:
- pkgos-dh_auto_install --no-py2
+ pkgos-dh_auto_install --no-py2 --wheel
# Generate bash completion
mkdir -p $(CURDIR)/debian/python3-gnocchiclient/usr/share/bash-completion/completions
--
2.30.2
@@ -1,29 +0,0 @@
From 8f239c761ac065f0faa6a8d4d66704f583767fb1 Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Mon, 29 Nov 2021 20:57:22 +0000
Subject: [PATCH] Remove openstackclient
Remove build-Depends-Indep for python-openstackclient as it is
not being used and it is causing problems with the build-pkgs
tool
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 1 -
1 file changed, 1 deletion(-)
diff --git a/debian/control b/debian/control
index c80f5f7..87e4cb8 100644
--- a/debian/control
+++ b/debian/control
@@ -23,7 +23,6 @@ Build-Depends-Indep:
python3-keystoneauth1,
python3-keystonemiddleware <!nocheck>,
python3-monotonic,
- python3-openstackclient,
python3-osc-lib,
python3-pytest <!nocheck>,
python3-pytest-xdist <!nocheck>,
--
2.30.2
@@ -1,2 +0,0 @@
0001-Add-python3-wheel.patch
remove-openstackcleint.patch
@@ -1,12 +0,0 @@
---
debname: python-gnocchiclient
debver: 7.0.6-1
dl_path:
name: python-gnocchiclient-debian-7.0.6-1.tar.gz
url: https://salsa.debian.org/openstack-team/clients/python-gnocchiclient/-/archive/debian/7.0.6-1/python-gnocchiclient-debian-7.0.6-1.tar.gz
md5sum: 3ee6a1ee65fb1a4dbd86038257b33c04
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-gnocchiclient
@@ -1,59 +0,0 @@
From 2410e5ae2150100c7a4c01886498935c57076822 Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Fri, 22 Oct 2021 14:11:23 -0300
Subject: [PATCH] install extra files
Signed-off-by: Yue Tao <Yue.Tao@windriver.com>
---
debian/openstack-dashboard.install | 8 ++++++++
debian/python3-django-horizon.install | 1 +
debian/rules | 10 ++++++++++
3 files changed, 19 insertions(+)
diff --git a/debian/openstack-dashboard.install b/debian/openstack-dashboard.install
index 2be73b9..12f33b9 100644
--- a/debian/openstack-dashboard.install
+++ b/debian/openstack-dashboard.install
@@ -1,3 +1,11 @@
debian/local_settings.d/* usr/share/openstack-dashboard-debian-settings.d
etc/openstack-dashboard
+etc/httpd/conf.d/openstack-dashboard.conf
+etc/logrotate.d/openstack-dashboard
+etc/rc.d/init.d/horizon
usr/share/openstack-dashboard
+usr/share/openstack-dashboard/guni_config.py
+usr/bin/horizon-clearsessions
+usr/bin/horizon-patching-restart
+usr/bin/horizon-assets-compress
+usr/lib/systemd/system/httpd.service.d/openstack-dashboard.conf
diff --git a/debian/python3-django-horizon.install b/debian/python3-django-horizon.install
index 47e0ed4..a113003 100644
--- a/debian/python3-django-horizon.install
+++ b/debian/python3-django-horizon.install
@@ -1 +1,2 @@
/usr/lib/python*
+usr/share/doc/python3-django-horizon/openstack-dashboard-httpd-logging.conf
diff --git a/debian/rules b/debian/rules
index 53181a6..4ab08e7 100755
--- a/debian/rules
+++ b/debian/rules
@@ -95,6 +95,16 @@ override_dh_auto_install:
## Delete not needed files
rm -f $(CURDIR)/debian/tmp/usr/lib/python3/dist-packages/openstack_dashboard/local/_build*.lock
+ install -D -p -m 644 $(CURDIR)/openstack-dashboard-httpd-2.4.conf $(CURDIR)/debian/tmp/etc/httpd/conf.d/openstack-dashboard.conf
+ install -D -p -m 644 $(CURDIR)/python-django-horizon-systemd.conf $(CURDIR)/debian/tmp/usr/lib/systemd/system/httpd.service.d/openstack-dashboard.conf
+ install -D -p $(CURDIR)/openstack-dashboard-httpd-logging.conf $(CURDIR)/debian/tmp/usr/share/doc/python3-django-horizon/openstack-dashboard-httpd-logging.conf
+ install -D -p $(CURDIR)/python-django-horizon-logrotate.conf $(CURDIR)/debian/tmp/etc/logrotate.d/openstack-dashboard
+ install -D -p -m 755 $(CURDIR)/horizon.init $(CURDIR)/debian/tmp/etc/rc.d/init.d/horizon
+ install -D -p -m 755 $(CURDIR)/horizon-clearsessions $(CURDIR)/debian/tmp/usr/bin/horizon-clearsessions
+ install -D -p -m 755 $(CURDIR)/horizon-patching-restart $(CURDIR)/debian/tmp/usr/bin/horizon-patching-restart
+ install -D -p $(CURDIR)/guni_config.py $(CURDIR)/debian/tmp/usr/share/openstack-dashboard/guni_config.py
+ install -D -p -m 755 $(CURDIR)/horizon-assets-compress $(CURDIR)/debian/tmp/usr/bin/horizon-assets-compress
+
dh_install
dh_missing --fail-missing
find $(CURDIR)/debian -iname .eslintrc -delete
--
2.25.1
@@ -1,41 +0,0 @@
From 511934904b9322e46c0a76b3a073616b4b28a698 Mon Sep 17 00:00:00 2001
From: lsampaio <luis.sampaio@windriver.com>
Date: Thu, 12 May 2022 12:04:47 -0300
Subject: [PATCH 2/2] Horizon service not enabled-active in SM
---
debian/openstack-dashboard.install | 2 +-
debian/openstack-dashboard.postinst | 4 ++++
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/debian/openstack-dashboard.install b/debian/openstack-dashboard.install
index 12f33b9eb..fe18ed9c8 100644
--- a/debian/openstack-dashboard.install
+++ b/debian/openstack-dashboard.install
@@ -2,7 +2,7 @@ debian/local_settings.d/* usr/share/openstack-dashboard-debian-settings.d
etc/openstack-dashboard
etc/httpd/conf.d/openstack-dashboard.conf
etc/logrotate.d/openstack-dashboard
-etc/rc.d/init.d/horizon
+etc/rc.d/init.d/horizon etc/init.d/
usr/share/openstack-dashboard
usr/share/openstack-dashboard/guni_config.py
usr/bin/horizon-clearsessions
diff --git a/debian/openstack-dashboard.postinst b/debian/openstack-dashboard.postinst
index d24971322..6b76f2f5d 100644
--- a/debian/openstack-dashboard.postinst
+++ b/debian/openstack-dashboard.postinst
@@ -111,6 +111,10 @@ if [ "$1" = "configure" ] ; then
ln -sf /etc/openstack-dashboard/policy /usr/lib/python3/dist-packages/openstack_dashboard/conf
fi
+ if [ ! -L /usr/lib/python3/dist-packages/openstack_dashboard/wsgi.py ]; then
+ ln -sf /usr/share/openstack-dashboard/wsgi.py /usr/lib/python3/dist-packages/openstack_dashboard/wsgi.py
+ fi
+
# Some dashboard plugins are not deleting their files under
# /usr/share/openstack-dashboard/openstack_dashboard/{local,enabled}
#
--
2.35.1
@@ -1,20 +0,0 @@
From f6e57a8c93d4669698d86ac53e0dc57e794725a1 Mon Sep 17 00:00:00 2001
From: Al Bailey <al.bailey@windriver.com>
Date: Fri, 22 Jul 2022 19:18:44 +0000
Subject: [PATCH 3/3] Create /opt/branding as part of install
---
debian/openstack-dashboard.dirs | 1 +
1 file changed, 1 insertion(+)
diff --git a/debian/openstack-dashboard.dirs b/debian/openstack-dashboard.dirs
index cb229b2..ef69f56 100644
--- a/debian/openstack-dashboard.dirs
+++ b/debian/openstack-dashboard.dirs
@@ -1,2 +1,3 @@
/etc/openstack-dashboard/local_settings.d
/usr/share/openstack-dashboard
+/opt/branding
--
2.30.2
@@ -1,3 +0,0 @@
0001-install-extra-files.patch
0002-Horizon-service-not-enabled-active-in-SM.patch
0003-Create-opt-branding-as-part-of-install.patch
@@ -1,27 +0,0 @@
From 0887c59ddffa53a8816e7a30f85fa49bdfce1881 Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Thu, 30 Apr 2020 11:45:55 -0400
Subject: [PATCH] Remove-the-hard-coded-internal-URL-for-keystone
Signed-off-by: Andy Ning <andy.ning@windriver.com>
---
openstack_dashboard/api/keystone.py | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/openstack_dashboard/api/keystone.py b/openstack_dashboard/api/keystone.py
index af3d779..e4a9ef7 100644
--- a/openstack_dashboard/api/keystone.py
+++ b/openstack_dashboard/api/keystone.py
@@ -79,7 +79,8 @@ class Service(base.APIDictWrapper):
super(Service, self).__init__(service, *args, **kwargs)
self.public_url = base.get_url_for_service(service, region,
'publicURL')
- self.url = base.get_url_for_service(service, region, 'internalURL')
+ ep_type = getattr(settings, 'OPENSTACK_ENDPOINT_TYPE', 'internalURL')
+ self.url = base.get_url_for_service(service, region, ep_type)
if self.url:
self.host = urlparse.urlparse(self.url).hostname
else:
--
1.8.3.1
@@ -1,159 +0,0 @@
From 245e2b9b2f18e316410067804c48ae63b0f20320 Mon Sep 17 00:00:00 2001
From: Takamasa Takenaka <takamasa.takenaka@windriver.com>
Date: Fri, 12 Nov 2021 12:17:19 -0500
Subject: [PATCH] Use policy_rules for user role assignment and group tabs
This patch is ported from the following patches:
- https://review.opendev.org/c/openstack/horizon/+/775014
- https://review.opendev.org/c/openstack/horizon/+/783307
Signed-off-by: Takamasa Takenaka <takamasa.takenaka@windriver.com>
---
horizon/tabs/base.py | 14 +++++++--
horizon/test/unit/tabs/test_tabs.py | 30 +++++++++++++++++--
.../dashboards/identity/users/tabs.py | 2 ++
3 files changed, 40 insertions(+), 6 deletions(-)
diff --git a/horizon/tabs/base.py b/horizon/tabs/base.py
index 67847bf10..108d35eeb 100644
--- a/horizon/tabs/base.py
+++ b/horizon/tabs/base.py
@@ -26,6 +26,7 @@ from django.utils import module_loading
from horizon import exceptions
from horizon.utils import html
+from horizon.utils import settings as utils_settings
LOG = logging.getLogger(__name__)
@@ -310,8 +311,9 @@ class Tab(html.HTMLElement):
preload = True
_active = None
permissions = []
+ policy_rules = None
- def __init__(self, tab_group, request=None):
+ def __init__(self, tab_group, request=None, policy_rules=None):
super(Tab, self).__init__()
# Priority: constructor, class-defined, fallback
if not self.name:
@@ -325,6 +327,7 @@ class Tab(html.HTMLElement):
self._allowed = self.allowed(request) and (
self._has_permissions(request))
self._enabled = self.enabled(request)
+ self.policy_rules = policy_rules or []
def __repr__(self):
return "<%s: %s>" % (self.__class__.__name__, self.slug)
@@ -442,9 +445,14 @@ class Tab(html.HTMLElement):
Tab instances can override this method to specify conditions under
which this tab should not be shown at all by returning ``False``.
-
- The default behavior is to return ``True`` for all cases.
"""
+ if not self.policy_rules:
+ return True
+
+ policy_check = utils_settings.import_setting("POLICY_CHECK_FUNCTION")
+
+ if policy_check:
+ return policy_check(self.policy_rules, request)
return True
def post(self, request, *args, **kwargs):
diff --git a/horizon/test/unit/tabs/test_tabs.py b/horizon/test/unit/tabs/test_tabs.py
index 6c50e401b..358bf77c9 100644
--- a/horizon/test/unit/tabs/test_tabs.py
+++ b/horizon/test/unit/tabs/test_tabs.py
@@ -67,9 +67,16 @@ class TabDisallowed(BaseTestTab):
return False
+class TabWithPolicy(BaseTestTab):
+ slug = "tab_with_policy"
+ name = "tab only visible to admin"
+ template_name = "_tab.html"
+ policy_rules = (("compute", "role:admin"),)
+
+
class Group(horizon_tabs.TabGroup):
slug = "tab_group"
- tabs = (TabOne, TabDelayed, TabDisabled, TabDisallowed)
+ tabs = (TabOne, TabDelayed, TabDisabled, TabDisallowed, TabWithPolicy)
sticky = True
def tabs_not_available(self):
@@ -128,15 +135,19 @@ class TabWithTableView(horizon_tabs.TabbedTableView):
class TabTests(test.TestCase):
+ @override_settings(POLICY_CHECK_FUNCTION=lambda *args: True)
def test_tab_group_basics(self):
tg = Group(self.request)
# Test tab instantiation/attachment to tab group, and get_tabs method
tabs = tg.get_tabs()
# "tab_disallowed" should NOT be in this list.
+ # "tab_with_policy" should be present, since our policy check
+ # always passes
self.assertQuerysetEqual(tabs, ['<TabOne: tab_one>',
'<TabDelayed: tab_delayed>',
- '<TabDisabled: tab_disabled>'])
+ '<TabDisabled: tab_disabled>',
+ '<TabWithPolicy: tab_with_policy>'])
# Test get_id
self.assertEqual("tab_group", tg.get_id())
# get_default_classes
@@ -151,6 +162,19 @@ class TabTests(test.TestCase):
# Test get_selected_tab is None w/o GET input
self.assertIsNone(tg.get_selected_tab())
+ @override_settings(POLICY_CHECK_FUNCTION=lambda *args: False)
+ def test_failed_tab_policy(self):
+ tg = Group(self.request)
+
+ # Test tab instantiation/attachment to tab group, and get_tabs method
+ tabs = tg.get_tabs()
+ # "tab_disallowed" should NOT be in this list, it's not allowed
+ # "tab_with_policy" should also not be present as its
+ # policy check failed
+ self.assertQuerysetEqual(tabs, ['<TabOne: tab_one>',
+ '<TabDelayed: tab_delayed>',
+ '<TabDisabled: tab_disabled>'])
+
@test.update_settings(
HORIZON_CONFIG={'extra_tabs': {
'horizon.test.unit.tabs.test_tabs.GroupWithConfig': (
@@ -253,7 +277,7 @@ class TabTests(test.TestCase):
# tab group
output = tg.render()
res = http.HttpResponse(output.strip())
- self.assertContains(res, "<li", 3)
+ self.assertContains(res, "<li", 4)
# stickiness
self.assertContains(res, 'data-sticky-tabs="sticky"', 1)
diff --git a/openstack_dashboard/dashboards/identity/users/tabs.py b/openstack_dashboard/dashboards/identity/users/tabs.py
index bd47925a5..828bc51b1 100644
--- a/openstack_dashboard/dashboards/identity/users/tabs.py
+++ b/openstack_dashboard/dashboards/identity/users/tabs.py
@@ -91,6 +91,7 @@ class RoleAssignmentsTab(tabs.TableTab):
slug = "roleassignments"
template_name = "horizon/common/_detail_table.html"
preload = False
+ policy_rules = (("identity", "identity:list_role_assignments"),)
def get_roleassignmentstable_data(self):
user = self.tab_group.kwargs['user']
@@ -137,6 +138,7 @@ class GroupsTab(tabs.TableTab):
slug = "groups"
template_name = "horizon/common/_detail_table.html"
preload = False
+ policy_rules = (("identity", "identity:list_groups"),)
def get_groupstable_data(self):
user_groups = []
--
2.29.2
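The gating logic the patch above adds to `Tab.allowed()` can be sketched on its own. This is a simplified standalone model of that check, not Horizon's actual class; the function name and argument shapes are illustrative:

```python
def tab_allowed(policy_rules, policy_check, request=None):
    """Mirror of the patched check: a tab with no policy_rules is always
    allowed; otherwise the configured POLICY_CHECK_FUNCTION decides."""
    if not policy_rules:
        return True
    if policy_check:
        return policy_check(policy_rules, request)
    # No policy check function configured: fall back to allowing the tab.
    return True

# Rules in the same (service, rule) tuple format used by the patch.
rules = (("identity", "identity:list_groups"),)
```

A permissive check function keeps every tab visible, while a failing one hides only the policy-gated tabs, which is exactly what the two unit tests in the patch exercise with `override_settings(POLICY_CHECK_FUNCTION=...)`.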

@@ -1,62 +0,0 @@
"""
Copyright (c) 2022-2023 Wind River Systems, Inc.
SPDX-License-Identifier: Apache-2.0
"""
import datetime
import fnmatch
import os
import resource
from django.conf import settings
errorlog = "/var/log/horizon/gunicorn.log"
capture_output = True
# maxrss ceiling in kbytes
MAXRSS_CEILING = 512000
def worker_abort(worker):
path = ("/proc/%s/fd") % os.getpid()
contents = os.listdir(path)
upload_dir = getattr(settings, 'FILE_UPLOAD_TEMP_DIR', '/tmp')
pattern = os.path.join(upload_dir, '*.upload')
for i in contents:
f = os.path.join(path, i)
if os.path.exists(f):
try:
link = os.readlink(f)
if fnmatch.fnmatch(link, pattern):
worker.log.info(link)
os.remove(link)
except OSError:
pass
def post_worker_init(worker):
worker.nrq = 0
worker.restart = False
def pre_request(worker, req):
worker.nrq += 1
if worker.restart:
worker.nr = worker.max_requests - 1
maxrss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
msg = "%(date)s %(uri)s %(rss)u" % ({'date': datetime.datetime.now(),
'uri': getattr(req, "uri"),
'rss': maxrss})
worker.log.info(msg)
def post_request(worker, req, environ, resp):
worker.nrq -= 1
if not worker.restart:
maxrss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
if maxrss > MAXRSS_CEILING and worker.nrq == 0:
worker.restart = True
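The hooks above implement memory-based worker recycling: `post_request` flags the worker once its peak RSS passes the ceiling and no requests are in flight, and `pre_request` then fast-forwards gunicorn's max-requests counter so the worker exits between requests. The decision itself reduces to a small predicate (a standalone sketch, not part of the original file; on Linux, `ru_maxrss` is reported in kilobytes):

```python
MAXRSS_CEILING = 512000  # kbytes, the same ceiling used in the config above

def should_recycle(maxrss_kb, in_flight_requests):
    # Recycle only between requests, once peak RSS exceeds the ceiling,
    # so no in-flight request is ever cut off mid-response.
    return maxrss_kb > MAXRSS_CEILING and in_flight_requests == 0
```

Deferring the restart until `in_flight_requests == 0` is what makes the recycle graceful: a busy worker keeps serving and only exits once it has drained.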

@@ -1,43 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
PYTHON=`which python`
MANAGE="/usr/share/openstack-dashboard/manage.py"
STATICDIR="/var/www/pages/static"
BRANDDIR="/opt/branding"
APPLIEDDIR="/opt/branding/applied"
# Handle custom horizon branding
rm -rf ${APPLIEDDIR}
if ls ${BRANDDIR}/*.tgz 1> /dev/null 2>&1; then
LATESTBRANDING=$(ls $BRANDDIR |grep '\.tgz$' | tail -n 1)
mkdir -p ${APPLIEDDIR}
tar zxf ${BRANDDIR}/${LATESTBRANDING} -C ${APPLIEDDIR} 2>/dev/null 1>/dev/null
RETVAL=$?
if [ $RETVAL -ne 0 ]; then
echo "Failed to extract ${BRANDDIR}/${LATESTBRANDING}"
fi
fi
echo "Dumping static assets"
if [ -d ${STATICDIR} ]; then
COLLECTARGS=--clear
fi
${PYTHON} -- ${MANAGE} collectstatic -v0 --noinput ${COLLECTARGS}
RETVAL=$?
if [ $RETVAL -ne 0 ]; then
echo "Failed to dump static assets."
exit $RETVAL
fi
nice -n 20 ionice -c Idle ${PYTHON} -- ${MANAGE} compress -v0
RETVAL=$?
if [ $RETVAL -ne 0 ]; then
echo "Failed to compress assets."
exit $RETVAL
fi

@@ -1,3 +0,0 @@
#!/bin/bash
/usr/bin/nice -n 2 /usr/bin/python /usr/share/openstack-dashboard/manage.py clearsessions

@@ -1,80 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
#
# The patching subsystem provides a patch-functions bash source file
# with useful function and variable definitions.
#
. /etc/patching/patch-functions
#
# We can now check to see what type of node we're on, if it's locked, etc,
# and act accordingly
#
#
# Declare an overall script return code
#
declare -i GLOBAL_RC=$PATCH_STATUS_OK
#
# handle restarting horizon.
#
if is_controller
then
# Horizon only runs on the controller
if [ ! -f $PATCH_FLAGDIR/horizon.restarted ]
then
# Check SM to see if Horizon is running
sm-query service horizon | grep -q 'enabled-active'
if [ $? -eq 0 ]
then
loginfo "$0: Logging out all horizon sessions"
# Remove sessions
rm -f /var/tmp/sessionid*
loginfo "$0: Restarting horizon"
# Ask SM to restart Horizon
sm-restart service horizon
touch $PATCH_FLAGDIR/horizon.restarted
# Wait up to 30 seconds for service to recover
let -i UNTIL=$SECONDS+30
while [ $UNTIL -ge $SECONDS ]
do
# Check to see if it's running
sm-query service horizon | grep -q 'enabled-active'
if [ $? -eq 0 ]
then
break
fi
# Still not running? Let's wait 5 seconds and check again
sleep 5
done
sm-query service horizon | grep -q 'enabled-active'
if [ $? -ne 0 ]
then
# Still not running! Clear the flag and mark the RC as failed
loginfo "$0: Failed to restart horizon"
rm -f $PATCH_FLAGDIR/horizon.restarted
GLOBAL_RC=$PATCH_STATUS_FAILED
sm-query service horizon
fi
fi
fi
fi
#
# Exit the script with the overall return code
#
exit $GLOBAL_RC

@@ -1,155 +0,0 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides: OpenStack Dashboard
# Required-Start: networking
# Required-Stop: networking
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: OpenStack Dashboard
# Description: Web based user interface to OpenStack services including
# Nova, Swift, Keystone, etc.
### END INIT INFO
RETVAL=0
DESC="openstack-dashboard"
PIDFILE="/var/run/$DESC.pid"
PYTHON=`which python`
# Centos packages openstack_dashboard under /usr/share
#MANAGE="@PYTHON_SITEPACKAGES@/openstack_dashboard/manage.py"
MANAGE="/usr/share/openstack-dashboard/manage.py"
EXEC="/usr/bin/gunicorn"
BIND="localhost"
PORT="8008"
WORKER="eventlet"
WORKERS=`grep workers /etc/openstack-dashboard/horizon-config.ini | cut -f3 -d' '`
# Increased timeout to facilitate large image uploads
TIMEOUT="200"
STATICDIR="/var/www/pages/static"
BRANDDIR="/opt/branding"
APPLIEDDIR="/opt/branding/applied"
TMPUPLOADDIR="/scratch/horizon"
source /usr/bin/tsconfig
start()
{
# Change workers if combined controller/compute
. /etc/platform/platform.conf
if [ "${WORKERS}" -lt "2" ]; then
WORKERS=2
fi
if [ -e $PIDFILE ]; then
PIDDIR=/proc/$(cat $PIDFILE)
if [ -d ${PIDDIR} ]; then
echo "$DESC already running."
return
else
echo "Removing stale PID file $PIDFILE"
rm -f $PIDFILE
fi
fi
# Clean up any possible orphaned worker threads
if lsof -t -i:${PORT} 1> /dev/null 2>&1; then
kill $(lsof -t -i:${PORT}) > /dev/null 2>&1
fi
rm -rf ${TMPUPLOADDIR}
mkdir -p ${TMPUPLOADDIR}
# extract branding file before server starts
/usr/bin/horizon-assets-compress
echo -n "Starting $DESC..."
start-stop-daemon --start --quiet --background --pidfile ${PIDFILE} \
--make-pidfile --exec ${PYTHON} -- ${EXEC} --preload --bind ${BIND}:${PORT} \
--worker-class ${WORKER} --workers ${WORKERS} --timeout ${TIMEOUT} \
--log-syslog \
--config '/usr/share/openstack-dashboard/guni_config.py' \
--pythonpath '/usr/share/openstack-dashboard' \
openstack_dashboard.wsgi
RETVAL=$?
if [ $RETVAL -eq 0 ]; then
echo "done."
else
echo "failed."
fi
# now copy customer branding file to CONFIG_PATH/branding if anything updated
sm-query service drbd-platform | grep enabled-active > /dev/null 2>&1
IS_ACTIVE=$?
if ls ${BRANDDIR}/*.tgz 1> /dev/null 2>&1; then
LATESTBRANDING=$(ls $BRANDDIR |grep '\.tgz$' | tail -n 1)
if [ $IS_ACTIVE -eq 0 ]; then
# Only do the copy if the tarball has changed
if ! cmp --silent ${BRANDDIR}/${LATESTBRANDING} ${CONFIG_PATH}/branding/${LATESTBRANDING} ; then
mkdir -p ${CONFIG_PATH}/branding
rm -rf ${CONFIG_PATH}/branding/*.tgz
cp -r ${BRANDDIR}/${LATESTBRANDING} ${CONFIG_PATH}/branding
fi
fi
fi
# As part of starting horizon we should kill containerized horizon so that it
# will pickup branding changes
kubectl --kubeconfig=/etc/kubernetes/admin.conf delete pods -n openstack -l application=horizon 1>/dev/null
}
stop()
{
if [ ! -e $PIDFILE ]; then return; fi
echo -n "Stopping $DESC..."
start-stop-daemon --stop --quiet --pidfile $PIDFILE
RETVAL=$?
if [ $RETVAL -eq 0 ]; then
echo "done."
else
echo "failed."
fi
rm -rf ${TMPUPLOADDIR}
rm -f $PIDFILE
}
status()
{
pid=`cat $PIDFILE 2>/dev/null`
if [ -n "$pid" ]; then
if ps -p $pid &> /dev/null ; then
echo "$DESC is running"
RETVAL=0
return
else
RETVAL=1
fi
fi
echo "$DESC is not running"
RETVAL=3
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart|force-reload|reload)
stop
start
;;
status)
status
;;
*)
echo "Usage: $0 {start|stop|force-reload|restart|reload|status}"
RETVAL=1
;;
esac
exit $RETVAL

@@ -1,13 +0,0 @@
/var/log/horizon.log {
nodateext
size 10M
start 1
rotate 20
missingok
notifempty
compress
sharedscripts
postrotate
/etc/init.d/syslog reload > /dev/null 2>&1 || true
endscript
}

@@ -1,19 +0,0 @@
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
WSGISocketPrefix run/wsgi
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /dashboard/static /usr/share/openstack-dashboard/static
<Directory /usr/share/openstack-dashboard/openstack_dashboard/wsgi>
Options All
AllowOverride All
Require all granted
</Directory>
<Directory /usr/share/openstack-dashboard/static>
Options All
AllowOverride All
Require all granted
</Directory>

@@ -1,32 +0,0 @@
# if you want logging to a separate file, please update your config
# according to the last 4 lines in this snippet, and also take care
# to introduce a <VirtualHost > directive.
#
WSGISocketPrefix run/wsgi
<VirtualHost *:80>
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
Alias /static /usr/share/openstack-dashboard/static
WSGIDaemonProcess dashboard
WSGIProcessGroup dashboard
#DocumentRoot %HORIZON_DIR%/.blackhole/
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /usr/share/openstack-dashboard/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
ErrorLog logs/openstack_dashboard_error.log
LogLevel warn
CustomLog logs/openstack_dashboard_access.log combined
</VirtualHost>

@@ -1,8 +0,0 @@
/var/log/horizon/*.log {
weekly
rotate 4
missingok
compress
minsize 100k
}

@@ -1,3 +0,0 @@
[Service]
ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py collectstatic --noinput --clear
ExecStartPre=/usr/bin/python /usr/share/openstack-dashboard/manage.py compress --force

@@ -1,23 +0,0 @@
---
debname: horizon
debver: 18.6.2-5
dl_path:
name: horizon-debian-18.6.2-5.tar.gz
url: https://salsa.debian.org/openstack-team/services/horizon/-/archive/debian/18.6.2-5/horizon-debian-18.6.2-5.tar.gz
md5sum: 9c41bd3d52c5d5466e622ef8014da0fa
src_files:
- debian/files/guni_config.py
- debian/files/horizon-assets-compress
- debian/files/horizon-clearsessions
- debian/files/horizon.init
- debian/files/horizon.logrotate
- debian/files/horizon-patching-restart
- debian/files/openstack-dashboard-httpd-2.4.conf
- debian/files/openstack-dashboard-httpd-logging.conf
- debian/files/python-django-horizon-logrotate.conf
- debian/files/python-django-horizon-systemd.conf
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-horizon

@@ -1,160 +0,0 @@
From 218ede4b67d7a19c8ddb5e39d68de402523c05e5 Mon Sep 17 00:00:00 2001
From: Takamasa Takenaka <takamasa.takenaka@windriver.com>
Date: Mon, 6 Dec 2021 16:04:00 -0300
Subject: [PATCH] Use policy_rules for user role assignment and group tabs
This patch is ported from the following patches:
- https://review.opendev.org/c/openstack/horizon/+/775014
- https://review.opendev.org/c/openstack/horizon/+/783307
Signed-off-by: Takamasa Takenaka <takamasa.takenaka@windriver.com>
---
horizon/tabs/base.py | 15 ++++++++--
horizon/test/unit/tabs/test_tabs.py | 30 +++++++++++++++++--
.../dashboards/identity/users/tabs.py | 2 ++
3 files changed, 41 insertions(+), 6 deletions(-)
diff --git a/horizon/tabs/base.py b/horizon/tabs/base.py
index 5ef7fdd..9a511f0 100644
--- a/horizon/tabs/base.py
+++ b/horizon/tabs/base.py
@@ -23,6 +23,7 @@ from django.utils import module_loading
from horizon import exceptions
from horizon.utils import html
+from horizon.utils import settings as utils_settings
LOG = logging.getLogger(__name__)
@@ -307,8 +308,9 @@ class Tab(html.HTMLElement):
preload = True
_active = None
permissions = []
+ policy_rules = None
- def __init__(self, tab_group, request=None):
+ def __init__(self, tab_group, request=None, policy_rules=None):
super(Tab, self).__init__()
# Priority: constructor, class-defined, fallback
if not self.name:
@@ -321,6 +323,7 @@ class Tab(html.HTMLElement):
self._allowed = self.allowed(request) and (
self._has_permissions(request))
self._enabled = self.enabled(request)
+ self.policy_rules = policy_rules or []
def __repr__(self):
return "<%s: %s>" % (self.__class__.__name__, self.slug)
@@ -437,9 +440,15 @@ class Tab(html.HTMLElement):
Tab instances can override this method to specify conditions under
which this tab should not be shown at all by returning ``False``.
-
- The default behavior is to return ``True`` for all cases.
"""
+ if not self.policy_rules:
+ return True
+
+ policy_check = utils_settings.import_setting("POLICY_CHECK_FUNCTION")
+
+ if policy_check:
+ return policy_check(self.policy_rules, request)
+
return True
def post(self, request, *args, **kwargs):
diff --git a/horizon/test/unit/tabs/test_tabs.py b/horizon/test/unit/tabs/test_tabs.py
index 2f009e8..c499301 100644
--- a/horizon/test/unit/tabs/test_tabs.py
+++ b/horizon/test/unit/tabs/test_tabs.py
@@ -65,9 +65,16 @@ class TabDisallowed(BaseTestTab):
return False
+class TabWithPolicy(BaseTestTab):
+ slug = "tab_with_policy"
+ name = "tab only visible to admin"
+ template_name = "_tab.html"
+ policy_rules = (("compute", "role:admin"),)
+
+
class Group(horizon_tabs.TabGroup):
slug = "tab_group"
- tabs = (TabOne, TabDelayed, TabDisabled, TabDisallowed)
+ tabs = (TabOne, TabDelayed, TabDisabled, TabDisallowed, TabWithPolicy)
sticky = True
def tabs_not_available(self):
@@ -126,15 +133,19 @@ class TabWithTableView(horizon_tabs.TabbedTableView):
class TabTests(test.TestCase):
+ @override_settings(POLICY_CHECK_FUNCTION=lambda *args: True)
def test_tab_group_basics(self):
tg = Group(self.request)
# Test tab instantiation/attachment to tab group, and get_tabs method
tabs = tg.get_tabs()
# "tab_disallowed" should NOT be in this list.
+ # "tab_with_policy" should be present, since our policy check
+ # always passes
self.assertQuerysetEqual(tabs, ['<TabOne: tab_one>',
'<TabDelayed: tab_delayed>',
- '<TabDisabled: tab_disabled>'])
+ '<TabDisabled: tab_disabled>',
+ '<TabWithPolicy: tab_with_policy>'])
# Test get_id
self.assertEqual("tab_group", tg.get_id())
# get_default_classes
@@ -149,6 +160,19 @@ class TabTests(test.TestCase):
# Test get_selected_tab is None w/o GET input
self.assertIsNone(tg.get_selected_tab())
+ @override_settings(POLICY_CHECK_FUNCTION=lambda *args: False)
+ def test_failed_tab_policy(self):
+ tg = Group(self.request)
+
+ # Test tab instantiation/attachment to tab group, and get_tabs method
+ tabs = tg.get_tabs()
+ # "tab_disallowed" should NOT be in this list, it's not allowed
+ # "tab_with_policy" should also not be present as its
+ # policy check failed
+ self.assertQuerysetEqual(tabs, ['<TabOne: tab_one>',
+ '<TabDelayed: tab_delayed>',
+ '<TabDisabled: tab_disabled>'])
+
@test.update_settings(
HORIZON_CONFIG={'extra_tabs': {
'horizon.test.unit.tabs.test_tabs.GroupWithConfig': (
@@ -251,7 +275,7 @@ class TabTests(test.TestCase):
# tab group
output = tg.render()
res = http.HttpResponse(output.strip())
- self.assertContains(res, "<li", 3)
+ self.assertContains(res, "<li", 4)
# stickiness
self.assertContains(res, 'data-sticky-tabs="sticky"', 1)
diff --git a/openstack_dashboard/dashboards/identity/users/tabs.py b/openstack_dashboard/dashboards/identity/users/tabs.py
index fe8fa3b..7427519 100644
--- a/openstack_dashboard/dashboards/identity/users/tabs.py
+++ b/openstack_dashboard/dashboards/identity/users/tabs.py
@@ -89,6 +89,7 @@ class RoleAssignmentsTab(tabs.TableTab):
slug = "roleassignments"
template_name = "horizon/common/_detail_table.html"
preload = False
+ policy_rules = (("identity", "identity:list_role_assignments"),)
def allowed(self, request):
return policy.check((("identity", "identity:list_role_assignments"),),
@@ -139,6 +140,7 @@ class GroupsTab(tabs.TableTab):
slug = "groups"
template_name = "horizon/common/_detail_table.html"
preload = False
+ policy_rules = (("identity", "identity:list_groups"),)
def allowed(self, request):
return policy.check((("identity", "identity:list_groups"),),
--
2.25.1

@@ -1,69 +0,0 @@
From 25b0db5778a811c323e07958e03f33847eb7748d Mon Sep 17 00:00:00 2001
From: Enzo Candotti <enzo.candotti@windriver.com>
Date: Mon, 2 Jan 2023 13:24:07 -0300
Subject: [PATCH] Fix incomplete pop-up message on delete Action
When an Action table is created with a 'danger' action_type and a
single handler method for a single object, the 'selection' and
'help' parameters are empty. This causes the pop-up message to be
incomplete.
This patch fixes this behavior by displaying the message with
the selected objects only when one or more objects are selected.
Otherwise, it only asks for confirmation.
Signed-off-by: Enzo Candotti <enzo.candotti@windriver.com>
---
horizon/static/horizon/js/horizon.tables.js | 21 +++++++++++++------
.../horizon/client_side/_confirm.html | 2 +-
2 files changed, 16 insertions(+), 7 deletions(-)
diff --git a/horizon/static/horizon/js/horizon.tables.js b/horizon/static/horizon/js/horizon.tables.js
index 5f42784..b416f19 100644
--- a/horizon/static/horizon/js/horizon.tables.js
+++ b/horizon/static/horizon/js/horizon.tables.js
@@ -309,13 +309,22 @@ horizon.datatables.confirm = function(action) {
var title = interpolate(gettext("Confirm %s"), [action_string]);
// compose the action string using a template that can be overridden
- var template = horizon.templates.compiled_templates["#confirm_modal"],
- params = {
- selection: name_string,
- selection_list: name_array,
- help: help_text
- };
+ var template = horizon.templates.compiled_templates["#confirm_modal"]
+ if (name_string == "") {
+ params = {
+ selection_list: name_array,
+ help: 'This action cannot be undone.'
+ };
+
+ }
+ else {
+ params = {
+ selection: 'You have selected: ' + name_string + '.',
+ selection_list: name_array,
+ help: help_text
+ };
+ }
var body;
try {
body = $(template.render(params)).html();
diff --git a/horizon/templates/horizon/client_side/_confirm.html b/horizon/templates/horizon/client_side/_confirm.html
index f6642dd..31451f0 100644
--- a/horizon/templates/horizon/client_side/_confirm.html
+++ b/horizon/templates/horizon/client_side/_confirm.html
@@ -6,7 +6,7 @@
{% block template %}{% spaceless %}{% jstemplate %}
<div class="confirm-wrapper">
<span class="confirm-list" style="word-wrap: break-word; word-break: normal;">
- {% blocktrans %}You have selected: [[selection]]. {% endblocktrans %}
+ {% blocktrans %} [[selection]] {% endblocktrans %}
</span>
<span class="confirm-text">{% trans 'Please confirm your selection.'%} </span>
<span class="confirm-help">[[help]]</span>
--
2.25.1
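The branch this patch introduces in `horizon.tables.js` picks one of two parameter sets for the confirmation template. Modeled in Python for clarity (a sketch of the JavaScript logic above, not code from the repo):

```python
def confirm_params(name_string, name_array, help_text):
    # No named selection (single-object danger action): generic
    # confirmation with a fixed warning, so the pop-up is never incomplete.
    if not name_string:
        return {"selection_list": name_array,
                "help": "This action cannot be undone."}
    # One or more objects selected: list them and keep the action's
    # own help text, matching the pre-patch behavior.
    return {"selection": "You have selected: %s." % name_string,
            "selection_list": name_array,
            "help": help_text}
```

The template change that accompanies it moves the "You have selected: ..." wording into the `selection` parameter itself, so the empty-selection case renders no dangling sentence fragment.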

@@ -1,2 +0,0 @@
0001-Use-policy_rules-for-user-role-assignment-and-group-tabs.patch
0002-Fix-incomplete-pop-up-message-on-delete-Action.patch

@@ -1,8 +0,0 @@
This repo is for https://opendev.org/openstack/osc-lib
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -1,12 +0,0 @@
---
debname: python-osc-lib
debver: 2.2.1-2
dl_path:
name: python-osc-lib-2.2.1-2.tar.gz
url: https://salsa.debian.org/openstack-team/libs/python-osc-lib/-/archive/debian/2.2.1-2/python-osc-lib-debian-2.2.1-2.tar.gz
md5sum: b72d416dd21b369d89c1dc3f8de42705
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-osc-lib

@@ -1,44 +0,0 @@
From 76f568a6d94e798d47d044b2abde8b4a3884657e Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Mon, 4 Oct 2021 23:15:54 -0300
Subject: [PATCH] CGTS-7947: add --os-keystone-region-name option to openstack
The new option only apply to identity client.
---
osc_lib/clientmanager.py | 1 +
osc_lib/shell.py | 7 +++++++
2 files changed, 8 insertions(+)
diff --git a/osc_lib/clientmanager.py b/osc_lib/clientmanager.py
index 2990c27..38d84c1 100644
--- a/osc_lib/clientmanager.py
+++ b/osc_lib/clientmanager.py
@@ -88,6 +88,7 @@ class ClientManager(object):
self._app_name = app_name
self._app_version = app_version
self.region_name = self._cli_options.region_name
+ self.keystone_region_name = self._cli_options.keystone_region_name
self.interface = self._cli_options.interface
self.timing = self._cli_options.timing
diff --git a/osc_lib/shell.py b/osc_lib/shell.py
index 27c3a57..c2a504a 100644
--- a/osc_lib/shell.py
+++ b/osc_lib/shell.py
@@ -205,6 +205,13 @@ class OpenStackShell(app.App):
default=utils.env('OS_REGION_NAME'),
help=_('Authentication region name (Env: OS_REGION_NAME)'),
)
+ parser.add_argument(
+ '--os-keystone-region-name',
+ metavar='<keystone-region-name>',
+ dest='keystone_region_name',
+ default=utils.env('OS_KEYSTONE_REGION_NAME'),
+ help=_('Keystone Authentication region name (Env: OS_KEYSTONE_REGION_NAME)'),
+ )
parser.add_argument(
'--os-cacert',
metavar='<ca-bundle-file>',
--
2.17.1

@@ -1 +0,0 @@
0001-CGTS-7947-add-os-keystone-region-name-option-to-open.patch

@@ -1,8 +0,0 @@
This repo is for https://opendev.org/openstack/oslo.messaging
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -1,12 +0,0 @@
---
debname: python-oslo.messaging
debver: 12.5.2-1
dl_path:
name: python-oslo.messaging-debian-12.5.2-1.tar.gz
url: https://salsa.debian.org/openstack-team/oslo/python-oslo.messaging/-/archive/debian/12.5.2-1/python-oslo.messaging-debian-12.5.2-1.tar.gz
md5sum: 7e2835c989b5d148288fc713d8c8735d
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-oslo-messaging

@@ -1,35 +0,0 @@
From 43f4d70ad206aa1e6b8a1f7fd814dae8de515296 Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Mon, 4 Oct 2021 11:36:08 -0300
Subject: [PATCH] rabbit: increase heartbeat rate to decrease poll interval
The poll_timeout is tied to the heartbeat_rate value when the
heartbeat_timeout_threshold is non-zero. It works out to be:
threshold / rate / 2
Therefore the default is 60 / 2 / 2 = 15. This causes the recv() to block for
up to 15 seconds unless there are incoming RPC messages. This is problematic
for graceful shutdown of services as the stop() request may block if the recv()
is blocked. To ensure that the recv() does not block for a long time we are
reducing the interval by controlling the rate.
---
oslo_messaging/_drivers/impl_rabbit.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/oslo_messaging/_drivers/impl_rabbit.py b/oslo_messaging/_drivers/impl_rabbit.py
index 45a49645..bda6f0f9 100644
--- a/oslo_messaging/_drivers/impl_rabbit.py
+++ b/oslo_messaging/_drivers/impl_rabbit.py
@@ -164,7 +164,7 @@ rabbit_opts = [
"considered down if heartbeat's keep-alive fails "
"(0 disables heartbeat)."),
cfg.IntOpt('heartbeat_rate',
- default=2,
+ default=10,
help='How often times during the heartbeat_timeout_threshold '
'we check the heartbeat.'),
cfg.BoolOpt('direct_mandatory_flag',
--
2.17.1
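The poll-interval arithmetic described in the patch message (`threshold / rate / 2`) is easy to check with the values involved:

```python
def poll_timeout(heartbeat_timeout_threshold, heartbeat_rate):
    # Worst-case recv() block, per the formula quoted in the patch message.
    return heartbeat_timeout_threshold / heartbeat_rate / 2

old = poll_timeout(60, 2)   # upstream default heartbeat_rate
new = poll_timeout(60, 10)  # heartbeat_rate after this patch
```

Raising `heartbeat_rate` from 2 to 10 therefore shrinks the worst-case `recv()` block from 15 s to 3 s, which is what makes the graceful-shutdown path responsive.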

@@ -1,22 +0,0 @@
From 60fb7ad65595700e133e41099f9f031159540da4 Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Mon, 4 Oct 2021 22:32:03 -0300
Subject: [PATCH 2/2] Disable tests
---
.stestr.conf | 3 ---
1 file changed, 3 deletions(-)
delete mode 100644 .stestr.conf
diff --git a/.stestr.conf b/.stestr.conf
deleted file mode 100644
index b0e6e1ad..00000000
--- a/.stestr.conf
+++ /dev/null
@@ -1,3 +0,0 @@
-[DEFAULT]
-test_path=./oslo_messaging/tests
-top_path=./
--
2.17.1

@@ -1,2 +0,0 @@
0001-rabbit-increase-heartbeat-rate-to-decrease-poll-inte.patch
0002-Disable-tests.patch

@@ -1,8 +0,0 @@
This repo is for https://opendev.org/openstack/python-pankoclient
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -1,56 +0,0 @@
From baa05e47ddc62bed291680d9ee37d37bb8bc594c Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Wed, 27 Oct 2021 14:29:42 +0000
Subject: [PATCH] Add wheel package
Add python3-pankclient-wheel.
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 19 +++++++++++++++++++
debian/rules | 2 +-
2 files changed, 20 insertions(+), 1 deletion(-)
diff --git a/debian/control b/debian/control
index 007fc55..24470c7 100644
--- a/debian/control
+++ b/debian/control
@@ -83,3 +83,22 @@ Description: Client library for OpenStack panko server - Python 3.x
command-line tool (openstack event).
.
This package provides the Python 3.x module.
+
+Package: python3-pankoclient-wheel
+Architecture: all
+Depends:
+ python3-wheel,
+ ${misc:Depends},
+ ${python3:Depends},
+Description: Client library for OpenStack panko server - Python 3.x
+ The Panko project is an event storage service that provides the ability to
+ store and querying event data generated by Ceilometer with potentially other
+ sources.
+ .
+ Panko is a component of the Telemetry project.
+ .
+ This is a client library for Panko built on the Panko API. It provides a
+ Python API (the pankoclient module) and a OSC (the openstackclient CLI)
+ command-line tool (openstack event).
+ .
+ This package contains the Python wheel.
diff --git a/debian/rules b/debian/rules
index 612ce0e..d770e0f 100755
--- a/debian/rules
+++ b/debian/rules
@@ -12,7 +12,7 @@ override_dh_auto_build:
echo "Do nothing..."
override_dh_auto_install:
- pkgos-dh_auto_install --no-py2
+ pkgos-dh_auto_install --no-py2 --wheel
override_dh_auto_test:
ifeq (,$(findstring nocheck, $(DEB_BUILD_OPTIONS)))
--
2.30.2

@@ -1,29 +0,0 @@
From 88f1f5f98b92555db1ccdd92f26b57dcd678636f Mon Sep 17 00:00:00 2001
From: Charles Short <charles.short@windriver.com>
Date: Mon, 29 Nov 2021 20:55:20 +0000
Subject: [PATCH] Remove python-openstackclient
Remove build-Depends-Indep for python-openstackclient as it is
not being used and it is causing problems with the build-pkgs
tool.
Signed-off-by: Charles Short <charles.short@windriver.com>
---
debian/control | 1 -
1 file changed, 1 deletion(-)
diff --git a/debian/control b/debian/control
index 0647124..4d3dda6 100644
--- a/debian/control
+++ b/debian/control
@@ -17,7 +17,6 @@ Build-Depends-Indep:
python3-coverage,
python3-hacking,
python3-keystoneauth1,
- python3-openstackclient,
python3-openstackdocstheme,
python3-osc-lib,
python3-oslo.i18n,
--
2.30.2

@@ -1,2 +0,0 @@
0001-Add-wheel-package.patch
remove-openstackclient.patch

@@ -1,12 +0,0 @@
---
debname: python-pankoclient
debver: 1.1.0-2
dl_path:
name: python-pankoclient-debian-1.1.0-2.tar.gz
url: https://salsa.debian.org/openstack-team/clients/python-pankoclient/-/archive/debian/1.1.0-2/python-pankoclient-debian-1.1.0-2.tar.gz
md5sum: 4b623a6b3ad649b29e05fc83f6f03762
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-pankoclient

@@ -1,8 +0,0 @@
This repo is for https://salsa.debian.org/openstack-team/python/python-wsme
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -1,12 +0,0 @@
---
debname: python-wsme
debver: 0.10.0-3
dl_path:
name: python-wsme-debian-0.10.0-3.tar.gz
url: https://salsa.debian.org/openstack-team/python/python-wsme/-/archive/debian/0.10.0-3/python-wsme-debian-0.10.0-3.tar.gz
md5sum: 3c859550514ccde770f371a02f8fdf22
revision:
dist: $STX_DIST
GITREVCOUNT:
BASE_SRCREV: 27acda9a6b4885a50064cebc0858892e71aa37ce
SRC_DIR: ${MY_REPO}/stx/openstack-armada-app/upstream/openstack/python-wsme

@@ -1,41 +0,0 @@
From aa17c7c08c024e3a5b810c269e0c956a9e7d95de Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Thu, 30 Sep 2021 15:13:38 -0300
Subject: [PATCH] change ClientSideError logging verbosity
Regression introduced in 16.10. Reverts the following
upstream commit since WSME is used by SysInv-api to return ClientSideErrors,
and in the case of CLI commands, no log history for such errors would be
available.
Reverting commit 94cd1751c7b028898a38fda0689cfce15e2a96e2
Author: Chris Dent <chdent@redhat.com>
Date: Thu Apr 9 14:04:32 2015 +0100
Change client-side error logging to debug
A client-side error (that is something akin to a 4xx HTTP response
code) is something that is common, it is not something that should
cause WARNING level log messages. This change switches to using
DEBUG so that it is easier to filter out the noisy messages.
---
wsme/api.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/wsme/api.py b/wsme/api.py
index b25360a..fffba02 100644
--- a/wsme/api.py
+++ b/wsme/api.py
@@ -220,7 +220,7 @@ def format_exception(excinfo, debug=False):
faultcode = getattr(error, 'faultcode', 'Client')
r = dict(faultcode=faultcode,
faultstring=faultstring)
- log.debug("Client-side error: %s" % r['faultstring'])
+ log.warning("Client-side error: %s" % r['faultstring'])
r['debuginfo'] = None
return r
else:
--
2.17.1

@@ -1 +0,0 @@
0001-change-ClientSideError-logging-verbosity.patch

@@ -1,8 +0,0 @@
This repo is for https://salsa.debian.org/openstack-team/third-party/rabbitmq-server
Changes to this repo are needed for StarlingX and those changes are
not yet merged.
Rather than clone and diverge the repo, the repo is extracted at a particular
git SHA, and patches are applied on top.
As those patches are merged, the SHA can be updated and the local patches removed.

@@ -1,27 +0,0 @@
From d684a3b6c57273a78e64c77798c6f6f9eb606862 Mon Sep 17 00:00:00 2001
From: Fabricio Henrique Ramos <fabriciohenrique.ramos@windriver.com>
Date: Mon, 27 Sep 2021 11:24:06 -0300
Subject: [PATCH] WRS: Allow-rabbitmqctl-to-run-as-root-and-set-root-home.patch
---
debian/rabbitmq-script-wrapper | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/debian/rabbitmq-script-wrapper b/debian/rabbitmq-script-wrapper
index a622ae2..418d4a1 100755
--- a/debian/rabbitmq-script-wrapper
+++ b/debian/rabbitmq-script-wrapper
@@ -37,7 +37,9 @@ elif [ `id -u` = `id -u rabbitmq` -o "$SCRIPT" = "rabbitmq-plugins" ] ; then
fi
/usr/lib/rabbitmq/bin/${SCRIPT} "$@"
elif [ `id -u` = 0 ] ; then
- su rabbitmq -s /bin/sh -c "/usr/lib/rabbitmq/bin/${SCRIPT} ${CMDLINE}"
+ # WRS. Allow to run as root
+ export HOME=${HOME:-/root}
+ /bin/sh -c "/usr/lib/rabbitmq/bin/${SCRIPT} ${CMDLINE}"
else
/usr/lib/rabbitmq/bin/${SCRIPT}
echo
--
2.17.1

@@ -1 +0,0 @@
0001-WRS-Allow-rabbitmqctl-to-run-as-root-and-set-root-ho.patch

Some files were not shown because too many files have changed in this diff.