CVE-2022-23302, CVE-2022-23305, and CVE-2022-23307.
Though it does not contain a vulnerable configuration of log4j, to avoid
needing to prove that, and to avoid false positives from security
scanners, this commit is the result of running the following commands:
zip -q -d monasca_agent/collector/checks/libs/jmxfetch-0.3.0-jar-with-dependencies.jar org/apache/logging/log4j/core/lookup/JndiLookup.class org/apache/log4j/net/JMSAppender.class org/apache/log4j/jdbc/JDBCAppender.class org/apache/log4j/net/JMSSink.class org/apache/log4j/chainsaw"*"
unzip monasca_agent/collector/checks/libs/jmxterm-1.0-DATADOG-uber.jar WORLDS-INF/lib/log4j.jar
zip -q -d WORLDS-INF/lib/log4j.jar org/apache/logging/log4j/core/lookup/JndiLookup.class org/apache/log4j/net/JMSAppender.class org/apache/log4j/jdbc/JDBCAppender.class org/apache/log4j/net/JMSSink.class org/apache/log4j/chainsaw"*"
zip monasca_agent/collector/checks/libs/jmxterm-1.0-DATADOG-uber.jar WORLDS-INF/lib/log4j.jar
Change-Id: Id47ba9397e7fef1ac8622abb2a1691a260f4bc9c
This reverts commit 9ff337881b.
Reason for revert:
The oslo_utils.fnmatch module was added to solve an issue in py2.7, but
it is no longer required because py2.7 is no longer supported.
The module has been deprecated since oslo.utils 4.9.1[1] and the
stdlib's fnmatch module should be used instead.
[1] 4c893c92f551c9dd2a7cfbe7ae8171ad8139df0b
Change-Id: I84c0d2b2705e34b9853d42d03a398ecbe4f95330
With switching the SOAP library backing oslo.vmware [1], the internal
representation of ManagedObjectReference's attributes changes. To be able
to make the switch without interruption, we introduced helper functions
in oslo.vmware. This commit uses one of those - get_moref_value()
- to make the access to the "value" attribute compatible with both
backing libraries.
[1] https://specs.openstack.org/openstack/oslo-specs/specs/victoria/oslo-vmware-soap-library-switch.html
Change-Id: I50f0aa5d8865323515d15d1c1c5f10683bbac090
- In PY3 the output of subprocess.Popen is of type bytes, but the
strip() method requires type str.
Story: 2008017
Task: 40669
Change-Id: I297c30044df9a94baa645a1a27de10bb49038440
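A minimal sketch of the kind of fix described above (the helper name is illustrative, not taken from the actual patch):

```python
import subprocess
import sys

def run_and_strip(cmd):
    """Run a command and return its stripped stdout as str.

    Under Python 3, Popen returns bytes by default, so decode
    before treating the output as str. (Sketch only; the real
    patch lives in the monasca-agent source.)
    """
    out = subprocess.Popen(cmd, stdout=subprocess.PIPE).communicate()[0]
    if isinstance(out, bytes):
        out = out.decode('utf-8')
    return out.strip()

# Example: run a trivial Python one-liner and strip its output
print(run_and_strip([sys.executable, '-c', "print('  hello  ')"]))
```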
The 'libvirt_type' and 'libvirt_uri' options are currently set by oslo configuration library.
However, in 'monasca_agent/collector/virt/libvirt/inspector.py' file these options cannot be
provided by a configuration file. Changing this to retrieve both options from
'conf.d/libvirt.yaml' file.
Change-Id: I1918fda471e951f42db0d302e371108b664e936c
The imp module is deprecated[1] since version 3.4; use importlib
instead.
[1]: https://docs.python.org/3/library/imp.html
Change-Id: I10f2c8c165aebddc8fd39601a0a23231ff89cdf7
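For reference, the usual importlib replacement for the deprecated imp.load_module() looks roughly like this (function name is illustrative, not from the patch):

```python
import importlib.util

def load_module_from_path(name, path):
    """Load a module from a file path, replacing imp.load_module()."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module
```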
Add an error handler that prevents a crash of the forwarder
when the agent is not able to connect to keystone.
Story: 2007674
Task: 39781
Change-Id: If6366e5b94f9cbe3f21ce9dbeb26d28e3a36ae88
When running with Py3 we compare a byte string to a unicode string
when parsing StatsD metrics. This patch adds some unit tests to
reproduce the bug and decodes the bytestring to make the existing
comparisons valid under Py3. When backporting to Train we can use
Oslo encodeutils. Clearly we could have more unit tests, but
this makes a start.
Change-Id: I6341f96f5c186428d2d829cabf618a6f84f40ce2
Story: 2007684
Task: 39796
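The decode-before-compare pattern described above can be sketched as follows (names are illustrative, not from the actual patch):

```python
def parse_statsd_packet(packet):
    """Parse a single StatsD metric line, e.g. b'page.views:1|c'.

    Under Py3 the UDP socket yields bytes, so decode first to keep
    the existing str comparisons in the parser valid.
    """
    if isinstance(packet, bytes):
        packet = packet.decode('utf-8')
    name, rest = packet.split(':', 1)
    value, metric_type = rest.split('|', 1)
    return name, float(value), metric_type
```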
This patch adds some extra debug options to the ping check. It writes
the STDOUT of ping subprocess to a temporary file which can be logged in
the debug log level.
Change-Id: Ife9a1d409a8326fb9ff07b1b04508cd11f899d10
The repo is Python 3 now, so update hacking to version 3.0 which
supports Python 3.
Fix problems found.
Update local hacking checks for new flake8.
Change-Id: I6396403d0a62f5403fc5b7fb04b6ce790c332c84
The MySQL log gets an "Aborted connection" warning every 30 sec
because the connection is not being closed properly.
Story: 2007233
Task: 38509
Change-Id: Ied6c19f8f7ac9b81f61b84efa2f6c7a8e40c3056
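The shape of the fix is to guarantee the connection is closed even when a query fails, for example with contextlib.closing (a sketch under assumed names; connect() stands in for the real MySQL driver call):

```python
from contextlib import closing

def collect_metrics(connect):
    """Query the server and always close the connection afterwards,
    avoiding 'Aborted connection' warnings in the MySQL log.
    """
    with closing(connect()) as conn:
        return conn.query('SHOW GLOBAL STATUS')
```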
This is useful, for example when monitoring Slab memory leaks. To support
gathering this metric a minimum version of psutil 5.4.4 is required
(released on Apr 13th 2018).
Story: 2006815
Task: 37375
Change-Id: Ibe8def9e2a7c967a34236889aa03b287065abcdc
Since the Luminous release of Ceph, the plugin no longer exports metrics
such as object storage daemon stats, placement groups and pool stats.
Check for the installed version of the Ceph command and parse results
according to version.
Include test data for Jewel and Luminous Ceph clusters.
Story: 2005032
Task: 29515
Change-Id: I0aef0db25f49545c715b07880edd57135e3beafe
Co-Authored-By: Bharat Kunwar <bharat@stackhpc.com>
Co-Authored-By: Doug Szumski <doug@stackhpc.com>
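Branching on the installed Ceph version could be sketched like this (a toy stand-in; the real check's parsing may differ):

```python
def ceph_major_version(version_output):
    """Extract the major version from `ceph version` output.

    Jewel is 10.x and Luminous is 12.x, so the check can branch
    its result parsing on this number.
    """
    # e.g. "ceph version 12.2.8 (hash) luminous (stable)"
    return int(version_output.split()[2].split('.')[0])
```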
Currently we don't have any capability to monitor the internal TLS/SSL
certificates. i.e. SSL certificates used by MySQL for replication, RabbitMQ for
distribution, etc. The cert_check plugin is not adequate for this purpose
because it can only check certificates over HTTPS endpoints. Furthermore,
checking on these internal certificates over the network is cumbersome
because the agent plugin would have to speak specific protocols.
This patch adds a cert_file_check plugin to detect the certificate expiry
(in days from now) for the given X.509 certificate file in PEM format.
Similar to the cert_check plugin, this plugin will report a metric
'cert_file.cert_expire_days' which contains the number of days from now
until the given certificate expires. If the certificate has already
expired, this will be a negative number.
Change-Id: Id95cc7115823f972e234417223ab5906b57447cc
Story: 2006753
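The expiry arithmetic can be sketched as below; this takes the certificate's notAfter string as input (the plugin itself reads the PEM file to obtain it, which is omitted here):

```python
import ssl
import time

def cert_expire_days(not_after):
    """Days from now until a certificate's notAfter timestamp;
    negative if the certificate has already expired.

    not_after is the OpenSSL-style timestamp string, e.g.
    'May 9 00:00:00 2027 GMT'.
    """
    expires = ssl.cert_time_to_seconds(not_after)
    return (expires - time.time()) / 86400.0
```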
A powerful metric to watch for a swift cluster is the
number of handoff partitions on a drive on a storage node.
A build-up of handoff partitions on a particular server could
indicate a disk problem somewhere in the cluster, or a bottleneck
somewhere. Better still, it can indicate when would be a good time
to rebalance the ring (as you'd want to do that when existing
backend data movement is at a minimum).
So it turns out to be a great visualisation of the health of
a cluster.
That's what this check plugin does. Each instance check takes
the following values:
ring: <path to a Swift ring file>
devices: <path to the directory of mountpoints>
granularity: <either server or device>
To be able to determine primary vs handoff partitions on a drive,
the swift ring needs to be consulted. If a storage node stores
more than one ring, an instance should be defined for each.
You give swift a bunch of disks. These disks are placed in what
swift calls the 'devices' location. That is a directory where a
mount point for each mounted swift drive is located.
Finally, you can decide on the granularity, which defaults to
`server` if not defined. Only 2 metrics are created from this
check:
swift.partitions.primary_count
swift.partitions.handoff_count
In addition to the hostname dimension, a ring dimension will also be
set, allowing the graphing of the handoff vs primary partitions of
each ring.
When the granularity is set to device, then an additional
dimension to the metric is added, the device name (the name of
the devices mount point). This allows the graphing and monitoring
of each device in a server if a finer granularity is required.
Because we need to consult the Swift ring, there is a runtime
requirement on the Python Swift module being installed, though this
isn't required for the unit tests. Making it a runtime dependency
means that when the check is loaded it'll log an error and then
exit if it can't import the swift module.
This is the second of two Swift check plugins I've been working on.
For more details see my blog post[1]
[1] - https://oliver.net.au/?p=358
Change-Id: Ie91add9af39f2ab0e5b575390c0c6355563c0bfc
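The primary-vs-handoff classification described above can be sketched with a toy stand-in for swift.common.ring.Ring (the real check consults the actual ring):

```python
def count_partitions(ring_assignment, device, partitions_on_disk):
    """Classify partitions found on a device as primary or handoff.

    ring_assignment maps partition -> devices the ring assigns it to.
    A partition found on disk that the ring does not assign to this
    device is a handoff.
    """
    primary = handoff = 0
    for part in partitions_on_disk:
        if device in ring_assignment.get(part, ()):
            primary += 1
        else:
            handoff += 1
    return {'swift.partitions.primary_count': primary,
            'swift.partitions.handoff_count': handoff}
```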
Swift outputs a lot of statsd metrics that you can point directly
at monasca-agent. However, there is another swift endpoint,
recon, that is used to gather more metrics.
The Swift recon (or reconnaissance) API is an endpoint each of the
storage node servers make available via a REST API. This API can
either be hit manually or via the swift-recon tool.
This patch adds a check plugin that hits the recon REST API and
sends metrics to monasca.
This is the first of two Swift check plugins I'm working on.
For more details see my blog post[1]
[1] - https://oliver.net.au/?p=358
Change-Id: I503d74936f6f37fb261c1592845968319695475a
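The recon endpoints the check queries are simple HTTP GET paths on each storage node; building one could look like this (a sketch; the endpoint names shown are examples, and the real check uses the agent's HTTP helpers):

```python
def recon_url(host, port, check):
    """Build a recon endpoint URL for a storage node,
    e.g. /recon/diskusage or /recon/replication/object.
    """
    return 'http://{0}:{1}/recon/{2}'.format(host, port, check)
```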
The change concerns Elasticsearch 7.3+. The change was made
due to internal changes in Elasticsearch, as compared to the
version we're using now.
Story: 2006376
Task: 36167
Change-Id: Ia60e6173ba28433c6b020998157456f3e2bcc184
The api documentation is now published on docs.openstack.org instead
of developer.openstack.org. Update all links that are changed to the
new location.
Note that redirects will be set up as well but let's point now to the
new location.
For details, see:
http://lists.openstack.org/pipermail/openstack-discuss/2019-July/007828.html
Change-Id: I744e2f029df54410394341e60a46b56658e4175c
Even though there was a py36 test enabled in the gate, the tox.ini
configuration was not actually invoking the unit tests. This
change sets up the environment to allow tests to run.
As a result, a number of Python3 errors are uncovered and fixed.
Notably:
Python 3 does not have contextlib.nested, so reformat the affected code.
file() is not in Python 3, so use io.open() instead.
Use six.assertCountEqual(self, ...) in tests.
safe_decode:
subprocess.check_output returns bytes, while the default text type
is str. safe_decode does the right thing by making sure strings are
not bytes in both python2 and python3.
No ascii encoding:
python3 defaults to UTF-8 encoding, which is merely an extension of
ascii (the default for python2).
test_json_plugin.py:
the file is being opened in binary (wb) mode, so python expects the
string in bytes.
Some of the refactoring should be revisited after we drop Python 2
support.
Change-Id: I62b46a2509c39201ca015ca7c269b2ea70c376c8
Story: 2005047
Task: 29547
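The safe_decode behaviour described above amounts to roughly this (a simplified sketch of the idea, not the exact helper from the patch):

```python
def safe_decode(text, incoming='utf-8'):
    """Return text as str, decoding bytes if needed, so the same
    code path works under Python 2 and Python 3.
    """
    if isinstance(text, bytes):
        return text.decode(incoming)
    return text
```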
flush() should not produce a metric if the value is None (this
causes an exception in monasca-api).
Fix the exception message in flush() to use the correct data
structure.
Change-Id: I62d31270db9e70f16d2a38a73856009c52c098e6
Story: 2004276
Task: 27825
Use the six library to get monasca-agent to work with
python2.7 and python3.
Story: 2004148
Task: 27621
Change-Id: I0de315967dd5a745741fda0c53ce8cc85cda8cc5
Signed-off-by: Chuck Short <chucks@redhat.com>
Due to the queue module being renamed in Python 3 we need to support
both the new and the old name whilst people are still using Python 2.
Story: 2003130
Task: 23251
Change-Id: I9075183e199530f1953c2cd988ec28b3d0580257
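The standard pattern for supporting both module names is a try/except around the import:

```python
try:
    import queue          # Python 3 name
except ImportError:
    import Queue as queue  # Python 2 fallback

# Either way, the rest of the code uses the py3 name.
q = queue.Queue()
q.put('metric')
```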