Retire Packaging Deb project repos
This commit is part of a series to retire the Packaging Deb project. Step 2 is to remove all content from the project repos, replacing it with a README that explains where to find ongoing work and how to recover the repo if it is needed again at some future point (see https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: If5041c4af99df6cc45685c6a5bb4ac7feb786622
parent 6b4cf30a98
commit f48f077c57
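The recovery step referenced in the commit message (and in the new README below) can be exercised end to end. This sketch builds a throwaway repository, retires it the same way this commit does (all content replaced by a README notice), and restores the previous tree with `git checkout HEAD^1`; the repository and file names are made up for illustration:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q retired-repo
cd retired-repo
git config user.email dev@example.com
git config user.name dev
echo "real content" > module.py
git add module.py
git commit -qm "original content"
# Retirement step 2: drop all content, leave only a README notice
git rm -q module.py
echo "This project is no longer maintained." > README
git add README
git commit -qm "Retire repo"
test ! -f module.py      # working tree now holds only the notice
# Recover the pre-retirement tree by checking out the previous commit
git checkout -q "HEAD^1"
test -f module.py        # original content is back
```

The history is never lost; retirement only changes what the tip of the branch contains.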
.gitignore
@@ -1,19 +0,0 @@
# Compiled python files
*.py[co]

# Editors
.*.sw[klmop]
*~

# Packages/installer info
*.egg[s]
*.egg-info
dist
sdist

# Other
.testrepository
AUTHORS
ChangeLog
.tox
doc/build
.gitreview
@@ -1,4 +0,0 @@
[gerrit]
host=review.openstack.org
port=29418
project=openstack/pyghmi.git
.testr.conf
@@ -1,8 +0,0 @@
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
    OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
    OS_TEST_TIMEOUT=${OS_TEST_TIMEOUT:-60} \
    ${PYTHON:-python} -m subunit.run discover -t ./ . $LISTOPT $IDOPTION

test_id_option=--load-list $IDFILE
test_list_option=--list
LICENSE (202 lines)
@@ -1,202 +0,0 @@
(full text of the Apache License, Version 2.0; standard license boilerplate)
MANIFEST.in
@@ -1,6 +0,0 @@
include AUTHORS
include ChangeLog
exclude .gitignore
exclude .gitreview

global-exclude *.pyc
README
@@ -1,4 +1,14 @@
-This is a pure python implementation of IPMI protocol.
-
-pyghmicons and pyghmiutil are example scripts to show how one may incorporate
-this library into python code
+This project is no longer maintained.
+
+The contents of this repository are still available in the Git
+source code management system. To see the contents of this
+repository before it reached its end of life, please check out the
+previous commit with "git checkout HEAD^1".
+
+For ongoing work on maintaining OpenStack packages in the Debian
+distribution, please see the Debian OpenStack packaging team at
+https://wiki.debian.org/OpenStack/.
+
+For any further questions, please email
+openstack-dev@lists.openstack.org or join #openstack-dev on
+Freenode.
README.md (55 lines)
@@ -1,55 +0,0 @@
# pyghmi

Pyghmi is a pure Python (mostly IPMI) server management library.

## Building and installing

(These instructions have been tested on CentOS 7.)

Clone the repository, generate the RPM and install it:
```bash
$ git clone https://github.com/openstack/pyghmi.git
$ cd pyghmi/
$ python setup.py bdist_rpm
$ sudo rpm -ivh dist/pyghmi-*.noarch.rpm
```

## Using

There are a few usage examples in the `bin` folder:

- `fakebmc`: fakes a BMC that supports a few IPMI commands (useful for testing)
- `pyghmicons`: a remote console based on SOL redirection over IPMI
- `pyghmiutil`: an IPMI client that supports a few direct uses of pyghmi (also useful for testing and prototyping new features)
- `virshbmc`: a BMC emulation wrapper using libvirt

## Extending

If you plan on adding support for new features, you'll most likely be interested in adding your methods to `pyghmi/ipmi/command.py`. See methods such as `get_users` and `set_power` for examples of how to use internal mechanisms to implement new features. And please, always document new methods.

Sometimes you may want to implement OEM-specific code. For example, retrieving firmware version information is not part of standard IPMI, but some servers are known to support it via custom OEM commands. If this is the case, follow these steps:
- Add your generic retrieval function (stub) to the `OEMHandler` class in `pyghmi/ipmi/oem/generic.py`. And please, document its intent, parameters and expected return values.
- Implement the specific methods that your server supports in subdirectories of the `oem` folder (consider the `lenovo` submodule as an example). An OEM folder will contain at least one class inheriting from `OEMHandler`, and optionally helpers for running and parsing custom OEM commands.
- Register mapping policies in `pyghmi/ipmi/oem/lookup.py` so pyghmi knows how to associate a BMC session with the specific OEM code you implemented.

A good way of testing a new feature is using `bin/pyghmiutil`. Just add an extension for the feature you implemented (as a new command) and call it from the command line:
```
$ IPMIPASSWORD=passw0rd bin/pyghmiutil [BMC IP address] username my_new_feature_command
```
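The OEM-dispatch steps described in the README above can be sketched independently of pyghmi itself. The class names below mirror the README's description, but this is a stand-in illustration, not pyghmi's real implementation; the firmware value and the Lenovo IANA enterprise number (19046) are assumptions for the example:

```python
class OEMHandler:
    """Generic stub, in the spirit of pyghmi/ipmi/oem/generic.py."""
    def get_firmware_version(self):
        # Standard IPMI has no such command, so the generic handler
        # has nothing to return.
        return None


class LenovoOEMHandler(OEMHandler):
    """OEM-specific subclass; a real one would issue custom OEM commands."""
    def get_firmware_version(self):
        return 'imm2-4.70'  # made-up value standing in for a raw-command reply


# lookup.py-style mapping: associate a BMC's manufacturer ID with OEM code.
OEM_MAP = {19046: LenovoOEMHandler}  # 19046: Lenovo enterprise number (assumed)


def pick_handler(manufacturer_id):
    """Fall back to the generic handler when no OEM match is registered."""
    return OEM_MAP.get(manufacturer_id, OEMHandler)()


print(pick_handler(19046).get_firmware_version())  # → imm2-4.70
print(pick_handler(0).get_firmware_version())      # → None
```

The point of the lookup table is that callers never branch on vendor themselves; they always get an `OEMHandler`-shaped object and the generic stub answers when no OEM code matches.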
bin/fakebmc (92 lines)
@@ -1,92 +0,0 @@
#!/usr/bin/env python
# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

__author__ = 'jjohnson2@lenovo.com'

# This is a quick sample of how to write something that acts like a BMC.
# To play, run fakebmc, then:
#   # ipmitool -I lanplus -U admin -P password -H 127.0.0.1 power status
#   Chassis Power is off
#   # ipmitool -I lanplus -U admin -P password -H 127.0.0.1 power on
#   Chassis Power Control: Up/On
#   # ipmitool -I lanplus -U admin -P password -H 127.0.0.1 power status
#   Chassis Power is on
#   # ipmitool -I lanplus -U admin -P password -H 127.0.0.1 mc reset cold
#   Sent cold reset command to MC
#   (fakebmc exits)

import argparse
import sys

import pyghmi.ipmi.bmc as bmc


class FakeBmc(bmc.Bmc):
    def __init__(self, authdata, port):
        super(FakeBmc, self).__init__(authdata, port)
        self.powerstate = 'off'
        self.bootdevice = 'default'

    def get_boot_device(self):
        return self.bootdevice

    def set_boot_device(self, bootdevice):
        self.bootdevice = bootdevice

    def cold_reset(self):
        # Reset of the BMC, not the managed system; here we exit the demo
        print('shutting down in response to BMC cold reset request')
        sys.exit(0)

    def get_power_state(self):
        return self.powerstate

    def power_off(self):
        # power down without waiting for a clean shutdown
        self.powerstate = 'off'
        print('abruptly remove power')

    def power_on(self):
        self.powerstate = 'on'
        print('powered on')

    def power_reset(self):
        pass

    def power_shutdown(self):
        # should attempt a clean shutdown
        print('politely shut down the system')
        self.powerstate = 'off'

    def is_active(self):
        return self.powerstate == 'on'

    def iohandler(self, data):
        print(data)
        if self.sol:
            self.sol.send_data(data)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        prog='fakebmc',
        description='Pretend to be a BMC',
    )
    parser.add_argument('--port',
                        dest='port',
                        type=int,
                        default=623,
                        help='Port to listen on; defaults to 623')
    args = parser.parse_args()
    mybmc = FakeBmc({'admin': 'password'}, port=args.port)
    mybmc.listen()
bin/pyghmicons
@@ -1,85 +0,0 @@
#!/usr/bin/env python
# Copyright 2013 IBM Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""A simple little script to exemplify/test the ipmi.console module.

@author: Jarrod Johnson <jbjohnso@us.ibm.com>
"""
import fcntl
import os
import select
import sys
import termios
import threading
import tty

from pyghmi.ipmi import console

tcattr = termios.tcgetattr(sys.stdin)
newtcattr = tcattr
# TODO(jbjohnso): add our exit handler
newtcattr[-1][termios.VINTR] = 0
newtcattr[-1][termios.VSUSP] = 0
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, newtcattr)

tty.setraw(sys.stdin.fileno())
currfl = fcntl.fcntl(sys.stdin.fileno(), fcntl.F_GETFL)
fcntl.fcntl(sys.stdin.fileno(), fcntl.F_SETFL, currfl | os.O_NONBLOCK)

sol = None


def _doinput():
    while True:
        select.select((sys.stdin,), (), (), 600)
        try:
            data = sys.stdin.read()
        except (IOError, OSError) as e:
            if e.errno == 11:  # EAGAIN: nothing to read yet
                continue
            raise
        sol.send_data(data)


def _print(data):
    bailout = False
    if not isinstance(data, str):
        bailout = True
        data = repr(data)
    sys.stdout.write(data)
    sys.stdout.flush()
    if bailout:
        raise Exception(data)


try:
    if len(sys.argv) < 4:
        passwd = os.environ['IPMIPASSWORD']
    else:
        passwd_file = sys.argv[3]
        with open(passwd_file, "r") as f:
            passwd = f.read()

    sol = console.Console(bmc=sys.argv[1], userid=sys.argv[2], password=passwd,
                          iohandler=_print, force=True)
    inputthread = threading.Thread(target=_doinput)
    inputthread.daemon = True
    inputthread.start()
    sol.main_loop()
except Exception:
    currfl = fcntl.fcntl(sys.stdin.fileno(), fcntl.F_GETFL)
    fcntl.fcntl(sys.stdin.fileno(), fcntl.F_SETFL, currfl ^ os.O_NONBLOCK)
    termios.tcsetattr(sys.stdin, termios.TCSANOW, tcattr)
    sys.exit(0)
bin/pyghmiutil
@@ -1,86 +0,0 @@
#!/usr/bin/env python
# Copyright 2013 IBM Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""This is an example of using the library in a synchronous fashion. For now,
it isn't conceived as a general utility to actually use, just to help
developers understand how the ipmi_command class works.

@author: Jarrod Johnson <jbjohnso@us.ibm.com>
"""
import os
import sys

from pyghmi.ipmi import command

if (len(sys.argv) < 3) or 'IPMIPASSWORD' not in os.environ:
    print("Usage:")
    print(" IPMIPASSWORD=password %s bmc username <cmd> <optarg>" % sys.argv[0])
    sys.exit(1)

password = os.environ['IPMIPASSWORD']
os.environ['IPMIPASSWORD'] = ""
bmc = sys.argv[1]
userid = sys.argv[2]
args = None
if len(sys.argv) >= 5:
    args = sys.argv[4:]

ipmicmd = None


def docommand(result, ipmisession):
    cmmand = sys.argv[3]
    print("Logged into %s" % ipmisession.bmc)
    if 'error' in result:
        print(result['error'])
        return
    if cmmand == 'power':
        if args:
            print(ipmisession.set_power(args[0], wait=True))
        else:
            value = ipmisession.get_power()
            print("%s: %s" % (ipmisession.bmc, value['powerstate']))
    elif cmmand == 'bootdev':
        if args:
            print(ipmisession.set_bootdev(args[0]))
        else:
            print(ipmisession.get_bootdev())
    elif cmmand == 'sensors':
        for reading in ipmisession.get_sensor_data():
            print(repr(reading))
    elif cmmand == 'health':
        print(repr(ipmisession.get_health()))
    elif cmmand == 'inventory':
        for item in ipmisession.get_inventory():
            print(repr(item))
    elif cmmand == 'leds':
        for led in ipmisession.get_leds():
            print(repr(led))
    elif cmmand == 'graphical':
        print(ipmisession.get_graphical_console())
    elif cmmand == 'net':
        print(ipmisession.get_net_configuration())
    elif cmmand == 'raw':
        print(ipmisession.raw_command(netfn=int(args[0]),
                                      command=int(args[1]),
                                      data=[int(x, 16) for x in args[2:]]))


bmcs = bmc.split(",")
for bmc in bmcs:
    ipmicmd = command.Command(bmc=bmc, userid=userid, password=password,
                              onlogon=docommand)
    ipmicmd.eventloop()
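The `raw` branch in the dispatch above turns hex-string CLI tokens into integer data bytes with `int(x, 16)`. That conversion is easy to check in isolation (the helper name here is ours, not the script's):

```python
def parse_raw_data(tokens):
    # Same conversion the script applies to its trailing CLI arguments:
    # each token is parsed as base-16, with or without a '0x' prefix.
    return [int(x, 16) for x in tokens]


print(parse_raw_data(['2a', '00', 'ff']))  # → [42, 0, 255]
print(parse_raw_data(['0x10']))            # → [16]
```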
bin/virshbmc (158 lines)
@@ -1,158 +0,0 @@
#!/usr/bin/env python
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Written by pmartini2, but mostly a clone of fakebmc, written by jjohnson2
__author__ = 'pmartini2@bloomberg.net'

# This is a simple, but working, proof of concept of using pyghmi.ipmi.bmc to
# control a VM.

import argparse
import sys
import threading

import libvirt
import pyghmi.ipmi.bmc as bmc


def lifecycle_callback(connection, domain, event, detail, console):
    console.state = console.domain.state(0)


def error_handler(unused, error):
    if (error[0] == libvirt.VIR_ERR_RPC and
            error[1] == libvirt.VIR_FROM_STREAMS):
        return


def stream_callback(stream, events, console):
    try:
        data = console.stream.recv(1024)
    except Exception:
        return
    if console.sol:
        console.sol.send_data(data)


class LibvirtBmc(bmc.Bmc):
    """A class to provide an IPMI interface to the libvirt APIs."""

    def __init__(self, authdata, hypervisor, domain, port):
        super(LibvirtBmc, self).__init__(authdata, port)
        # Rely on libvirt to throw on bad data
        self.conn = libvirt.open(hypervisor)
        self.name = domain
        self.domain = self.conn.lookupByName(domain)
        self.state = self.domain.state(0)
        self.stream = None
        self.run_console = False
        self.conn.domainEventRegister(lifecycle_callback, self)
        self.sol_thread = None

    def cold_reset(self):
        # Reset of the BMC, not the managed system; here we exit the demo
        print('shutting down in response to BMC cold reset request')
        sys.exit(0)

    def get_power_state(self):
        if self.domain.isActive():
            return 'on'
        else:
            return 'off'

    def power_off(self):
        if not self.domain.isActive():
            return 0xd5  # Not valid in this state
        self.domain.destroy()

    def power_on(self):
        if self.domain.isActive():
            return 0xd5  # Not valid in this state
        self.domain.create()

    def power_reset(self):
        if not self.domain.isActive():
            return 0xd5  # Not valid in this state
        self.domain.reset()

    def power_shutdown(self):
        if not self.domain.isActive():
            return 0xd5  # Not valid in this state
        self.domain.shutdown()

    def is_active(self):
        return self.domain.isActive()

    def check_console(self):
        if (self.state[0] == libvirt.VIR_DOMAIN_RUNNING or
                self.state[0] == libvirt.VIR_DOMAIN_PAUSED):
            if self.stream is None:
                self.stream = self.conn.newStream(libvirt.VIR_STREAM_NONBLOCK)
                self.domain.openConsole(None, self.stream, 0)
                self.stream.eventAddCallback(libvirt.VIR_STREAM_EVENT_READABLE,
                                             stream_callback, self)
        else:
            if self.stream:
                self.stream.eventRemoveCallback()
                self.stream = None

        return self.run_console

    def activate_payload(self, request, session):
        super(LibvirtBmc, self).activate_payload(request, session)
        self.run_console = True
        self.sol_thread = threading.Thread(target=self.loop)
        self.sol_thread.start()

    def deactivate_payload(self, request, session):
        self.run_console = False
        self.sol_thread.join()
        super(LibvirtBmc, self).deactivate_payload(request, session)

    def iohandler(self, data):
        if self.stream:
            self.stream.send(data)

    def loop(self):
        while self.check_console():
            libvirt.virEventRunDefaultImpl()


if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        prog='virshbmc',
        description='Pretend to be a BMC and proxy to virsh',
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument('--port',
                        dest='port',
                        type=int,
                        default=623,
                        help='(UDP) port to listen on')
    parser.add_argument('--connect',
                        dest='hypervisor',
                        default='qemu:///system',
                        help='The hypervisor to connect to')
    parser.add_argument('--domain',
                        dest='domain',
                        required=True,
                        help='The name of the domain to manage')
    args = parser.parse_args()

    libvirt.virEventRegisterDefaultImpl()
    libvirt.registerErrorHandler(error_handler, None)

    mybmc = LibvirtBmc({'admin': 'password'},
                       hypervisor=args.hypervisor,
                       domain=args.domain,
                       port=args.port)
    mybmc.listen()
buildrpm (7 lines)
@@ -1,7 +0,0 @@
cd `dirname $0`
VERSION=`python setup.py --version`
python setup.py sdist
cp dist/pyghmi-$VERSION.tar.gz ~/rpmbuild/SOURCES
rpmbuild -bs python-pyghmi.spec
rm $1/python-pyghmi-*rpm
cp ~/rpmbuild/SRPMS/python-pyghmi-$VERSION-1.src.rpm $1/
doc/source/conf.py (223 lines; listing truncated)
@@ -1,223 +0,0 @@
# -*- coding: utf-8 -*-
#
# pyghmi documentation build configuration file, created by
# sphinx-quickstart on Tue Jun 18 09:15:24 2013.
#
# This file is execfile()d with the current directory set to
# its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

import os
import sys

from pyghmi.version import version_info

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../../'))
sys.path.insert(0, os.path.abspath('../'))
sys.path.insert(0, os.path.abspath('.'))

# -- General configuration ---------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo']

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix of source filenames.
source_suffix = '.rst'

# The encoding of source files.
# source_encoding = 'utf-8-sig'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'pyghmi'
copyright = u'2013, Jarrod Johnson <jbjohnso@us.ibm.com>'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#

# The full version, including alpha/beta/rc tags.
release = version_info.release_string()
# The short X.Y version.
version = version_info.version_string()

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None

# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
||||
|
||||
# List of patterns, relative to source directory, that match files and
|
||||
# directories to ignore when looking for source files.
|
||||
exclude_patterns = ['_build']
|
||||
|
||||
# The reST default role (used for this markup: `text`) to use for all documents
|
||||
# default_role = None
|
||||
|
||||
# If true, '()' will be appended to :func: etc. cross-reference text.
|
||||
# add_function_parentheses = True
|
||||
|
||||
# If true, the current module name will be prepended to all description
|
||||
# unit titles (such as .. function::).
|
||||
# add_module_names = True
|
||||
|
||||
# If true, sectionauthor and moduleauthor directives will be shown in the
|
||||
# output. They are ignored by default.
|
||||
# show_authors = False
|
||||
|
||||
# The name of the Pygments (syntax highlighting) style to use.
|
||||
pygments_style = 'sphinx'
|
||||
|
||||
# A list of ignored prefixes for module index sorting.
|
||||
# modindex_common_prefix = []
|
||||
|
||||
|
||||
# -- Options for HTML output -------------------------------------------------
|
||||
|
||||
# The theme to use for HTML and HTML Help pages. See the documentation for
|
||||
# a list of builtin themes.
|
||||
html_theme = 'default'
|
||||
|
||||
# Theme options are theme-specific and customize the look and feel of a theme
|
||||
# further. For a list of options available for each theme, see the
|
||||
# documentation.
|
||||
# html_theme_options = {}
|
||||
|
||||
# Add any paths that contain custom themes here, relative to this directory.
|
||||
# html_theme_path = []
|
||||
|
||||
# The name for this set of Sphinx documents. If None, it defaults to
|
||||
# "<project> v<release> documentation".
|
||||
# html_title = None
|
||||
|
||||
# A shorter title for the navigation bar. Default is the same as html_title.
|
||||
# html_short_title = None
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top
|
||||
# of the sidebar.
|
||||
# html_logo = None
|
||||
|
||||
# The name of an image file (within the static path) to use as favicon of the
|
||||
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
|
||||
# pixels large.
|
||||
# html_favicon = None
|
||||
|
||||
# Add any paths that contain custom static files (such as style sheets) here,
|
||||
# relative to this directory. They are copied after the builtin static files,
|
||||
# so a file named "default.css" will overwrite the builtin "default.css".
|
||||
html_static_path = ['_static']
|
||||
|
||||
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
|
||||
# using the given strftime format.
|
||||
# html_last_updated_fmt = '%b %d, %Y'
|
||||
|
||||
# If true, SmartyPants will be used to convert quotes and dashes to
|
||||
# typographically correct entities.
|
||||
# html_use_smartypants = True
|
||||
|
||||
# Custom sidebar templates, maps document names to template names.
|
||||
# html_sidebars = {}
|
||||
|
||||
# Additional templates that should be rendered to pages, maps page names to
|
||||
# template names.
|
||||
# html_additional_pages = {}
|
||||
|
||||
# If false, no module index is generated.
|
||||
# html_domain_indices = True
|
||||
|
||||
# If false, no index is generated.
|
||||
# html_use_index = True
|
||||
|
||||
# If true, the index is split into individual pages for each letter.
|
||||
# html_split_index = False
|
||||
|
||||
# If true, links to the reST sources are added to the pages.
|
||||
# html_show_sourcelink = True
|
||||
|
||||
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
|
||||
# html_show_sphinx = True
|
||||
|
||||
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
|
||||
# html_show_copyright = True
|
||||
|
||||
# If true, an OpenSearch description file will be output, and all pages will
|
||||
# contain a <link> tag referring to it. The value of this option must be the
|
||||
# base URL from which the finished HTML is served.
|
||||
# html_use_opensearch = ''
|
||||
|
||||
# This is the file name suffix for HTML files (e.g. ".xhtml").
|
||||
# html_file_suffix = None
|
||||
|
||||
# Output file base name for HTML help builder.
|
||||
htmlhelp_basename = 'pyghmidoc'
|
||||
|
||||
# -- Options for LaTeX output -------------------------------------------------
|
||||
|
||||
# The paper size ('letter' or 'a4').
|
||||
# latex_paper_size = 'letter'
|
||||
|
||||
# The font size ('10pt', '11pt' or '12pt').
|
||||
# latex_font_size = '10pt'
|
||||
|
||||
# Grouping the document tree into LaTeX files. List of tuples
|
||||
# (source start file, target name, title, author, documentclass [howto/manual])
|
||||
latex_documents = [
|
||||
('index', 'pyghmi.tex', u'pyghmi Documentation',
|
||||
u'Jarrod Johnson \\textless{}jbjohnso@us.ibm.com\\textgreater{}',
|
||||
'manual'),
|
||||
]
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top of
|
||||
# the title page.
|
||||
# latex_logo = None
|
||||
|
||||
# For "manual" documents, if this is true, then toplevel headings are parts,
|
||||
# not chapters.
|
||||
# latex_use_parts = False
|
||||
|
||||
# If true, show page references after internal links.
|
||||
# latex_show_pagerefs = False
|
||||
|
||||
# If true, show URL addresses after external links.
|
||||
# latex_show_urls = False
|
||||
|
||||
# Additional stuff for the LaTeX preamble.
|
||||
# latex_preamble = ''
|
||||
|
||||
# Documents to append as an appendix to all manuals.
|
||||
# latex_appendices = []
|
||||
|
||||
# If false, no module index is generated.
|
||||
# latex_domain_indices = True
|
||||
|
||||
|
||||
# -- Options for manual page output -------------------------------------------
|
||||
|
||||
# One entry per manual page. List of tuples
|
||||
# (source start file, name, description, authors, manual section).
|
||||
man_pages = [
|
||||
('index', 'pyghmi', u'pyghmi Documentation',
|
||||
[u'Jarrod Johnson <jbjohnso@us.ibm.com>'], 1)
|
||||
]
|
@@ -1,25 +0,0 @@
.. pyghmi documentation master file, created by
   sphinx-quickstart on Tue Jun 18 09:15:24 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to pyghmi's documentation!
==================================

Contents:

.. toctree::
   :maxdepth: 2

.. automodule:: pyghmi.ipmi.command

.. autoclass:: pyghmi.ipmi.command
   :members:

Indices and tables
==================

* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
@@ -1,20 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2014 IBM Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


class Health:
    Ok = 0
    Warning, Critical, Failed = [2**x for x in range(0, 3)]
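The `Health` values are distinct bit flags (`Warning=1`, `Critical=2`, `Failed=4`), so several conditions can be combined and tested with bitwise operators. A small standalone check, with the class repeated here only so the snippet runs on its own:

```python
class Health:
    Ok = 0
    Warning, Critical, Failed = [2**x for x in range(0, 3)]

# Each flag occupies its own bit, so states OR together losslessly
summary = Health.Warning | Health.Failed
print(summary)                          # → 5
print(bool(summary & Health.Critical))  # → False
```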
@@ -1,50 +0,0 @@

# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2013 IBM Corporation
# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# The Exceptions that Pyghmi can throw


class PyghmiException(Exception):
    pass


class IpmiException(PyghmiException):
    def __init__(self, text='', code=0):
        super(IpmiException, self).__init__(text)
        self.ipmicode = code


class UnrecognizedCertificate(Exception):
    def __init__(self, text='', certdata=None):
        super(UnrecognizedCertificate, self).__init__(text)
        self.certdata = certdata


class InvalidParameterValue(PyghmiException):
    pass


class BmcErrorException(IpmiException):
    # This denotes when the library detects invalid BMC behavior
    pass


class UnsupportedFunctionality(PyghmiException):
    # Indicates when functionality is requested that is not supported by
    # the current endpoint
    pass
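Since `IpmiException` carries the raw IPMI completion code in `ipmicode`, callers can catch the common base class and still inspect the code. A minimal standalone sketch (the two classes are repeated from the module above so the snippet runs on its own; the message and code `0xc1` are example values):

```python
class PyghmiException(Exception):
    pass


class IpmiException(PyghmiException):
    # Carries the IPMI completion code alongside the message text
    def __init__(self, text='', code=0):
        super(IpmiException, self).__init__(text)
        self.ipmicode = code


try:
    raise IpmiException('Invalid command', code=0xc1)
except PyghmiException as e:  # subclasses are caught via the base class
    caught = e

print(hex(caught.ipmicode))  # → 0xc1
```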
@@ -1,190 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pyghmi.ipmi.command as ipmicommand
import pyghmi.ipmi.console as console
import pyghmi.ipmi.private.serversession as serversession
import pyghmi.ipmi.private.session as ipmisession
import traceback

__author__ = 'jjohnson2@lenovo.com'


class Bmc(serversession.IpmiServer):

    activated = False
    sol = None
    iohandler = None

    def cold_reset(self):
        raise NotImplementedError

    def power_off(self):
        raise NotImplementedError

    def power_on(self):
        raise NotImplementedError

    def power_cycle(self):
        raise NotImplementedError

    def power_reset(self):
        raise NotImplementedError

    def pulse_diag(self):
        raise NotImplementedError

    def power_shutdown(self):
        raise NotImplementedError

    def get_power_state(self):
        raise NotImplementedError

    def is_active(self):
        raise NotImplementedError

    def activate_payload(self, request, session):
        if self.iohandler is None:
            session.send_ipmi_response(code=0x81)
        elif not self.is_active():
            session.send_ipmi_response(code=0x81)
        elif self.activated:
            session.send_ipmi_response(code=0x80)
        else:
            self.activated = True
            session.send_ipmi_response(
                data=[0, 0, 0, 0, 1, 0, 1, 0, 2, 0x6f, 0xff, 0xff])
            self.sol = console.ServerConsole(session, self.iohandler)

    def deactivate_payload(self, request, session):
        if self.iohandler is None:
            session.send_ipmi_response(code=0x81)
        elif not self.activated:
            session.send_ipmi_response(code=0x80)
        else:
            session.send_ipmi_response()
            self.sol.close()
            self.activated = False
            self.sol = None

    @staticmethod
    def handle_missing_command(session):
        session.send_ipmi_response(code=0xc1)

    def get_chassis_status(self, session):
        try:
            powerstate = self.get_power_state()
        except NotImplementedError:
            return session.send_ipmi_response(code=0xc1)
        if powerstate in ipmicommand.power_states:
            powerstate = ipmicommand.power_states[powerstate]
        if powerstate not in (0, 1):
            raise Exception('BMC implementation mistake')
        statusdata = [powerstate, 0, 0]
        session.send_ipmi_response(data=statusdata)

    def control_chassis(self, request, session):
        rc = 0
        try:
            directive = request['data'][0]
            if directive == 0:
                rc = self.power_off()
            elif directive == 1:
                rc = self.power_on()
            elif directive == 2:
                rc = self.power_cycle()
            elif directive == 3:
                rc = self.power_reset()
            elif directive == 4:
                # i.e. pulse a diagnostic interrupt (NMI) directly
                rc = self.pulse_diag()
            elif directive == 5:
                rc = self.power_shutdown()
            if rc is None:
                rc = 0
            session.send_ipmi_response(code=rc)
        except NotImplementedError:
            session.send_ipmi_response(code=0xcc)

    def get_boot_device(self):
        raise NotImplementedError

    def get_system_boot_options(self, request, session):
        if request['data'][0] == 5:  # boot flags
            try:
                bootdevice = self.get_boot_device()
            except NotImplementedError:
                # answer with no boot override and stop here; without this
                # return, bootdevice would be referenced while unbound below
                return session.send_ipmi_response(data=[1, 5, 0, 0, 0, 0, 0])
            if (type(bootdevice) != int and
                    bootdevice in ipmicommand.boot_devices):
                bootdevice = ipmicommand.boot_devices[bootdevice]
            paramdata = [1, 5, 0b10000000, bootdevice, 0, 0, 0]
            return session.send_ipmi_response(data=paramdata)
        else:
            session.send_ipmi_response(code=0x80)

    def set_boot_device(self, bootdevice):
        raise NotImplementedError

    def set_system_boot_options(self, request, session):
        if request['data'][0] in (0, 3, 4):
            # for now, just smile and nod at boot flag bit clearing
            # requests; implementing them is a burden and does more to
            # confuse users than serve a useful purpose
            session.send_ipmi_response()
        elif request['data'][0] == 5:
            bootdevice = (request['data'][2] >> 2) & 0b1111
            try:
                bootdevice = ipmicommand.boot_devices[bootdevice]
            except KeyError:
                session.send_ipmi_response(code=0xcc)
                return
            self.set_boot_device(bootdevice)
            session.send_ipmi_response()
        else:
            raise NotImplementedError

    def handle_raw_request(self, request, session):
        try:
            if request['netfn'] == 6:
                if request['command'] == 1:  # get device id
                    return self.send_device_id(session)
                elif request['command'] == 2:  # cold reset
                    return session.send_ipmi_response(code=self.cold_reset())
                elif request['command'] == 0x48:  # activate payload
                    return self.activate_payload(request, session)
                elif request['command'] == 0x49:  # deactivate payload
                    return self.deactivate_payload(request, session)
            elif request['netfn'] == 0:
                if request['command'] == 1:  # get chassis status
                    return self.get_chassis_status(session)
                elif request['command'] == 2:  # chassis control
                    return self.control_chassis(request, session)
                elif request['command'] == 8:  # set boot options
                    return self.set_system_boot_options(request, session)
                elif request['command'] == 9:  # get boot options
                    return self.get_system_boot_options(request, session)
            session.send_ipmi_response(code=0xc1)
        except NotImplementedError:
            session.send_ipmi_response(code=0xc1)
        except Exception:
            session._send_ipmi_net_payload(code=0xff)
            traceback.print_exc()

    @classmethod
    def listen(cls, timeout=30):
        while True:
            ipmisession.Session.wait_for_rsp(timeout)
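`handle_raw_request` above routes each request on its `(netfn, command)` pair and falls back to completion code 0xc1 (invalid command) for anything unrecognized. The same routing can be viewed as a lookup table; this is an illustrative restructuring, not the module's code, and the string values and `route` helper are made-up names standing in for the method calls:

```python
# Hypothetical table mirroring the (netfn, command) routing in
# Bmc.handle_raw_request; values name the handler for illustration.
IPMI_DISPATCH = {
    (6, 0x01): 'send_device_id',
    (6, 0x02): 'cold_reset',
    (6, 0x48): 'activate_payload',
    (6, 0x49): 'deactivate_payload',
    (0, 0x01): 'get_chassis_status',
    (0, 0x02): 'control_chassis',
    (0, 0x08): 'set_system_boot_options',
    (0, 0x09): 'get_system_boot_options',
}


def route(request):
    # Unknown pairs fall back to completion code 0xc1, as in the method
    return IPMI_DISPATCH.get((request['netfn'], request['command']),
                             'respond_0xc1')


print(route({'netfn': 0, 'command': 1}))  # → get_chassis_status
print(route({'netfn': 4, 'command': 1}))  # → respond_0xc1
```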
File diff suppressed because it is too large
@@ -1,518 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2014 IBM Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This represents the low layer message framing portion of IPMI

import pyghmi.exceptions as exc
import struct
import threading

from pyghmi.ipmi.private import constants
from pyghmi.ipmi.private import session
from pyghmi.ipmi.private import util


PENDINGOUTPUT = threading.RLock()


class Console(object):
    """IPMI SOL class.

    This object represents an SOL channel, multiplexing SOL data with
    commands issued by ipmi.command.

    :param bmc: hostname or ip address of BMC
    :param userid: username to use to connect
    :param password: password to connect to the BMC
    :param iohandler: Either a function to call with bytes, a filehandle to
                      use for input and output, or a tuple of (input, output)
                      handles
    :param kg: optional parameter for BMCs configured to require it
    """

    # TODO(jbjohnso): still need an exit and a data callin function
    def __init__(self, bmc, userid, password,
                 iohandler, port=623,
                 force=False, kg=None):
        self.keepaliveid = None
        self.connected = False
        self.broken = False
        self.out_handler = iohandler
        self.remseq = 0
        self.myseq = 0
        self.lastsize = 0
        self.retriedpayload = 0
        self.pendingoutput = []
        self.awaitingack = False
        self.activated = False
        self.force_session = force
        self.ipmi_session = session.Session(bmc=bmc,
                                            userid=userid,
                                            password=password,
                                            port=port,
                                            kg=kg,
                                            onlogon=self._got_session)
        # induce one iteration of the loop, now that we would be
        # prepared for it in theory
        session.Session.wait_for_rsp(0)

    def _got_session(self, response):
        """Private function to navigate SOL payload activation
        """
        if 'error' in response:
            self._print_error(response['error'])
            return
        # Send activate sol payload directive
        # netfn = 6 (application)
        # command = 0x48 (activate payload)
        # data = (1, sol payload type
        #         1, first instance
        #         0b11000000, -encrypt, authenticate,
        #                      disable serial/modem alerts, CTS fine
        #         0, 0, 0 reserved
        response = self.ipmi_session.raw_command(netfn=0x6, command=0x48,
                                                 data=(1, 1, 192, 0, 0, 0))
        # given that these are specific to the command,
        # it's probably best if one can grep the error
        # here instead of in constants
        sol_activate_codes = {
            0x81: 'SOL is disabled',
            0x82: 'Maximum SOL session count reached',
            0x83: 'Cannot activate payload with encryption',
            0x84: 'Cannot activate payload without encryption',
        }
        if 'code' in response and response['code']:
            if response['code'] in constants.ipmi_completion_codes:
                self._print_error(
                    constants.ipmi_completion_codes[response['code']])
                return
            elif response['code'] == 0x80:
                if self.force_session and not self.retriedpayload:
                    self.retriedpayload = 1
                    sessrsp = self.ipmi_session.raw_command(
                        netfn=0x6,
                        command=0x49,
                        data=(1, 1, 0, 0, 0, 0))
                    self._got_session(sessrsp)
                    return
                else:
                    self._print_error('SOL Session active for another client')
                    return
            elif response['code'] in sol_activate_codes:
                self._print_error(sol_activate_codes[response['code']])
                return
            else:
                self._print_error(
                    'SOL encountered Unrecognized error code %d' %
                    response['code'])
                return
        if 'error' in response:
            self._print_error(response['error'])
            return
        self.activated = True
        # data[0:3] is reserved except for the test mode, which we don't use
        data = response['data']
        self.maxoutcount = (data[5] << 8) + data[4]
        # BMC tells us this is the maximum allowed size
        # data[6:7] is the promise of how small packets are going to be, but we
        # don't have any reason to worry about it
        if (data[8] + (data[9] << 8)) not in (623, 28418):
            # TODO(jbjohnso): support atypical SOL port number
            raise NotImplementedError("Non-standard SOL Port Number")
        # ignore data[10:11] for now, the vlan detail, shouldn't matter to this
        # code anyway...
        # NOTE(jbjohnso):
        # We will use a special purpose keepalive
        if self.ipmi_session.sol_handler is not None:
            # If there is erroneously another SOL handler already, notify
            # it of newly established session
            self.ipmi_session.sol_handler({'error': 'Session Disconnected'})
        self.keepaliveid = self.ipmi_session.register_keepalive(
            cmd={'netfn': 6, 'command': 0x4b, 'data': (1, 1)},
            callback=self._got_payload_instance_info)
        self.ipmi_session.sol_handler = self._got_sol_payload
        self.connected = True
        # self._sendpendingoutput() checks len(self._sendpendingoutput)
        self._sendpendingoutput()

    def _got_payload_instance_info(self, response):
        if 'error' in response:
            self.activated = False
            self._print_error(response['error'])
            return
        currowner = struct.unpack(
            "<I", struct.pack('4B', *response['data'][:4]))
        if currowner[0] != self.ipmi_session.sessionid:
            # the session is deactivated or active for something else
            self.activated = False
            self._print_error('SOL deactivated')
            return
        # ok, still here, that means session is alive, but another
        # common issue is firmware messing with mux on reboot
        # this would be a nice thing to check, but the serial channel
        # number is needed and there isn't an obvious means to reliably
        # discern which channel or even *if* the serial port in question
        # correlates at all to an ipmi channel to check mux

    @util.protect(PENDINGOUTPUT)
    def _addpendingdata(self, data):
        if isinstance(data, dict):
            self.pendingoutput.append(data)
        else:  # it is a text situation
            if (len(self.pendingoutput) == 0 or
                    isinstance(self.pendingoutput[-1], dict)):
                self.pendingoutput.append(data)
            else:
                self.pendingoutput[-1] += data

    def _got_cons_input(self, handle):
        """Callback for handle events detected by ipmi session
        """
        self._addpendingdata(handle.read())
        if not self.awaitingack:
            self._sendpendingoutput()

    def close(self):
        """Shut down an SOL session.
        """
        if self.ipmi_session:
            self.ipmi_session.unregister_keepalive(self.keepaliveid)
        if self.activated:
            try:
                self.ipmi_session.raw_command(netfn=6, command=0x49,
                                              data=(1, 1, 0, 0, 0, 0))
            except exc.IpmiException:
                # if underlying ipmi session is not working, then
                # run with the implicit success
                pass

    def send_data(self, data):
        if self.broken:
            return
        self._addpendingdata(data)
        if not self.connected:
            return
        if not self.awaitingack:
            self._sendpendingoutput()

    def send_break(self):
        self._addpendingdata({'break': 1})
        if not self.connected:
            return
        if not self.awaitingack:
            self._sendpendingoutput()

    @classmethod
    def wait_for_rsp(cls, timeout):
        """Delay for no longer than timeout for next response.

        This acts like a sleep that exits on activity.

        :param timeout: Maximum number of seconds before returning
        """
        return session.Session.wait_for_rsp(timeout=timeout)

    @util.protect(PENDINGOUTPUT)
    def _sendpendingoutput(self):
        if len(self.pendingoutput) == 0:
            return
        if isinstance(self.pendingoutput[0], dict):
            if 'break' in self.pendingoutput[0]:
                self._sendoutput("", sendbreak=True)
            else:
                raise ValueError
            del self.pendingoutput[0]
            return
        if len(self.pendingoutput[0]) > self.maxoutcount:
            chunk = self.pendingoutput[0][:self.maxoutcount]
            self.pendingoutput[0] = self.pendingoutput[0][self.maxoutcount:]
        else:
            chunk = self.pendingoutput[0]
            del self.pendingoutput[0]
        self._sendoutput(chunk)

    def _sendoutput(self, output, sendbreak=False):
        self.myseq += 1
        self.myseq &= 0xf
        if self.myseq == 0:
            self.myseq = 1
        # currently we don't try to combine ack with outgoing data
        # so we use 0 for ack sequence number and accepted character
        # count
        breakbyte = 0
        if sendbreak:
            breakbyte = 0b10000
        payload = struct.pack("BBBB", self.myseq, 0, 0, breakbyte)
        payload += output
        self.lasttextsize = len(output)
        needskeepalive = False
        if self.lasttextsize == 0:
            needskeepalive = True
        self.awaitingack = True
        payload = struct.unpack("%dB" % len(payload), payload)
        self.lastpayload = payload
        self.send_payload(payload, needskeepalive=needskeepalive)
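`_sendoutput` above advances a 4-bit SOL sequence number that deliberately skips 0, since sequence 0 is reserved for ack-only packets. That wrap behavior can be checked in isolation; `next_seq` is a hypothetical helper extracted here only to demonstrate the arithmetic:

```python
def next_seq(myseq):
    # 4-bit SOL sequence number; 0 marks an ack-only packet, so the
    # counter wraps 15 -> 1, mirroring the increment in _sendoutput
    myseq = (myseq + 1) & 0xf
    if myseq == 0:
        myseq = 1
    return myseq


seqs = []
s = 0
for _ in range(17):
    s = next_seq(s)
    seqs.append(s)
print(seqs[:3], seqs[14:17])  # → [1, 2, 3] [15, 1, 2]
```

Zero never appears in the generated sequence, so a receiver can use sequence 0 unambiguously to mean "no new data, ack only".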
|
||||
|
||||
def send_payload(self, payload, payload_type=1, retry=True,
|
||||
needskeepalive=False):
|
||||
while not (self.connected or self.broken):
|
||||
session.Session.wait_for_rsp(timeout=10)
|
||||
if not self.ipmi_session.logged:
|
||||
raise exc.IpmiException('Session no longer connected')
|
||||
self.ipmi_session.send_payload(payload,
|
||||
payload_type=payload_type,
|
||||
retry=retry,
|
||||
needskeepalive=needskeepalive)
|
||||
|
||||
def _print_info(self, info):
|
||||
self._print_data({'info': info})
|
||||
|
||||
def _print_error(self, error):
|
||||
self.broken = True
|
||||
if self.ipmi_session:
|
||||
self.ipmi_session.unregister_keepalive(self.keepaliveid)
|
||||
if (self.ipmi_session.sol_handler and
|
||||
self.ipmi_session.sol_handler.__self__ is self):
|
||||
self.ipmi_session.sol_handler = None
|
||||
self.ipmi_session = None
|
||||
if type(error) == dict:
|
||||
self._print_data(error)
|
||||
else:
|
||||
self._print_data({'error': error})
|
||||
|
||||
def _print_data(self, data):
|
||||
"""Convey received data back to caller in the format of their choice.
|
||||
|
||||
Caller may elect to provide this class filehandle(s) or else give a
|
||||
callback function that this class will use to convey data back to
|
||||
caller.
|
||||
"""
|
||||
self.out_handler(data)
|
||||
|
||||
def _got_sol_payload(self, payload):
|
||||
"""SOL payload callback
|
||||
"""
|
||||
# TODO(jbjohnso) test cases to throw some likely scenarios at functions
|
||||
# for example, retry with new data, retry with no new data
|
||||
# retry with unexpected sequence number
|
||||
if type(payload) == dict: # we received an error condition
|
||||
self.activated = False
|
||||
self._print_error(payload)
|
||||
return
|
||||
newseq = payload[0] & 0b1111
|
||||
ackseq = payload[1] & 0b1111
|
||||
ackcount = payload[2]
|
||||
nacked = payload[3] & 0b1000000
|
||||
poweredoff = payload[3] & 0b100000
|
||||
deactivated = payload[3] & 0b10000
|
||||
breakdetected = payload[3] & 0b100
|
||||
        # for now, ignore overrun.  I assume partial NACK for this reason or
        # for no reason would be treated the same, new payload with partial
        # data.
        remdata = ""
        remdatalen = 0
        if newseq != 0:  # this packet at least has some data to send to us..
            if len(payload) > 4:
                remdatalen = len(payload[4:])  # store remote len before dupe
                # retry logic, we must ack *this* many even if it is
                # a retry packet with new partial data
                remdata = struct.pack("%dB" % remdatalen, *payload[4:])
            if newseq == self.remseq:  # it is a retry, but could have new data
                if remdatalen > self.lastsize:
                    remdata = remdata[4 + self.lastsize:]
                else:  # no new data...
                    remdata = ""
            else:  # TODO(jbjohnso) what if remote sequence number is wrong??
                self.remseq = newseq
            self.lastsize = remdatalen
            if remdata:  # Do not subject callers to empty data
                self._print_data(remdata)
            ackpayload = (0, self.remseq, remdatalen, 0)
            # Why not put pending data into the ack? because it's rare
            # and might be hard to decide what to do in the context of
            # retry situation
            try:
                self.send_payload(ackpayload, retry=False)
            except exc.IpmiException:
                # if the session is broken, then close the SOL session
                self.close()
        if self.myseq != 0 and ackseq == self.myseq:  # the bmc has something
            # to say about last xmit
            self.awaitingack = False
            if nacked and not breakdetected:  # the BMC was in some way unhappy
                if poweredoff:
                    self._print_info("Remote system is powered down")
                if deactivated:
                    self.activated = False
                    self._print_error("Remote IPMI console disconnected")
                else:  # retry all or part of packet, but in a new form
                    # also add pending output for efficiency and ease
                    newtext = self.lastpayload[4 + ackcount:]
                    newtext = struct.pack("B" * len(newtext), *newtext)
                    with util.protect(PENDINGOUTPUT):
                        if (self.pendingoutput and
                                not isinstance(self.pendingoutput[0], dict)):
                            self.pendingoutput[0] = \
                                newtext + self.pendingoutput[0]
                        else:
                            self.pendingoutput = [newtext] + self.pendingoutput
            # self._sendpendingoutput() checks len(self.pendingoutput)
            self._sendpendingoutput()
        elif ackseq != 0 and self.awaitingack:
            # if an ack packet came in, but did not match what we
            # expected, retry our payload now.
            # the situation that was triggered was a senseless retry
            # when data came in while we xmitted.  In theory, a BMC
            # should handle a retry correctly, but some do not, so
            # try to mitigate by avoiding overeager retries
            # occasional retry of a packet
            # sooner than timeout suggests is evidently a big deal
            self.send_payload(payload=self.lastpayload)

    def main_loop(self):
        """Process all events until no more sessions exist.

        If a caller is a simple little utility, provide a function to
        eternally run the event loop.  More complicated usage would be
        expected to provide their own event loop behavior, though this could
        be used within the greenthread implementation of caller's choice if
        desired.
        """
        # wait_for_rsp promises to return a false value when no sessions are
        # alive anymore
        # TODO(jbjohnso): wait_for_rsp is not returning a true value for our
        # own session
        while (1):
            session.Session.wait_for_rsp(timeout=600)


class ServerConsole(Console):
    """IPMI SOL class.

    This object represents an SOL channel, multiplexing SOL data with
    commands issued by ipmi.command.

    :param session: IPMI session
    :param iohandler: I/O handler
    """

    def __init__(self, _session, iohandler, force=False):
        self.keepaliveid = None
        self.connected = True
        self.broken = False
        self.out_handler = iohandler
        self.remseq = 0
        self.myseq = 0
        self.lastsize = 0
        self.retriedpayload = 0
        self.pendingoutput = []
        self.awaitingack = False
        self.activated = True
        self.force_session = force
        self.ipmi_session = _session
        self.ipmi_session.sol_handler = self._got_sol_payload
        self.maxoutcount = 256
        self.poweredon = True

        session.Session.wait_for_rsp(0)

    def _got_sol_payload(self, payload):
        """SOL payload callback
        """
        # TODO(jbjohnso) test cases to throw some likely scenarios at functions
        # for example, retry with new data, retry with no new data
        # retry with unexpected sequence number
        if type(payload) == dict:  # we received an error condition
            self.activated = False
            self._print_error(payload)
            return
        newseq = payload[0] & 0b1111
        ackseq = payload[1] & 0b1111
        ackcount = payload[2]
        nacked = payload[3] & 0b1000000
        breakdetected = payload[3] & 0b10000
        # for now, ignore overrun.  I assume partial NACK for this reason or
        # for no reason would be treated the same, new payload with partial
        # data.
        remdata = ""
        remdatalen = 0
        flag = 0
        if not self.poweredon:
            flag |= 0b1100000
        if not self.activated:
            flag |= 0b1010000
        if newseq != 0:  # this packet at least has some data to send to us..
            if len(payload) > 4:
                remdatalen = len(payload[4:])  # store remote len before dupe
                # retry logic, we must ack *this* many even if it is
                # a retry packet with new partial data
                remdata = struct.pack("%dB" % remdatalen, *payload[4:])
            if newseq == self.remseq:  # it is a retry, but could have new data
                if remdatalen > self.lastsize:
                    remdata = remdata[4 + self.lastsize:]
                else:  # no new data...
                    remdata = ""
            else:  # TODO(jbjohnso) what if remote sequence number is wrong??
                self.remseq = newseq
            self.lastsize = remdatalen
            ackpayload = (0, self.remseq, remdatalen, flag)
            # Why not put pending data into the ack? because it's rare
            # and might be hard to decide what to do in the context of
            # retry situation
            try:
                self.send_payload(ackpayload, retry=False)
            except exc.IpmiException:
                # if the session is broken, then close the SOL session
                self.close()
            if remdata:  # Do not subject callers to empty data
                self._print_data(remdata)
        if self.myseq != 0 and ackseq == self.myseq:  # the bmc has something
            # to say about last xmit
            self.awaitingack = False
            if nacked and not breakdetected:  # the BMC was in some way unhappy
                newtext = self.lastpayload[4 + ackcount:]
                newtext = struct.pack("B" * len(newtext), *newtext)
                with util.protect(PENDINGOUTPUT):
                    if (self.pendingoutput and
                            not isinstance(self.pendingoutput[0], dict)):
                        self.pendingoutput[0] = newtext + self.pendingoutput[0]
                    else:
                        self.pendingoutput = [newtext] + self.pendingoutput
            # self._sendpendingoutput() checks len(self.pendingoutput)
            self._sendpendingoutput()
        elif ackseq != 0 and self.awaitingack:
            # if an ack packet came in, but did not match what we
            # expected, retry our payload now.
            # the situation that was triggered was a senseless retry
            # when data came in while we xmitted.  In theory, a BMC
            # should handle a retry correctly, but some do not, so
            # try to mitigate by avoiding overeager retries
            # occasional retry of a packet
            # sooner than timeout suggests is evidently a big deal
            self.send_payload(payload=self.lastpayload)

    def send_payload(self, payload, payload_type=1, retry=True,
                     needskeepalive=False):
        while not (self.connected or self.broken):
            session.Session.wait_for_rsp(timeout=10)
        self.ipmi_session.send_payload(payload,
                                       payload_type=payload_type,
                                       retry=retry,
                                       needskeepalive=needskeepalive)

    def close(self):
        """Shut down an SOL session.
        """
        self.activated = False
@@ -1,580 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2016 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# __author__ = 'jjohnson2@lenovo.com'

import pyghmi.constants as pygconst
import pyghmi.exceptions as pygexc
import pyghmi.ipmi.private.constants as ipmiconst
import struct
import time

try:
    range = xrange
except NameError:
    pass
try:
    buffer
except NameError:
    buffer = memoryview


psucfg_errors = {
    0: 'Vendor mismatch',
    1: 'Revision mismatch',
    2: 'Processor missing',  # e.g. pluggable CPU VRMs...
    3: 'Insufficient power',
    4: 'Voltage mismatch',
}

firmware_progress = {
    0: 'Unspecified',
    1: 'Memory initialization',
    2: 'Disk initialization',
    3: 'Non-primary Processor initialization',
    4: 'User authentication',
    5: 'In setup',
    6: 'USB initialization',
    7: 'PCI initialization',
    8: 'Option ROM initialization',
    9: 'Video initialization',
    0xa: 'Cache initialization',
    0xb: 'SMBus initialization',
    0xc: 'Keyboard initialization',
    0xd: 'Embedded controller initialization',
    0xe: 'Docking station attachment',
    0xf: 'Docking station enabled',
    0x10: 'Docking station ejection',
    0x11: 'Docking station disabled',
    0x12: 'Waking OS',
    0x13: 'Starting OS boot',
    0x14: 'Baseboard initialization',
    0x16: 'Floppy initialization',
    0x17: 'Keyboard test',
    0x18: 'Pointing device test',
    0x19: 'Primary processor initialization',
}

firmware_errors = {
    0: 'Unspecified',
    1: 'No memory installed',
    2: 'All memory failed',
    3: 'Unrecoverable disk failure',
    4: 'Unrecoverable board failure',
    5: 'Unrecoverable diskette failure',
    6: 'Unrecoverable storage controller failure',
    7: 'Unrecoverable keyboard failure',  # Keyboard error, press
                                          # any key to continue..
    8: 'Removable boot media not found',
    9: 'Video adapter failure',
    0xa: 'No video device',
    0xb: 'Firmware corruption detected',
    0xc: 'CPU voltage mismatch',
    0xd: 'CPU speed mismatch',
}

auxlog_actions = {
    0: 'entry added',
    1: 'entry added (could not map to standard)',
    2: 'entry added with corresponding standard events',
    3: 'log cleared',
    4: 'log disabled',
    5: 'log enabled',
}

restart_causes = {
    0: 'Unknown',
    1: 'Remote request',
    2: 'Reset button',
    3: 'Power button',
    4: 'Watchdog',
    5: 'OEM',
    6: 'Power restored',
    7: 'Power restored',
    8: 'Reset due to event',
    9: 'Cycle due to event',
    0xa: 'OS reset',
    0xb: 'Timer wake',
}

slot_types = {
    0: 'PCI',
    1: 'Drive Array',
    2: 'External connector',
    3: 'Docking',
    4: 'Other',
    5: 'Entity ID',
    6: 'AdvancedTCA',
    7: 'Memory',
    8: 'Fan',
    9: 'PCIe',
    10: 'SCSI',
    11: 'SATA/SAS',
}

power_states = {
    0: 'S0',
    1: 'S1',
    2: 'S2',
    3: 'S3',
    4: 'S4',
    5: 'S5',
    6: 'S4 or S5',
    7: 'G3',
    8: 'S1, S2, or S3',
    9: 'G1',
    0xa: 'S5',
    0xb: 'on',
    0xc: 'off',
}

watchdog_boot_phases = {
    1: 'Firmware',
    2: 'Firmware',
    3: 'OS Load',
    4: 'OS',
    5: 'OEM',
}

version_changes = {
    1: 'Device ID',
    2: 'Management controller firmware',
    3: 'Management controller revision',
    4: 'Management controller manufacturer',
    5: 'IPMI version',
    6: 'Management controller firmware',
    7: 'Management controller boot block',
    8: 'Management controller firmware',
    9: 'System Firmware (UEFI/BIOS)',
    0xa: 'SMBIOS',
    0xb: 'OS',
    0xc: 'OS Loader',
    0xd: 'Diagnostics',
    0xe: 'Management agent',
    0xf: 'Management application',
    0x10: 'Management middleware',
    0x11: 'FPGA',
    0x12: 'FRU',
    0x13: 'FRU',
    0x14: 'Equivalent FRU',
    0x15: 'Updated FRU',
    0x16: 'Older FRU',
    0x17: 'Hardware (switch/jumper)',
}

fru_states = {
    0: 'Normal',
    1: 'Externally requested',
    2: 'Latch',
    3: 'Hot swap',
    4: 'Internal action',
    5: 'Lost communication',
    6: 'Lost communication',
    7: 'Unexpected removal',
    8: 'Operator',
    9: 'Unable to compute IPMB address',
    0xa: 'Unexpected deactivation',
}


def decode_eventdata(sensor_type, offset, eventdata, sdr):
    """Decode extra event data from an alert or log

    Provide a textual summary of eventdata per descriptions in
    Table 42-3 of the specification.  This is for sensor specific
    offset events only.

    :param sensor_type: The sensor type number from the event
    :param offset: Sensor specific offset
    :param eventdata: The three bytes from the log or alert
    """
    if sensor_type == 5 and offset == 4:  # link loss, indicates which port
        return 'Port {0}'.format(eventdata[1])
    elif sensor_type == 8 and offset == 6:  # PSU cfg error
        errtype = eventdata[2] & 0b1111
        return psucfg_errors.get(errtype, 'Unknown')
    elif sensor_type == 0xc and offset == 8:  # Memory spare
        return 'Module {0}'.format(eventdata[2])
    elif sensor_type == 0xf:
        if offset == 0:  # firmware error
            return firmware_errors.get(eventdata[1], 'Unknown')
        elif offset in (1, 2):
            return firmware_progress.get(eventdata[1], 'Unknown')
    elif sensor_type == 0x10:
        if offset == 0:  # Correctable error logging on a specific memory part
            return 'Module {0}'.format(eventdata[1])
        elif offset == 1:
            return 'Reading type {0:02X}h, offset {1:02X}h'.format(
                eventdata[1], eventdata[2] & 0b1111)
        elif offset == 5:
            return '{0}%'.format(eventdata[2])
        elif offset == 6:
            return 'Processor {0}'.format(eventdata[1])
    elif sensor_type == 0x12:
        if offset == 3:
            action = (eventdata[1] & 0b1111000) >> 4
            return auxlog_actions.get(action, 'Unknown')
        elif offset == 4:
            sysactions = []
            if eventdata[1] & 0b1 << 5:
                sysactions.append('NMI')
            if eventdata[1] & 0b1 << 4:
                sysactions.append('OEM action')
            if eventdata[1] & 0b1 << 3:
                sysactions.append('Power Cycle')
            if eventdata[1] & 0b1 << 2:
                sysactions.append('Reset')
            if eventdata[1] & 0b1 << 1:
                sysactions.append('Power Down')
            if eventdata[1] & 0b1:
                sysactions.append('Alert')
            return ','.join(sysactions)
        elif offset == 5:  # Clock change event, either before or after
            if eventdata[1] & 0b10000000:
                return 'After'
            else:
                return 'Before'
    elif sensor_type == 0x19 and offset == 0:
        return 'Requested {0} while {1}'.format(eventdata[1], eventdata[2])
    elif sensor_type == 0x1d and offset == 7:
        return restart_causes.get(eventdata[1], 'Unknown')
    elif sensor_type == 0x21 and offset == 0x9:
        return '{0} {1}'.format(slot_types.get(eventdata[1], 'Unknown'),
                                eventdata[2])
    elif sensor_type == 0x23:
        phase = eventdata[1] & 0b1111
        return watchdog_boot_phases.get(phase, 'Unknown')
    elif sensor_type == 0x28:
        if offset == 4:
            return 'Sensor {0}'.format(eventdata[1])
        elif offset == 5:
            islogical = (eventdata[1] & 0b10000000)
            if islogical:
                if eventdata[2] in sdr.fru:
                    return sdr.fru[eventdata[2]].fru_name
                else:
                    return 'FRU {0}'.format(eventdata[2])
    elif sensor_type == 0x2a and offset == 3:
        return 'User {0}'.format(eventdata[1])
    elif sensor_type == 0x2b:
        return version_changes.get(eventdata[1], 'Unknown')
    elif sensor_type == 0x2c:
        cause = (eventdata[1] & 0b11110000) >> 4
        cause = fru_states.get(cause, 'Unknown')
        oldstate = eventdata[1] & 0b1111
        if oldstate != offset:
            try:
                cause += ' (change from {0})'.format(
                    ipmiconst.sensor_type_offsets[0x2c][oldstate]['desc'])
            except KeyError:
                pass
        return cause  # was computed but never returned


def _fix_sel_time(records, ipmicmd):
    timefetched = False
    rsp = None
    while not timefetched:
        try:
            rsp = ipmicmd.xraw_command(netfn=0xa, command=0x48)
            timefetched = True
        except pygexc.IpmiException as pi:
            if pi.ipmicode == 0x81:
                continue
            raise
    # The specification declares an epoch and all that, but we really don't
    # care.  We instead just focus on differences from the 'present'
    nowtime = struct.unpack_from('<I', rsp['data'])[0]
    correctednowtime = nowtime
    if nowtime < 0x20000000:
        correctearly = True
        inpreinit = True
    else:
        correctearly = False
        inpreinit = False
    newtimestamp = 0
    lasttimestamp = None
    trimindexes = []
    correctionenabled = True
    for index in reversed(range(len(records))):
        record = records[index]
        if 'timecode' not in record or record['timecode'] == 0xffffffff:
            continue
        if ('event' in record and record['event'] == 'Clock time change' and
                record['event_data'] == 'After'):
            if (lasttimestamp is not None and
                    record['timecode'] > lasttimestamp):
                # if the timestamp did something impossible, declare the rest
                # of history not meaningfully correctable
                correctionenabled = False
                newtimestamp = 0
                continue
            newtimestamp = record['timecode']
            trimindexes.append(index)
        elif ('event' in record and record['event'] == 'Clock time change' and
                record['event_data'] == 'Before'):
            if not correctionenabled:
                continue
            if newtimestamp:
                if record['timecode'] < 0x20000000:
                    correctearly = True
                    nowtime = correctednowtime
                # we want time that occurred before this point to get the
                # delta added to it to catch up
                correctednowtime += newtimestamp - record['timecode']
                newtimestamp = 0
                trimindexes.append(index)
        else:
            # clean up after potentially broken time sync pairs
            newtimestamp = 0
            if record['timecode'] < 0x20000000:  # uptime timestamp
                if not correctearly or not correctionenabled:
                    correctednowtime = nowtime
                    continue
                if (lasttimestamp is not None and
                        record['timecode'] > lasttimestamp):
                    # Time has gone backwards in pre-init, no hope for
                    # accurate time
                    correctearly = False
                    correctionenabled = False
                    correctednowtime = nowtime
                    continue
                inpreinit = True
                lasttimestamp = record['timecode']
                age = correctednowtime - record['timecode']
                record['timestamp'] = time.strftime(
                    '%Y-%m-%dT%H:%M:%S', time.localtime(time.time() - age))
            else:
                # We are in 'normal' time, assume we cannot go to
                # pre-init time and do corrections unless time sync events
                # guide us in safely
                if (lasttimestamp is not None and
                        record['timecode'] > lasttimestamp):
                    # Time has gone backwards, without a clock sync
                    # give up any attempt to correct from this point back...
                    correctionenabled = False
                if inpreinit:
                    inpreinit = False
                    # We were in pre-init, now in real time, reset the
                    # time correction factor to the last stored
                    # 'wall clock' correction
                    correctednowtime = nowtime
                correctearly = False
                lasttimestamp = record['timecode']
                if not correctionenabled or correctednowtime < 0x20000000:
                    # We can't correct time when the correction factor is
                    # rooted in a pre-init timestamp, just convert
                    record['timestamp'] = time.strftime(
                        '%Y-%m-%dT%H:%M:%S', time.localtime(
                            record['timecode']))
                else:
                    age = correctednowtime - record['timecode']
                    record['timestamp'] = time.strftime(
                        '%Y-%m-%dT%H:%M:%S', time.localtime(
                            time.time() - age))
    for index in trimindexes:
        del records[index]


class EventHandler(object):
    """IPMI Event Processor

    This class provides facilities for processing alerts and event log
    data.  This can be used to aid in pulling historical event data
    from a BMC or as part of a trap handler to translate the traps into
    manageable data.

    :param sdr: An SDR object (per pyghmi.ipmi.sdr) matching the target BMC SDR
    """
    def __init__(self, sdr, ipmicmd):
        self._sdr = sdr
        self._ipmicmd = ipmicmd

    def _populate_event(self, deassertion, event, event_data, event_type,
                        sensor_type, sensorid):
        event['component_id'] = sensorid
        try:
            event['component'] = self._sdr.sensors[sensorid].name
        except KeyError:
            if sensorid == 0:
                event['component'] = None
            else:
                event['component'] = 'Sensor {0}'.format(sensorid)
        event['deassertion'] = deassertion
        event['event_data_bytes'] = event_data
        byte2type = (event_data[0] & 0b11000000) >> 6
        byte3type = (event_data[0] & 0b110000) >> 4
        if byte2type == 1:
            event['triggered_value'] = event_data[1]
        evtoffset = event_data[0] & 0b1111
        event['event_type_byte'] = event_type
        if event_type <= 0xc:
            event['component_type_id'] = sensor_type
            event['event_id'] = '{0}.{1}'.format(event_type, evtoffset)
            # use generic offset decode for event description
            event['component_type'] = ipmiconst.sensor_type_codes.get(
                sensor_type, '')
            evreading = ipmiconst.generic_type_offsets.get(
                event_type, {}).get(evtoffset, {})
            if event['deassertion']:
                event['event'] = evreading.get('deassertion_desc', '')
                event['severity'] = evreading.get(
                    'deassertion_severity', pygconst.Health.Ok)
            else:
                event['event'] = evreading.get('desc', '')
                event['severity'] = evreading.get(
                    'severity', pygconst.Health.Ok)
        elif event_type == 0x6f:
            event['component_type_id'] = sensor_type
            event['event_id'] = '{0}.{1}'.format(event_type, evtoffset)
            event['component_type'] = ipmiconst.sensor_type_codes.get(
                sensor_type, '')
            evreading = ipmiconst.sensor_type_offsets.get(
                sensor_type, {}).get(evtoffset, {})
            if event['deassertion']:
                event['event'] = evreading.get('deassertion_desc', '')
                event['severity'] = evreading.get(
                    'deassertion_severity', pygconst.Health.Ok)
            else:
                event['event'] = evreading.get('desc', '')
                event['severity'] = evreading.get(
                    'severity', pygconst.Health.Ok)
        if event_type == 1:  # threshold
            if byte3type == 1:
                event['threshold_value'] = event_data[2]
        if 3 in (byte2type, byte3type) or event_type == 0x6f:
            # sensor specific decode, see sdr module...
            # 2 - 0xc: generic discrete, 0x6f, sensor specific
            additionaldata = decode_eventdata(
                sensor_type, evtoffset, event_data, self._sdr)
            if additionaldata:
                event['event_data'] = additionaldata

    def decode_pet(self, specifictrap, petdata):
        if isinstance(specifictrap, int):
            specifictrap = struct.unpack('4B', struct.pack('>I', specifictrap))
        if len(specifictrap) != 4:
            raise pygexc.InvalidParameterValue(
                'specifictrap should be integer number or 4 byte array')
        specifictrap = bytearray(specifictrap)
        sensor_type = specifictrap[1]
        event_type = specifictrap[2]
        # Event Offset is in first event data byte, so no need to fetch it here
        # evtoffset = specifictrap[3] & 0b1111
        deassertion = (specifictrap[3] & 0b10000000) == 0b10000000
        # alertseverity = petdata[26]
        sensorid = petdata[28]
        event_data = petdata[31:34]
        event = {}
        seqnum = struct.unpack_from('>H', buffer(petdata[16:18]))[0]
        ltimestamp = struct.unpack_from('>I', buffer(petdata[18:22]))[0]
        petack = bytearray(struct.pack('<HIBBBBBB', seqnum, ltimestamp,
                                       petdata[25], petdata[27], sensorid,
                                       *event_data))
        try:
            self._ipmicmd.xraw_command(netfn=4, command=0x17, data=petack)
        except pygexc.IpmiException:  # Ignore failure to ack for now
            pass
        self._populate_event(deassertion, event, event_data, event_type,
                             sensor_type, sensorid)
        event['timecode'] = ltimestamp
        _fix_sel_time((event,), self._ipmicmd)
        return event

    def _decode_standard_event(self, eventdata, event):
        # Ignore the generator id for now..
        if eventdata[2] not in (3, 4):
            raise pygexc.PyghmiException(
                'Unrecognized Event message version {0}'.format(eventdata[2]))
        sensor_type = eventdata[3]
        sensorid = eventdata[4]
        event_data = eventdata[6:]
        deassertion = (eventdata[5] & 0b10000000 == 0b10000000)
        event_type = eventdata[5] & 0b1111111
        self._populate_event(deassertion, event, event_data, event_type,
                             sensor_type, sensorid)

    def _sel_decode(self, origselentry):
        selentry = bytearray(origselentry)
        event = {}
        event['record_id'] = struct.unpack_from('<H', origselentry[:2])[0]
        if selentry[2] == 2 or (0xc0 <= selentry[2] <= 0xdf):
            # Either standard, or at least the timestamp is standard
            event['timecode'] = struct.unpack_from(
                '<I', buffer(selentry[3:7]))[0]
        if selentry[2] == 2:  # ipmi defined standard format
            self._decode_standard_event(selentry[7:], event)
        elif 0xc0 <= selentry[2] <= 0xdf:
            event['oemid'] = selentry[7:10]
            event['oemdata'] = selentry[10:]
        elif selentry[2] >= 0xe0:
            # In this class of OEM message, all bytes are OEM, interpretation
            # is wholly left up to the OEM layer, using the OEM ID of the BMC
            event['oemdata'] = selentry[3:]
        self._ipmicmd._oem.process_event(event, self._ipmicmd, selentry)
        if 'event_type_byte' in event:
            del event['event_type_byte']
        if 'event_data_bytes' in event:
            del event['event_data_bytes']
        return event

    def _fetch_entries(self, ipmicmd, startat, targetlist, rsvid=0):
        curr = startat
        endat = curr
        while curr != 0xffff:
            endat = curr
            reqdata = bytearray(struct.pack('<HHH', rsvid, curr, 0xff00))
            try:
                rsp = ipmicmd.xraw_command(
                    netfn=0xa, command=0x43, data=reqdata)
            except pygexc.IpmiException as pi:
                if pi.ipmicode == 203:
                    break
                else:
                    raise
            curr = struct.unpack_from('<H', buffer(rsp['data'][:2]))[0]
            targetlist.append(self._sel_decode(rsp['data'][2:]))
        return endat

    def fetch_sel(self, ipmicmd, clear=False):
        """Fetch SEL entries

        Return an iterable of SEL entries.  If clearing is requested,
        the fetch and clear will be done as an atomic operation, assuring
        no entries are dropped.

        :param ipmicmd: The Command object to use to interrogate
        :param clear: Whether to clear the entries upon retrieval.
        """
        records = []
        # First we do a fetch all without reservation, reducing the risk
        # of having a long lived reservation that gets canceled in the middle
        endat = self._fetch_entries(ipmicmd, 0, records)
        if clear and records:  # don't bother clearing if there were no records
            # To do clear, we make a reservation first...
            rsp = ipmicmd.xraw_command(netfn=0xa, command=0x42)
            rsvid = struct.unpack_from('<H', rsp['data'])[0]
            # Then we refetch the tail with reservation (check for change)
            del records[-1]  # remove the record that's about to be duplicated
            self._fetch_entries(ipmicmd, endat, records, rsvid)
            # finally clear the SEL
            # 0xAA means initiate erase, 0x524c43 packs to 'CLR' little endian
            clrdata = bytearray(struct.pack('<HI', rsvid, 0xAA524C43))
            ipmicmd.xraw_command(netfn=0xa, command=0x47, data=clrdata)
        # Now to fixup the record timestamps... first we need to get the BMC
        # opinion of current time
        _fix_sel_time(records, ipmicmd)
        return records
@@ -1,338 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# coding=utf8

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This module provides access to SDR offered by a BMC
# This data is common between 'sensors' and 'inventory' modules since SDR
# is both used to enumerate sensors for sensor commands and FRU ids for FRU
# commands

# For now, we will not offer persistent SDR caching as we do in xCAT's IPMI
# code.  Will see if it is adequate to advocate for high object reuse in a
# persistent process for the moment.

# Focus is at least initially on the aspects that make the most sense for a
# remote client to care about.  For example, smbus information is being
# skipped for now

# This file handles parsing of fru format records as presented by IPMI
# devices.  This format is documented in the 'Platform Management FRU
# Information Storage Definition' (Document Revision 1.2)

import pyghmi.exceptions as iexc
import pyghmi.ipmi.private.spd as spd
import struct
import time

fruepoch = 820454400  # 1/1/1996, 0:00

# This is from SMBIOS specification Table 16
enclosure_types = {
    1: 'Other',
    2: 'Unknown',
    3: 'Desktop',
    4: 'Low Profile Desktop',
    5: 'Pizza Box',
    6: 'Mini Tower',
    7: 'Tower',
    8: 'Portable',
    9: 'Laptop',
    0xa: 'Notebook',
    0xb: 'Hand Held',
    0xc: 'Docking Station',
    0xd: 'All in One',
    0xe: 'Sub Notebook',
    0xf: 'Space-saving',
    0x10: 'Lunch Box',
    0x11: 'Main Server Chassis',
    0x12: 'Expansion Chassis',
    0x13: 'SubChassis',
    0x14: 'Bus Expansion Chassis',
    0x15: 'Peripheral Chassis',
    0x16: 'RAID Chassis',
    0x17: 'Rack Mount Chassis',
    0x18: 'Sealed-case PC',
    0x19: 'Multi-system Chassis',
    0x1a: 'Compact PCI',
    0x1b: 'Advanced TCA',
    0x1c: 'Blade',
    0x1d: 'Blade Enclosure',
}


def unpack6bitascii(inputdata):
    # This is a text encoding scheme that seems unique
    # to IPMI FRU.  It seems to be relatively rare in practice
    result = ''
    while len(inputdata) > 0:
        currchunk = inputdata[:3]
        del inputdata[:3]
        currchar = currchunk[0] & 0b111111
        result += chr(0x20 + currchar)
        currchar = (currchunk[0] & 0b11000000) >> 6
        currchar |= (currchunk[1] & 0b1111) << 2
        result += chr(0x20 + currchar)
        currchar = (currchunk[1] & 0b11110000) >> 4
        currchar |= (currchunk[2] & 0b11) << 4
        result += chr(0x20 + currchar)
        currchar = (currchunk[2] & 0b11111100) >> 2
        result += chr(0x20 + currchar)
    return result


def decode_fru_date(datebytes):
    # Returns ISO format date from FRU minutes-since-1996 stamp
    datebytes.append(0)
    minutesfromepoch = struct.unpack('<I', struct.pack('4B', *datebytes))[0]
    # Some data observed in the field has values less than 800; at this
    # juncture such a stamp is far more likely to be incorrect noise than
    # anything meaningful
    if minutesfromepoch < 800:
        return None
    return time.strftime('%Y-%m-%dT%H:%M',
                         time.gmtime((minutesfromepoch * 60) + fruepoch))


class FRU(object):
    """An object representing structure

    FRU (Field Replaceable Unit) is the usual format for inventory in IPMI
    devices.  This covers most standards compliant inventory data
    as well as presenting less well defined fields in a structured way.

    :param rawdata: A binary string/bytearray of raw data from BMC or dump
    :param ipmicmd: An ipmi command object to fetch data live
    :param fruid: The identifier number of the FRU
    :param sdr: The sdr locator entry to help clarify how to parse data
    """

def __init__(self, rawdata=None, ipmicmd=None, fruid=0, sdr=None):
|
||||
self.rawfru = rawdata
|
||||
self.databytes = None
|
||||
self.info = None
|
||||
self.sdr = sdr
|
||||
if self.rawfru is not None:
|
||||
self.parsedata()
|
||||
elif ipmicmd is not None:
|
||||
self.ipmicmd = ipmicmd
|
||||
# Use the ipmicmd to fetch the data
|
||||
try:
|
||||
self.fetch_fru(fruid)
|
||||
except iexc.IpmiException as ie:
|
||||
if ie.ipmicode in (203, 129):
|
||||
return
|
||||
raise
|
||||
self.parsedata()
|
||||
else:
|
||||
raise TypeError('Either rawdata or ipmicmd must be specified')
|
||||
|
||||
    def fetch_fru(self, fruid):
        response = self.ipmicmd.raw_command(
            netfn=0xa, command=0x10, data=[fruid])
        if 'error' in response:
            raise iexc.IpmiException(response['error'], code=response['code'])
        frusize = response['data'][0] | (response['data'][1] << 8)
        # In our case, we don't need to think too hard about whether
        # the FRU is word or byte, we just process what we get back in the
        # payload
        chunksize = 240
        # Selected as it is accommodated by most tested things
        # and many tested things broke after going much
        # bigger
        if chunksize > frusize:
            chunksize = frusize
        offset = 0
        self.rawfru = bytearray([])
        while chunksize:
            response = self.ipmicmd.raw_command(
                netfn=0xa, command=0x11, data=[fruid, offset & 0xff,
                                               offset >> 8, chunksize])
            if response['code'] in (201, 202):
                # if it was too big, back off and try smaller
                # Try just over half to mitigate the chance of
                # one request becoming three rather than just two
                if chunksize == 3:
                    raise iexc.IpmiException(response['error'])
                chunksize //= 2
                chunksize += 2
                continue
            elif 'error' in response:
                raise iexc.IpmiException(response['error'], response['code'])
            self.rawfru.extend(response['data'][1:])
            offset += response['data'][0]
            if response['data'][0] == 0:
                break
            if offset + chunksize > frusize:
                chunksize = frusize - offset

    def parsedata(self):
        self.info = {}
        rawdata = self.rawfru
        self.databytes = bytearray(rawdata)
        if self.sdr is not None:
            frutype = self.sdr.fru_type_and_modifier >> 8
            frusubtype = self.sdr.fru_type_and_modifier & 0xff
            if frutype > 0x10 or frutype < 0x8 or frusubtype not in (0, 1, 2):
                return
                # TODO(jjohnson2): strict mode to detect pyghmi and BMC
                # gaps
                # raise iexc.PyghmiException(
                #     'Unsupported FRU device: {0:x}h, {1:x}h'.format(frutype,
                #                                                     frusubtype
                #     ))
            elif frusubtype == 1:
                self.myspd = spd.SPD(self.databytes)
                self.info = self.myspd.info
                return
        if self.databytes[0] != 1:
            return
            # TODO(jjohnson2): strict mode to flag potential BMC errors
            # raise iexc.BmcErrorException("Invalid/Unsupported FRU format")
        # Ignore the internal use even if present.
        self._parse_chassis()
        self._parse_board()
        self._parse_prod()
        # TODO(jjohnson2): Multi Record area

    def _decode_tlv(self, offset, lang=0):
        currtlv = self.databytes[offset]
        currlen = currtlv & 0b111111
        currtype = (currtlv & 0b11000000) >> 6
        retinfo = self.databytes[offset + 1:offset + currlen + 1]
        newoffset = offset + currlen + 1
        if currlen == 0:
            return None, newoffset
        if currtype == 0:
            # return it as a bytearray, not much to be done for it
            return retinfo, newoffset
        elif currtype == 3:  # text string
            # Sometimes BMCs have FRU data with 0xff termination
            # contrary to spec, but can be tolerated
            # also in case something null terminates, handle that too
            # strictly speaking, \xff should be a y with diaeresis, but
            # erring on the side of that not being very relevant in practice
            # to fru info, particularly the last values
            retinfo = retinfo.rstrip('\xff\x00 ')
            if lang in (0, 25):
                try:
                    retinfo = retinfo.decode('iso-8859-1')
                except (UnicodeError, LookupError):
                    pass
            else:
                try:
                    retinfo = retinfo.decode('utf-16le')
                except (UnicodeDecodeError, LookupError):
                    pass
            # Some things lie about being text. Do the best we can by
            # removing trailing spaces and nulls like makes sense for text
            # and rely on vendors to workaround deviations in their OEM
            # module
            retinfo = retinfo.rstrip('\x00 ')
            return retinfo, newoffset
        elif currtype == 1:  # BCD 'plus'
            retdata = ''
            for byte in retinfo:
                byte = hex(byte).replace('0x', '').replace('a', ' ').replace(
                    'b', '-').replace('c', '.')
                retdata += byte
            retdata = retdata.strip()
            return retdata, newoffset
        elif currtype == 2:  # 6-bit ascii
            retinfo = unpack6bitascii(retinfo).strip()
            return retinfo, newoffset

    def _parse_chassis(self):
        offset = 8 * self.databytes[2]
        if offset == 0:
            return
        if self.databytes[offset] & 0b1111 != 1:
            raise iexc.BmcErrorException("Invalid/Unsupported chassis area")
        inf = self.info
        # ignore length field, just process the data
        inf['Chassis type'] = enclosure_types[self.databytes[offset + 2]]
        inf['Chassis part number'], offset = self._decode_tlv(offset + 3)
        inf['Chassis serial number'], offset = self._decode_tlv(offset)
        inf['chassis_extra'] = []
        self.extract_extra(inf['chassis_extra'], offset)

    def extract_extra(self, target, offset, language=0):
        try:
            while self.databytes[offset] != 0xc1:
                fielddata, offset = self._decode_tlv(offset, language)
                target.append(fielddata)
        except IndexError:
            # If we overrun the end due to malformed FRU,
            # return at least what decoded right
            return

    def _parse_board(self):
        offset = 8 * self.databytes[3]
        if offset == 0:
            return
        if self.databytes[offset] & 0b1111 != 1:
            raise iexc.BmcErrorException("Invalid/Unsupported board info area")
        inf = self.info
        language = self.databytes[offset + 2]
        inf['Board manufacture date'] = decode_fru_date(
            self.databytes[offset + 3:offset + 6])
        inf['Board manufacturer'], offset = self._decode_tlv(offset + 6)
        inf['Board product name'], offset = self._decode_tlv(offset, language)
        inf['Board serial number'], offset = self._decode_tlv(offset, language)
        inf['Board model'], offset = self._decode_tlv(offset, language)
        _, offset = self._decode_tlv(offset, language)  # decode but discard
        inf['board_extra'] = []
        self.extract_extra(inf['board_extra'], offset, language)

    def _parse_prod(self):
        offset = 8 * self.databytes[4]
        if offset == 0:
            return
        inf = self.info
        language = self.databytes[offset + 2]
        inf['Manufacturer'], offset = self._decode_tlv(offset + 3,
                                                       language)
        inf['Product name'], offset = self._decode_tlv(offset, language)
        inf['Model'], offset = self._decode_tlv(offset, language)
        inf['Hardware Version'], offset = self._decode_tlv(offset, language)
        inf['Serial Number'], offset = self._decode_tlv(offset, language)
        inf['Asset Number'], offset = self._decode_tlv(offset, language)
        _, offset = self._decode_tlv(offset, language)
        inf['product_extra'] = []
        self.extract_extra(inf['product_extra'], offset, language)

    def __repr__(self):
        return repr(self.info)
        # retdata = 'Chassis data\n'
        # retdata += '  Type: ' + repr(self.chassis_type) + '\n'
        # retdata += '  Part Number: ' + repr(self.chassis_part_number) + '\n'
        # retdata += '  Serial Number: ' + repr(self.chassis_serial) + '\n'
        # retdata += '  Extra: ' + repr(self.chassis_extra) + '\n'
        # retdata += 'Board data\n'
        # retdata += '  Manufacturer: ' + repr(self.board_manufacturer) + '\n'
        # retdata += '  Date: ' + repr(self.board_mfg_date) + '\n'
        # retdata += '  Product' + repr(self.board_product) + '\n'
        # retdata += '  Serial: ' + repr(self.board_serial) + '\n'
        # retdata += '  Model: ' + repr(self.board_model) + '\n'
        # retdata += '  Extra: ' + repr(self.board_extra) + '\n'
        # retdata += 'Product data\n'
        # retdata += '  Manufacturer: ' + repr(self.product_manufacturer)+'\n'
        # retdata += '  Name: ' + repr(self.product_name) + '\n'
        # retdata += '  Model: ' + repr(self.product_model) + '\n'
        # retdata += '  Version: ' + repr(self.product_version) + '\n'
        # retdata += '  Serial: ' + repr(self.product_serial) + '\n'
        # retdata += '  Asset: ' + repr(self.product_asset) + '\n'
        # retdata += '  Extra: ' + repr(self.product_extra) + '\n'
        # return retdata
@ -1,245 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pyghmi.exceptions as exc


class OEMHandler(object):
    """Handler class for OEM capabilities.

    Any vendor wishing to implement OEM extensions should look at this
    base class for an appropriate interface. If one does not exist, this
    base class should be extended. At initialization an OEM is given
    a dictionary with product_id, device_id, manufacturer_id, and
    device_revision as keys, along with an ipmi Command object
    """
    def __init__(self, oemid, ipmicmd):
        pass

    def get_video_launchdata(self):
        return {}

    def process_event(self, event, ipmicmd, seldata):
        """Modify an event in accordance with OEM understanding.

        Given an event, allow an OEM module to augment it. For example,
        event data fields can have OEM bytes. Other times an OEM may wish
        to apply some transform to some field to suit their conventions.
        """
        event['oem_handler'] = None
        evdata = event['event_data_bytes']
        if evdata[0] & 0b11000000 == 0b10000000:
            event['oem_byte2'] = evdata[1]
        if evdata[0] & 0b110000 == 0b100000:
            event['oem_byte3'] = evdata[2]

    def get_oem_inventory_descriptions(self):
        """Get descriptions of available additional inventory items

        OEM implementation may provide additional records not indicated
        by FRU locator SDR records. An implementation is expected to
        implement this function to list component names that would map to
        OEM behavior beyond the specification. It should return an iterable
        of names
        """
        return ()

    def get_sensor_reading(self, sensorname):
        """Get an OEM sensor

        If software wants to model some OEM behavior as a 'sensor' without
        doing SDR, this hook provides that ability. It should mimic
        the behavior of 'get_sensor_reading' in command.py.
        """
        raise Exception('Sensor not found: ' + sensorname)

    def get_sensor_descriptions(self):
        """Get list of OEM sensor names and types

        Iterate over dicts describing a label and type for OEM 'sensors'. This
        should mimic the behavior of the get_sensor_descriptions function
        in command.py.
        """
        return ()

    def get_sensor_data(self):
        """Get OEM sensor data

        Iterate through all OEM 'sensors' and return data as if they were
        normal sensors. This should mimic the behavior of the get_sensor_data
        function in command.py.
        """
        return ()

    def get_oem_inventory(self):
        """Get tuples of component names and inventory data.

        This returns an iterable of tuples. The first member of each tuple
        is a string description of the inventory item. The second member
        is a dict of inventory information about the component.
        """
        for desc in self.get_oem_inventory_descriptions():
            yield (desc, self.get_inventory_of_component(desc))

    def get_inventory_of_component(self, component):
        """Get inventory detail of an OEM defined component

        Given a string that may be an OEM component, return the detail of that
        component. If the component does not exist, returns None
        """
        return None

    def get_leds(self):
        """Get tuples of LED categories.

        Each category contains a category name and a dictionary of LED names
        with their status as values.
        """
        return ()

    def get_ntp_enabled(self):
        """Get whether ntp is enabled or not

        :returns: True if enabled, False if disabled, None if unsupported
        """
        return None

    def set_ntp_enabled(self, enabled):
        """Set whether NTP should be enabled

        :returns: True on success
        """
        return None

    def get_ntp_servers(self):
        """Get current set of configured NTP servers

        :returns: iterable of configured NTP servers
        """
        return ()

    def set_ntp_server(self, server, index=0):
        """Set an ntp server

        :param server: Destination address of server to reach
        :param index: Index of server to configure, primary assumed if not
                      specified
        :returns: True if success
        """
        return None

    def process_fru(self, fru):
        """Modify a fru entry with OEM understanding.

        Given a fru, clarify 'extra' fields according to OEM rules and
        return the transformed data structure. If OEM processes, it is
        expected that it sets 'oem_parser' to the name of the module. For
        clients passing through data, it is suggested to pass through
        board/product/chassis_extra_data arrays if 'oem_parser' is None,
        and mask those fields if not None. It is expected that OEMs leave
        the fields intact so that if client code hard codes around the
        ordered lists that their expectations are not broken by an update.
        """
        # In the generic case, just pass through
        if fru is None:
            return fru
        fru['oem_parser'] = None
        return fru

    def get_oem_firmware(self, bmcver):
        """Get Firmware information.
        """
        # Here the bmc version is passed into the OEM handler, to allow
        # the handler to enrich the data. For the generic case, just
        # provide the generic BMC version, which is all that is possible
        yield ('BMC Version', {'version': bmcver})

    def get_oem_capping_enabled(self):
        """Get PSU based power capping status

        :return: True if enabled and False if disabled
        """
        return ()

    def set_oem_capping_enabled(self, enable):
        """Set PSU based power capping

        :param enable: True for enable and False for disable
        """
        return ()

    def get_oem_remote_kvm_available(self):
        """Get remote KVM availability
        """
        return False

    def get_oem_domain_name(self):
        """Get Domain name
        """
        return ()

    def set_oem_domain_name(self, name):
        """Set Domain name

        :param name: domain name to be set
        """
        return ()

    def update_firmware(self, filename, data=None, progress=None):
        raise exc.UnsupportedFunctionality(
            'Firmware update not supported on this platform')

    def get_graphical_console(self):
        """Get graphical console launcher"""
        return ()

    def add_extra_net_configuration(self, netdata):
        """Add additional network configuration data

        Given a standard netdata struct, add details as relevant from
        OEM commands, modifying the passed dictionary

        :param netdata: Dictionary to store additional network data
        """
        return

    def detach_remote_media(self):
        raise exc.UnsupportedFunctionality()

    def attach_remote_media(self, imagename, username, password):
        raise exc.UnsupportedFunctionality()

    def set_identify(self, on, duration):
        """Provide an OEM override for set_identify

        Some systems may require an override for set identify.
        """
        raise exc.UnsupportedFunctionality()

    def set_alert_ipv6_destination(self, ip, destination, channel):
        """Set an IPv6 alert destination

        If and only if an implementation does not support standard
        IPv6 but has an OEM implementation, override this to process
        the data.

        :param ip: IPv6 address to set
        :param destination: Destination number
        :param channel: Channel number to apply

        :returns: True if standard parameter set should be suppressed
        """
        return False
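The base class above is intended for vendor subclassing; a minimal sketch of how a handler might override only the hooks it supports (the vendor data below is invented, and a stand-in base class is included so the example runs without pyghmi installed):

```python
# Stand-in for pyghmi.ipmi.oem.generic.OEMHandler, reduced to the two
# hooks exercised here; in real code you would subclass the actual class.
class OEMHandler(object):
    def __init__(self, oemid, ipmicmd):
        pass

    def get_leds(self):
        return ()

    def get_oem_inventory_descriptions(self):
        return ()


class ExampleOEMHandler(OEMHandler):
    # Hypothetical vendor handler overriding just two hooks; all names
    # and values are illustrative, not from any real platform
    def __init__(self, oemid, ipmicmd):
        self.oemid = oemid
        self.ipmicmd = ipmicmd

    def get_leds(self):
        # One category name paired with a dictionary of LED name -> status
        yield ('System', {'UID': 'Off', 'Fault': 'Off'})

    def get_oem_inventory_descriptions(self):
        return ('Example Adapter 1',)
```

Unoverridden hooks keep their safe generic defaults (empty iterables, None, or UnsupportedFunctionality), so a partial implementation degrades gracefully.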
@ -1,48 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyghmi.ipmi.oem.lenovo.inventory import EntryField, \
    parse_inventory_category_entry


cpu_fields = (
    EntryField("index", "B"),
    EntryField("Cores", "B"),
    EntryField("Threads", "B"),
    EntryField("Manufacturer", "13s"),
    EntryField("Family", "30s"),
    EntryField("Model", "30s"),
    EntryField("Stepping", "3s"),
    EntryField("Maximum Frequency", "<I",
               valuefunc=lambda v: str(v) + " MHz"),
    EntryField("Reserved", "h", include=False))


def parse_cpu_info(raw):
    return parse_inventory_category_entry(raw, cpu_fields)


def get_categories():
    return {
        "cpu": {
            "idstr": "CPU {0}",
            "parser": parse_cpu_info,
            "command": {
                "netfn": 0x06,
                "command": 0x59,
                "data": (0x00, 0xc1, 0x01, 0x00)
            }
        }
    }
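The EntryField format strings ('B', '13s', '<I', ...) follow Python struct conventions, which parse_inventory_category_entry (imported above, not shown in this diff) presumably consumes in sequence. A rough standalone illustration of that style of fixed-layout parsing, with a made-up field list:

```python
import struct

# Hypothetical fixed layout in the EntryField style: one-byte index,
# one-byte core count, then a 13-byte fixed-width string
fields = (('index', 'B'), ('Cores', 'B'), ('Manufacturer', '13s'))


def parse_entry(raw, fields):
    # Walk the raw bytes field by field, little-endian, trimming NUL
    # padding from fixed-width strings
    offset = 0
    out = {}
    for name, fmt in fields:
        size = struct.calcsize(fmt)
        (value,) = struct.unpack_from('<' + fmt, raw, offset)
        if isinstance(value, bytes):
            value = value.rstrip(b'\x00').decode('ascii', 'replace')
        out[name] = value
        offset += size
    return out
```

The real parser additionally applies the `valuefunc`/`mapper`/`include` options seen in the field tuples above; this sketch only shows the sequential struct unpacking.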
@ -1,54 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyghmi.ipmi.oem.lenovo.inventory import EntryField, \
    parse_inventory_category_entry


dimm_fields = (
    EntryField("index", "B"),
    EntryField("manufacture_location", "B"),
    EntryField("channel_number", "B"),
    EntryField("module_type", "10s"),
    EntryField("ddr_voltage", "10s"),
    EntryField("speed", "<h",
               valuefunc=lambda v: str(v) + " MHz"),
    EntryField("capacity_mb", "<h",
               valuefunc=lambda v: v * 1024),
    EntryField("manufacturer", "30s"),
    EntryField("serial", ">I",
               valuefunc=lambda v: hex(v)[2:]),
    EntryField("model", "21s"),
    EntryField("reserved", "h", include=False)
)


def parse_dimm_info(raw):
    return parse_inventory_category_entry(raw, dimm_fields)


def get_categories():
    return {
        "dimm": {
            "idstr": "DIMM {0}",
            "parser": parse_dimm_info,
            "command": {
                "netfn": 0x06,
                "command": 0x59,
                "data": (0x00, 0xc1, 0x02, 0x00)
            },
            "workaround_bmc_bug": True
        }
    }
@ -1,73 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyghmi.ipmi.oem.lenovo.inventory import EntryField, \
    parse_inventory_category_entry


drive_fields = (
    EntryField("index", "B"),
    EntryField("VendorID", "64s"),
    EntryField("Size", "I",
               valuefunc=lambda v: str(v) + " MB"),
    EntryField("MediaType", "B", mapper={
        0x00: "HDD",
        0x01: "SSD"
    }),
    EntryField("InterfaceType", "B", mapper={
        0x00: "Unknown",
        0x01: "ParallelSCSI",
        0x02: "SAS",
        0x03: "SATA",
        0x04: "FC"
    }),
    EntryField("FormFactor", "B", mapper={
        0x00: "Unknown",
        0x01: "2.5in",
        0x02: "3.5in"
    }),
    EntryField("LinkSpeed", "B", mapper={
        0x00: "Unknown",
        0x01: "1.5 Gb/s",
        0x02: "3.0 Gb/s",
        0x03: "6.0 Gb/s",
        0x04: "12.0 Gb/s"
    }),
    EntryField("SlotNumber", "B"),
    EntryField("DeviceState", "B", mapper={
        0x00: "active",
        0x01: "stopped",
        0xff: "transitioning"
    }),
    # There seems to be an undocumented byte at the end
    EntryField("Reserved", "B", include=False))


def parse_drive_info(raw):
    return parse_inventory_category_entry(raw, drive_fields)


def get_categories():
    return {
        "drive": {
            "idstr": "Drive {0}",
            "parser": parse_drive_info,
            "command": {
                "netfn": 0x06,
                "command": 0x59,
                "data": (0x00, 0xc1, 0x04, 0x00)
            }
        }
    }
@ -1,57 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyghmi.ipmi.oem.lenovo.inventory import EntryField, \
    parse_inventory_category_entry


firmware_fields = (
    EntryField("Revision", "B"),
    EntryField("Bios", "16s"),
    EntryField("Operational ME", "10s"),
    EntryField("Recovery ME", "10s"),
    EntryField("RAID 1", "16s"),
    EntryField("RAID 2", "16s"),
    EntryField("Mezz 1", "16s"),
    EntryField("Mezz 2", "16s"),
    EntryField("BMC", "16s"),
    EntryField("LEPT", "16s"),
    EntryField("PSU 1", "16s"),
    EntryField("PSU 2", "16s"),
    EntryField("CPLD", "16s"),
    EntryField("LIND", "16s"),
    EntryField("WIND", "16s"),
    EntryField("DIAG", "16s"))


def parse_firmware_info(raw):
    bytes_read, data = parse_inventory_category_entry(raw, firmware_fields)
    del data['Revision']
    for key in data:
        yield (key, {'version': data[key]})


def get_categories():
    return {
        "firmware": {
            "idstr": "FW Version",
            "parser": parse_firmware_info,
            "command": {
                "netfn": 0x06,
                "command": 0x59,
                "data": (0x00, 0xc7, 0x00, 0x00)
            }
        }
    }
@ -1,844 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015-2016 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import base64
import binascii
import traceback
import urllib

import pyghmi.constants as pygconst
import pyghmi.exceptions as pygexc
import pyghmi.ipmi.oem.generic as generic
import pyghmi.ipmi.private.constants as ipmiconst
import pyghmi.ipmi.private.util as util

from pyghmi.ipmi.oem.lenovo import cpu
from pyghmi.ipmi.oem.lenovo import dimm
from pyghmi.ipmi.oem.lenovo import drive

from pyghmi.ipmi.oem.lenovo import firmware
from pyghmi.ipmi.oem.lenovo import imm
from pyghmi.ipmi.oem.lenovo import inventory
from pyghmi.ipmi.oem.lenovo import nextscale
from pyghmi.ipmi.oem.lenovo import pci
from pyghmi.ipmi.oem.lenovo import psu
from pyghmi.ipmi.oem.lenovo import raid_controller
from pyghmi.ipmi.oem.lenovo import raid_drive


import pyghmi.util.webclient as wc

import socket
import struct
import weakref

try:
    range = xrange
except NameError:
    pass
try:
    buffer
except NameError:
    buffer = memoryview

inventory.register_inventory_category(cpu)
inventory.register_inventory_category(dimm)
inventory.register_inventory_category(pci)
inventory.register_inventory_category(drive)
inventory.register_inventory_category(psu)
inventory.register_inventory_category(raid_drive)
inventory.register_inventory_category(raid_controller)


firmware_types = {
    1: 'Management Controller',
    2: 'UEFI/BIOS',
    3: 'CPLD',
    4: 'Power Supply',
    5: 'Storage Adapter',
    6: 'Add-in Adapter',
}

firmware_event = {
    0: ('Update failed', pygconst.Health.Failed),
    1: ('Update succeeded', pygconst.Health.Ok),
    2: ('Update aborted', pygconst.Health.Ok),
    3: ('Unknown', pygconst.Health.Warning),
}

me_status = {
    0: ('Recovery GPIO forced', pygconst.Health.Warning),
    1: ('ME Image corrupt', pygconst.Health.Critical),
    2: ('Flash erase error', pygconst.Health.Critical),
    3: ('Unspecified flash state', pygconst.Health.Warning),
    4: ('ME watchdog timeout', pygconst.Health.Critical),
    5: ('ME platform reboot', pygconst.Health.Critical),
    6: ('ME update', pygconst.Health.Ok),
    7: ('Manufacturing error', pygconst.Health.Critical),
    8: ('ME Flash storage integrity error', pygconst.Health.Critical),
    9: ('ME firmware exception', pygconst.Health.Critical),  # event data 3..
    0xa: ('ME firmware worn', pygconst.Health.Warning),
    0xc: ('Invalid SCMP state', pygconst.Health.Warning),
    0xd: ('PECI over DMI failure', pygconst.Health.Warning),
    0xe: ('MCTP interface failure', pygconst.Health.Warning),
    0xf: ('Auto configuration completed', pygconst.Health.Ok),
}

me_flash_status = {
    0: ('ME flash corrupted', pygconst.Health.Critical),
    1: ('ME flash erase limit reached', pygconst.Health.Critical),
    2: ('ME flash write limit reached', pygconst.Health.Critical),
    3: ('ME flash write enabled', pygconst.Health.Ok),
}

leds = {
    "BMC_UID": 0x00,
    "BMC_HEARTBEAT": 0x01,
    "SYSTEM_FAULT": 0x02,
    "PSU1_FAULT": 0x03,
    "PSU2_FAULT": 0x04,
    "LED_FAN_FAULT_1": 0x10,
    "LED_FAN_FAULT_2": 0x11,
    "LED_FAN_FAULT_3": 0x12,
    "LED_FAN_FAULT_4": 0x13,
    "LED_FAN_FAULT_5": 0x14,
    "LED_FAN_FAULT_6": 0x15,
    "LED_FAN_FAULT_7": 0x16,
    "LED_FAN_FAULT_8": 0x17
}

led_status = {
    0x00: "Off",
    0xFF: "On"
}
led_status_default = "Blink"
mac_format = '{0:02x}:{1:02x}:{2:02x}:{3:02x}:{4:02x}:{5:02x}'


def _megarac_abbrev_image(name):
    # MegaRAC platform in some places needs an abbreviated filename
    # Their scheme in such a scenario is a max of 20. Truncation is
    # achieved by taking the first sixteen, then skipping ahead to the last
    # 4 (presumably to try to keep '.iso' or '.img' in the name).
    if len(name) <= 20:
        return name
    return name[:16] + name[-4:]


class OEMHandler(generic.OEMHandler):
    # noinspection PyUnusedLocal
    def __init__(self, oemid, ipmicmd):
        # will need to retain data to differentiate
        # variations. For example System X versus Thinkserver
        self.oemid = oemid
        self._fpc_variant = None
        self.ipmicmd = weakref.proxy(ipmicmd)
        self._has_megarac = None
        self.oem_inventory_info = None
        self._mrethidx = None
        self._hasimm = None
        self._hasxcc = None
        if self.has_xcc:
            self.immhandler = imm.XCCClient(ipmicmd)
        elif self.has_imm:
            self.immhandler = imm.IMMClient(ipmicmd)

    @property
    def _megarac_eth_index(self):
        if self._mrethidx is None:
            chan = self.ipmicmd.get_network_channel()
            rsp = self.ipmicmd.xraw_command(0x32, command=0x62, data=(chan,))
            self._mrethidx = rsp['data'][0]
        return self._mrethidx

    def get_video_launchdata(self):
        if self.has_tsm:
            return self.get_tsm_launchdata()

    def get_tsm_launchdata(self):
        pass

    def process_event(self, event, ipmicmd, seldata):
        if 'oemdata' in event:
            oemtype = seldata[2]
            oemdata = event['oemdata']
            if oemtype == 0xd0:  # firmware update
                event['component'] = firmware_types.get(oemdata[0], None)
                event['component_type'] = ipmiconst.sensor_type_codes[0x2b]
                slotnumber = (oemdata[1] & 0b11111000) >> 3
                if slotnumber:
                    event['component'] += ' {0}'.format(slotnumber)
                event['event'], event['severity'] = \
                    firmware_event[oemdata[1] & 0b111]
                event['event_data'] = '{0}.{1}'.format(oemdata[2], oemdata[3])
            elif oemtype == 0xd1:  # BIOS recovery
                event['severity'] = pygconst.Health.Warning
                event['component'] = 'BIOS/UEFI'
                event['component_type'] = ipmiconst.sensor_type_codes[0xf]
                status = oemdata[0]
                method = (status & 0b11110000) >> 4
                status = (status & 0b1111)
                if method == 1:
                    event['event'] = 'Automatic recovery'
                elif method == 2:
                    event['event'] = 'Manual recovery'
                if status == 0:
                    event['event'] += '- Failed'
                    event['severity'] = pygconst.Health.Failed
                if oemdata[1] == 0x1:
                    event['event'] += '- BIOS recovery image not found'
                event['event_data'] = '{0}.{1}'.format(oemdata[2], oemdata[3])
            elif oemtype == 0xd2:  # eMMC status
                if oemdata[0] == 1:
                    event['component'] = 'eMMC'
                    event['component_type'] = ipmiconst.sensor_type_codes[0xc]
                    if oemdata[0] == 1:
                        event['event'] = 'eMMC Format error'
                        event['severity'] = pygconst.Health.Failed
            elif oemtype == 0xd3:
                if oemdata[0] == 1:
                    event['event'] = 'User privilege modification'
                    event['severity'] = pygconst.Health.Ok
                    event['component'] = 'User Privilege'
                    event['component_type'] = ipmiconst.sensor_type_codes[6]
                    event['event_data'] = \
                        'User {0} on channel {1} had privilege changed ' \
                        'from {2} to {3}'.format(
                            oemdata[2], oemdata[1], oemdata[3] & 0b1111,
                            (oemdata[3] & 0b11110000) >> 4
                        )
            else:
                event['event'] = 'OEM event: {0}'.format(
|
||||
' '.join(format(x, '02x') for x in event['oemdata']))
|
||||
del event['oemdata']
|
||||
return
|
||||
evdata = event['event_data_bytes']
|
||||
if event['event_type_byte'] == 0x75: # ME event
|
||||
event['component'] = 'ME Firmware'
|
||||
event['component_type'] = ipmiconst.sensor_type_codes[0xf]
|
||||
event['event'], event['severity'] = me_status.get(
|
||||
evdata[1], ('Unknown', pygconst.Health.Warning))
|
||||
if evdata[1] == 3:
|
||||
event['event'], event['severity'] = me_flash_status.get(
|
||||
evdata[2], ('Unknown state', pygconst.Health.Warning))
|
||||
elif evdata[1] == 9:
|
||||
event['event'] += ' (0x{0:2x})'.format(evdata[2])
|
||||
elif evdata[1] == 0xf and evdata[2] & 0b10000000:
|
||||
event['event'] = 'Auto configuration failed'
|
||||
event['severity'] = pygconst.Health.Critical
|
||||
# For HDD bay events, the event data 2 is the bay, modify
|
||||
# the description to be more specific
|
||||
if (event['event_type_byte'] == 0x6f and
|
||||
(evdata[0] & 0b11000000) == 0b10000000 and
|
||||
event['component_type_id'] == 13):
|
||||
event['component'] += ' {0}'.format(evdata[1] & 0b11111)
|
||||
|
||||
def get_ntp_enabled(self):
|
||||
if self.has_tsm:
|
||||
ntpres = self.ipmicmd.xraw_command(netfn=0x32, command=0xa7)
|
||||
return ntpres['data'][0] == '\x01'
|
||||
return None
|
||||
|
||||
def get_ntp_servers(self):
|
||||
if self.has_tsm:
|
||||
srvs = []
|
||||
ntpres = self.ipmicmd.xraw_command(netfn=0x32, command=0xa7)
|
||||
srvs.append(ntpres['data'][1:129].rstrip('\x00'))
|
||||
srvs.append(ntpres['data'][129:257].rstrip('\x00'))
|
||||
return srvs
|
||||
return None
|
||||
|
||||
def set_ntp_enabled(self, enabled):
|
||||
if self.has_tsm:
|
||||
if enabled:
|
||||
self.ipmicmd.xraw_command(
|
||||
netfn=0x32, command=0xa8, data=(3, 1), timeout=15)
|
||||
else:
|
||||
self.ipmicmd.xraw_command(
|
||||
netfn=0x32, command=0xa8, data=(3, 0), timeout=15)
|
||||
return True
|
||||
return None
|
||||
|
||||
def set_ntp_server(self, server, index=0):
|
||||
if self.has_tsm:
|
||||
if not (0 <= index <= 1):
|
||||
raise pygexc.InvalidParameterValue("Index must be 0 or 1")
|
||||
cmddata = bytearray((1 + index, ))
|
||||
cmddata += server.ljust(128, '\x00')
|
||||
self.ipmicmd.xraw_command(netfn=0x32, command=0xa8, data=cmddata)
|
||||
return True
|
||||
return None
|
||||
|
||||
    @property
    def is_fpc(self):
        """True if the target is a Lenovo NeXtScale fan power controller
        """
        fpc_id = (19046, 32, 1063)
        smm_id = (19046, 32, 1180)
        currid = (self.oemid['manufacturer_id'], self.oemid['device_id'],
                  self.oemid['product_id'])
        if currid == fpc_id:
            self._fpc_variant = 6
        elif currid == smm_id:
            self._fpc_variant = 2
        return self._fpc_variant

    @property
    def is_sd350(self):
        return (19046, 32, 13616) == (self.oemid['manufacturer_id'],
                                      self.oemid['device_id'],
                                      self.oemid['product_id'])

    @property
    def has_tsm(self):
        """True if this particular server has a TSM based service processor
        """
        if (self.oemid['manufacturer_id'] == 19046 and
                self.oemid['device_id'] == 32):
            try:
                self.ipmicmd.xraw_command(netfn=0x3a, command=0xf)
            except pygexc.IpmiException as ie:
                if ie.ipmicode == 193:
                    return False
                raise
            return True
        return False

    def get_oem_inventory_descriptions(self):
        if self.has_tsm:
            # Thinkserver with TSM
            if not self.oem_inventory_info:
                self._collect_tsm_inventory()
            return iter(self.oem_inventory_info)
        elif self.has_imm:
            return self.immhandler.get_hw_descriptions()
        return ()

    def get_oem_inventory(self):
        if self.has_tsm:
            self._collect_tsm_inventory()
            for compname in self.oem_inventory_info:
                yield (compname, self.oem_inventory_info[compname])
        elif self.has_imm:
            for inv in self.immhandler.get_hw_inventory():
                yield inv

    def get_sensor_data(self):
        if self.is_fpc:
            for name in nextscale.get_sensor_names(self._fpc_variant):
                yield nextscale.get_sensor_reading(name, self.ipmicmd,
                                                   self._fpc_variant)

    def get_sensor_descriptions(self):
        if self.is_fpc:
            return nextscale.get_sensor_descriptions(self._fpc_variant)
        return ()

    def get_sensor_reading(self, sensorname):
        if self.is_fpc:
            return nextscale.get_sensor_reading(sensorname, self.ipmicmd,
                                                self._fpc_variant)
        return ()

    def get_inventory_of_component(self, component):
        if self.has_tsm:
            self._collect_tsm_inventory()
            return self.oem_inventory_info.get(component, None)
        if self.has_imm:
            return self.immhandler.get_component_inventory(component)

    def _collect_tsm_inventory(self):
        self.oem_inventory_info = {}
        for catid, catspec in inventory.categories.items():
            if catspec.get("workaround_bmc_bug", False):
                rsp = None
                tmp_command = dict(catspec["command"])
                tmp_command["data"] = list(tmp_command["data"])
                count = 0
                for i in range(0x01, 0xff):
                    tmp_command["data"][-1] = i
                    try:
                        partrsp = self.ipmicmd.xraw_command(**tmp_command)
                        if rsp is None:
                            rsp = partrsp
                            rsp["data"] = list(rsp["data"])
                        else:
                            rsp["data"].extend(partrsp["data"][1:])
                        count += 1
                    except Exception:
                        break
                # If we didn't get any response, assume we don't have
                # this category and go on to the next one
                if rsp is None:
                    continue
                rsp["data"].insert(1, count)
                rsp["data"] = buffer(bytearray(rsp["data"]))
            else:
                try:
                    rsp = self.ipmicmd.xraw_command(**catspec["command"])
                except pygexc.IpmiException:
                    continue
            # Parse the response we got
            try:
                items = inventory.parse_inventory_category(
                    catid, rsp,
                    countable=catspec.get("countable", True)
                )
            except Exception:
                # If we can't parse an inventory category, ignore it
                print(traceback.print_exc())
                continue

            for item in items:
                try:
                    key = catspec["idstr"].format(item["index"])
                    del item["index"]
                    self.oem_inventory_info[key] = item
                except Exception:
                    # If we can't parse an inventory item, ignore it
                    print(traceback.print_exc())
                    continue

    def get_leds(self):
        if self.has_tsm:
            for (name, id_) in leds.items():
                try:
                    rsp = self.ipmicmd.xraw_command(netfn=0x3A, command=0x02,
                                                    data=(id_,))
                except pygexc.IpmiException:
                    continue  # Ignore LEDs we can't retrieve
                status = led_status.get(ord(rsp['data'][0]),
                                        led_status_default)
                yield (name, {'status': status})

    def set_identify(self, on, duration):
        if on and not duration and self.is_sd350:
            self.ipmicmd.xraw_command(netfn=0x3a, command=6, data=(1, 1))
        else:
            raise pygexc.UnsupportedFunctionality()

    def process_fru(self, fru):
        if fru is None:
            return fru
        if self.has_tsm:
            fru['oem_parser'] = 'lenovo'
            # Thinkserver lays out a specific interpretation of the
            # board extra fields
            try:
                _, _, wwn1, wwn2, mac1, mac2 = fru['board_extra']
                if wwn1 not in ('0000000000000000', ''):
                    fru['WWN 1'] = wwn1
                if wwn2 not in ('0000000000000000', ''):
                    fru['WWN 2'] = wwn2
                if mac1 not in ('00:00:00:00:00:00', ''):
                    fru['MAC Address 1'] = mac1
                if mac2 not in ('00:00:00:00:00:00', ''):
                    fru['MAC Address 2'] = mac2
            except (AttributeError, KeyError):
                pass
            try:
                # The product_extra field is the UUID as the system would
                # present it in DMI. This is different than the two UUIDs
                # that it returns for get device and get system uuid...
                byteguid = fru['product_extra'][0]
                # It can present itself as claiming to be ASCII when it
                # is actually raw hex. As a result it triggers the mechanism
                # to strip \x00 from the end of text strings. Work around this
                # by padding with \x00 to the right if less than 16 long
                byteguid.extend('\x00' * (16 - len(byteguid)))
                if byteguid not in ('\x20' * 16, '\x00' * 16, '\xff' * 16):
                    fru['UUID'] = util.decode_wireformat_uuid(byteguid)
            except (AttributeError, KeyError, IndexError):
                pass
            return fru
        elif self.has_imm:
            fru['oem_parser'] = 'lenovo'
            try:
                bextra = fru['board_extra']
                fru['FRU Number'] = bextra[0]
                fru['Revision'] = bextra[4]
                macs = bextra[6]
                idx = 0
                endidx = len(macs) - 5
                macprefix = None
                while idx < endidx:
                    currmac = macs[idx:idx + 6]
                    if not isinstance(currmac, bytearray):
                        # invalid VPD format, abort attempts to extract
                        # a MAC in this way
                        break
                    if currmac == b'\x00\x00\x00\x00\x00\x00':
                        break
                    # VPD may veer off, detect and break off
                    if macprefix is None:
                        macprefix = currmac[:3]
                    elif currmac[:3] != macprefix:
                        break
                    ms = mac_format.format(*currmac)
                    ifidx = idx / 6 + 1
                    fru['MAC Address {0}'.format(ifidx)] = ms
                    idx = idx + 6
            except (AttributeError, KeyError, IndexError):
                pass
            return fru
        else:
            fru['oem_parser'] = None
            return fru

    @property
    def has_xcc(self):
        if self._hasxcc is not None:
            return self._hasxcc
        try:
            bdata = self.ipmicmd.xraw_command(netfn=0x3a, command=0xc1)
        except pygexc.IpmiException:
            self._hasxcc = False
            self._hasimm = False
            return False
        if len(bdata['data'][:]) != 3:
            self._hasimm = False
            self._hasxcc = False
            return False
        rdata = bytearray(bdata['data'][:])
        self._hasxcc = rdata[1] & 16 == 16
        if self._hasxcc:
            # For now, have IMM calls go to XCC, since they provide the same
            # interface. Longer term the hope is that all the Lenovo
            # stuff will branch at init, and not have conditionals
            # in all the functions
            self._hasimm = self._hasxcc
        return self._hasxcc

    @property
    def has_imm(self):
        if self._hasimm is not None:
            return self._hasimm
        try:
            bdata = self.ipmicmd.xraw_command(netfn=0x3a, command=0xc1)
        except pygexc.IpmiException:
            self._hasimm = False
            return False
        if len(bdata['data'][:]) != 3:
            self._hasimm = False
            return False
        rdata = bytearray(bdata['data'][:])
        self._hasimm = (rdata[1] & 1 == 1) or (rdata[1] & 16 == 16)
        return self._hasimm

    def get_oem_firmware(self, bmcver):
        if self.has_tsm:
            command = firmware.get_categories()["firmware"]
            rsp = self.ipmicmd.xraw_command(**command["command"])
            return command["parser"](rsp["data"])
        elif self.has_imm:
            return self.immhandler.get_firmware_inventory(bmcver)
        elif self.is_fpc:
            return nextscale.get_fpc_firmware(bmcver, self.ipmicmd,
                                              self._fpc_variant)
        return super(OEMHandler, self).get_oem_firmware(bmcver)

    def get_oem_capping_enabled(self):
        if self.has_tsm:
            rsp = self.ipmicmd.xraw_command(netfn=0x3a, command=0x1b,
                                            data=(3,))
            # \x00 means disabled, anything else means enabled
            if rsp['data'][0] == '\x00':
                return False
            else:
                return True

    def set_oem_capping_enabled(self, enable):
        """Set PSU based power capping

        :param enable: True for enable and False for disable
        """
        # 1 - Enable power capping (default)
        if enable:
            statecode = 1
        # 0 - Disable power capping
        else:
            statecode = 0
        if self.has_tsm:
            self.ipmicmd.xraw_command(netfn=0x3a, command=0x1a,
                                      data=(3, statecode))
            return True

    def get_oem_remote_kvm_available(self):
        if self.has_tsm:
            rsp = self.ipmicmd.raw_command(netfn=0x3a, command=0x13)
            return rsp['data'][0] == 0
        return False

    def _restart_dns(self):
        if self.has_tsm:
            self.ipmicmd.xraw_command(netfn=0x32, command=0x6c, data=(7, 0))

    def get_oem_domain_name(self):
        if self.has_tsm:
            name = ''
            for i in range(1, 5):
                rsp = self.ipmicmd.xraw_command(netfn=0x32, command=0x6b,
                                                data=(4, i))
                name += rsp['data'][:]
            return name.rstrip('\x00')

    def set_oem_domain_name(self, name):
        if self.has_tsm:
            # set the domain name length
            data = [3, 0, 0, 0, 0, len(name)]
            self.ipmicmd.xraw_command(netfn=0x32, command=0x6c, data=data)

            # set the domain name content
            name = name.ljust(256, "\x00")
            for i in range(0, 4):
                data = [4, i + 1]
                offset = i * 64
                data.extend([ord(x) for x in name[offset:offset + 64]])
                self.ipmicmd.xraw_command(netfn=0x32, command=0x6c, data=data)

            self._restart_dns()
            return

    def _get_ts_remote_console(self, bmc, username, password):
        """Get a remote console launcher for a Lenovo ThinkServer.

        Returns a tuple (content type, launcher), or None if the launcher
        could not be retrieved.
        """
        # We don't establish non-secure connections without checking
        # certificates
        if not self.ipmicmd.certverify:
            return
        conn = wc.SecureHTTPConnection(bmc, 443,
                                       verifycallback=self.ipmicmd.certverify)
        conn.connect()
        params = urllib.urlencode({
            'WEBVAR_USERNAME': username,
            'WEBVAR_PASSWORD': password
        })
        headers = {
            'Connection': 'keep-alive'
        }
        conn.request('POST', '/rpc/WEBSES/create.asp', params, headers)
        rsp = conn.getresponse()
        if rsp.status == 200:
            body = rsp.read().split('\n')
            session_line = None
            for line in body:
                if 'SESSION_COOKIE' in line:
                    session_line = line
            if session_line is None:
                return

            session_id = session_line.split('\'')[3]
            # Usually happens when the maximum number of sessions is reached
            if session_id == 'Failure_Session_Creation':
                return

            headers = {
                'Connection': 'keep-alive',
                'Cookie': 'SessionCookie=' + session_id,
            }
            conn.request(
                'GET',
                '/Java/jviewer.jnlp?EXTRNIP=' + bmc + '&JNLPSTR=JViewer',
                None, headers)
            rsp = conn.getresponse()
            if rsp.status == 200:
                return rsp.getheader('Content-Type'), base64.b64encode(
                    rsp.read())
        conn.close()

    def get_graphical_console(self):
        return self._get_ts_remote_console(self.ipmicmd.bmc,
                                           self.ipmicmd.ipmi_session.userid,
                                           self.ipmicmd.ipmi_session.password)

    def add_extra_net_configuration(self, netdata):
        if self.has_tsm:
            ipv6_addr = self.ipmicmd.xraw_command(
                netfn=0x0c, command=0x02,
                data=(0x01, 0xc5, 0x00, 0x00))["data"][1:]
            if not ipv6_addr:
                return
            ipv6_prefix = ord(self.ipmicmd.xraw_command(
                netfn=0xc, command=0x02,
                data=(0x1, 0xc6, 0, 0))['data'][1])
            if hasattr(socket, 'inet_ntop'):
                ipv6str = socket.inet_ntop(socket.AF_INET6, ipv6_addr)
            else:
                # fall back to a dumber, but more universal formatter
                ipv6str = binascii.b2a_hex(ipv6_addr)
                ipv6str = ':'.join([ipv6str[x:x + 4] for x in range(0, 32, 4)])
            netdata['ipv6_addresses'] = [
                '{0}/{1}'.format(ipv6str, ipv6_prefix)]

    @property
    def has_megarac(self):
        # If there is functionality that is the same for TSM or generic
        # MegaRAC, then this is appropriate. If there's a TSM specific
        # variant preferred, use has_tsm first
        if self._has_megarac is not None:
            return self._has_megarac
        self._has_megarac = False
        try:
            rsp = self.ipmicmd.xraw_command(netfn=0x32, command=0x7e)
            # We don't have a handy classify-only command, so use get sel
            # policy; rsp should have a length of one, and be either '\x00'
            # or '\x01'
            if len(rsp['data'][:]) == 1 and rsp['data'][0] in ('\x00', '\x01'):
                self._has_megarac = True
        except pygexc.IpmiException as ie:
            if ie.ipmicode == 0:
                # if it's a generic IpmiException rather than an error code
                # from the BMC, then this is a deeper problem than just an
                # invalid command or command length or similar
                raise
        return self._has_megarac

    def set_alert_ipv6_destination(self, ip, destination, channel):
        if self.has_megarac:
            ethidx = self._megarac_eth_index
            reqdata = bytearray([channel, 193, destination, ethidx, 0])
            parsedip = socket.inet_pton(socket.AF_INET6, ip)
            reqdata.extend(parsedip)
            reqdata.extend('\x00\x00\x00\x00\x00\x00')
            self.ipmicmd.xraw_command(netfn=0xc, command=1, data=reqdata)
            return True
        return False

    def _set_short_ris_string(self, selector, value):
        data = (1, selector, 0) + struct.unpack('{0}B'.format(len(value)),
                                                value)
        self.ipmicmd.xraw_command(netfn=0x32, command=0x9f, data=data)

    def _set_ris_string(self, selector, value):
        if len(value) > 256:
            raise pygexc.UnsupportedFunctionality(
                'Value exceeds 256 characters: {0}'.format(value))
        padded = value + (256 - len(value)) * '\x00'
        padded = list(struct.unpack('256B', padded))
        # 8 = RIS, 4 = hd, 2 = fd, 1 = cd
        try:  # try to clear in-progress if left incomplete
            self.ipmicmd.xraw_command(netfn=0x32, command=0x9f,
                                      data=(1, selector, 0, 0))
        except pygexc.IpmiException:
            pass
        # set in-progress
        self.ipmicmd.xraw_command(netfn=0x32, command=0x9f,
                                  data=(1, selector, 0, 1))
        # now do the set, 64 bytes per chunk
        for x in range(0, 256, 64):
            currdata = padded[x:x + 64]
            currchunk = x // 64 + 1
            cmddata = [1, selector, currchunk] + currdata
            self.ipmicmd.xraw_command(netfn=0x32, command=0x9f, data=cmddata)
        # unset in-progress
        self.ipmicmd.xraw_command(netfn=0x32, command=0x9f,
                                  data=(1, selector, 0, 0))

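The chunking in `_set_ris_string` above pads a value to 256 bytes and writes it in four 64-byte pieces with a 1-based chunk selector. A standalone sketch of just that slicing (the `ris_chunks` helper name and sample path are illustrative, not part of the library):

```python
def ris_chunks(value):
    """Pad a RIS string to 256 bytes and split it into the four
    64-byte chunks, each tagged with the 1-based chunk index that
    _set_ris_string passes as the third command byte."""
    if len(value) > 256:
        raise ValueError('Value exceeds 256 characters')
    padded = bytearray(value.encode('utf-8'))
    padded += b'\x00' * (256 - len(padded))
    return [(x // 64 + 1, padded[x:x + 64]) for x in range(0, 256, 64)]

chunks = ris_chunks('/isos/install')
print([idx for idx, _ in chunks])  # [1, 2, 3, 4]
```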
    def _megarac_fetch_image_shortnames(self):
        rsp = self.ipmicmd.xraw_command(netfn=0x32, command=0xd8,
                                        data=(7, 1, 0))
        imgnames = rsp['data'][1:]
        shortnames = []
        for idx in range(0, len(imgnames), 22):
            shortnames.append(imgnames[idx + 2:idx + 22].rstrip('\0'))
        return shortnames

    def _megarac_media_waitforready(self, imagename):
        # first, we have, sadly, a 10 second grace period for some invisible
        # async activity to get far enough along to monitor
        self.ipmicmd.ipmi_session.pause(10)
        risenabled = '\x00'
        mountok = '\xff'
        while risenabled != '\x01':
            risenabled = self.ipmicmd.xraw_command(
                netfn=0x32, command=0x9e, data=(8, 10))['data'][2]
        while mountok == '\xff':
            mountok = self.ipmicmd.xraw_command(
                netfn=0x32, command=0x9e, data=(1, 8))['data'][2]
        targshortname = _megarac_abbrev_image(imagename)
        shortnames = self._megarac_fetch_image_shortnames()
        while targshortname not in shortnames:
            self.ipmicmd.wait_for_rsp(1)
            shortnames = self._megarac_fetch_image_shortnames()
        self.ipmicmd.ipmi_session.pause(10)
        try:
            self.ipmicmd.xraw_command(netfn=0x32, command=0xa0, data=(1, 0))
            self.ipmicmd.ipmi_session.pause(5)
        except pygexc.IpmiException:
            pass

    def _megarac_attach_media(self, proto, username, password, imagename,
                              domain, path, host):
        # First we must ensure that the RIS is actually enabled
        self.ipmicmd.xraw_command(netfn=0x32, command=0x9f, data=(8, 10, 0, 1))
        if username is not None:
            self._set_ris_string(3, username)
        if password is not None:
            self._set_short_ris_string(4, password)
        if domain is not None:
            self._set_ris_string(6, domain)
        self._set_ris_string(1, path)
        ip = util.get_ipv4(host)[0]
        self._set_short_ris_string(2, ip)
        self._set_short_ris_string(5, proto)
        # now restart RIS to have the changes take effect...
        self.ipmicmd.xraw_command(netfn=0x32, command=0x9f, data=(8, 11))
        # now kick off the requested mount
        self._megarac_media_waitforready(imagename)
        self._set_ris_string(0, imagename)
        self.ipmicmd.xraw_command(netfn=0x32, command=0xa0,
                                  data=(1, 1))

    def attach_remote_media(self, url, username, password):
        if self.has_imm:
            self.immhandler.attach_remote_media(url, username, password)
        elif self.has_megarac:
            proto, host, path = util.urlsplit(url)
            if proto == 'smb':
                proto = 'cifs'
            domain = None
            path, imagename = path.rsplit('/', 1)
            if username is not None and '@' in username:
                username, domain = username.split('@', 1)
            elif username is not None and '\\' in username:
                domain, username = username.split('\\', 1)
            try:
                self._megarac_attach_media(proto, username, password,
                                           imagename, domain, path, host)
            except pygexc.IpmiException as ie:
                if ie.ipmicode in (0x92, 0x99):
                    # if starting from scratch, this can happen...
                    self._megarac_attach_media(proto, username, password,
                                               imagename, domain, path, host)
                else:
                    raise

    def update_firmware(self, filename, data=None, progress=None):
        if self.has_xcc:
            return self.immhandler.update_firmware(
                filename, data=data, progress=progress)
        super(OEMHandler, self).update_firmware(filename, data=data,
                                                progress=progress)

    def detach_remote_media(self):
        if self.has_imm:
            self.immhandler.detach_remote_media()
        elif self.has_megarac:
            self.ipmicmd.xraw_command(
                netfn=0x32, command=0x9f, data=(8, 10, 0, 0))
            self.ipmicmd.xraw_command(netfn=0x32, command=0x9f, data=(8, 11))
@@ -1,606 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2016 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from datetime import datetime
import json
import random
import threading
import urllib
import weakref

import pyghmi.ipmi.private.session as ipmisession
import pyghmi.ipmi.private.util as util
import pyghmi.util.webclient as webclient


class FileUploader(threading.Thread):

    def __init__(self, webclient, url, filename, data):
        self.wc = webclient
        self.url = url
        self.filename = filename
        self.data = data
        super(FileUploader, self).__init__()

    def run(self):
        self.rsp = self.wc.upload(self.url, self.filename, self.data)


class IMMClient(object):
    logouturl = '/data/logout'
    bmcname = 'IMM'
    ADP_URL = '/designs/imm/dataproviders/imm_adapters.php'
    ADP_NAME = 'adapter.adapterName'
    ADP_FUN = 'adapter.functions'
    ADP_LABEL = 'adapter.connectorLabel'
    ADP_SLOTNO = 'adapter.slotNo'
    ADP_OOB = 'adapter.oobSupported'
    BUSNO = 'generic.busNo'
    PORTS = 'network.pPorts'
    DEVNO = 'generic.devNo'

    def __init__(self, ipmicmd):
        self.ipmicmd = weakref.proxy(ipmicmd)
        self.imm = ipmicmd.bmc
        self.username = ipmicmd.ipmi_session.userid
        self.password = ipmicmd.ipmi_session.password
        self._wc = None  # The webclient shall be initiated on demand
        self.datacache = {}

    @staticmethod
    def _parse_builddate(strval):
        # Build dates may come in any of several formats; try each known
        # format in turn and return None if none of them match
        knownformats = ('%Y/%m/%d %H:%M:%S', '%Y-%m-%d %H:%M:%S',
                        '%Y/%m/%d', '%m/%d/%Y', '%Y-%m-%d', '%m %d %Y')
        for fmt in knownformats:
            try:
                return datetime.strptime(strval, fmt)
            except ValueError:
                pass
        return None

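The build-date fallback chain above just tries a fixed list of `strptime` formats until one matches; a minimal standalone sketch (the `parse_builddate` name here is illustrative, the format list mirrors the method above):

```python
from datetime import datetime

# Candidate formats, tried in order, mirroring _parse_builddate above
KNOWN_FORMATS = ('%Y/%m/%d %H:%M:%S', '%Y-%m-%d %H:%M:%S',
                 '%Y/%m/%d', '%m/%d/%Y', '%Y-%m-%d', '%m %d %Y')


def parse_builddate(strval):
    # Return the first successful parse, or None if nothing matches
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(strval, fmt)
        except ValueError:
            pass
    return None


print(parse_builddate('2016/03/01 12:30:00'))  # 2016-03-01 12:30:00
print(parse_builddate('not a date'))           # None
```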
@classmethod
|
||||
def parse_imm_buildinfo(cls, buildinfo):
|
||||
buildid = buildinfo[:9].rstrip(' \x00')
|
||||
bdt = ' '.join(buildinfo[9:].replace('\x00', ' ').split())
|
||||
bdate = cls._parse_builddate(bdt)
|
||||
return buildid, bdate
|
||||
|
||||
@classmethod
|
||||
def datefromprop(cls, propstr):
|
||||
if propstr is None:
|
||||
return None
|
||||
return cls._parse_builddate(propstr)
|
||||
|
||||
def get_property(self, propname):
|
||||
propname = propname.encode('utf-8')
|
||||
proplen = len(propname) | 0b10000000
|
||||
cmdlen = len(propname) + 1
|
||||
cdata = bytearray([0, 0, cmdlen, proplen]) + propname
|
||||
rsp = self.ipmicmd.xraw_command(netfn=0x3a, command=0xc4, data=cdata)
|
||||
rsp['data'] = bytearray(rsp['data'])
|
||||
if rsp['data'][0] != 0:
|
||||
return None
|
||||
propdata = rsp['data'][3:] # second two bytes are size, don't need it
|
||||
if propdata[0] & 0b10000000: # string, for now assume length valid
|
||||
return str(propdata[1:]).rstrip(' \x00')
|
||||
else:
|
||||
raise Exception('Unknown format for property: ' + repr(propdata))
|
||||
|
||||
def get_webclient(self):
|
||||
cv = self.ipmicmd.certverify
|
||||
wc = webclient.SecureHTTPConnection(self.imm, 443, verifycallback=cv)
|
||||
try:
|
||||
wc.connect()
|
||||
except Exception:
|
||||
return None
|
||||
adata = urllib.urlencode({'user': self.username,
|
||||
'password': self.password,
|
||||
'SessionTimeout': 60
|
||||
})
|
||||
headers = {'Connection': 'keep-alive',
|
||||
'Referer': 'https://{0}/designs/imm/index.php'.format(
|
||||
self.imm),
|
||||
'Content-Type': 'application/x-www-form-urlencoded'}
|
||||
wc.request('POST', '/data/login', adata, headers)
|
||||
rsp = wc.getresponse()
|
||||
if rsp.status == 200:
|
||||
rspdata = json.loads(rsp.read())
|
||||
if rspdata['authResult'] == '0' and rspdata['status'] == 'ok':
|
||||
if 'token2_name' in rspdata and 'token2_value' in rspdata:
|
||||
wc.set_header(rspdata['token2_name'],
|
||||
rspdata['token2_value'])
|
||||
return wc
|
||||
|
||||
@property
|
||||
def wc(self):
|
||||
if not self._wc:
|
||||
self._wc = self.get_webclient()
|
||||
return self._wc
|
||||
|
||||
def fetch_grouped_properties(self, groupinfo):
|
||||
retdata = {}
|
||||
for keyval in groupinfo:
|
||||
retdata[keyval] = self.get_property(groupinfo[keyval])
|
||||
if keyval == 'date':
|
||||
retdata[keyval] = self.datefromprop(retdata[keyval])
|
||||
returnit = False
|
||||
for keyval in list(retdata):
|
||||
if retdata[keyval] in (None, ''):
|
||||
del retdata[keyval]
|
||||
else:
|
||||
returnit = True
|
||||
if returnit:
|
||||
return retdata
|
||||
|
||||
def get_cached_data(self, attribute):
|
||||
try:
|
||||
kv = self.datacache[attribute]
|
||||
if kv[1] > util._monotonic_time() - 30:
|
||||
return kv[0]
|
||||
except KeyError:
|
||||
return None
|
||||
|
||||
def attach_remote_media(self, url, user, password):
|
||||
url = url.replace(':', '\:')
|
||||
params = urllib.urlencode({
|
||||
'RP_VmAllocateMountUrl({0},{1},1,,)'.format(
|
||||
self.username, url): ''
|
||||
})
|
||||
result = self.wc.grab_json_response('/data?set', params)
|
||||
if result['return'] != 'Success':
|
||||
raise Exception(result['reason'])
|
||||
self.weblogout()
|
||||
|
||||
def detach_remote_media(self):
|
||||
mnt = self.wc.grab_json_response(
|
||||
'/designs/imm/dataproviders/imm_rp_images.php')
|
||||
removeurls = []
|
||||
for item in mnt['items']:
|
||||
if 'urls' in item:
|
||||
for url in item['urls']:
|
||||
removeurls.append(url['url'])
|
||||
for url in removeurls:
|
||||
url = url.replace(':', '\:')
|
||||
params = urllib.urlencode({
|
||||
'RP_VmAllocateUnMountUrl({0},{1},0,)'.format(
|
||||
self.username, url): ''})
|
||||
result = self.wc.grab_json_response('/data?set', params)
|
||||
if result['return'] != 'Success':
|
||||
raise Exception(result['reason'])
|
||||
self.weblogout()
|
||||
|
||||
    def fetch_agentless_firmware(self):
        adapterdata = self.get_cached_data('lenovo_cached_adapters')
        if not adapterdata:
            if self.wc:
                adapterdata = self.wc.grab_json_response(self.ADP_URL)
                if adapterdata:
                    self.datacache['lenovo_cached_adapters'] = (
                        adapterdata, util._monotonic_time())
        if adapterdata and 'items' in adapterdata:
            for adata in adapterdata['items']:
                aname = adata[self.ADP_NAME]
                donenames = set([])
                for fundata in adata[self.ADP_FUN]:
                    fdata = fundata.get('firmwares', ())
                    for firm in fdata:
                        fname = firm['firmwareName'].rstrip()
                        if '.' in fname:
                            fname = firm['description'].rstrip()
                        if fname in donenames:
                            # ignore redundant entry
                            continue
                        donenames.add(fname)
                        bdata = {}
                        if 'versionStr' in firm and firm['versionStr']:
                            bdata['version'] = firm['versionStr']
                        if ('releaseDate' in firm and
                                firm['releaseDate'] and
                                firm['releaseDate'] != 'N/A'):
                            try:
                                bdata['date'] = self._parse_builddate(
                                    firm['releaseDate'])
                            except ValueError:
                                pass
                        yield ('{0} {1}'.format(aname, fname), bdata)
        storagedata = self.get_cached_data('lenovo_cached_storage')
        if not storagedata:
            if self.wc:
                storagedata = self.wc.grab_json_response(
                    '/designs/imm/dataproviders/raid_alldevices.php')
                if storagedata:
                    self.datacache['lenovo_cached_storage'] = (
                        storagedata, util._monotonic_time())
        if storagedata and 'items' in storagedata:
            for adp in storagedata['items']:
                if 'storage.vpd.productName' not in adp:
                    continue
                adpname = adp['storage.vpd.productName']
                if 'children' not in adp:
                    adp['children'] = ()
                for diskent in adp['children']:
                    bdata = {}
                    diskname = '{0} Disk {1}'.format(
                        adpname,
                        diskent['storage.slotNo'])
                    bdata['model'] = diskent[
                        'storage.vpd.productName'].rstrip()
                    bdata['version'] = diskent['storage.firmwares'][0][
                        'versionStr']
                    yield (diskname, bdata)
        self.weblogout()

    def get_hw_inventory(self):
        hwmap = self.hardware_inventory_map()
        for key in hwmap:
            yield (key, hwmap[key])

    def get_hw_descriptions(self):
        hwmap = self.hardware_inventory_map()
        for key in hwmap:
            yield key

    def get_component_inventory(self, compname):
        hwmap = self.hardware_inventory_map()
        try:
            return hwmap[compname]
        except KeyError:
            return None

    def weblogout(self):
        if self._wc:
            self._wc.grab_json_response(self.logouturl)
            self._wc = None

    def hardware_inventory_map(self):
        hwmap = self.get_cached_data('lenovo_cached_hwmap')
        if hwmap:
            return hwmap
        hwmap = {}
        adapterdata = self.get_cached_data('lenovo_cached_adapters')
        if not adapterdata:
            if self.wc:
                adapterdata = self.wc.grab_json_response(self.ADP_URL)
                if adapterdata:
                    self.datacache['lenovo_cached_adapters'] = (
                        adapterdata, util._monotonic_time())
        if adapterdata and 'items' in adapterdata:
            for adata in adapterdata['items']:
                skipadapter = False
                if not adata[self.ADP_OOB]:
                    continue
                aname = adata[self.ADP_NAME]
                clabel = adata[self.ADP_LABEL]
                if clabel == 'Unknown':
                    continue
                if clabel != 'Onboard':
                    aslot = adata[self.ADP_SLOTNO]
                    if clabel == 'ML2':
                        clabel = 'ML2 (Slot {0})'.format(aslot)
                    else:
                        clabel = 'Slot {0}'.format(aslot)
                bdata = {'location': clabel}
                for fundata in adata[self.ADP_FUN]:
                    bdata['pcislot'] = '{0:02x}:{1:02x}'.format(
                        fundata[self.BUSNO], fundata[self.DEVNO])
                    serialdata = fundata.get('vpd.serialNo', None)
                    if (serialdata and serialdata != 'N/A' and
                            '---' not in serialdata):
                        bdata['serial'] = serialdata
                    partnum = fundata.get('vpd.partNo', None)
                    if partnum and partnum != 'N/A':
                        bdata['partnumber'] = partnum
                    if self.PORTS in fundata:
                        for portinfo in fundata[self.PORTS]:
                            for lp in portinfo['logicalPorts']:
                                ma = lp['networkAddr']
                                ma = ':'.join(
                                    [ma[i:i + 2] for i in range(
                                        0, len(ma), 2)]).lower()
                                bdata['MAC Address {0}'.format(
                                    portinfo['portIndex'])] = ma
                    elif clabel == 'Onboard':  # skip the various non-nic
                        skipadapter = True
                if not skipadapter:
                    hwmap[aname] = bdata
            self.datacache['lenovo_cached_hwmap'] = (
                hwmap, util._monotonic_time())
        self.weblogout()
        return hwmap

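Inside `hardware_inventory_map`, the BMC reports each port's MAC address as a bare hex string, which the code splits into byte pairs, joins with colons, and lowercases. That normalization is easy to isolate as a standalone helper (the function name is illustrative):

```python
# Standalone sketch of the MAC-address normalization performed in
# hardware_inventory_map: split the bare hex string reported by the BMC
# into two-character byte pairs, join with colons, and lowercase.
def format_mac(raw):
    return ':'.join(raw[i:i + 2] for i in range(0, len(raw), 2)).lower()
```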
    def get_firmware_inventory(self, bmcver):
        # First we fetch the system firmware found in imm properties
        # then check for agentless, if agentless, get adapter info using
        # https, using the caller TLS verification scheme
        rsp = self.ipmicmd.xraw_command(netfn=0x3a, command=0x50)
        immverdata = self.parse_imm_buildinfo(rsp['data'])
        bdata = {
            'version': bmcver, 'build': immverdata[0], 'date': immverdata[1]}
        yield (self.bmcname, bdata)
        bdata = self.fetch_grouped_properties({
            'build': '/v2/ibmc/dm/fw/imm2/backup_build_id',
            'version': '/v2/ibmc/dm/fw/imm2/backup_build_version',
            'date': '/v2/ibmc/dm/fw/imm2/backup_build_date'})
        if bdata:
            yield ('{0} Backup'.format(self.bmcname), bdata)
        bdata = self.fetch_grouped_properties({
            'build': '/v2/ibmc/trusted_buildid',
        })
        if bdata:
            yield ('{0} Trusted Image'.format(self.bmcname), bdata)
        bdata = self.fetch_grouped_properties({
            'build': '/v2/bios/build_id',
            'version': '/v2/bios/build_version',
            'date': '/v2/bios/build_date'})
        if bdata:
            yield ('UEFI', bdata)
        else:
            yield ('UEFI', {'version': 'unknown'})
        bdata = self.fetch_grouped_properties({
            'build': '/v2/ibmc/dm/fw/bios/backup_build_id',
            'version': '/v2/ibmc/dm/fw/bios/backup_build_version'})
        if bdata:
            yield ('UEFI Backup', bdata)
        # Note that the next pending could be pending for either primary
        # or backup, so can't promise where it will go
        bdata = self.fetch_grouped_properties({
            'build': '/v2/bios/pending_build_id'})
        if bdata:
            yield ('UEFI Pending Update', bdata)
        fpga = self.ipmicmd.xraw_command(netfn=0x3a, command=0x6b, data=(0,))
        fpga = '{0}.{1}.{2}'.format(*[ord(x) for x in fpga['data']])
        yield ('FPGA', {'version': fpga})
        for firm in self.fetch_agentless_firmware():
            yield firm


class XCCClient(IMMClient):
    logouturl = '/api/providers/logout'
    bmcname = 'XCC'
    ADP_URL = '/api/dataset/imm_adapters?params=pci_GetAdapters'
    ADP_NAME = 'adapterName'
    ADP_FUN = 'functions'
    ADP_LABEL = 'connectorLabel'
    ADP_SLOTNO = 'slotNo'
    ADP_OOB = 'oobSupported'
    BUSNO = 'generic_busNo'
    PORTS = 'network_pPorts'
    DEVNO = 'generic_devNo'

    def get_webclient(self):
        cv = self.ipmicmd.certverify
        wc = webclient.SecureHTTPConnection(self.imm, 443, verifycallback=cv)
        try:
            wc.connect()
        except Exception:
            return None
        adata = json.dumps({'username': self.username,
                            'password': self.password
                            })
        headers = {'Connection': 'keep-alive',
                   'Content-Type': 'application/json'}
        wc.request('POST', '/api/login', adata, headers)
        rsp = wc.getresponse()
        if rsp.status == 200:
            rspdata = json.loads(rsp.read())
            wc.set_header('Content-Type', 'application/json')
            wc.set_header('Authorization', 'Bearer ' + rspdata['access_token'])
            if '_csrf_token' in wc.cookies:
                wc.set_header('X-XSRF-TOKEN', wc.cookies['_csrf_token'])
            return wc

    def attach_remote_media(self, url, user, password):
        proto, host, path = util.urlsplit(url)
        if proto == 'smb':
            proto = 'cifs'
        rq = {'Option': '', 'Domain': '', 'Write': 0}
        # nfs == 1, cifs == 0
        if proto == 'nfs':
            rq['Protocol'] = 1
            rq['Url'] = '{0}:{1}'.format(host, path)
        elif proto == 'cifs':
            rq['Protocol'] = 0
            rq['Credential'] = '{0}:{1}'.format(user, password)
            rq['Url'] = '//{0}{1}'.format(host, path)
        elif proto in ('http', 'https'):
            rq['Protocol'] = 7
            rq['Url'] = url
        else:
            raise Exception('TODO')
        rt = self.wc.grab_json_response('/api/providers/rp_vm_remote_connect',
                                        json.dumps(rq))
        if 'return' not in rt or rt['return'] != 0:
            raise Exception('Unhandled return: ' + repr(rt))
        rt = self.wc.grab_json_response('/api/providers/rp_vm_remote_mountall',
                                        '{}')
        if 'return' not in rt or rt['return'] != 0:
            raise Exception('Unhandled return: ' + repr(rt))

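The XCC remote-media call above maps the URL scheme onto a numeric `Protocol` code in the request body (nfs is 1, cifs/smb is 0, http and https are 7), with the URL rewritten per protocol. A self-contained sketch of just that mapping, using the standard library's `urlsplit` in place of `util.urlsplit` (field names mirror the request built above; the function name is illustrative):

```python
from urllib.parse import urlsplit

# Sketch of the scheme -> XCC mount-request mapping in attach_remote_media:
# each supported scheme selects a numeric Protocol code and a URL format.
def build_mount_request(url, user=None, password=None):
    parts = urlsplit(url)
    proto = 'cifs' if parts.scheme == 'smb' else parts.scheme
    rq = {'Option': '', 'Domain': '', 'Write': 0}
    if proto == 'nfs':
        rq['Protocol'] = 1
        rq['Url'] = '{0}:{1}'.format(parts.netloc, parts.path)
    elif proto == 'cifs':
        rq['Protocol'] = 0
        rq['Credential'] = '{0}:{1}'.format(user, password)
        rq['Url'] = '//{0}{1}'.format(parts.netloc, parts.path)
    elif proto in ('http', 'https'):
        rq['Protocol'] = 7
        rq['Url'] = url
    else:
        raise ValueError('unsupported scheme: ' + proto)
    return rq
```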
    def get_firmware_inventory(self, bmcver):
        # First we fetch the system firmware found in imm properties
        # then check for agentless, if agentless, get adapter info using
        # https, using the caller TLS verification scheme
        rsp = self.ipmicmd.xraw_command(netfn=0x3a, command=0x50)
        immverdata = self.parse_imm_buildinfo(rsp['data'])
        bdata = {
            'version': bmcver, 'build': immverdata[0], 'date': immverdata[1]}
        yield (self.bmcname, bdata)
        bdata = self.fetch_grouped_properties({
            'build': '/v2/ibmc/dm/fw/imm3/backup_pending_build_id',
            'version': '/v2/ibmc/dm/fw/imm3/backup_pending_build_version',
            'date': '/v2/ibmc/dm/fw/imm3/backup_pending_build_date'})
        if bdata:
            yield ('{0} Backup'.format(self.bmcname), bdata)
        else:
            bdata = self.fetch_grouped_properties({
                'build': '/v2/ibmc/dm/fw/imm3/backup_build_id',
                'version': '/v2/ibmc/dm/fw/imm3/backup_build_version',
                'date': '/v2/ibmc/dm/fw/imm3/backup_build_date'})
            if bdata:
                yield ('{0} Backup'.format(self.bmcname), bdata)
        bdata = self.fetch_grouped_properties({
            'build': '/v2/ibmc/trusted_buildid',
        })
        if bdata:
            yield ('{0} Trusted Image'.format(self.bmcname), bdata)
        bdata = self.fetch_grouped_properties({
            'build': '/v2/bios/build_id',
            'version': '/v2/bios/build_version',
            'date': '/v2/bios/build_date'})
        if bdata:
            yield ('UEFI', bdata)
        # Note that the next pending could be pending for either primary
        # or backup, so can't promise where it will go
        bdata = self.fetch_grouped_properties({
            'build': '/v2/bios/pending_build_id'})
        if bdata:
            yield ('UEFI Pending Update', bdata)
        bdata = self.fetch_grouped_properties({
            'build': '/v2/tdm/build_id',
            'version': '/v2/tdm/build_version',
            'date': '/v2/tdm/build_date'})
        if bdata:
            yield ('LXPM', bdata)
        fpga = self.ipmicmd.xraw_command(netfn=0x3a, command=0x6b, data=(0,))
        fpga = '{0}.{1}.{2}'.format(*[ord(x) for x in fpga['data']])
        yield ('FPGA', {'version': fpga})
        for firm in self.fetch_agentless_firmware():
            yield firm

    def detach_remote_media(self):
        rt = self.wc.grab_json_response('/api/providers/rp_vm_remote_getdisk')
        if 'items' in rt:
            slots = []
            for mount in rt['items']:
                slots.append(mount['slotId'])
            for slot in slots:
                rt = self.wc.grab_json_response(
                    '/api/providers/rp_vm_remote_unmount',
                    json.dumps({'Slot': slot}))
                if 'return' not in rt or rt['return'] != 0:
                    raise Exception("Unrecognized return: " + repr(rt))

    def update_firmware(self, filename, data=None, progress=None):
        try:
            self.update_firmware_backend(filename, data, progress)
        except Exception:
            self.wc.grab_json_response('/api/providers/fwupdate', json.dumps(
                {'UPD_WebCancel': 1}))
            raise

    def update_firmware_backend(self, filename, data=None, progress=None):
        rsv = self.wc.grab_json_response('/api/providers/fwupdate', json.dumps(
            {'UPD_WebReserve': 1}))
        if rsv['return'] != 0:
            raise Exception('Unexpected return to reservation: ' + repr(rsv))
        xid = random.randint(0, 1000000000)
        uploadthread = FileUploader(self.wc.dupe(),
                                    '/upload?X-Progress-ID={0}'.format(xid),
                                    filename, data)
        uploadthread.start()
        uploadstate = None
        while uploadthread.isAlive():
            uploadthread.join(3)
            rsp = self.wc.grab_json_response(
                '/upload/progress?X-Progress-ID={0}'.format(xid))
            if rsp['state'] == 'uploading':
                progress({'phase': 'upload',
                          'progress': 100.0 * rsp['received'] / rsp['size']})
            elif rsp['state'] != 'done':
                raise Exception('Unexpected result: ' + repr(rsp))
            uploadstate = rsp['state']
            self.wc.grab_json_response('/api/providers/identity')
        while uploadstate != 'done':
            rsp = self.wc.grab_json_response(
                '/upload/progress?X-Progress-ID={0}'.format(xid))
            uploadstate = rsp['state']
            self.wc.grab_json_response('/api/providers/identity')
        rsp = json.loads(uploadthread.rsp)
        if rsp['items'][0]['name'] != filename:
            raise Exception('Unexpected response: ' + repr(rsp))
        progress({'phase': 'upload',
                  'progress': 100.0})
        self.wc.grab_json_response('/api/providers/identity')
        if '_csrf_token' in self.wc.cookies:
            self.wc.set_header('X-XSRF-TOKEN', self.wc.cookies['_csrf_token'])
        rsp = self.wc.grab_json_response('/api/providers/fwupdate', json.dumps(
            {'UPD_WebSetFileName': rsp['items'][0]['path']}))
        if rsp['return'] != 0:
            raise Exception('Unexpected return to set filename: ' + repr(rsp))
        rsp = self.wc.grab_json_response('/api/providers/fwupdate', json.dumps(
            {'UPD_WebVerifyUploadFile': 1}))
        if rsp['return'] != 0:
            raise Exception('Unexpected return to verify: ' + repr(rsp))
        self.wc.grab_json_response('/api/providers/identity')
        rsp = self.wc.grab_json_response(
            '/upload/progress?X-Progress-ID={0}'.format(xid))
        if rsp['state'] != 'done':
            raise Exception('Unexpected progress: ' + repr(rsp))
        rsp = self.wc.grab_json_response('/api/dataset/imm_firmware_success')
        if len(rsp['items']) != 1:
            raise Exception('Unexpected result: ' + repr(rsp))
        rsp = self.wc.grab_json_response('/api/dataset/imm_firmware_update')
        if rsp['items'][0]['upgrades'][0]['id'] != 1:
            raise Exception('Unexpected answer: ' + repr(rsp))
        if '_csrf_token' in self.wc.cookies:
            self.wc.set_header('X-XSRF-TOKEN', self.wc.cookies['_csrf_token'])
        rsp = self.wc.grab_json_response('/api/providers/fwupdate', json.dumps(
            {'UPD_WebStartDefaultAction': 1}))
        if rsp['return'] != 0:
            raise Exception('Unexpected result starting update: ' +
                            repr(rsp['return']))
        complete = False
        while not complete:
            ipmisession.Session.pause(3)
            rsp = self.wc.grab_json_response(
                '/api/dataset/imm_firmware_progress')
            progress({'phase': 'apply',
                      'progress': rsp['items'][0]['action_percent_complete']})
            if rsp['items'][0]['action_state'] == 'Idle':
                complete = True
                break
            if rsp['items'][0]['action_state'] == 'Complete OK':
                complete = True
                if rsp['items'][0]['action_status'] != 0:
                    raise Exception('Unexpected failure: ' + repr(rsp))
                break
            if (rsp['items'][0]['action_state'] == 'In Progress' and
                    rsp['items'][0]['action_status'] == 2):
                raise Exception('Unexpected failure: ' + repr(rsp))
            if rsp['items'][0]['action_state'] != 'In Progress':
                raise Exception(
                    'Unknown condition waiting for '
                    'firmware update: ' + repr(rsp))
@ -1,147 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import struct

categories = {}

def register_inventory_category(module):
    c = module.get_categories()
    for id in c:
        categories[id] = c[id]


class EntryField(object):
    """Store inventory field parsing options.

    Represents an inventory field and its options for the custom requests to a
    ThinkServer's BMC.

    :param name: the name of the field
    :param fmt: the format of the field (see struct module for details)
    :param include: whether to include the field in the parse output
    :param mapper: a dictionary mapping values to new values for the parse
                   output
    :param valuefunc: a function to be called to change the value in the last
                      step of the build process.
    :param multivaluefunc: whether valuefunc returns a dictionary of values
                           to be merged into the entry rather than a single
                           value
    :param presence: whether the field indicates presence. In this case, the
                     field will not be included. If the value is false, the
                     item will be discarded.
    """
    def __init__(self, name, fmt, include=True, mapper=None, valuefunc=None,
                 multivaluefunc=False, presence=False):
        self.name = name
        self.fmt = fmt
        self.include = include
        self.mapper = mapper
        self.valuefunc = valuefunc
        self.multivaluefunc = multivaluefunc
        self.presence = presence


# General parameter parsing functions
def parse_inventory_category(name, info, countable=True):
    """Parses every entry in an inventory category (CPU, memory, PCI, drives,
    etc).

    Expects the first byte to be a count of the number of entries, followed
    by a list of elements to be parsed by a dedicated parser (below).

    :param name: the name of the parameter (e.g.: "cpu")
    :param info: a list of integers with raw data read from an IPMI request
    :param countable: whether the data have an entries count field

    :returns: dict -- a list of entries in the category.
    """
    raw = info["data"][1:]

    cur = 0
    if countable:
        count = struct.unpack("B", raw[cur])[0]
        cur += 1
    else:
        count = 0
    discarded = 0

    entries = []
    while cur < len(raw):
        read, cpu = categories[name]["parser"](raw[cur:])
        cur = cur + read
        # Account for discarded entries (because they are not present)
        if cpu is None:
            discarded += 1
            continue
        if not countable:
            # count by myself
            count += 1
        cpu["index"] = count
        entries.append(cpu)

    # TODO(avidal): raise specific exception to point that there's data left
    # in the buffer
    if cur != len(raw):
        raise Exception
    # TODO(avidal): raise specific exception to point that the number of
    # entries is different than the expected
    if count - discarded != len(entries):
        raise Exception
    return entries


def parse_inventory_category_entry(raw, fields):
    """Parses one entry in an inventory category.

    :param raw: the raw data to the entry. May contain more than one entry,
                only one entry will be read in that case.
    :param fields: an iterable of EntryField objects to be used for parsing
                   the entry.

    :returns: tuple -- the number of bytes read and a dictionary representing
              the entry.
    """
    r = raw

    obj = {}
    bytes_read = 0
    discard = False
    for field in fields:
        value = struct.unpack_from(field.fmt, r)[0]
        read = struct.calcsize(field.fmt)
        bytes_read += read
        r = r[read:]
        # If this entry is not actually present, just parse and then discard
        # it
        if field.presence and not bool(value):
            discard = True
        if not field.include:
            continue

        if field.fmt[-1] == "s":
            value = value.rstrip("\x00")
        if field.mapper and value in field.mapper:
            value = field.mapper[value]
        if field.valuefunc:
            value = field.valuefunc(value)

        if not field.multivaluefunc:
            obj[field.name] = value
        else:
            for key in value:
                obj[key] = value[key]

    if discard:
        obj = None
    return bytes_read, obj
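The entry parser above walks an ordered list of fields over the front of a byte buffer, with each field consuming `struct.calcsize(fmt)` bytes. A self-contained sketch of the same technique, simplified to (name, format) pairs and without the mapper/presence handling (field names here are illustrative):

```python
import struct

# Simplified sketch of the field-walking technique used by
# parse_inventory_category_entry: each field consumes calcsize(fmt)
# bytes from the buffer, and string fields are stripped of NUL padding.
def parse_entry(raw, fields):
    obj = {}
    bytes_read = 0
    for name, fmt in fields:
        value = struct.unpack_from(fmt, raw, bytes_read)[0]
        bytes_read += struct.calcsize(fmt)
        if fmt[-1] == 's':
            value = value.rstrip(b'\x00')
        obj[name] = value
    return bytes_read, obj
```

Returning the byte count alongside the entry is what lets the caller advance through a buffer holding several back-to-back entries.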
@ -1,226 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2016-2017 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pyghmi.constants as pygconst
import pyghmi.exceptions as pygexc
import pyghmi.ipmi.sdr as sdr
import struct

try:
    range = xrange
except NameError:
    pass


def fpc_read_ac_input(ipmicmd):
    rsp = ipmicmd.xraw_command(netfn=0x32, command=0x90, data=(1,))
    rsp = rsp['data']
    if len(rsp) == 6:
        rsp = b'\x00' + bytes(rsp)
    return struct.unpack_from('<H', rsp[3:5])[0]


def fpc_read_dc_output(ipmicmd):
    rsp = ipmicmd.xraw_command(netfn=0x32, command=0x90, data=(2,))
    rsp = rsp['data']
    if len(rsp) == 6:
        rsp = b'\x00' + bytes(rsp)
    return struct.unpack_from('<H', rsp[3:5])[0]


def fpc_read_fan_power(ipmicmd):
    rsp = ipmicmd.xraw_command(netfn=0x32, command=0x90, data=(3,))
    rsp = rsp['data']
    rsp += b'\x00'
    return struct.unpack_from('<I', rsp[1:])[0] / 100.0


def fpc_read_psu_fan(ipmicmd, number, sz):
    rsp = ipmicmd.xraw_command(netfn=0x32, command=0xa5, data=(number,))
    rsp = rsp['data']
    return struct.unpack_from('<H', rsp[:2])[0]


def fpc_get_psustatus(ipmicmd, number, sz):
    rsp = ipmicmd.xraw_command(netfn=0x32, command=0x91)
    mask = 1 << (number - 1)
    if len(rsp['data']) == 6:
        statdata = bytearray([0])
    else:
        statdata = bytearray()
    statdata += bytearray(rsp['data'])
    presence = statdata[3] & mask == mask
    pwrgood = statdata[4] & mask == mask
    throttle = (statdata[6] | statdata[2]) & mask == mask
    health = pygconst.Health.Ok
    states = []
    if presence and not pwrgood:
        health = pygconst.Health.Critical
        states.append('Power input lost')
    if throttle:
        health = pygconst.Health.Critical
        states.append('Throttled')
    if presence:
        states.append('Present')
    else:
        states.append('Absent')
        health = pygconst.Health.Critical
    return (health, states)


def fpc_get_nodeperm(ipmicmd, number, sz):
    try:
        rsp = ipmicmd.xraw_command(netfn=0x32, command=0xa7, data=(number,))
    except pygexc.IpmiException as ie:
        if ie.ipmicode == 0xd5:  # no node present
            return (pygconst.Health.Ok, ['Absent'])
        raise
    perminfo = ord(rsp['data'][1])
    health = pygconst.Health.Ok
    states = []
    if len(rsp['data']) == 4:  # different gens handled rc differently
        rsp['data'] = b'\x00' + bytes(rsp['data'])
    if sz == 6:  # FPC
        permfail = ('\x02', '\x03')
    elif sz == 2:  # SMM
        permfail = ('\x02',)
    if rsp['data'][4] in permfail:
        states.append('Insufficient Power')
        health = pygconst.Health.Failed
    if perminfo & 0x40:
        states.append('Node Fault')
        health = pygconst.Health.Failed
    return (health, states)


def fpc_read_powerbank(ipmicmd):
    rsp = ipmicmd.xraw_command(netfn=0x32, command=0xa2)
    return struct.unpack_from('<H', rsp['data'][3:])[0]


fpc_sensors = {
    'AC Power': {
        'type': 'Power',
        'units': 'W',
        'provider': fpc_read_ac_input,
    },
    'DC Power': {
        'type': 'Power',
        'units': 'W',
        'provider': fpc_read_dc_output,
    },
    'Fan Power': {
        'type': 'Power',
        'units': 'W',
        'provider': fpc_read_fan_power
    },
    'PSU Fan Speed': {
        'type': 'Fan',
        'units': 'RPM',
        'provider': fpc_read_psu_fan,
        'elements': 1,
    },
    'Total Power Capacity': {
        'type': 'Power',
        'units': 'W',
        'provider': fpc_read_powerbank,
    },
    'Node Power Permission': {
        'type': 'Management Subsystem Health',
        'returns': 'tuple',
        'units': None,
        'provider': fpc_get_nodeperm,
        'elements': 2,
    },
    'Power Supply': {
        'type': 'Power Supply',
        'returns': 'tuple',
        'units': None,
        'provider': fpc_get_psustatus,
        'elements': 1,
    }
}


def get_sensor_names(size):
    global fpc_sensors
    for name in fpc_sensors:
        if size == 2 and name in ('Fan Power', 'Total Power Capacity'):
            continue
        sensor = fpc_sensors[name]
        if 'elements' in sensor:
            for elemidx in range(sensor['elements'] * size):
                elemidx += 1
                yield '{0} {1}'.format(name, elemidx)
        else:
            yield name


def get_sensor_descriptions(size):
    global fpc_sensors
    for name in fpc_sensors:
        if size == 2 and name in ('Fan Power', 'Total Power Capacity'):
            continue
        sensor = fpc_sensors[name]
        if 'elements' in sensor:
            for elemidx in range(sensor['elements'] * size):
                elemidx += 1
                yield {'name': '{0} {1}'.format(name, elemidx),
                       'type': sensor['type']}
        else:
            yield {'name': name, 'type': sensor['type']}


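The two generators above share one expansion rule: a sensor that carries an `'elements'` count yields one 1-indexed name per element per enclosure node (`elements * size` in total), while other sensors yield their bare name. A standalone sketch of that rule (the sensor table and function name here are illustrative):

```python
# Sketch of the per-element sensor-name expansion used by get_sensor_names:
# sensors with an 'elements' count expand to elements * size names,
# numbered from 1; all others yield their name unchanged.
def expand_sensor_names(sensors, size):
    for name, sensor in sensors.items():
        if 'elements' in sensor:
            for elemidx in range(1, sensor['elements'] * size + 1):
                yield '{0} {1}'.format(name, elemidx)
        else:
            yield name
```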
def get_fpc_firmware(bmcver, ipmicmd, fpcorsmm):
    mymsg = ipmicmd.xraw_command(netfn=0x32, command=0xa8)
    builddata = bytearray(mymsg['data'])
    name = None
    if fpcorsmm == 2:  # SMM
        name = 'SMM'
        buildid = '{0}{1}{2}{3}{4}{5}{6}'.format(
            *[chr(x) for x in builddata[-7:]])
    elif len(builddata) == 8:
        builddata = builddata[1:]  # discard the 'completion code'
        name = 'FPC'
        buildid = '{0}{1}'.format(builddata[-2], chr(builddata[-1]))
    yield (name, {'version': bmcver, 'build': buildid})
    yield ('PSOC', {'version': '{0}.{1}'.format(builddata[2], builddata[3])})


def get_sensor_reading(name, ipmicmd, sz):
    value = None
    sensor = None
    health = pygconst.Health.Ok
    states = []
    if name in fpc_sensors and 'elements' not in fpc_sensors[name]:
        sensor = fpc_sensors[name]
        value = sensor['provider'](ipmicmd)
    else:
        bnam, _, idx = name.rpartition(' ')
        idx = int(idx)
        if bnam in fpc_sensors and idx <= fpc_sensors[bnam]['elements'] * sz:
            sensor = fpc_sensors[bnam]
            if 'returns' in sensor:
                health, states = sensor['provider'](ipmicmd, idx, sz)
            else:
                value = sensor['provider'](ipmicmd, idx, sz)
    if sensor is not None:
        return sdr.SensorReading({'name': name, 'imprecision': None,
                                  'value': value, 'states': states,
                                  'state_ids': [], 'health': health,
                                  'type': sensor['type']},
                                 sensor['units'])
    raise Exception('Sensor not found: ' + name)
@ -1,64 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyghmi.ipmi.oem.lenovo.inventory import EntryField, \
    parse_inventory_category_entry

pci_fields = (
    EntryField("index", "B"),
    EntryField("PCIType", "B", mapper={
        0x0: "On board slot",
        0x1: "Riser Type 1",
        0x2: "Riser Type 2",
        0x3: "Riser Type 3",
        0x4: "Riser Type 4",
        0x5: "Riser Type 5",
        0x6: "Riser Type 6a",
        0x7: "Riser Type 6b",
        0x8: "ROC",
        0x9: "Mezz"
    }),
    EntryField("BusNumber", "B"),
    EntryField("DeviceFunction", "B"),
    EntryField("VendorID", "<H"),
    EntryField("DeviceID", "<H"),
    EntryField("SubSystemVendorID", "<H"),
    EntryField("SubSystemID", "<H"),
    EntryField("InterfaceType", "B"),
    EntryField("SubClassCode", "B"),
    EntryField("BaseClassCode", "B"),
    EntryField("LinkSpeed", "B"),
    EntryField("LinkWidth", "B"),
    EntryField("Reserved", "h")
)


def parse_pci_info(raw):
    return parse_inventory_category_entry(raw, pci_fields)


def get_categories():
    return {
        "pci": {
            "idstr": "PCI {0}",
            "parser": parse_pci_info,
            "command": {
                "netfn": 0x06,
                "command": 0x59,
                "data": (0x00, 0xc1, 0x03, 0x00)
            }
        }
    }
@ -1,123 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyghmi.ipmi.oem.lenovo.inventory import EntryField, \
    parse_inventory_category_entry

psu_type = {
    0b0001: "Other",
    0b0010: "Unknown",
    0b0011: "Linear",
    0b0100: "Switching",
    0b0101: "Battery",
    0b0110: "UPS",
    0b0111: "Converter",
    0b1000: "Regulator",
}
psu_status = {
    0b001: "Other",
    0b010: "Unknown",
    0b011: "OK",
    0b100: "Non-critical",
    0b101: "Critical; power supply has failed and has been taken off-line"
}
psu_voltage_range_switch = {
    0b0001: "Other",
    0b0010: "Unknown",
    0b0011: "Manual",
    0b0100: "Auto-switch",
    0b0101: "Wide range",
    0b0110: "Not applicable"
}


def psu_status_word_slice(w, s, e):
    return int(w[-e - 1:-s], 2)


def psu_status_word_bit(w, b):
    return int(w[-b - 1])


def psu_status_word_parser(word):
    fields = {}
    word = "{0:016b}".format(word)

    fields["DMTF Power Supply Type"] = \
        psu_type.get(psu_status_word_slice(word, 10, 13), "Invalid")

    # fields["Status"] = \
    #     psu_status.get(psu_status_word_slice(word, 7, 9), "Invalid")

    fields["DMTF Input Voltage Range"] = \
        psu_voltage_range_switch.get(
            psu_status_word_slice(word, 3, 6),
            "Invalid"
        )

    # Power supply is unplugged from the wall
    fields["Unplugged"] = \
        bool(psu_status_word_bit(word, 2))

    # fields["Power supply is present"] = \
    #     bool(psu_status_word_bit(word, 1))

    # Power supply is hot-replaceable
    fields["Hot Replaceable"] = \
        bool(psu_status_word_bit(word, 0))

    return fields


psu_fields = (
|
||||
EntryField("index", "B"),
|
||||
EntryField("Presence State", "B", presence=True),
|
||||
EntryField("Capacity W", "<H"),
|
||||
EntryField("Board manufacturer", "18s"),
|
||||
EntryField("Board model", "18s"),
|
||||
EntryField("Board manufacture date", "10s"),
|
||||
EntryField("Board serial number", "34s"),
|
||||
EntryField("Board manufacturer revision", "5s"),
|
||||
EntryField("Board product name", "10s"),
|
||||
EntryField("PSU Asset Tag", "10s"),
|
||||
EntryField(
|
||||
"PSU Redundancy Status",
|
||||
"B",
|
||||
valuefunc=lambda v: "Not redundant" if v == 0x00 else "Redundant"
|
||||
),
|
||||
EntryField(
|
||||
"PSU Status Word",
|
||||
"<H",
|
||||
valuefunc=psu_status_word_parser, multivaluefunc=True
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def parse_psu_info(raw):
|
||||
return parse_inventory_category_entry(raw, psu_fields)
|
||||
|
||||
|
||||
def get_categories():
|
||||
return {
|
||||
"psu": {
|
||||
"idstr": "Power Supply {0}",
|
||||
"parser": parse_psu_info,
|
||||
"command": {
|
||||
"netfn": 0x06,
|
||||
"command": 0x59,
|
||||
"data": (0x00, 0xc6, 0x00, 0x00)
|
||||
}
|
||||
}
|
||||
}
|
|
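The two slice/bit helpers above index into a 16-bit status word that has been rendered as a binary string, counting bit positions from the least significant bit. A minimal standalone sketch (the value below is a made-up test word, not real PSU data):

```python
# Standalone sketch of the bit-field helpers used by psu_status_word_parser.
# The status word is formatted as a 16-character binary string, so bit b of
# the value is character w[-b-1] of the string.
def status_word_slice(w, s, e):
    # bits s..e inclusive, counting from the least significant bit
    return int(w[-e - 1:-s], 2)


def status_word_bit(w, b):
    # single bit b, counting from the least significant bit
    return int(w[-b - 1])


word = "{0:016b}".format(0b0001110000000101)  # made-up test value
print(status_word_slice(word, 10, 13))  # bits 10-13 -> 0b0111 == 7
print(status_word_bit(word, 2))         # bit 2 -> 1
```

Formatting through a string is slower than shifting and masking the integer directly, but it keeps the comparison against the SPD-style bit tables (`psu_type`, `psu_voltage_range_switch`) easy to eyeball.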
@@ -1,65 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyghmi.ipmi.oem.lenovo.inventory import EntryField, \
    parse_inventory_category_entry

raid_controller_fields = (
    EntryField("ControllerID", "I"),
    EntryField("AdapterType", "B", mapper={
        0x00: "Unknown",
        0x01: "RAIDController"
    }),
    EntryField("SupercapPresence", "B", mapper={
        0x00: "Absent",
        0x01: "Present"
    }),
    EntryField("FlashComponent1Name", "16s"),
    EntryField("FlashComponent1Version", "64s"),
    EntryField("FlashComponent2Name", "16s"),
    EntryField("FlashComponent2Version", "64s"),
    EntryField("FlashComponent3Name", "16s"),
    EntryField("FlashComponent3Version", "64s"),
    EntryField("FlashComponent4Name", "16s"),
    EntryField("FlashComponent4Version", "64s"),
    EntryField("FlashComponent5Name", "16s"),
    EntryField("FlashComponent5Version", "64s"),
    EntryField("FlashComponent6Name", "16s"),
    EntryField("FlashComponent6Version", "64s"),
    EntryField("FlashComponent7Name", "16s"),
    EntryField("FlashComponent7Version", "64s"),
    EntryField("FlashComponent8Name", "16s"),
    EntryField("FlashComponent8Version", "64s")
)


def parse_raid_controller_info(raw):
    return parse_inventory_category_entry(raw, raid_controller_fields)


def get_categories():
    return {
        "raid_controller": {
            "idstr": "RAID Controller {0}",
            "parser": parse_raid_controller_info,
            "countable": False,
            "command": {
                "netfn": 0x06,
                "command": 0x59,
                "data": (0x00, 0xc4, 0x00, 0x00)
            }
        }
    }
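Each `EntryField` pairs a field name with a `struct` format code (`"I"`, `"B"`, `"16s"`, `"<H"`), and the shared `parse_inventory_category_entry` helper decodes an entry by walking those formats sequentially. A hypothetical stand-in for that loop, using `struct` directly (field names and `parse_entry` are illustrative, not pyghmi's API):

```python
import struct

# Hypothetical stand-in for the EntryField/parse loop: each field pairs a
# name with a struct format code, and the raw entry is decoded field by
# field at a running offset.
fields = (("ControllerID", "<I"), ("AdapterType", "B"), ("Name", "4s"))


def parse_entry(raw, fields):
    info, offset = {}, 0
    for name, fmt in fields:
        (value,) = struct.unpack_from(fmt, raw, offset)
        if fmt.endswith("s"):
            # fixed-width string fields are NUL-padded
            value = value.rstrip(b"\x00").decode("ascii")
        info[name] = value
        offset += struct.calcsize(fmt)
    return info


raw = struct.pack("<IB4s", 7, 1, b"M500")  # made-up example entry
print(parse_entry(raw, fields))
# {'ControllerID': 7, 'AdapterType': 1, 'Name': 'M500'}
```

The real pyghmi helper additionally supports `mapper` dicts, `valuefunc` post-processing, and presence flags, as the field tables above show.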
@@ -1,73 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from pyghmi.ipmi.oem.lenovo.inventory import EntryField, \
    parse_inventory_category_entry

raid_drive_fields = (
    EntryField("index", "B"),
    EntryField("VendorID", "64s"),
    EntryField("Size", "I",
               valuefunc=lambda v: str(v) + " MB"),
    EntryField("MediaType", "B", mapper={
        0x00: "HDD",
        0x01: "SSD",
        0x02: "SSM_FLASH"
    }),
    EntryField("InterfaceType", "B", mapper={
        0x00: "Unknown",
        0x01: "ParallelSCSI",
        0x02: "SAS",
        0x03: "SATA",
        0x04: "FC"
    }),
    EntryField("FormFactor", "B", mapper={
        0x00: "Unknown",
        0x01: "2.5in",
        0x02: "3.5in"
    }),
    EntryField("LinkSpeed", "B", mapper={
        0x00: "Unknown",
        0x01: "1.5 Gb/s",
        0x02: "3.0 Gb/s",
        0x03: "6.0 Gb/s",
        0x04: "12.0 Gb/s"
    }),
    EntryField("SlotNumber", "B"),
    EntryField("ControllerIndex", "B"),
    EntryField("DeviceState", "B", mapper={
        0x00: "active",
        0x01: "stopped",
        0xff: "transitioning"
    }))


def parse_raid_drive_info(raw):
    return parse_inventory_category_entry(raw, raid_drive_fields)


def get_categories():
    return {
        "raid_raid_drive": {
            "idstr": "RAID Drive {0}",
            "parser": parse_raid_drive_info,
            "command": {
                "netfn": 0x06,
                "command": 0x59,
                "data": (0x00, 0xc5, 0x00, 0x00)
            }
        }
    }
@@ -1,31 +0,0 @@
# Copyright 2015 Lenovo Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import pyghmi.ipmi.oem.generic as generic
import pyghmi.ipmi.oem.lenovo.handler as lenovo

# The mapping comes from
# http://www.iana.org/assignments/enterprise-numbers/enterprise-numbers
# Only mapping the ones with known backends
oemmap = {
    20301: lenovo,  # IBM x86 (and System X at Lenovo)
    19046: lenovo,  # Lenovo x86 (e.g. Thinkserver)
}


def get_oem_handler(oemid, ipmicmd):
    try:
        return oemmap[oemid['manufacturer_id']].OEMHandler(oemid, ipmicmd)
    except KeyError:
        return generic.OEMHandler(oemid, ipmicmd)
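`get_oem_handler` is a dict-dispatch with an EAFP fallback: look the IANA enterprise number up in `oemmap`, and let the `KeyError` route unknown vendors to the generic handler. A minimal sketch of the same pattern (the handler classes here are illustrative, not pyghmi's):

```python
# Minimal sketch of dict dispatch with a KeyError fallback, mirroring
# get_oem_handler.  GenericHandler/LenovoHandler are made-up stand-ins.
class GenericHandler(object):
    name = "generic"


class LenovoHandler(GenericHandler):
    name = "lenovo"


# IANA enterprise numbers with known backends (same IDs as oemmap above)
handlers = {20301: LenovoHandler, 19046: LenovoHandler}


def get_handler(manufacturer_id):
    try:
        return handlers[manufacturer_id]()
    except KeyError:
        return GenericHandler()


print(get_handler(19046).name)  # lenovo
print(get_handler(12345).name)  # generic
```

Because the OEM handlers subclass the generic one, callers never need to know which vendor they are talking to.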
File diff suppressed because it is too large
@@ -1,135 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2017 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ctypes import addressof, c_int, c_long, c_short, c_ubyte, c_uint
from ctypes import cast, create_string_buffer, POINTER, pointer, sizeof
from ctypes import Structure
import fcntl
import pyghmi.ipmi.private.util as iutil
from select import select


class IpmiMsg(Structure):
    _fields_ = [('netfn', c_ubyte),
                ('cmd', c_ubyte),
                ('data_len', c_short),
                ('data', POINTER(c_ubyte))]


class IpmiSystemInterfaceAddr(Structure):
    _fields_ = [('addr_type', c_int),
                ('channel', c_short),
                ('lun', c_ubyte)]


class IpmiRecv(Structure):
    _fields_ = [('recv_type', c_int),
                ('addr', POINTER(IpmiSystemInterfaceAddr)),
                ('addr_len', c_uint),
                ('msgid', c_long),
                ('msg', IpmiMsg)]


class IpmiReq(Structure):
    _fields_ = [('addr', POINTER(IpmiSystemInterfaceAddr)),
                ('addr_len', c_uint),
                ('msgid', c_long),
                ('msg', IpmiMsg)]


_IONONE = 0
_IOWRITE = 1
_IOREAD = 2
IPMICTL_SET_MY_ADDRESS_CMD = _IOREAD << 30 | sizeof(c_uint) << 16 | \
    ord('i') << 8 | 17  # from ipmi.h
IPMICTL_SEND_COMMAND = _IOREAD << 30 | sizeof(IpmiReq) << 16 | \
    ord('i') << 8 | 13  # from ipmi.h
# next is really IPMICTL_RECEIVE_MSG_TRUNC, but will only use that
IPMICTL_RECV = (_IOWRITE | _IOREAD) << 30 | sizeof(IpmiRecv) << 16 | \
    ord('i') << 8 | 11  # from ipmi.h
BMC_SLAVE_ADDR = c_uint(0x20)
CURRCHAN = 0xf
ADDRTYPE = 0xc


class Session(object):
    def __init__(self, devnode='/dev/ipmi0'):
        """Create a local session inband

        :param devnode: The path to the ipmi device
        """
        self.ipmidev = open(devnode, 'rw')
        fcntl.ioctl(self.ipmidev, IPMICTL_SET_MY_ADDRESS_CMD, BMC_SLAVE_ADDR)
        # the interface is initted, create some reusable memory for our session
        self.databuffer = create_string_buffer(4096)
        self.req = IpmiReq()
        self.rsp = IpmiRecv()
        self.addr = IpmiSystemInterfaceAddr()
        self.req.msg.data = cast(addressof(self.databuffer), POINTER(c_ubyte))
        self.rsp.msg.data = self.req.msg.data
        self.userid = None
        self.password = None

    def await_reply(self):
        rd, _, _ = select((self.ipmidev,), (), (), 1)
        while not rd:
            rd, _, _ = select((self.ipmidev,), (), (), 1)

    @property
    def parsed_rsp(self):
        response = {'netfn': self.rsp.msg.netfn, 'command': self.rsp.msg.cmd,
                    'code': ord(self.databuffer.raw[0]),
                    'data': list(bytearray(
                        self.databuffer.raw[1:self.rsp.msg.data_len]))}
        errorstr = iutil.get_ipmi_error(response)
        if errorstr:
            response['error'] = errorstr
        return response

    def raw_command(self,
                    netfn,
                    command,
                    data=(),
                    bridge_request=None,
                    retry=True,
                    delay_xmit=None,
                    timeout=None,
                    waitall=False):
        self.addr.channel = CURRCHAN
        self.addr.addr_type = ADDRTYPE
        self.req.addr_len = sizeof(IpmiSystemInterfaceAddr)
        self.req.addr = pointer(self.addr)
        self.req.msg.netfn = netfn
        self.req.msg.cmd = command
        data = buffer(bytearray(data))
        self.databuffer[:len(data)] = data[:len(data)]
        self.req.msg.data_len = len(data)
        fcntl.ioctl(self.ipmidev, IPMICTL_SEND_COMMAND, self.req)
        self.await_reply()
        self.rsp.msg.data_len = 4096
        self.rsp.addr = pointer(self.addr)
        self.rsp.addr_len = sizeof(IpmiSystemInterfaceAddr)
        fcntl.ioctl(self.ipmidev, IPMICTL_RECV, self.rsp)
        return self.parsed_rsp


def main():
    a = Session('/dev/ipmi0')
    print(repr(a.raw_command(0, 1)))


if __name__ == '__main__':
    main()
@@ -1,367 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This represents the server side of a session object
# Split into a separate file to avoid overly manipulating the as-yet
# client-centered session object
import collections
import hashlib
import hmac
import os
import pyghmi.ipmi.private.constants as constants
import pyghmi.ipmi.private.session as ipmisession
import socket
import struct
import uuid


class ServerSession(ipmisession.Session):
    def __new__(cls, authdata, kg, clientaddr, netsocket, request, uuid,
                bmc):
        # Need to do default new type behavior. The normal session
        # takes measures to assure the caller shares even when they
        # didn't try. We don't have that operational mode to contend
        # with in the server case (one file descriptor per bmc)
        return object.__new__(cls)

    def create_open_session_response(self, request):
        clienttag = request[0]
        # role = request[1]
        self.clientsessionid = request[4:8]
        # TODO(jbjohnso): intelligently handle integrity/auth/conf
        # for now, forcibly do cipher suite 3
        self.managedsessionid = os.urandom(4)
        # table 13-17, 1 for now (hmac-sha1), 3 should also be supported
        # table 13-18, integrity, 1 for now is hmac-sha1-96, 4 is sha256
        # confidentiality: 1 is aes-cbc-128, the only one
        self.privlevel = 4
        response = (bytearray([clienttag, 0, self.privlevel, 0]) +
                    self.clientsessionid + self.managedsessionid +
                    bytearray([
                        0, 0, 0, 8, 1, 0, 0, 0,  # auth
                        1, 0, 0, 8, 1, 0, 0, 0,  # integrity
                        2, 0, 0, 8, 1, 0, 0, 0,  # privacy
                    ]))
        return response

    def __init__(self, authdata, kg, clientaddr, netsocket, request, uuid,
                 bmc):
        # begin conversation per RMCP+ open session request
        self.uuid = uuid
        self.rqaddr = constants.IPMI_BMC_ADDRESS
        self.authdata = authdata
        self.servermode = True
        self.ipmiversion = 2.0
        self.sequencenumber = 0
        self.sessionid = 0
        self.bmc = bmc
        self.lastpayload = None
        self.broken = False
        self.authtype = 6
        self.integrityalgo = 0
        self.confalgo = 0
        self.kg = kg
        self.socket = netsocket
        self.sockaddr = clientaddr
        self.pendingpayloads = collections.deque([])
        self.pktqueue = collections.deque([])
        if clientaddr not in ipmisession.Session.bmc_handlers:
            ipmisession.Session.bmc_handlers[clientaddr] = {bmc.port: self}
        else:
            ipmisession.Session.bmc_handlers[clientaddr][bmc.port] = self
        response = self.create_open_session_response(bytearray(request))
        self.send_payload(response,
                          constants.payload_types['rmcpplusopenresponse'],
                          retry=False)

    def _got_rmcp_openrequest(self, data):
        response = self.create_open_session_response(
            struct.pack('B' * len(data), *data))
        self.send_payload(response,
                          constants.payload_types['rmcpplusopenresponse'],
                          retry=False)

    def _got_rakp1(self, data):
        clienttag = data[0]
        self.Rm = data[8:24]
        self.rolem = data[24]
        self.maxpriv = self.rolem & 0b111
        namepresent = data[27]
        if namepresent == 0:
            # ignore null username for now
            return
        self.username = bytes(data[28:])
        if self.username.decode('utf-8') not in self.authdata:
            # don't think about invalid usernames for now
            return
        uuidbytes = self.uuid.bytes
        self.uuiddata = uuidbytes
        self.Rc = os.urandom(16)
        hmacdata = (self.clientsessionid + self.managedsessionid +
                    self.Rm + self.Rc + uuidbytes +
                    bytearray([self.rolem, len(self.username)]))
        hmacdata += self.username
        self.kuid = self.authdata[self.username.decode('utf-8')].encode(
            'utf-8')
        if self.kg is None:
            self.kg = self.kuid
        authcode = hmac.new(
            self.kuid, bytes(hmacdata), hashlib.sha1).digest()
        # regrettably, ipmi mandates the server send out an hmac first
        # akin to a leak of /etc/shadow, not too worrisome if the secret
        # is complex, but terrible for most likely passwords selected by
        # a human
        newmessage = (bytearray([clienttag, 0, 0, 0]) + self.clientsessionid +
                      self.Rc + uuidbytes + authcode)
        self.send_payload(newmessage, constants.payload_types['rakp2'],
                          retry=False)

    def _got_rakp2(self, data):
        # stub, server should not think about rakp2
        pass

    def _got_rakp3(self, data):
        # for now drop rakp3 with bad authcode
        # respond correctly a TODO(jjohnson2), since Kg being used
        # yet incorrect is a scenario why rakp3 could be bad
        # even if rakp2 was good
        RmRc = self.Rm + self.Rc
        self.sik = hmac.new(self.kg,
                            bytes(RmRc) +
                            struct.pack("2B", self.rolem,
                                        len(self.username)) +
                            self.username, hashlib.sha1).digest()
        self.k1 = hmac.new(self.sik, b'\x01' * 20, hashlib.sha1).digest()
        self.k2 = hmac.new(self.sik, b'\x02' * 20, hashlib.sha1).digest()
        self.aeskey = self.k2[0:16]
        hmacdata = self.Rc +\
            self.clientsessionid +\
            struct.pack("2B", self.rolem,
                        len(self.username)) +\
            self.username
        expectedauthcode = hmac.new(self.kuid, bytes(hmacdata), hashlib.sha1
                                    ).digest()
        authcode = struct.pack("%dB" % len(data[8:]), *data[8:])
        if expectedauthcode != authcode:
            # TODO(jjohnson2): RMCP error back at invalid rakp3
            return
        clienttag = data[0]
        if data[1] != 0:
            # client did not like our response, so ignore the rakp3
            return
        self.localsid = struct.unpack('<I', self.managedsessionid)[0]
        self.ipmicallback = self.handle_client_request
        self._send_rakp4(clienttag, 0)

    def handle_client_request(self, request):
        if request['netfn'] == 6 and request['command'] == 0x3b:
            pendingpriv = request['data'][0]
            returncode = 0
            if pendingpriv > 1:
                if pendingpriv > self.maxpriv:
                    returncode = 0x81
                else:
                    self.clientpriv = request['data'][0]
            self._send_ipmi_net_payload(code=returncode,
                                        data=[self.clientpriv])
        elif request['netfn'] == 6 and request['command'] == 0x3c:
            self.send_ipmi_response()
            self.close_server_session()
        else:
            self.bmc.handle_raw_request(request, self)

    def close_server_session(self):
        pass

    def _send_rakp4(self, tagvalue, statuscode):
        payload = bytearray(
            [tagvalue, statuscode, 0, 0]) + self.clientsessionid
        hmacdata = self.Rm + self.managedsessionid + self.uuiddata
        hmacdata = struct.pack('%dB' % len(hmacdata), *hmacdata)
        authdata = hmac.new(self.sik, hmacdata, hashlib.sha1).digest()[:12]
        payload += authdata
        self.send_payload(payload, constants.payload_types['rakp4'],
                          retry=False)
        self.confalgo = 'aes'
        self.integrityalgo = 'sha1'
        self.sequencenumber = 1
        self.sessionid = struct.unpack(
            '<I', struct.pack('4B', *self.clientsessionid))[0]

    def _got_rakp4(self, data):
        # stub, server should not think about rakp4
        pass

    def _timedout(self):
        """Expire a client session after a period of inactivity

        After the session inactivity timeout, this invalidates the client
        session.
        """
        # for now, we will have a non-configurable 60 second timeout
        pass

    def _handle_channel_auth_cap(self, request):
        """Handle incoming channel authentication capabilities request

        This is used when serving as an IPMI target to service client
        requests for client authentication capabilities
        """
        pass

    def send_ipmi_response(self, data=[], code=0):
        self._send_ipmi_net_payload(data=data, code=code)

    def logout(self):
        pass


class IpmiServer(object):
    # auth capabilities for now is a static payload
    # for now always completion code 0, otherwise ignore
    # authentication type fixed to ipmi2, ipmi1 forbidden
    # 0b10000000

    def __init__(self, authdata, port=623, bmcuuid=None, address='::'):
        """Create a new ipmi bmc instance.

        :param authdata: A dict or object with .get() to provide password
                         lookup by username. This does not support the full
                         complexity of what IPMI can support, only a
                         reasonable subset.
        :param port: The default port number to bind to. Defaults to the
                     standard 623
        :param address: The IP address to bind to. Defaults to '::' (all
                        zeroes)
        """
        self.revision = 0
        self.deviceid = 0
        self.firmwaremajor = 1
        self.firmwareminor = 0
        self.ipmiversion = 2
        self.additionaldevices = 0
        self.mfgid = 0
        self.prodid = 0
        self.pktqueue = collections.deque([])
        if bmcuuid is None:
            self.uuid = uuid.uuid4()
        else:
            self.uuid = bmcuuid
        lanchannel = 1
        authtype = 0b10000000  # ipmi2 only
        authstatus = 0b00000100  # change based on authdata/kg
        chancap = 0b00000010  # ipmi2 only
        oemdata = (0, 0, 0, 0)
        self.authdata = authdata
        self.authcap = struct.pack('BBBBBBBBB', 0, lanchannel, authtype,
                                   authstatus, chancap, *oemdata)
        self.kg = None
        self.timeout = 60
        self.port = port
        addrinfo = socket.getaddrinfo(address, port, 0,
                                      socket.SOCK_DGRAM)[0]
        self.serversocket = ipmisession.Session._assignsocket(addrinfo)
        ipmisession.Session.bmc_handlers[self.serversocket] = {0: self}

    def send_auth_cap(self, myaddr, mylun, clientaddr, clientlun, clientseq,
                      sockaddr):
        header = bytearray(
            b'\x06\x00\xff\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10')
        headerdata = [clientaddr, clientlun | (7 << 2)]
        headersum = ipmisession._checksum(*headerdata)
        header += bytearray(headerdata + [headersum, myaddr,
                                          mylun | (clientseq << 2), 0x38])
        header += self.authcap
        bodydata = struct.unpack('B' * len(header[17:]), bytes(header[17:]))
        header.append(ipmisession._checksum(*bodydata))
        ipmisession._io_sendto(self.serversocket, header, sockaddr)

    def process_pktqueue(self):
        while self.pktqueue:
            pkt = self.pktqueue.popleft()
            self.sessionless_data(pkt[0], pkt[1])

    def sessionless_data(self, data, sockaddr):
        """Examines unsolicited packet and decides appropriate action.

        For a listening IpmiServer, a packet without an active session
        comes here for examination. If it is something that is utterly
        sessionless (e.g. get channel authentication), send the appropriate
        response. If it is a get session challenge or open rmcp+ request,
        spawn a session to handle the context.
        """
        if len(data) < 22:
            return
        data = bytearray(data)
        if not (data[0] == 6 and data[2:4] == b'\xff\x07'):  # not ipmi
            return
        if data[4] == 6:  # ipmi 2 payload...
            payloadtype = data[5]
            if payloadtype not in (0, 16):
                return
            if payloadtype == 16:  # new session to handle conversation
                ServerSession(self.authdata, self.kg, sockaddr,
                              self.serversocket, data[16:], self.uuid,
                              bmc=self)
                return
            # ditch two byte, because ipmi2 header is two
            # bytes longer than ipmi1 (payload type added, payload length 2).
            data = data[2:]
        myaddr, netfnlun = struct.unpack('2B', bytes(data[14:16]))
        netfn = (netfnlun & 0b11111100) >> 2
        mylun = netfnlun & 0b11
        if netfn == 6:  # application request
            if data[19] == 0x38:  # cmd = get channel auth capabilities
                verchannel, level = struct.unpack('2B', bytes(data[20:22]))
                version = verchannel & 0b10000000
                if version != 0b10000000:
                    return
                channel = verchannel & 0b1111
                if channel != 0xe:
                    return
                (clientaddr, clientlun) = struct.unpack(
                    'BB', bytes(data[17:19]))
                clientseq = clientlun >> 2
                clientlun &= 0b11  # Lun is only the least significant bits
                level &= 0b1111
                self.send_auth_cap(myaddr, mylun, clientaddr, clientlun,
                                   clientseq, sockaddr)

    def set_kg(self, kg):
        """Sets the Kg for the BMC to use

        In RAKP, Kg is a BMC-specific integrity key that can be set. If not
        set, Kuid is used for the integrity key
        """
        try:
            self.kg = kg.encode('utf-8')
        except AttributeError:
            self.kg = kg

    def send_device_id(self, session):
        response = [self.deviceid, self.revision, self.firmwaremajor,
                    self.firmwareminor, self.ipmiversion,
                    self.additionaldevices]
        response += struct.unpack('4B', struct.pack('<I', self.mfgid))
        response += struct.unpack('4B', struct.pack('<I', self.prodid))
        session.send_ipmi_response(data=response)

    def handle_raw_request(self, request, session):
        # per table 5-2, completion code 0xc1 is 'unrecognized'
        session.send_ipmi_response(code=0xc1)

    def logout(self):
        pass
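The key material derived in `_got_rakp3` follows the RMCP+/RAKP scheme: the session integrity key (SIK) is an HMAC-SHA1 over both random numbers plus the requested role and username, and the additional keys K1 and K2 are HMACs of constant bytes under the SIK, with the first 16 bytes of K2 becoming the AES-CBC-128 confidentiality key. A standalone sketch with made-up inputs:

```python
import hashlib
import hmac
import struct

# Standalone sketch of the RAKP key derivation performed in _got_rakp3.
# All inputs below are made-up test values, not real session material.
kg = b'admin-password'      # Kg (defaults to Kuid when no BMC key is set)
rm = b'\x01' * 16           # remote console random number (Rm)
rc = b'\x02' * 16           # managed system random number (Rc)
rolem = 4                   # requested maximum privilege level
username = b'admin'

# SIK = HMAC(Kg, Rm | Rc | role | len(username) | username)
sik = hmac.new(kg, rm + rc + struct.pack('2B', rolem, len(username)) +
               username, hashlib.sha1).digest()
# K1 and K2 are HMACs of twenty 0x01 / 0x02 bytes under the SIK
k1 = hmac.new(sik, b'\x01' * 20, hashlib.sha1).digest()
k2 = hmac.new(sik, b'\x02' * 20, hashlib.sha1).digest()
aeskey = k2[0:16]           # AES-CBC-128 confidentiality key
print(len(sik), len(k1), len(aeskey))  # 20 20 16
```

K1 then keys the HMAC-SHA1-96 integrity check on each packet (truncated to 12 bytes, as `_send_rakp4` does with the SIK-based authcode), while the AES key encrypts session payloads.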
File diff suppressed because it is too large
@@ -1,765 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# coding=utf8

# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This implements parsing of DDR SPD data. This is offered up in a pass
# through fashion by some service processors.

# For now, just doing DDR3 and DDR4

# In many cases, astute readers will note that some of the lookup tables
# should be a matter of math rather than lookup. However the SPD
# specification explicitly reserves values not in the lookup tables for
# future use. It has happened, for example, that a spec was amended
# with discontinuous values for a field that was until that point
# possible to derive in a formulaic way

import struct

jedec_ids = [
    {
        0x01: "AMD",
        0x02: "AMI",
        0x83: "Fairchild",
        0x04: "Fujitsu",
        0x85: "GTE",
        0x86: "Harris",
        0x07: "Hitachi",
        0x08: "Inmos",
        0x89: "Intel",
        0x8a: "I.T.T.",
        0x0b: "Intersil",
        0x8c: "Monolithic Memories",
        0x0d: "Mostek",
        0x0e: "Motorola",
        0x8f: "National",
        0x10: "NEC",
        0x91: "RCA",
        0x92: "Raytheon",
        0x13: "Conexant (Rockwell)",
        0x94: "Seeq",
        0x15: "Philips Semi. (Signetics)",
        0x16: "Synertek",
        0x97: "Texas Instruments",
        0x98: "Toshiba",
        0x19: "Xicor",
        0x1a: "Zilog",
        0x9b: "Eurotechnique",
        0x1c: "Mitsubishi",
        0x9d: "Lucent (AT&T)",
        0x9e: "Exel",
        0x1f: "Atmel",
        0x20: "SGS/Thomson",
        0xa1: "Lattice Semi.",
        0xa2: "NCR",
        0x23: "Wafer Scale Integration",
        0xa4: "IBM",
        0x25: "Tristar",
        0x26: "Visic",
        0xa7: "Intl. CMOS Technology",
        0xa8: "SSSI",
        0x29: "Microchip Technology",
        0x2a: "Ricoh Ltd.",
        0xab: "VLSI",
        0x2c: "Micron Technology",
        0xad: "Hyundai Electronics",
        0xae: "OKI Semiconductor",
        0x2f: "ACTEL",
        0xb0: "Sharp",
        0x31: "Catalyst",
        0x32: "Panasonic",
        0xb3: "IDT",
        0x34: "Cypress",
        0xb5: "DEC",
        0xb6: "LSI Logic",
        0x37: "Zarlink",
        0x38: "UTMC",
        0xb9: "Thinking Machine",
        0xba: "Thomson CSF",
        0x3b: "Integrated CMOS(Vertex)",
        0xbc: "Honeywell",
        0x3d: "Tektronix",
        0x3e: "Sun Microsystems",
        0xbf: "SST",
        0x40: "MOSEL",
        0xc1: "Infineon",
        0xc2: "Macronix",
        0x43: "Xerox",
        0xc4: "Plus Logic",
        0x45: "SunDisk",
        0x46: "Elan Circuit Tech.",
        0xc7: "European Silicon Str.",
        0xc8: "Apple Computer",
        0xc9: "Xilinx",
        0x4a: "Compaq",
        0xcb: "Protocol Engines",
        0x4c: "SCI",
        0xcd: "Seiko Instruments",
        0xce: "Samsung",
        0x4f: "I3 Design System",
        0xd0: "Klic",
        0x51: "Crosspoint Solutions",
        0x52: "Alliance Semiconductor",
        0xd3: "Tandem",
        0x54: "Hewlett-Packard",
        0xd5: "Intg. Silicon Solutions",
        0xd6: "Brooktree",
        0x57: "New Media",
        0x58: "MHS Electronic",
        0xd9: "Performance Semi.",
        0xda: "Winbond Electronic",
        0x5b: "Kawasaki Steel",
        0xdc: "Bright Micro",
        0x5d: "TECMAR",
        0x5e: "Exar",
        0xdf: "PCMCIA",
        0xe0: "LG Semiconductor",
        0x61: "Northern Telecom",
        0x62: "Sanyo",
        0xe3: "Array Microsystems",
        0x64: "Crystal Semiconductor",
        0xe5: "Analog Devices",
        0xe6: "PMC-Sierra",
        0x67: "Asparix",
        0x68: "Convex Computer",
        0xe9: "Quality Semiconductor",
        0xea: "Nimbus Technology",
        0x6b: "Transwitch",
        0xec: "Micronas (ITT Intermetall)",
        0x6d: "Cannon",
        0x6e: "Altera",
        0xef: "NEXCOM",
        0x70: "QUALCOMM",
        0xf1: "Sony",
        0xf2: "Cray Research",
        0x73: "AMS (Austria Micro)",
        0xf4: "Vitesse",
        0x75: "Aster Electronics",
        0x76: "Bay Networks (Synoptic)",
        0xf7: "Zentrum",
        0xf8: "TRW",
        0x79: "Thesys",
        0x7a: "Solbourne Computer",
        0xfb: "Allied-Signal",
        0x7c: "Dialog",
        0xfd: "Media Vision",
        0xfe: "Level One Communication",
    },
    {
        0x01: "Cirrus Logic",
        0x02: "National Instruments",
        0x83: "ILC Data Device",
        0x04: "Alcatel Mietec",
        0x85: "Micro Linear",
        0x86: "Univ. of NC",
        0x07: "JTAG Technologies",
        0x08: "Loral",
        0x89: "Nchip",
        0x8A: "Galileo Tech",
        0x0B: "Bestlink Systems",
        0x8C: "Graychip",
        0x0D: "GENNUM",
        0x0E: "VideoLogic",
        0x8F: "Robert Bosch",
        0x10: "Chip Express",
        0x91: "DATARAM",
        0x92: "United Microelec Corp.",
        0x13: "TCSI",
        0x94: "Smart Modular",
        0x15: "Hughes Aircraft",
        0x16: "Lanstar Semiconductor",
        0x97: "Qlogic",
        0x98: "Kingston",
        0x19: "Music Semi",
        0x1A: "Ericsson Components",
        0x9B: "SpaSE",
        0x1C: "Eon Silicon Devices",
        0x9D: "Programmable Micro Corp",
        0x9E: "DoD",
        0x1F: "Integ. Memories Tech.",
        0x20: "Corollary Inc.",
        0xA1: "Dallas Semiconductor",
        0xA2: "Omnivision",
        0x23: "EIV(Switzerland)",
        0xA4: "Novatel Wireless",
        0x25: "Zarlink (formerly Mitel)",
        0x26: "Clearpoint",
        0xA7: "Cabletron",
        0xA8: "Silicon Technology",
        0x29: "Vanguard",
        0x2A: "Hagiwara Sys-Com",
        0xAB: "Vantis",
        0x2C: "Celestica",
        0xAD: "Century",
        0xAE: "Hal Computers",
        0x2F: "Rohm Company Ltd.",
        0xB0: "Juniper Networks",
        0x31: "Libit Signal Processing",
        0x32: "Enhanced Memories Inc.",
        0xB3: "Tundra Semiconductor",
        0x34: "Adaptec Inc.",
        0xB5: "LightSpeed Semi.",
        0xB6: "ZSP Corp.",
        0x37: "AMIC Technology",
        0x38: "Adobe Systems",
        0xB9: "Dynachip",
        0xBA: "PNY Electronics",
        0x3B: "Newport Digital",
        0xBC: "MMC Networks",
        0x3D: "T Square",
        0x3E: "Seiko Epson",
        0xBF: "Broadcom",
        0x40: "Viking Components",
        0xC1: "V3 Semiconductor",
        0xC2: "Flextronics (formerly Orbit)",
        0x43: "Suwa Electronics",
        0xC4: "Transmeta",
        0x45: "Micron CMS",
        0x46: "American Computer & Digital Components Inc",
        0xC7: "Enhance 3000 Inc",
        0xC8: "Tower Semiconductor",
        0x49: "CPU Design",
        0x4A: "Price Point",
        0xCB: "Maxim Integrated Product",
        0x4C: "Tellabs",
        0xCD: "Centaur Technology",
0xce: "Samsung",
|
||||
0x4f: "I3 Design System",
|
||||
0xd0: "Klic",
|
||||
0x51: "Crosspoint Solutions",
|
||||
0x52: "Alliance Semiconductor",
|
||||
0xd3: "Tandem",
|
||||
0x54: "Hewlett-Packard",
|
||||
0xd5: "Intg. Silicon Solutions",
|
||||
0xd6: "Brooktree",
|
||||
0x57: "New Media",
|
||||
0x58: "MHS Electronic",
|
||||
0xd9: "Performance Semi.",
|
||||
0xda: "Winbond Electronic",
|
||||
0x5b: "Kawasaki Steel",
|
||||
0xdc: "Bright Micro",
|
||||
0x5d: "TECMAR",
|
||||
0x5e: "Exar",
|
||||
0xdf: "PCMCIA",
|
||||
0xe0: "LG Semiconductor",
|
||||
0x61: "Northern Telecom",
|
||||
0x62: "Sanyo",
|
||||
0xe3: "Array Microsystems",
|
||||
0x64: "Crystal Semiconductor",
|
||||
0xe5: "Analog Devices",
|
||||
0xe6: "PMC-Sierra",
|
||||
0x67: "Asparix",
|
||||
0x68: "Convex Computer",
|
||||
0xe9: "Quality Semiconductor",
|
||||
0xea: "Nimbus Technology",
|
||||
0x6b: "Transwitch",
|
||||
0xec: "Micronas (ITT Intermetall)",
|
||||
0x6d: "Cannon",
|
||||
0x6e: "Altera",
|
||||
0xef: "NEXCOM",
|
||||
0x70: "QUALCOMM",
|
||||
0xf1: "Sony",
|
||||
0xf2: "Cray Research",
|
||||
0x73: "AMS (Austria Micro)",
|
||||
0xf4: "Vitesse",
|
||||
0x75: "Aster Electronics",
|
||||
0x76: "Bay Networks (Synoptic)",
|
||||
0xf7: "Zentrum",
|
||||
0xf8: "TRW",
|
||||
0x79: "Thesys",
|
||||
0x7a: "Solbourne Computer",
|
||||
0xfb: "Allied-Signal",
|
||||
0x7c: "Dialog",
|
||||
0xfd: "Media Vision",
|
||||
0xfe: "Level One Communication",
|
||||
},
|
||||
    {
        0x01: "Cirrus Logic",
        0x02: "National Instruments",
        0x83: "ILC Data Device",
        0x04: "Alcatel Mietec",
        0x85: "Micro Linear",
        0x86: "Univ. of NC",
        0x07: "JTAG Technologies",
        0x08: "Loral",
        0x89: "Nchip",
        0x8A: "Galileo Tech",
        0x0B: "Bestlink Systems",
        0x8C: "Graychip",
        0x0D: "GENNUM",
        0x0E: "VideoLogic",
        0x8F: "Robert Bosch",
        0x10: "Chip Express",
        0x91: "DATARAM",
        0x92: "United Microelec Corp.",
        0x13: "TCSI",
        0x94: "Smart Modular",
        0x15: "Hughes Aircraft",
        0x16: "Lanstar Semiconductor",
        0x97: "Qlogic",
        0x98: "Kingston",
        0x19: "Music Semi",
        0x1A: "Ericsson Components",
        0x9B: "SpaSE",
        0x1C: "Eon Silicon Devices",
        0x9D: "Programmable Micro Corp",
        0x9E: "DoD",
        0x1F: "Integ. Memories Tech.",
        0x20: "Corollary Inc.",
        0xA1: "Dallas Semiconductor",
        0xA2: "Omnivision",
        0x23: "EIV(Switzerland)",
        0xA4: "Novatel Wireless",
        0x25: "Zarlink (formerly Mitel)",
        0x26: "Clearpoint",
        0xA7: "Cabletron",
        0xA8: "Silicon Technology",
        0x29: "Vanguard",
        0x2A: "Hagiwara Sys-Com",
        0xAB: "Vantis",
        0x2C: "Celestica",
        0xAD: "Century",
        0xAE: "Hal Computers",
        0x2F: "Rohm Company Ltd.",
        0xB0: "Juniper Networks",
        0x31: "Libit Signal Processing",
        0x32: "Enhanced Memories Inc.",
        0xB3: "Tundra Semiconductor",
        0x34: "Adaptec Inc.",
        0xB5: "LightSpeed Semi.",
        0xB6: "ZSP Corp.",
        0x37: "AMIC Technology",
        0x38: "Adobe Systems",
        0xB9: "Dynachip",
        0xBA: "PNY Electronics",
        0x3B: "Newport Digital",
        0xBC: "MMC Networks",
        0x3D: "T Square",
        0x3E: "Seiko Epson",
        0xBF: "Broadcom",
        0x40: "Viking Components",
        0xC1: "V3 Semiconductor",
        0xC2: "Flextronics (formerly Orbit)",
        0x43: "Suwa Electronics",
        0xC4: "Transmeta",
        0x45: "Micron CMS",
        0x46: "American Computer & Digital Components Inc",
        0xC7: "Enhance 3000 Inc",
        0xC8: "Tower Semiconductor",
        0x49: "CPU Design",
        0x4A: "Price Point",
        0xCB: "Maxim Integrated Product",
        0x4C: "Tellabs",
        0xCD: "Centaur Technology",
        0xCE: "Unigen Corporation",
        0x4F: "Transcend Information",
        0xD0: "Memory Card Technology",
        0x51: "CKD Corporation Ltd.",
        0x52: "Capital Instruments, Inc.",
        0xD3: "Aica Kogyo, Ltd.",
        0x54: "Linvex Technology",
        0xD5: "MSC Vertriebs GmbH",
        0xD6: "AKM Company, Ltd.",
        0x57: "Dynamem, Inc.",
        0x58: "NERA ASA",
        0xD9: "GSI Technology",
        0xDA: "Dane-Elec (C Memory)",
        0x5B: "Acorn Computers",
        0xDC: "Lara Technology",
        0x5D: "Oak Technology, Inc.",
        0x5E: "Itec Memory",
        0xDF: "Tanisys Technology",
        0xE0: "Truevision",
        0x61: "Wintec Industries",
        0x62: "Super PC Memory",
        0xE3: "MGV Memory",
        0x64: "Galvantech",
        0xE5: "Gadzoox Networks",
        0xE6: "Multi Dimensional Cons.",
        0x67: "GateField",
        0x68: "Integrated Memory System",
        0xE9: "Triscend",
        0xEA: "XaQti",
        0x6B: "Goldenram",
        0xEC: "Clear Logic",
        0x6D: "Cimaron Communications",
        0x6E: "Nippon Steel Semi. Corp.",
        0xEF: "Advantage Memory",
        0x70: "AMCC",
        0xF1: "LeCroy",
        0xF2: "Yamaha Corporation",
        0x73: "Digital Microwave",
        0xF4: "NetLogic Microsystems",
        0x75: "MIMOS Semiconductor",
        0x76: "Advanced Fibre",
        0xF7: "BF Goodrich Data.",
        0xF8: "Epigram",
        0x79: "Acbel Polytech Inc.",
        0x7A: "Apacer Technology",
        0xFB: "Admor Memory",
        0x7C: "FOXCONN",
        0xFD: "Quadratics Superconductor",
        0xFE: "3COM",
    },
    {
        0x01: "Camintonn Corporation",
        0x02: "ISOA Incorporated",
        0x83: "Agate Semiconductor",
        0x04: "ADMtek Incorporated",
        0x85: "HYPERTEC",
        0x86: "Adhoc Technologies",
        0x07: "MOSAID Technologies",
        0x08: "Ardent Technologies",
        0x89: "Switchcore",
        0x8A: "Cisco Systems, Inc.",
        0x0B: "Allayer Technologies",
        0x8C: "WorkX AG",
        0x0D: "Oasis Semiconductor",
        0x0E: "Novanet Semiconductor",
        0x8F: "E-M Solutions",
        0x10: "Power General",
        0x91: "Advanced Hardware Arch.",
        0x92: "Inova Semiconductors GmbH",
        0x13: "Telocity",
        0x94: "Delkin Devices",
        0x15: "Symagery Microsystems",
        0x16: "C-Port Corporation",
        0x97: "SiberCore Technologies",
        0x98: "Southland Microsystems",
        0x19: "Malleable Technologies",
        0x1A: "Kendin Communications",
        0x9B: "Great Technology Microcomputer",
        0x1C: "Sanmina Corporation",
        0x9D: "HADCO Corporation",
        0x9E: "Corsair",
        0x1F: "Actrans System Inc.",
        0x20: "ALPHA Technologies",
        0xA1: "Cygnal Integrated Products Incorporated",
        0xA2: "Artesyn Technologies",
        0x23: "Align Manufacturing",
        0xA4: "Peregrine Semiconductor",
        0x25: "Chameleon Systems",
        0x26: "Aplus Flash Technology",
        0xA7: "MIPS Technologies",
        0xA8: "Chrysalis ITS",
        0x29: "ADTEC Corporation",
        0x2A: "Kentron Technologies",
        0xAB: "Win Technologies",
        0x2C: "ASIC Designs Inc",
        0xAD: "Extreme Packet Devices",
        0xAE: "RF Micro Devices",
        0x2F: "Siemens AG",
        0xB0: "Sarnoff Corporation",
        0x31: "Itautec Philco SA",
        0x32: "Radiata Inc.",
        0xB3: "Benchmark Elect. (AVEX)",
        0x34: "Legend",
        0xB5: "SpecTek Incorporated",
        0xB6: "Hi/fn",
        0x37: "Enikia Incorporated",
        0x38: "SwitchOn Networks",
        0xB9: "AANetcom Incorporated",
        0xBA: "Micro Memory Bank",
        0x3B: "ESS Technology",
        0xBC: "Virata Corporation",
        0x3D: "Excess Bandwidth",
        0x3E: "West Bay Semiconductor",
        0xBF: "DSP Group",
        0x40: "Newport Communications",
        0xC1: "Chip2Chip Incorporated",
        0xC2: "Phobos Corporation",
        0x43: "Intellitech Corporation",
        0xC4: "Nordic VLSI ASA",
        0x45: "Ishoni Networks",
        0x46: "Silicon Spice",
        0xC7: "Alchemy Semiconductor",
        0xC8: "Agilent Technologies",
        0x49: "Centillium Communications",
        0x4A: "W.L. Gore",
        0xCB: "HanBit Electronics",
        0x4C: "GlobeSpan",
        0xCD: "Element 14",
        0xCE: "Pycon",
        0x4F: "Saifun Semiconductors",
        0xD0: "Sibyte, Incorporated",
        0x51: "MetaLink Technologies",
        0x52: "Feiya Technology",
        0xD3: "I & C Technology",
        0x54: "Shikatronics",
        0xD5: "Elektrobit",
        0xD6: "Megic",
        0x57: "Com-Tier",
        0x58: "Malaysia Micro Solutions",
        0xD9: "Hyperchip",
        0xDA: "Gemstone Communications",
        0x5B: "Anadyne Microelectronics",
        0xDC: "3ParData",
        0x5D: "Mellanox Technologies",
        0x5E: "Tenx Technologies",
        0xDF: "Helix AG",
        0xE0: "Domosys",
        0x61: "Skyup Technology",
        0x62: "HiNT Corporation",
        0xE3: "Chiaro",
        0x64: "MCI Computer GMBH",
        0xE5: "Exbit Technology A/S",
        0xE6: "Integrated Technology Express",
        0x67: "AVED Memory",
        0x68: "Legerity",
        0xE9: "Jasmine Networks",
        0xEA: "Caspian Networks",
        0x6B: "nCUBE",
        0xEC: "Silicon Access Networks",
        0x6D: "FDK Corporation",
        0x6E: "High Bandwidth Access",
        0xEF: "MultiLink Technology",
        0x70: "BRECIS",
        0xF1: "World Wide Packets",
        0xF2: "APW",
        0x73: "Chicory Systems",
        0xF4: "Xstream Logic",
        0x75: "Fast-Chip",
        0x76: "Zucotto Wireless",
        0xF7: "Realchip",
        0xF8: "Galaxy Power",
        0x79: "eSilicon",
        0x7A: "Morphics Technology",
        0xFB: "Accelerant Networks",
        0x7C: "Silicon Wave",
        0xFD: "SandCraft",
        0xFE: "Elpida",
    },
    {
        0x01: "Solectron",
        0x02: "Optosys Technologies",
        0x83: "Buffalo (Formerly Melco)",
        0x04: "TriMedia Technologies",
        0x85: "Cyan Technologies",
        0x86: "Global Locate",
        0x07: "Optillion",
        0x08: "Terago Communications",
        0x89: "Ikanos Communications",
        0x8A: "Princeton Technology",
        0x0B: "Nanya Technology",
        0x8C: "Elite Flash Storage",
        0x0D: "Mysticom",
        0x0E: "LightSand Communications",
        0x8F: "ATI Technologies",
        0x10: "Agere Systems",
        0x91: "NeoMagic",
        0x92: "AuroraNetics",
        0x13: "Golden Empire",
        0x94: "Muskin",
        0x15: "Tioga Technologies",
        0x16: "Netlist",
        0x97: "TeraLogic",
        0x98: "Cicada Semiconductor",
        0x19: "Centon Electronics",
        0x1A: "Tyco Electronics",
        0x9B: "Magis Works",
        0x1C: "Zettacom",
        0x9D: "Cogency Semiconductor",
        0x9E: "Chipcon AS",
        0x1F: "Aspex Technology",
        0x20: "F5 Networks",
        0xA1: "Programmable Silicon Solutions",
        0xA2: "ChipWrights",
        0x23: "Acorn Networks",
        0xA4: "Quicklogic",
        0x25: "Kingmax Semiconductor",
        0x26: "BOPS",
        0xA7: "Flasys",
        0xA8: "BitBlitz Communications",
        0x29: "eMemory Technology",
        0x2A: "Procket Networks",
        0xAB: "Purple Ray",
        0x2C: "Trebia Networks",
        0xAD: "Delta Electronics",
        0xAE: "Onex Communications",
        0x2F: "Ample Communications",
        0xB0: "Memory Experts Intl",
        0x31: "Astute Networks",
        0x32: "Azanda Network Devices",
        0xB3: "Dibcom",
        0x34: "Tekmos",
        0xB5: "API NetWorks",
        0xB6: "Bay Microsystems",
        0x37: "Firecron Ltd",
        0x38: "Resonext Communications",
        0xB9: "Tachys Technologies",
        0xBA: "Equator Technology",
        0x3B: "Concept Computer",
        0xBC: "SILCOM",
        0x3D: "3Dlabs",
        0x3E: "ct Magazine",
        0xBF: "Sanera Systems",
        0x40: "Silicon Packets",
        0xC1: "Viasystems Group",
        0xC2: "Simtek",
        0x43: "Semicon Devices Singapore",
        0xC4: "Satron Handelsges",
        0x45: "Improv Systems",
        0x46: "INDUSYS GmbH",
        0xC7: "Corrent",
        0xC8: "Infrant Technologies",
        0x49: "Ritek Corp",
        0x4A: "empowerTel Networks",
        0xCB: "Hypertec",
        0x4C: "Cavium Networks",
        0xCD: "PLX Technology",
        0xCE: "Massana Design",
        0x4F: "Intrinsity",
        0xD0: "Valence Semiconductor",
        0x51: "Terawave Communications",
        0x52: "IceFyre Semiconductor",
        0xD3: "Primarion",
        0x54: "Picochip Designs Ltd",
        0xD5: "Silverback Systems",
        0xD6: "Jade Star Technologies",
        0x57: "Pijnenburg Securealink",
        0x58: "MemorySolutioN",
        0xD9: "Cambridge Silicon Radio",
        0xDA: "Swissbit",
        0x5B: "Nazomi Communications",
        0xDC: "eWave System",
        0x5D: "Rockwell Collins",
        0x5E: "PAION",
        0xDF: "Alphamosaic Ltd",
        0xE0: "Sandburst",
        0x61: "SiCon Video",
        0x62: "NanoAmp Solutions",
        0xE3: "Ericsson Technology",
        0x64: "PrairieComm",
        0xE5: "Mitac International",
        0xE6: "Layer N Networks",
        0x67: "Atsana Semiconductor",
        0x68: "Allegro Networks",
        0xE9: "Marvell Semiconductors",
        0xEA: "Netergy Microelectronic",
        0x6B: "NVIDIA",
        0xEC: "Internet Machines",
        0x6D: "Peak Electronics",
        0xEF: "Accton Technology",
        0x70: "Teradiant Networks",
        0xF1: "Europe Technologies",
        0xF2: "Cortina Systems",
        0x73: "RAM Components",
        0xF4: "Raqia Networks",
        0x75: "ClearSpeed",
        0x76: "Matsushita Battery",
        0xF7: "Xelerated",
        0xF8: "SimpleTech",
        0x79: "Utron Technology",
        0x7A: "Astec International",
        0xFB: "AVM gmbH",
        0x7C: "Redux Communications",
        0xFD: "Dot Hill Systems",
        0xFE: "TeraChip",
    },
    {
        0x01: "T-RAM Incorporated",
        0x02: "Innovics Wireless",
        0x83: "Teknovus",
        0x04: "KeyEye Communications",
        0x85: "Runcom Technologies",
        0x86: "RedSwitch",
        0x07: "Dotcast",
        0x08: "Silicon Mountain Memory",
        0x89: "Signia Technologies",
        0x8A: "Pixim",
        0x0B: "Galazar Networks",
        0x8C: "White Electronic Designs",
        0x0D: "Patriot Scientific",
        0x0E: "Neoaxiom Corporation",
        0x8F: "3Y Power Technology",
        0x10: "Europe Technologies",
        0x91: "Potentia Power Systems",
        0x92: "C-guys Incorporated",
        0x13: "Digital Communications Technology Incorporated",
        0x94: "Silicon-Based Technology",
        0x15: "Fulcrum Microsystems",
        0x16: "Positivo Informatica Ltd",
        0x97: "XIOtech Corporation",
        0x98: "PortalPlayer",
        0x19: "Zhiying Software",
        0x1A: "Direct2Data",
        0x9B: "Phonex Broadband",
        0x1C: "Skyworks Solutions",
        0x9D: "Entropic Communications",
        0x9E: "Pacific Force Technology",
        0x1F: "Zensys A/S",
        0x20: "Legend Silicon Corp.",
        0xA1: "sci-worx GmbH",
        0xA2: "Oasis Silicon Systems",
        0x23: "Renesas Technology",
        0xA4: "Raza Microelectronics",
        0x25: "Phyworks",
        0x26: "MediaTek",
        0xA7: "Non-cents Productions",
        0xA8: "US Modular",
        0x29: "Wintegra Ltd",
        0x2A: "Mathstar",
        0xAB: "StarCore",
        0x2C: "Oplus Technologies",
        0xAD: "Mindspeed",
        0xAE: "Just Young Computer",
        0x2F: "Radia Communications",
        0xB0: "OCZ",
        0x31: "Emuzed",
        0x32: "LOGIC Devices",
        0xB3: "Inphi Corporation",
        0x34: "Quake Technologies",
        0xB5: "Vixel",
        0xB6: "SolusTek",
        0x37: "Kongsberg Maritime",
        0x38: "Faraday Technology",
        0xB9: "Altium Ltd.",
        0xBA: "Insyte",
        0x3B: "ARM Ltd.",
        0xBC: "DigiVision",
        0x3D: "Vativ Technologies",
        0x3E: "Endicott Interconnect Technologies",
        0xBF: "Pericom",
        0x40: "Bandspeed",
        0xC1: "LeWiz Communications",
        0xC2: "CPU Technology",
        0x43: "Ramaxel Technology",
        0xC4: "DSP Group",
        0x45: "Axis Communications",
        0x46: "Legacy Electronics",
        0xC7: "Chrontel",
        0xC8: "Powerchip Semiconductor",
        0x49: "MobilEye Technologies",
        0x4A: "Excel Semiconductor",
        0xCB: "A-DATA Technology",
        0x4C: "VirtualDigm",
    },
]

memory_types = {
    1: "STD FPM DRAM",
    2: "EDO",
    3: "Pipelined Nibble",
    4: "SDRAM",
    5: "ROM",
    6: "DDR SGRAM",
    7: "DDR SDRAM",
    8: "DDR2 SDRAM",
    9: "DDR2 SDRAM FB-DIMM",
    10: "DDR2 SDRAM FB-DIMM PROBE",
    11: "DDR3 SDRAM",
    12: "DDR4 SDRAM",
}

module_types = {
    1: "RDIMM",
    2: "UDIMM",
    3: "SODIMM",
    4: "Micro-DIMM",
    5: "Mini-RDIMM",
    6: "Mini-UDIMM",
}

ddr3_module_capacity = {
    0: 256,
    1: 512,
    2: 1024,
    3: 2048,
    4: 4096,
    5: 8192,
    6: 16384,
    7: 32768,
}

ddr3_dev_width = {
    0: 4,
    1: 8,
    2: 16,
    3: 32,
}

ddr3_ranks = {
    0: 1,
    1: 2,
    2: 3,
    3: 4,
}

ddr3_bus_width = {
    0: 8,
    1: 16,
    2: 32,
    3: 64,
}


def speed_from_clock(clock):
    return int(clock * 8 - (clock * 8 % 100))
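As a quick illustration (restating `speed_from_clock` from above, plus hypothetical sample inputs): the transfer rate computed from SPD timings is multiplied by 8 bytes per transfer to get the module bandwidth in MB/s, floored to the nearest 100 so it matches the familiar PC-rating numbers.

```python
def speed_from_clock(clock):
    # 8 bytes per transfer on a 64-bit bus; floor to the nearest 100
    # so DDR3-1333 reports the conventional 10600 rather than 10666
    return int(clock * 8 - (clock * 8 % 100))

print(speed_from_clock(1600))     # DDR3-1600 -> PC3-12800
print(speed_from_clock(1333.33))  # DDR3-1333 -> PC3-10600
```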


def decode_manufacturer(index, mfg):
    index &= 0x7f
    try:
        return jedec_ids[index][mfg]
    except (KeyError, IndexError):
        return 'Unknown ({0}, {1})'.format(index, mfg)
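A sketch of the lookup logic: JEDEC JEP106 identifies a manufacturer by a continuation-byte count (whose high bit is an odd-parity bit, masked off with `0x7f`) plus an ID byte within that bank. The miniature stand-in table below is for illustration only; the real `jedec_ids` list above has the full banks (0x2c/0xce in bank 0 and 0x98 in bank 2 match the table entries shown earlier).

```python
# Miniature stand-in for the full jedec_ids table (illustration only)
jedec_ids = [
    {0x2c: "Micron", 0xce: "Samsung"},   # bank 0: no continuation bytes
    {0x98: "Kingston"},                  # bank 1: one continuation byte
]

def decode_manufacturer(index, mfg):
    index &= 0x7f  # strip the odd-parity bit from the continuation count
    try:
        return jedec_ids[index][mfg]
    except (KeyError, IndexError):
        return 'Unknown ({0}, {1})'.format(index, mfg)

print(decode_manufacturer(0x80, 0x2c))  # parity bit set, still bank 0
print(decode_manufacturer(0x01, 0x98))
```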


def decode_spd_date(year, week):
    if year == 0 and week == 0:
        return 'Unknown'
    return '20{0:02x}-W{1:x}'.format(year, week)
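The hex format specifiers here are deliberate: SPD stores year and week as BCD bytes, so printing the raw byte as hex recovers the decimal digits. A self-contained restatement with sample BCD values:

```python
def decode_spd_date(year, week):
    # year/week bytes are BCD, so hex-formatting the raw byte
    # recovers the decimal digits (0x14 -> "14")
    if year == 0 and week == 0:
        return 'Unknown'
    return '20{0:02x}-W{1:x}'.format(year, week)

print(decode_spd_date(0x14, 0x26))  # BCD 0x14/0x26 -> 2014, week 26
print(decode_spd_date(0, 0))
```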


class SPD(object):
    def __init__(self, bytedata):
        """Parsed memory information

        Parse bytedata input and provide a structured detail about the
        described memory component

        :param bytedata: A bytearray of data to decode
        :return:
        """
        self.rawdata = bytearray(bytedata)
        spd = self.rawdata
        self.info = {'memory_type': memory_types.get(spd[2], 'Unknown')}
        if spd[2] == 11:
            self._decode_ddr3()
        elif spd[2] == 12:
            self._decode_ddr4()

    def _decode_ddr3(self):
        spd = self.rawdata
        # force float division so the fine timebase survives Python 2
        finetime = (spd[9] >> 4) / float(spd[9] & 0xf)
        fineoffset = spd[34]
        if fineoffset & 0b10000000:
            # Take two's complement for negative offset
            fineoffset = 0 - ((fineoffset ^ 0xff) + 1)
        fineoffset = (finetime * fineoffset) * 10**-3
        mtb = spd[10] / float(spd[11])
        clock = 2 // ((mtb * spd[12] + fineoffset) * 10**-3)
        self.info['speed'] = speed_from_clock(clock)
        self.info['ecc'] = (spd[8] & 0b11000) != 0
        self.info['module_type'] = module_types.get(spd[3] & 0xf, 'Unknown')
        sdramcap = ddr3_module_capacity[spd[4] & 0xf]
        buswidth = ddr3_bus_width[spd[8] & 0b111]
        sdramwidth = ddr3_dev_width[spd[7] & 0b111]
        ranks = ddr3_ranks[(spd[7] & 0b111000) >> 3]
        self.info['capacity_mb'] = sdramcap / 8 * buswidth / sdramwidth * ranks
        self.info['manufacturer'] = decode_manufacturer(spd[117], spd[118])
        self.info['manufacture_location'] = spd[119]
        self.info['manufacture_date'] = decode_spd_date(spd[120], spd[121])
        self.info['serial'] = hex(struct.unpack(
            '>I', struct.pack('4B', *spd[122:126]))[0])[2:].rjust(8, '0')
        self.info['model'] = struct.pack('18B', *spd[128:146])

    def _decode_ddr4(self):
        spd = self.rawdata
        if spd[17] == 0:
            fineoffset = spd[125]
            if fineoffset & 0b10000000:
                fineoffset = 0 - ((fineoffset ^ 0xff) + 1)
            clock = 2 // ((0.125 * spd[18] + fineoffset * 0.001) * 0.001)
            self.info['speed'] = speed_from_clock(clock)
        else:
            self.info['speed'] = 'Unknown'
        self.info['ecc'] = (spd[13] & 0b11000) == 0b1000
        self.info['module_type'] = module_types.get(spd[3] & 0xf,
                                                    'Unknown')
        sdramcap = ddr3_module_capacity[spd[4] & 0xf]
        buswidth = ddr3_bus_width[spd[13] & 0b111]
        sdramwidth = ddr3_dev_width[spd[12] & 0b111]
        ranks = ddr3_ranks[(spd[12] & 0b111000) >> 3]
        self.info['capacity_mb'] = sdramcap / 8 * buswidth / sdramwidth * ranks
        self.info['manufacturer'] = decode_manufacturer(spd[320], spd[321])
        self.info['manufacture_location'] = spd[322]
        self.info['manufacture_date'] = decode_spd_date(spd[323], spd[324])
        self.info['serial'] = hex(struct.unpack(
            '>I', struct.pack('4B', *spd[325:329]))[0])[2:].rjust(8, '0')
        self.info['model'] = struct.pack('18B', *spd[329:347])
@ -1,132 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4

# Copyright 2015-2017 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ctypes
import functools
import os
import socket
import struct

from pyghmi.ipmi.private import constants

try:
    range = xrange
except NameError:
    pass
try:
    buffer
except NameError:
    buffer = memoryview


wintime = None
try:
    wintime = ctypes.windll.kernel32.GetTickCount64
except AttributeError:
    pass


def decode_wireformat_uuid(rawguid):
    """Decode a wire format UUID

    It handles the rather particular scheme where half is little endian
    and half is big endian.  It returns a string like dmidecode would output.
    """
    if isinstance(rawguid, list):
        rawguid = bytearray(rawguid)
    lebytes = struct.unpack_from('<IHH', buffer(rawguid[:8]))
    bebytes = struct.unpack_from('>HHI', buffer(rawguid[8:]))
    return '{0:08X}-{1:04X}-{2:04X}-{3:04X}-{4:04X}{5:08X}'.format(
        lebytes[0], lebytes[1], lebytes[2], bebytes[0], bebytes[1], bebytes[2])
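A self-contained sketch of the mixed-endian decode with recognizable sample bytes (`bytes()` replaces the module's `buffer` shim so the snippet runs standalone on Python 3): the first three UUID fields arrive little-endian on the wire, the last two big-endian.

```python
import struct

def decode_wireformat_uuid(rawguid):
    # first three fields little endian, remaining two big endian
    if isinstance(rawguid, list):
        rawguid = bytearray(rawguid)
    lebytes = struct.unpack_from('<IHH', bytes(rawguid[:8]))
    bebytes = struct.unpack_from('>HHI', bytes(rawguid[8:]))
    return '{0:08X}-{1:04X}-{2:04X}-{3:04X}-{4:04X}{5:08X}'.format(
        lebytes[0], lebytes[1], lebytes[2], bebytes[0], bebytes[1], bebytes[2])

raw = [0x33, 0x22, 0x11, 0x00, 0x55, 0x44, 0x77, 0x66,
       0x88, 0x99, 0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF]
print(decode_wireformat_uuid(raw))  # byte-swapping visible in first half
```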


def urlsplit(url):
    """Split an arbitrary url into protocol, host, rest

    The standard urlsplit does not want to provide 'netloc' for arbitrary
    protocols, this works around that.

    :param url: The url to split into component parts
    """
    proto, rest = url.split(':', 1)
    host = ''
    if rest[:2] == '//':
        host, rest = rest[2:].split('/', 1)
        rest = '/' + rest
    return proto, host, rest
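Restating the function for a standalone demonstration (hostnames below are placeholders): note this implementation expects a path after the host, since `split('/', 1)` needs a `/` to find; a bare `proto://host` with no trailing slash would raise ValueError here.

```python
def urlsplit(url):
    # manual split so arbitrary schemes (e.g. ipmi://) still get a netloc
    proto, rest = url.split(':', 1)
    host = ''
    if rest[:2] == '//':
        host, rest = rest[2:].split('/', 1)
        rest = '/' + rest
    return proto, host, rest

print(urlsplit('https://bmc.example.com/redfish/v1'))
print(urlsplit('mailto:admin@example.com'))  # no netloc, host stays empty
```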


def get_ipv4(hostname):
    """Get list of ipv4 addresses for hostname

    """
    addrinfo = socket.getaddrinfo(hostname, None, socket.AF_INET,
                                  socket.SOCK_STREAM)
    return [addrinfo[x][4][0] for x in range(len(addrinfo))]


def get_ipmi_error(response, suffix=""):
    if 'error' in response:
        return response['error'] + suffix
    code = response['code']
    if code == 0:
        return False
    command = response['command']
    netfn = response['netfn']
    if ((netfn, command) in constants.command_completion_codes and
            code in constants.command_completion_codes[(netfn, command)]):
        res = constants.command_completion_codes[(netfn, command)][code]
        res += suffix
    elif code in constants.ipmi_completion_codes:
        res = constants.ipmi_completion_codes[code] + suffix
    else:
        res = "Unknown code 0x%02x encountered" % code
    return res


def _monotonic_time():
    """Provides a monotonic timer

    This code is concerned with relative, not absolute time.
    This function facilitates that prior to python 3.3
    """
    # Python does not provide one until 3.3, so we make do
    # for most OSes, os.times()[4] works well.
    # for microsoft, GetTickCount64
    if wintime:
        return wintime() / 1000.0
    return os.times()[4]


class protect(object):

    def __init__(self, lock):
        self.lock = lock

    def __call__(self, func):
        @functools.wraps(func)
        def _wrapper(*args, **kwargs):
            self.lock.acquire()
            try:
                return func(*args, **kwargs)
            finally:
                self.lock.release()
        return _wrapper

    def __enter__(self):
        self.lock.acquire()

    def __exit__(self, exc_type, exc_value, traceback):
        self.lock.release()
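`protect` doubles as a decorator (via `__call__`) and a context manager (via `__enter__`/`__exit__`), so one lock object can guard both whole functions and ad-hoc blocks. A minimal standalone sketch (the `bump` helper and `state` dict are invented for illustration):

```python
import functools
import threading

class protect(object):

    def __init__(self, lock):
        self.lock = lock

    def __call__(self, func):
        @functools.wraps(func)
        def _wrapper(*args, **kwargs):
            self.lock.acquire()
            try:
                return func(*args, **kwargs)
            finally:
                self.lock.release()
        return _wrapper

    def __enter__(self):
        self.lock.acquire()

    def __exit__(self, exc_type, exc_value, traceback):
        self.lock.release()

guard = protect(threading.Lock())

@guard                      # decorator form: whole function holds the lock
def bump(counter):
    counter['n'] += 1
    return counter['n']

state = {'n': 0}
print(bump(state))
with guard:                 # context-manager form: same lock, ad-hoc block
    state['n'] += 10
print(state['n'])
```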
@ -1,750 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
# coding=utf8

# Copyright 2014 IBM Corporation
# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This module provides access to SDR offered by a BMC
# This data is common between 'sensors' and 'inventory' modules since SDR
# is both used to enumerate sensors for sensor commands and FRU ids for FRU
# commands

# For now, we will not offer persistent SDR caching as we do in xCAT's IPMI
# code. Will see if it is adequate to advocate for high object reuse in a
# persistent process for the moment.

# Focus is at least initially on the aspects that make the most sense for a
# remote client to care about. For example, smbus information is being
# skipped for now

import math
import pyghmi.constants as const
import pyghmi.exceptions as exc
import pyghmi.ipmi.private.constants as ipmiconst
import struct
import weakref

TYPE_UNKNOWN = 0
TYPE_SENSOR = 1
TYPE_FRU = 2


def ones_complement(value, bits):
    # utility function to help with the large amount of 1s
    # complement prevalent in ipmi spec
    signbit = 0b1 << (bits - 1)
    if value & signbit:
        # if negative, take 1s complement given bits width
        return 0 - (value ^ ((0b1 << bits) - 1))
    else:
        return value


def twos_complement(value, bits):
    # utility function to help with the large amount of 2s
    # complement prevalent in ipmi spec
    signbit = 0b1 << (bits - 1)
    if value & signbit:
        # if negative, subtract 1, then take 1s
        # complement given bits width
        return 0 - ((value - 1) ^ ((0b1 << bits) - 1))
    else:
        return value
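These helpers reinterpret a raw fixed-width sensor field as a signed integer, which the IPMI spec requires in several reading formats. A standalone restatement with worked values:

```python
def ones_complement(value, bits):
    # sign-extend a 1s-complement field of the given width
    signbit = 0b1 << (bits - 1)
    if value & signbit:
        return 0 - (value ^ ((0b1 << bits) - 1))
    return value

def twos_complement(value, bits):
    # sign-extend a 2s-complement field of the given width
    signbit = 0b1 << (bits - 1)
    if value & signbit:
        return 0 - ((value - 1) ^ ((0b1 << bits) - 1))
    return value

print(twos_complement(0xFF, 8))   # all bits set -> -1
print(twos_complement(0x7F, 8))   # sign bit clear -> value unchanged
print(ones_complement(0xFE, 8))
```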


unit_types = {
    # table 43-15 'sensor unit type codes'
    0: '',
    1: '°C',
    2: '°F',
    3: 'K',
    4: 'V',
    5: 'A',
    6: 'W',
    7: 'J',
    8: 'C',
    9: 'VA',
    10: 'nt',
    11: 'lm',
    12: 'lx',
    13: 'cd',
    14: 'kPa',
    15: 'PSI',
    16: 'N',
    17: 'CFM',
    18: 'RPM',
    19: 'Hz',
    20: 'μs',
    21: 'ms',
    22: 's',
    23: 'min',
    24: 'hr',
    25: 'd',
    26: 'week(s)',
    27: 'mil',
    28: 'inches',
    29: 'ft',
    30: 'cu in',
    31: 'cu feet',
    32: 'mm',
    33: 'cm',
    34: 'm',
    35: 'cu cm',
    36: 'cu m',
    37: 'L',
    38: 'fl. oz.',
    39: 'radians',
    40: 'steradians',
    41: 'revolutions',
    42: 'cycles',
    43: 'g',
    44: 'ounce',
    45: 'lb',
    46: 'ft-lb',
    47: 'oz-in',
    48: 'gauss',
    49: 'gilberts',
    50: 'henry',
    51: 'millihenry',
    52: 'farad',
    53: 'microfarad',
    54: 'ohms',
    55: 'siemens',
    56: 'mole',
    57: 'becquerel',
    58: 'ppm',
    60: 'dB',
    61: 'dBA',
    62: 'dBC',
    63: 'Gy',
    64: 'sievert',
    65: 'color temp deg K',
    66: 'bit',
    67: 'kb',
    68: 'mb',
    69: 'gb',
    70: 'byte',
    71: 'kB',
    72: 'mB',
    73: 'gB',
    74: 'word',
    75: 'dword',
    76: 'qword',
    77: 'line',
    78: 'hit',
    79: 'miss',
    80: 'retry',
    81: 'reset',
    82: 'overrun/overflow',
    83: 'underrun',
    84: 'collision',
    85: 'packets',
    86: 'messages',
    87: 'characters',
    88: 'error',
    89: 'uncorrectable error',
    90: 'correctable error',
    91: 'fatal error',
    92: 'grams',
}

sensor_rates = {
    0: '',
    1: ' per us',
    2: ' per ms',
    3: ' per s',
    4: ' per minute',
    5: ' per hour',
    6: ' per day',
}


class SensorReading(object):
    """Representation of the state of a sensor.

    It is initialized by pyghmi internally, it does not make sense for
    a developer to create one of these objects directly.

    It provides the following properties:
    name: UTF-8 string describing the sensor
    units: UTF-8 string describing the units of the sensor (if numeric)
    value: Value of the sensor if numeric
    imprecision: The amount by which the actual measured value may deviate from
    'value' due to limitations in the resolution of the given sensor.
    """

    def __init__(self, reading, suffix):
        self.broken_sensor_ids = {}
        self.health = const.Health.Ok
        self.type = reading['type']
        self.value = None
        self.imprecision = None
        self.states = []
        self.state_ids = []
        self.unavailable = 0
        try:
            self.health = reading['health']
            self.states = reading['states']
            self.state_ids = reading['state_ids']
            self.value = reading['value']
            self.imprecision = reading['imprecision']
        except KeyError:
            pass
        if 'unavailable' in reading:
            self.unavailable = 1
        self.units = suffix
        self.name = reading['name']

    def __repr__(self):
        return repr({
            'value': self.value,
            'states': self.states,
            'state_ids': self.state_ids,
            'units': self.units,
            'imprecision': self.imprecision,
            'name': self.name,
            'type': self.type,
            'unavailable': self.unavailable,
            'health': self.health
        })

    def simplestring(self):
        """Return a summary string of the reading.

        This is intended as a sampling of how the data could be presented by
        a UI. It's intended to help a developer understand the relation
        between the attributes of a sensor reading if it is not quite clear
        """
        repr = self.name + ": "
        if self.value is not None:
            repr += str(self.value)
            repr += " ± " + str(self.imprecision)
            repr += self.units
        for state in self.states:
            repr += state + ","
        if self.health >= const.Health.Failed:
            repr += '(Failed)'
        elif self.health >= const.Health.Critical:
            repr += '(Critical)'
        elif self.health >= const.Health.Warning:
            repr += '(Warning)'
        return repr


class SDREntry(object):
    """Represent a single entry in the IPMI SDR.

    This is created and consumed by pyghmi internally, there is no reason for
    external code to pay attention to this class.
    """

    def __init__(self, entrybytes, ipmicmd, reportunsupported=False):
        # ignore record id for now, we only care about the sensor number
        # for the moment
        self.reportunsupported = reportunsupported
        self.ipmicmd = ipmicmd
        if entrybytes[2] != 0x51:
            # only recognize '1.5', the only version defined at time of writing
            raise NotImplementedError
        self.rectype = entrybytes[3]
        self.linearization = None
        # most important to get going are 1, 2, and 11
        self.sdrtype = TYPE_SENSOR  # assume a sensor
        if self.rectype == 1:  # full sdr
            self.full_decode(entrybytes[5:])
        elif self.rectype == 2:  # compact sdr
            self.compact_decode(entrybytes[5:])
        elif self.rectype == 8:  # entity association
            self.association_decode(entrybytes[5:])
        elif self.rectype == 0x11:  # FRU locator
            self.fru_decode(entrybytes[5:])
        elif self.rectype == 0x12:  # Management controller
            self.mclocate_decode(entrybytes[5:])
        elif self.rectype == 0xc0:  # OEM format
            self.sdrtype = TYPE_UNKNOWN  # assume undefined
            self.oem_decode(entrybytes[5:])
        elif self.reportunsupported:
            raise NotImplementedError
        else:
            self.sdrtype = TYPE_UNKNOWN

    @property
    def name(self):
        if self.sdrtype == TYPE_SENSOR:
            return self.sensor_name
        elif self.sdrtype == TYPE_FRU:
            return self.fru_name
        else:
            return "UNKNOWN"

    def oem_decode(self, entry):
        mfgid = entry[0] + (entry[1] << 8) + (entry[2] << 16)
        if self.reportunsupported:
            raise NotImplementedError("No support for mfgid %X" % mfgid)

    def mclocate_decode(self, entry):
        # For now, we don't have use for MC locator records,
        # so we'll ignore them at the moment
        self.sdrtype = TYPE_UNKNOWN

    def fru_decode(self, entry):
        # table 43-7 FRU Device Locator
        self.sdrtype = TYPE_FRU
        self.fru_name = self.tlv_decode(entry[10], entry[11:])
        self.fru_number = entry[1]
        self.fru_logical = (entry[2] & 0b10000000) == 0b10000000
        # 0x8 to 0x10.. 0 unspecified except on 0x10, 1 is dimm
        self.fru_type_and_modifier = (entry[5] << 8) + entry[6]

    def association_decode(self, entry):
        # table 43-4 Entity Association Record
        # TODO(jbjohnso): actually represent this data
        self.sdrtype = TYPE_UNKNOWN

    def compact_decode(self, entry):
        # table 43-2 compact sensor record
        self._common_decode(entry)
        self.sensor_name = self.tlv_decode(entry[26], entry[27:])

    def assert_trap_value(self, offset):
        trapval = (self.sensor_type_number << 16) + (self.reading_type << 8)
        return trapval + offset

    def _common_decode(self, entry):
        # compact and full are very similar;
        # this function handles the common aspects of both.
        # offsets from spec, minus 6
        self.sensor_number = entry[2]
        self.entity = ipmiconst.entity_ids.get(
            entry[3], 'Unknown entity {0}'.format(entry[3]))
        self.sensor_type_number = entry[7]
        try:
            self.sensor_type = ipmiconst.sensor_type_codes[entry[7]]
        except KeyError:
            self.sensor_type = "UNKNOWN type " + str(entry[7])
        self.reading_type = entry[8]  # table 42-1
        # 0: unspecified
        # 1: generic threshold based
        # 0x6f: discrete sensor-specific from table 42-3, sensor offsets
        # all others per table 42-2, generic discrete
        # numeric format is one of:
        # 0 - unsigned, 1 - 1s complement, 2 - 2s complement, 3 - ignore number
        # compact records are supposed to always write it as '3', presumably
        # to allow for the concept of a compact record with a numeric format
        # even though numerics are not allowed today.  Some implementations
        # violate the spec and do something other than 3 today.  Tolerate
        # the violation under the assumption that things are not so hard up
        # that there will ever be a need for compact sensors supporting
        # numeric values
        if self.rectype == 2:
            self.numeric_format = 3
        else:
            self.numeric_format = (entry[15] & 0b11000000) >> 6
        self.sensor_rate = sensor_rates[(entry[15] & 0b111000) >> 3]
        self.unit_mod = ""
        if (entry[15] & 0b110) == 0b10:  # unit1 by unit2
            self.unit_mod = "/"
        elif (entry[15] & 0b110) == 0b100:
            # combine the units by multiplying; SI nomenclature is either
            # space or hyphen, so go with space
            self.unit_mod = " "
        self.percent = ''
        if entry[15] & 1 == 1:
            self.percent = '% '
        self.baseunit = unit_types[entry[16]]
        self.modunit = unit_types[entry[17]]
        self.unit_suffix = self.percent + self.baseunit + self.unit_mod + \
            self.modunit

    def full_decode(self, entry):
        # offsets are table from spec, minus 6
        # TODO(jbjohnso): table 43-13, put in constants to interpret entry[3]
        self._common_decode(entry)
        # now must extract the formula data to transform values,
        # entry[18] to entry[24].
        # if not linear, must use get sensor reading factors
        # TODO(jbjohnso): the various other values
        self.sensor_name = self.tlv_decode(entry[42], entry[43:])
        self.linearization = entry[18] & 0b1111111
        if self.linearization <= 11:
            # the enumeration of linear sensors goes to 11;
            # static formula parameters are applicable, so decode them.
            # if 0x70, then the sensor reading will have to get the
            # factors on the fly.
            # the formula could apply if we bother with nominal
            # reading interpretation
            self.decode_formula(entry[19:25])

    def _decode_state(self, state):
        mapping = ipmiconst.generic_type_offsets
        try:
            if self.reading_type in mapping:
                desc = mapping[self.reading_type][state]['desc']
                health = mapping[self.reading_type][state]['severity']
            elif self.reading_type == 0x6f:
                mapping = ipmiconst.sensor_type_offsets
                desc = mapping[self.sensor_type_number][state]['desc']
                health = mapping[self.sensor_type_number][state]['severity']
            else:
                desc = "Unknown state %d" % state
                health = const.Health.Warning
        except KeyError:
            desc = "Unknown state %d for reading type %d/sensor type %d" % (
                state, self.reading_type, self.sensor_type_number)
            health = const.Health.Warning
        return desc, health

    def decode_sensor_reading(self, reading):
        numeric = None
        output = {
            'name': self.sensor_name,
            'type': self.sensor_type,
            'id': self.sensor_number,
        }
        if reading[1] & 0b100000:
            output['unavailable'] = 1
            return SensorReading(output, self.unit_suffix)
        if self.numeric_format == 2:
            numeric = twos_complement(reading[0], 8)
        elif self.numeric_format == 1:
            numeric = ones_complement(reading[0], 8)
        elif self.numeric_format == 0:
            numeric = reading[0]
        discrete = True
        if numeric is not None:
            lowerbound = numeric - (0.5 + (self.tolerance / 2.0))
            upperbound = numeric + (0.5 + (self.tolerance / 2.0))
            lowerbound = self.decode_value(lowerbound)
            upperbound = self.decode_value(upperbound)
            output['value'] = (lowerbound + upperbound) / 2.0
            output['imprecision'] = output['value'] - lowerbound
            discrete = False
        upper = 'upper'
        lower = 'lower'
        if self.linearization == 7:
            # if the formula is 1/x, then the intuitive sense of upper and
            # lower are backwards
            upper = 'lower'
            lower = 'upper'
        output['states'] = []
        output['state_ids'] = []
        output['health'] = const.Health.Ok
        if discrete:
            for state in range(8):
                if reading[2] & (0b1 << state):
                    statedesc, health = self._decode_state(state)
                    output['health'] |= health
                    output['states'].append(statedesc)
                    output['state_ids'].append(self.assert_trap_value(state))
            if len(reading) > 3:
                for state in range(7):
                    if reading[3] & (0b1 << state):
                        statedesc, health = self._decode_state(state + 8)
                        output['health'] |= health
                        output['states'].append(statedesc)
                        output['state_ids'].append(
                            self.assert_trap_value(state + 8))
        else:
            if reading[2] & 0b1:
                output['health'] |= const.Health.Warning
                output['states'].append(lower + " non-critical threshold")
                output['state_ids'].append(self.assert_trap_value(1))
            if reading[2] & 0b10:
                output['health'] |= const.Health.Critical
                output['states'].append(lower + " critical threshold")
                output['state_ids'].append(self.assert_trap_value(2))
            if reading[2] & 0b100:
                output['health'] |= const.Health.Failed
                output['states'].append(lower + " non-recoverable threshold")
                output['state_ids'].append(self.assert_trap_value(3))
            if reading[2] & 0b1000:
                output['health'] |= const.Health.Warning
                output['states'].append(upper + " non-critical threshold")
                output['state_ids'].append(self.assert_trap_value(4))
            if reading[2] & 0b10000:
                output['health'] |= const.Health.Critical
                output['states'].append(upper + " critical threshold")
                output['state_ids'].append(self.assert_trap_value(5))
            if reading[2] & 0b100000:
                output['health'] |= const.Health.Failed
                output['states'].append(upper + " non-recoverable threshold")
                output['state_ids'].append(self.assert_trap_value(6))
        return SensorReading(output, self.unit_suffix)

    def _set_tmp_formula(self, value):
        rsp = self.ipmicmd.raw_command(netfn=4, command=0x23,
                                       data=(self.sensor_number, value))
        # skip next reading field, not used in on-demand situation
        self.decode_formula(rsp['data'][1:])

    def decode_value(self, value):
        # Take the input value and return a meaningful value
        linearization = self.linearization
        if linearization > 11:  # direct calling code to get factors
            # for now, we will get the factors on demand.
            # the facility is engineered such that at construction
            # time the entire BMC table should be fetchable in a reasonable
            # fashion.  However for now opt for retrieving rows as needed
            # rather than tracking all that information for a relatively
            # rare behavior
            self._set_tmp_formula(value)
            linearization = 0
        # time to compute the pre-linearization value
        decoded = float((value * self.m + self.b) *
                        (10 ** self.resultexponent))
        if linearization == 0:
            return decoded
        elif linearization == 1:
            return math.log(decoded)
        elif linearization == 2:
            return math.log(decoded, 10)
        elif linearization == 3:
            return math.log(decoded, 2)
        elif linearization == 4:
            return math.exp(decoded)
        elif linearization == 5:
            return 10 ** decoded
        elif linearization == 6:
            return 2 ** decoded
        elif linearization == 7:
            return 1 / decoded
        elif linearization == 8:
            return decoded ** 2
        elif linearization == 9:
            return decoded ** 3
        elif linearization == 10:
            return math.sqrt(decoded)
        elif linearization == 11:
            return decoded ** (1.0 / 3)
        else:
            raise NotImplementedError

    def decode_formula(self, entry):
        self.m = \
            twos_complement(entry[0] + ((entry[1] & 0b11000000) << 2), 10)
        self.tolerance = entry[1] & 0b111111
        self.b = \
            twos_complement(entry[2] + ((entry[3] & 0b11000000) << 2), 10)
        # parenthesize the shift; '+' binds tighter than '<<' in Python
        self.accuracy = (entry[3] & 0b111111) + \
            ((entry[4] & 0b11110000) << 2)
        self.accuracyexp = (entry[4] & 0b1100) >> 2
        self.direction = entry[4] & 0b11
        # 0 = n/a, 1 = input, 2 = output
        self.resultexponent = twos_complement((entry[5] & 0b11110000) >> 4, 4)
        bexponent = twos_complement(entry[5] & 0b1111, 4)
        # might as well do the math to 'b' now rather than wait for later
        self.b = self.b * (10 ** bexponent)

    def tlv_decode(self, tlv, data):
        # per the IPMI type/length byte format
        ipmitype = (tlv & 0b11000000) >> 6
        if not len(data):
            return ""
        if ipmitype == 0:  # Unicode per 43.15 in ipmi 2.0 spec
            # the spec is not specific about encoding, assuming utf8
            return unicode(struct.pack("%dB" % len(data), *data), "utf_8")
        elif ipmitype == 1:  # BCD '+'
            tmpl = "%02X" * len(data)
            tstr = tmpl % tuple(data)
            tstr = tstr.replace("A", " ").replace("B", "-").replace("C", ".")
            return tstr.replace("D", ":").replace("E", ",").replace("F", "_")
        elif ipmitype == 2:  # 6-bit ascii, start at 0x20
            # the ordering is very peculiar and is best understood from
            # the IPMI spec "6-bit packed ascii" example
            tstr = ""
            while len(data) >= 3:  # the packing only works with 3 byte chunks
                tstr += chr((data[0] & 0b111111) + 0x20)
                tstr += chr(((data[1] & 0b1111) << 2) +
                            (data[0] >> 6) + 0x20)
                tstr += chr(((data[2] & 0b11) << 4) +
                            (data[1] >> 4) + 0x20)
                tstr += chr((data[2] >> 2) + 0x20)
                data = data[3:]  # advance to the next 3-byte chunk
            return tstr
        elif ipmitype == 3:  # ASCII+LATIN1
            return struct.pack("%dB" % len(data), *data)

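The 6-bit packed ASCII branch above is dense; a standalone sketch of the same unpacking (the function name here is illustrative, not part of pyghmi) makes the bit shuffling easier to follow. IPMI packs four 6-bit character codes into each group of three bytes, each code offset from 0x20:

```python
def unpack_6bit_ascii(data):
    # IPMI packs four 6-bit characters into each 3-byte group;
    # each 6-bit code is offset from 0x20 (space).
    text = ""
    while len(data) >= 3:
        text += chr((data[0] & 0b111111) + 0x20)
        text += chr(((data[1] & 0b1111) << 2) + (data[0] >> 6) + 0x20)
        text += chr(((data[2] & 0b11) << 4) + (data[1] >> 4) + 0x20)
        text += chr((data[2] >> 2) + 0x20)
        data = data[3:]
    return text
```

For example, `unpack_6bit_ascii([0x29, 0xDC, 0xA6])` yields `'IPMI'`.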
class SDR(object):
    """Examine the state of sensors managed by a BMC

    Presents the data from sensor read commands as directed by the SDR in a
    reasonable format.  This module is used by the command module, and is not
    intended for consumption by external code directly.

    :param ipmicmd: A Command class object
    """
    def __init__(self, ipmicmd):
        self.ipmicmd = weakref.proxy(ipmicmd)
        self.sensors = {}
        self.fru = {}
        self.read_info()

    def read_info(self):
        # first, we want to know the device id
        rsp = self.ipmicmd.xraw_command(netfn=6, command=1)
        rsp['data'] = bytearray(rsp['data'])
        self.device_id = rsp['data'][0]
        self.device_rev = rsp['data'][1] & 0b111
        # Going to ignore device available until get sdr command
        # since that provides usefully distinct state and this does not
        self.fw_major = rsp['data'][2] & 0b1111111
        self.fw_minor = "%02X" % rsp['data'][3]  # BCD encoding, oddly enough
        self.ipmiversion = rsp['data'][4]  # 51h = 1.5, 02h = 2.0
        self.mfg_id = (rsp['data'][8] << 16) + (rsp['data'][7] << 8) + \
            rsp['data'][6]
        self.prod_id = (rsp['data'][10] << 8) + rsp['data'][9]
        if len(rsp['data']) > 11:
            self.aux_fw = self.decode_aux(rsp['data'][11:15])
        if rsp['data'][1] & 0b10000000 and rsp['data'][5] & 0b10 == 0:
            # The device has device sdrs, and also does not support SDR
            # repository device, so we are meant to use an alternative
            # mechanism to get SDR data
            if rsp['data'][5] & 1:
                # The device has sensor device support, so in theory we should
                # be able to proceed.
                # However at the moment, we haven't done so
                raise NotImplementedError
            # We have Device SDR without SDR Repository device, but
            # also without sensor device support; no idea how to
            # continue
            return
        self.get_sdr()

    def get_sdr_reservation(self):
        rsp = self.ipmicmd.raw_command(netfn=0x0a, command=0x22)
        if rsp['code'] != 0:
            raise exc.IpmiException(rsp['error'])
        return rsp['data'][0] + (rsp['data'][1] << 8)

    def get_sdr(self):
        repinfo = self.ipmicmd.xraw_command(netfn=0x0a, command=0x20)
        repinfo['data'] = bytearray(repinfo['data'])
        if repinfo['data'][0] != 0x51:
            # we only understand SDR version 51h, the only version defined
            # at time of this writing
            raise NotImplementedError
        # NOTE(jbjohnso): we actually don't need to care about 'numrecords'
        # since FFFF marks the end explicitly
        # numrecords = (rsp['data'][2] << 8) + rsp['data'][1]
        # NOTE(jbjohnso): don't care about 'free space' at the moment
        # NOTE(jbjohnso): most recent timestamp data for add and erase could
        # be handy to detect cache staleness, but for now will assume
        # invariant over life of session
        # NOTE(jbjohnso): not looking to support the various options in op
        # support, ignore those for now; the reservation is needed if some
        # BMCs can't read the full SDR in one slurp
        recid = 0
        rsvid = 0  # partial 'get sdr' will require this
        offset = 0
        size = 0xff
        chunksize = 128
        self.broken_sensor_ids = {}
        while recid != 0xffff:  # per 33.12 Get SDR command, 0xffff marks end
            newrecid = 0
            currlen = 0
            sdrdata = bytearray()
            while True:  # loop until SDR fetched wholly
                if size != 0xff and rsvid == 0:
                    rsvid = self.get_sdr_reservation()
                rqdata = [rsvid & 0xff, rsvid >> 8,
                          recid & 0xff, recid >> 8,
                          offset, size]
                sdrrec = self.ipmicmd.raw_command(netfn=0x0a, command=0x23,
                                                  data=rqdata)
                if sdrrec['code'] == 0xca:
                    if size == 0xff:  # get just 5 to get header to know length
                        size = 5
                    elif size > 5:
                        size //= 2
                        # push things over such that it's less
                        # likely to be just 1 short of a read
                        # and incur a whole new request
                        size += 2
                        chunksize = size
                    continue
                if sdrrec['code'] == 0xc5:  # need a new reservation id
                    rsvid = 0
                    continue
                if sdrrec['code'] != 0:
                    raise exc.IpmiException(sdrrec['error'])
                if newrecid == 0:
                    newrecid = (sdrrec['data'][1] << 8) + sdrrec['data'][0]
                if currlen == 0:
                    currlen = sdrrec['data'][6] + 5  # compensate for header
                sdrdata.extend(sdrrec['data'][2:])
                # determine next offset to use based on current offset and
                # the size used last time
                offset += size
                if offset >= currlen:
                    break
                if size == 5 and offset == 5:
                    # bump up size after header retrieval
                    size = chunksize
                if (offset + size) > currlen:
                    size = currlen - offset
            self.add_sdr(sdrdata)
            offset = 0
            if size != 0xff:
                size = 5
            if newrecid == recid:
                raise exc.BmcErrorException("Incorrect SDR record id from BMC")
            recid = newrecid
        for sid in self.broken_sensor_ids:
            try:
                del self.sensors[sid]
            except KeyError:
                pass

    def get_sensor_numbers(self):
        return self.sensors.iterkeys()

    def add_sdr(self, sdrbytes):
        newent = SDREntry(sdrbytes, self.ipmicmd)
        if newent.sdrtype == TYPE_SENSOR:
            id = newent.sensor_number
            if id in self.sensors:
                self.broken_sensor_ids[id] = True
                return
            self.sensors[id] = newent
        elif newent.sdrtype == TYPE_FRU:
            id = newent.fru_number
            if id in self.fru:
                self.broken_sensor_ids[id] = True
                return
            self.fru[id] = newent

    def decode_aux(self, auxdata):
        # This is where manufacturers can add their own
        # decode information
        return "".join(hex(x) for x in auxdata)


if __name__ == "__main__":  # test code
    import os
    import sys

    import pyghmi.ipmi.command as ipmicmd

    password = os.environ['IPMIPASSWORD']
    bmc = sys.argv[1]
    user = sys.argv[2]
    ipmicmd = ipmicmd.Command(bmc=bmc, userid=user, password=password)
    sdr = SDR(ipmicmd)
    for number in sdr.get_sensor_numbers():
        rsp = ipmicmd.raw_command(command=0x2d, netfn=4, data=(number,))
        if 'error' in rsp:
            continue
        reading = sdr.sensors[number].decode_sensor_reading(rsp['data'])
        if reading is not None:
            print(repr(reading))

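The raw-to-real conversion that `decode_formula` and `decode_value` implement follows the IPMI reading formula y = L[(M·x + B·10^Bexp) · 10^Rexp]. A minimal standalone sketch of the linear case (L = identity), with illustrative helper names rather than the module's own:

```python
def twos_complement(value, bits):
    # reinterpret an unsigned field as a signed two's-complement number
    if value >> (bits - 1):
        value -= 1 << bits
    return value


def linear_convert(raw, m, b, bexp, rexp):
    # y = (M * x + B * 10^Bexp) * 10^Rexp, the linearization == 0 case
    return float((m * raw + b * (10 ** bexp)) * (10 ** rexp))
```

For example, with M=2, B=5 and both exponents zero, a raw reading of 0x3A (58) converts to 2*58 + 5 = 121.0.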
@@ -1,21 +0,0 @@
# Copyright 2017 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslotest import base


class TestCase(base.BaseTestCase):

    """Test case base class for all unit tests."""

@@ -1,23 +0,0 @@
# Copyright 2017 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from pyghmi.ipmi import sdr
from pyghmi.tests.unit import base


class SDRTestCase(base.TestCase):

    def test_ones_complement(self):
        self.assertEqual(sdr.ones_complement(127, 8), 127)

@@ -1 +0,0 @@
__author__ = 'jjohnson2'

@@ -1,148 +0,0 @@
# Copyright 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This provides the ability to do HTTPS in a manner like ssh host keys for
# the sake of typical internal management devices.  Compatible back to python
# 2.6 as is found in commonly used enterprise linux distributions.

import json
import socket
import ssl

import pyghmi.exceptions as pygexc

try:
    import Cookie
    import httplib
except ImportError:
    import http.client as httplib
    import http.cookies as Cookie

__author__ = 'jjohnson2'


# Used as the separator for form data
BND = 'TbqbLUSn0QFjx9gxiQLtgBK4Zu6ehLqtLs4JOBS50EgxXJ2yoRMhTrmRXxO1lkoAQdZx16'

# We will frequently be dealing with the same data across many instances;
# consolidate forms to a single memory location to get the benefits
uploadforms = {}


def get_upload_form(filename, data):
    try:
        return uploadforms[filename]
    except KeyError:
        form = '--' + BND + '\r\nContent-Disposition: form-data; ' \
               'name="{0}"; filename="{0}"\r\n'.format(filename)
        form += 'Content-Type: application/octet-stream\r\n\r\n' + data
        form += '\r\n--' + BND + '--\r\n'
        uploadforms[filename] = form
        return form

class SecureHTTPConnection(httplib.HTTPConnection, object):
    default_port = httplib.HTTPS_PORT

    def __init__(self, host, port=None, key_file=None, cert_file=None,
                 ca_certs=None, strict=None, verifycallback=None, clone=None,
                 **kwargs):
        if 'timeout' not in kwargs:
            kwargs['timeout'] = 60
        self.thehost = host
        self.theport = port
        httplib.HTTPConnection.__init__(self, host, port, strict, **kwargs)
        self.cert_reqs = ssl.CERT_NONE  # verification will be done ssh style
        if clone:
            self._certverify = clone._certverify
            self.cookies = clone.cookies.copy()
            self.stdheaders = clone.stdheaders.copy()
        else:
            self._certverify = verifycallback
            self.cookies = {}
            self.stdheaders = {}

    def dupe(self):
        return SecureHTTPConnection(self.thehost, self.theport, clone=self)

    def set_header(self, key, value):
        self.stdheaders[key] = value

    def connect(self):
        plainsock = socket.create_connection((self.host, self.port), 60)
        self.sock = ssl.wrap_socket(plainsock, cert_reqs=self.cert_reqs)
        # txtcert = self.sock.getpeercert()  # currently not possible
        bincert = self.sock.getpeercert(binary_form=True)
        if not self._certverify(bincert):
            raise pygexc.UnrecognizedCertificate('Unknown certificate',
                                                 bincert)

    def getresponse(self):
        rsp = super(SecureHTTPConnection, self).getresponse()
        for hdr in rsp.msg.headers:
            if hdr.startswith('Set-Cookie:'):
                c = Cookie.BaseCookie(hdr[11:])
                for k in c:
                    self.cookies[k] = c[k].value
        return rsp

    def grab_json_response(self, url, data=None):
        if data:
            self.request('POST', url, data)
        else:
            self.request('GET', url)
        rsp = self.getresponse()
        if rsp.status == 200:
            return json.loads(rsp.read())
        rsp.read()
        return {}

    def upload(self, url, filename, data=None):
        """Upload a file to the url

        :param url: The URL to POST the upload form to
        :param filename: The name of the file
        :param data: A file object or data to use rather than reading from
                     the file.
        :return: The response body from the server
        """
        if data is None:
            data = open(filename, 'rb')
        if isinstance(data, file):
            data = data.read()
        form = get_upload_form(filename, data)
        ulheaders = self.stdheaders.copy()
        ulheaders['Content-Type'] = 'multipart/form-data; boundary=' + BND
        self.request('POST', url, form, ulheaders)
        rsp = self.getresponse()
        # peer updates in progress should already have pointers;
        # subsequent transactions would cause memory to needlessly double,
        # but this is the easiest way to keep memory relatively low
        del uploadforms[filename]
        if rsp.status != 200:
            raise Exception('Unexpected response in file upload: ' +
                            rsp.read())
        return rsp.read()

    def request(self, method, url, body=None, headers=None):
        if headers is None:
            headers = self.stdheaders.copy()
        if method == 'GET' and 'Content-Type' in headers:
            del headers['Content-Type']
        if self.cookies:
            cookies = []
            for ckey in self.cookies:
                cookies.append('{0}={1}'.format(ckey, self.cookies[ckey]))
            headers['Cookie'] = '; '.join(cookies)
        return super(SecureHTTPConnection, self).request(method, url, body,
                                                         headers)

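`get_upload_form` above builds a single-file multipart/form-data body by hand rather than using a form library. A self-contained sketch of that layout, with a placeholder boundary string (the real module uses its long `BND` constant and a shared cache):

```python
BOUNDARY = 'exampleboundary123'  # placeholder, not the module's BND value


def build_upload_form(filename, data):
    # one file part framed by the boundary, matching the layout
    # that get_upload_form produces
    form = ('--' + BOUNDARY + '\r\n'
            'Content-Disposition: form-data; '
            'name="{0}"; filename="{0}"\r\n'.format(filename))
    form += 'Content-Type: application/octet-stream\r\n\r\n' + data
    form += '\r\n--' + BOUNDARY + '--\r\n'
    return form
```

The request must then carry a `Content-Type: multipart/form-data; boundary=...` header naming the same boundary, as `upload` does.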
@@ -1,18 +0,0 @@
# Copyright 2017 Red Hat, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import pbr.version

version_info = pbr.version.VersionInfo('pyghmi')

@@ -1,37 +0,0 @@
Summary: Python General Hardware Management Initiative (IPMI and others)
Name: python-pyghmi
Version: %{?version:%{version}}%{!?version:%(python setup.py --version)}
Release: %{?release:%{release}}%{!?release:1}
Source0: pyghmi-%{version}.tar.gz
License: Apache License, Version 2.0
Group: Development/Libraries
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot
Prefix: %{_prefix}
BuildArch: noarch
Vendor: Jarrod Johnson <jjohnson2@lenovo.com>
Url: https://git.openstack.org/cgit/openstack/pyghmi


%description
This is a pure python implementation of the IPMI protocol.

pyghmicons and pyghmiutil are example scripts that show how one may
incorporate this library into python code.


%prep
%setup -n pyghmi-%{version}

%build
python setup.py build

%install
python setup.py install --single-version-externally-managed -O1 --root=$RPM_BUILD_ROOT --record=INSTALLED_FILES --prefix=/usr

%clean
rm -rf $RPM_BUILD_ROOT

%files -f INSTALLED_FILES
%defattr(-,root,root)

@@ -1 +0,0 @@
pycrypto>=2.6

setup.cfg
@@ -1,30 +0,0 @@
[metadata]
name = pyghmi
summary = Python General Hardware Management Initiative (IPMI and others)
description-file =
    README
author = Jarrod Johnson
author-email = jjohnson2@lenovo.com
home-page = http://github.com/openstack/pyghmi/
classifier =
    Intended Audience :: Information Technology
    Intended Audience :: System Administrators
    License :: OSI Approved :: Apache Software License
    Operating System :: POSIX :: Linux
    Programming Language :: Python
    Programming Language :: Python :: 2
    Programming Language :: Python :: 2.6
    Programming Language :: Python :: 2.7

[build_sphinx]
all_files = 1
build-dir = doc/build
source-dir = doc/source

[files]
packages =
    pyghmi

[global]
setup-hooks =
    pbr.hooks.setup_hook

setup.py
@@ -1,25 +0,0 @@
#!/usr/bin/env python
# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
# Copyright (c) 2015 Lenovo
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# THIS FILE IS MANAGED BY THE GLOBAL REQUIREMENTS REPO - DO NOT EDIT
import setuptools

setuptools.setup(
    license='Apache License, Version 2.0',
    scripts=['bin/pyghmicons', 'bin/pyghmiutil', 'bin/virshbmc'],
    setup_requires=['pbr'],
    pbr=True)

@@ -1,12 +0,0 @@
hacking>=0.5.6,<0.8

coverage>=3.6
discover
fixtures>=0.3.14
python-subunit
sphinx>=1.1.2
testrepository>=0.0.17
testscenarios>=0.4
testtools>=0.9.32
os-testr>=0.8.0 # Apache-2.0
oslotest>=1.10.0 # Apache-2.0

tox.ini
@@ -1,31 +0,0 @@
[tox]
envlist = py35,py27,pep8

[testenv]
setenv = VIRTUAL_ENV={envdir}
         LANG=en_US.UTF-8
         LANGUAGE=en_US:en
         LC_ALL=C
         TESTS_DIR=./pyghmi/tests/unit/
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = ostestr {posargs}

[tox:jenkins]
sitepackages = True

[testenv:pep8]
whitelist_externals = bash
commands = bash -c 'flake8 pyghmi bin/*'

[testenv:cover]
setenv = VIRTUAL_ENV={envdir}
commands =
    python setup.py testr --coverage

[testenv:venv]
commands = {posargs}

[flake8]
exclude = .venv,.tox,dist,doc,*.egg,build
show-source = true